A scheduling Policy can be used to specify the predicates and priorities that the kube-scheduler runs to filter and score nodes, respectively.
You can set a scheduling policy by running
`kube-scheduler --policy-config-file <filename>`
or
`kube-scheduler --policy-configmap <ConfigMap>`
and using the Policy type.
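For illustration, a Policy file might look like the following sketch; the predicate and priority names are drawn from the lists below, and the particular selection is an example rather than a recommendation:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsHostPorts"},
    {"name": "PodFitsResources"},
    {"name": "PodMatchNodeSelector"},
    {"name": "NoDiskConflict"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1},
    {"name": "BalancedResourceAllocation", "weight": 1}
  ]
}
```

Saved as, say, `policy.json` (a hypothetical filename), this could then be passed to the scheduler with `kube-scheduler --policy-config-file policy.json`.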
The following predicates implement filtering:
PodFitsHostPorts
: Checks if a Node has free ports (the network protocol kind)
for the Pod ports the Pod is requesting.
PodFitsHost
: Checks if a Pod specifies a specific Node by its hostname.
PodFitsResources
: Checks if the Node has free resources (for example, CPU and memory)
to meet the requirement of the Pod.
PodMatchNodeSelector
: Checks if a Pod's Node Selector matches the Node's label(s).
NoVolumeZoneConflict
: Evaluates if the Volumes that a Pod requests are available on the Node,
given the failure zone restrictions for that storage.
NoDiskConflict
: Evaluates if a Pod can fit on a Node due to the volumes it requests,
and those that are already mounted.
MaxCSIVolumeCount
: Decides how many CSI volumes should be attached, and whether that's over
a configured limit.
CheckNodeMemoryPressure
: If a Node is reporting memory pressure, and there's no
configured exception, the Pod won't be scheduled there.
CheckNodePIDPressure
: If a Node is reporting that process IDs are scarce, and
there's no configured exception, the Pod won't be scheduled there.
CheckNodeDiskPressure
: If a Node is reporting storage pressure (a filesystem that
is full or nearly full), and there's no configured exception, the Pod won't be
scheduled there.
CheckNodeCondition
: Nodes can report that they have a completely full filesystem,
that networking isn't available or that kubelet is otherwise not ready to run Pods.
If such a condition is set for a Node, and there's no configured exception, the Pod
won't be scheduled there.
PodToleratesNodeTaints
: Checks if a Pod's tolerations can tolerate the Node's taints.
CheckVolumeBinding
: Evaluates if a Pod can fit due to the volumes it requests.
This applies for both bound and unbound PVCs.
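To make concrete which Pod fields several of these predicates examine, here is a sketch of a Pod manifest; all names, labels, and values are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
spec:
  nodeSelector:            # evaluated by PodMatchNodeSelector
    disktype: ssd
  tolerations:             # evaluated by PodToleratesNodeTaints
  - key: "example-key"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx           # image choice is illustrative
    resources:
      requests:            # evaluated by PodFitsResources
        cpu: "500m"
        memory: "128Mi"
    ports:
    - containerPort: 80
      hostPort: 8080       # evaluated by PodFitsHostPorts
```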
The following priorities implement scoring:
SelectorSpreadPriority
: Spreads Pods across hosts, considering Pods that belong to the same
Service, StatefulSet or ReplicaSet.
InterPodAffinityPriority
: Implements preferred
inter-pod affinity and anti-affinity.
LeastRequestedPriority
: Favors nodes with fewer requested resources. In other
words, the more Pods that are placed on a Node, and the more resources those
Pods use, the lower the ranking this policy will give.
MostRequestedPriority
: Favors nodes with most requested resources. This policy
will fit the scheduled Pods onto the smallest number of Nodes needed to run your
overall set of workloads.
RequestedToCapacityRatioPriority
: Creates a requestedToCapacity-based ResourceAllocationPriority using the default resource scoring function shape.
BalancedResourceAllocation
: Favors nodes with balanced resource usage.
NodePreferAvoidPodsPriority
: Prioritizes nodes according to the node annotation
scheduler.alpha.kubernetes.io/preferAvoidPods
. You can use this to hint that
two different Pods shouldn't run on the same Node.
NodeAffinityPriority
: Prioritizes nodes according to node affinity scheduling
preferences indicated in PreferredDuringSchedulingIgnoredDuringExecution.
You can read more about this in Assigning Pods to Nodes.
TaintTolerationPriority
: Prepares the priority list for all the nodes, based on
the number of intolerable taints on the node. This policy adjusts a node's rank
taking that list into account.
ImageLocalityPriority
: Favors nodes that already have the container images for that Pod cached
locally.
ServiceSpreadingPriority
: For a given Service, this policy aims to make sure that
the Pods for the Service run on different nodes. It favors scheduling onto nodes
that don't have Pods for the service already assigned there. The overall outcome is
that the Service becomes more resilient to a single Node failure.
EqualPriority
: Gives an equal weight of one to all nodes.
EvenPodsSpreadPriority
: Implements preferred
pod topology spread constraints.
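As a sketch of how scoring can be tuned, each priority entry in a Policy carries a weight that scales its score before the per-node scores are summed; the particular weights below are illustrative, not recommendations:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "priorities": [
    {"name": "MostRequestedPriority", "weight": 2},
    {"name": "ImageLocalityPriority", "weight": 1},
    {"name": "TaintTolerationPriority", "weight": 1}
  ]
}
```

With these weights, packing Pods onto fewer Nodes (MostRequestedPriority) counts twice as much toward a node's final score as either of the other two priorities.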