Kubernetes runs your workload by placing containers into Pods to run on Nodes. A node may be a virtual or physical machine, depending on the cluster. Each node contains the services necessary to run Pods, managed by the control plane.
Typically you have several nodes in a cluster; in a learning or resource-limited environment, you might have just one.
The components on a node include the kubelet, a container runtime, and the kube-proxy.
There are two main ways to have Nodes added to the API server:
1. The kubelet on a node self-registers to the control plane.
2. You (or another human user) manually add a Node object.
After you create a Node object, or the kubelet on a node self-registers, the control plane checks whether the new Node object is valid. For example, if you try to create a Node from the following JSON manifest:
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "10.240.79.157",
    "labels": {
      "name": "my-first-k8s-node"
    }
  }
}
Kubernetes creates a Node object internally (the representation). Kubernetes checks
that a kubelet has registered to the API server that matches the metadata.name
field of the Node. If the node is healthy (if all necessary services are running),
it is eligible to run a Pod. Otherwise, that node is ignored for any cluster activity
until it becomes healthy.
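As an illustration, you could submit the manifest above to the API server with kubectl; the filename here is hypothetical and the node stays not Ready until a matching kubelet registers and reports a healthy status.
# Create the Node object from the manifest above (filename is illustrative).
kubectl apply -f first-node.json
# List Nodes to confirm the object exists.
kubectl get nodes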
Note: Kubernetes keeps the object for the invalid Node and continues checking to see whether it becomes healthy. You, or a controller, must explicitly delete the Node object to stop that health checking.
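For example, if the manifest above was only an experiment, delete the Node object explicitly so that the control plane stops health checking it:
# Delete the Node object; the name matches metadata.name from the manifest above.
kubectl delete node 10.240.79.157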
The name of a Node object must be a valid DNS subdomain name.
When the kubelet flag --register-node
is true (the default), the kubelet will attempt to
register itself with the API server. This is the preferred pattern, used by most distros.
For self-registration, the kubelet is started with the following options (an example invocation follows this list):
- --kubeconfig - Path to credentials to authenticate itself to the API server.
- --cloud-provider - How to talk to a cloud provider to read metadata about itself.
- --register-node - Automatically register with the API server.
- --register-with-taints - Register the node with the given list of taints (comma separated <key>=<value>:<effect>). No-op if register-node is false.
- --node-ip - IP address of the node.
- --node-labels - Labels to add when registering the node in the cluster (see label restrictions enforced by the NodeRestriction admission plugin).
- --node-status-update-frequency - Specifies how often kubelet posts node status to master.
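The following is an illustrative self-registration invocation, not a definitive configuration: the kubeconfig path, taint, IP address, and label values are examples only, and most distros generate these flags for you.
kubelet \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --register-node=true \
  --register-with-taints=dedicated=experimental:NoSchedule \
  --node-ip=10.240.79.157 \
  --node-labels=name=my-first-k8s-node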
When the Node authorization mode and NodeRestriction admission plugin are enabled, kubelets are only authorized to create/modify their own Node resource.
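As a sketch, assuming you manage the API server flags yourself, enabling that combination looks like this (shown in isolation; combine with the rest of your kube-apiserver flags):
# Enable the Node authorizer and the NodeRestriction admission plugin.
kube-apiserver \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction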
You can create and modify Node objects using kubectl.
When you want to create Node objects manually, set the kubelet flag --register-node=false.
You can modify Node objects regardless of the setting of --register-node.
For example, you can set labels on an existing Node, or mark it unschedulable.
You can use labels on Nodes in conjunction with node selectors on Pods to control scheduling. For example, you can constrain a Pod to only be eligible to run on a subset of the available nodes.
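For example (the label key/value, Pod name, and image below are illustrative, not prescribed by Kubernetes), you can label a Node and then use a matching nodeSelector on a Pod:
# Label an existing Node (pick a node name from "kubectl get nodes").
kubectl label nodes <insert-node-name-here> disktype=ssd

# Create a Pod that is only eligible for nodes carrying that label.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-ssd
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx
    image: nginx
EOF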
Marking a node as unschedulable prevents the scheduler from placing new pods onto that Node, but does not affect existing Pods on the Node. This is useful as a preparatory step before a node reboot or other maintenance.
To mark a Node unschedulable, run:
kubectl cordon $NODENAME
Note: Pods that are part of a DaemonSet tolerate being run on an unschedulable Node. DaemonSets typically provide node-local services that should run on the Node even if it is being drained of workload applications.
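When the maintenance is finished, you can make the Node schedulable again:
kubectl uncordon $NODENAME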
A Node's status contains the following information:
- Addresses
- Conditions
- Capacity and Allocatable
- Info
You can use kubectl
to view a Node's status and other details:
kubectl describe node <insert-node-name-here>
Each section of the output is described below.
Addresses: the usage of these fields varies depending on your cloud provider or bare metal configuration.
- HostName: the hostname as reported by the node's kernel. Can be overridden via the kubelet --hostname-override parameter.
- ExternalIP: typically the IP address of the node that is externally routable (available from outside the cluster).
- InternalIP: typically the IP address of the node that is routable only within the cluster.
The conditions field describes the status of all Running nodes. Examples of conditions include:
Node Condition | Description
---|---
Ready | True if the node is healthy and ready to accept pods, False if the node is not healthy and is not accepting pods, and Unknown if the node controller has not heard from the node in the last node-monitor-grace-period (default is 40 seconds)
DiskPressure | True if pressure exists on the disk size (that is, if the disk capacity is low); otherwise False
MemoryPressure | True if pressure exists on the node memory (that is, if the node memory is low); otherwise False
PIDPressure | True if pressure exists on the processes (that is, if there are too many processes on the node); otherwise False
NetworkUnavailable | True if the network for the node is not correctly configured, otherwise False
Note: If you use command-line tools to print details of a cordoned Node, the Condition includes SchedulingDisabled. SchedulingDisabled is not a Condition in the Kubernetes API; instead, cordoned nodes are marked Unschedulable in their spec.
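If you want to check this from the command line, you can read the field directly (it prints true for a cordoned node, and nothing otherwise):
kubectl get node <insert-node-name-here> -o jsonpath='{.spec.unschedulable}'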
The node condition is represented as a JSON object. For example, the following structure describes a healthy node:
"conditions": [
{
"type": "Ready",
"status": "True",
"reason": "KubeletReady",
"message": "kubelet is posting ready status",
"lastHeartbeatTime": "2019-06-05T18:38:35Z",
"lastTransitionTime": "2019-06-05T11:41:27Z"
}
]
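To inspect a single condition without reading the whole object, you can use a JSONPath query; for example, the status of the Ready condition:
kubectl get node <insert-node-name-here> \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'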
If the Status of the Ready condition remains Unknown or False for longer than the pod-eviction-timeout (an argument passed to the kube-controller-manager), all the Pods on the node are scheduled for deletion by the node controller. The default eviction timeout duration is five minutes. In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.
The node controller does not force delete pods until it is confirmed that they have stopped
running in the cluster. You can see the pods that might be running on an unreachable node as
being in the Terminating
or Unknown
state. In cases where Kubernetes cannot deduce from the
underlying infrastructure if a node has permanently left a cluster, the cluster administrator
may need to delete the node object by hand. Deleting the node object from Kubernetes causes
all the Pod objects running on the node to be deleted from the API server, and frees up their
names.
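To see which Pods the API server still associates with a particular node (for example, one that has become unreachable), you can filter by the node name:
# Pods on an unreachable node may show up here as Terminating or Unknown.
kubectl get pods --all-namespaces -o wide \
  --field-selector spec.nodeName=<insert-node-name-here>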
The node lifecycle controller automatically creates taints that represent conditions. The scheduler takes the Node's taints into consideration when assigning a Pod to a Node. Pods can also have tolerations which let them tolerate a Node's taints.
See Taint Nodes by Condition for more details.
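You can inspect the taints that have been applied to a node, whether by the node lifecycle controller or by hand:
kubectl get node <insert-node-name-here> -o jsonpath='{.spec.taints}'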
Capacity and Allocatable: describes the resources available on the node: CPU, memory, and the maximum number of pods that can be scheduled onto the node.
The fields in the capacity block indicate the total amount of resources that a Node has. The allocatable block indicates the amount of resources on a Node that is available to be consumed by normal Pods.
You may read more about capacity and allocatable resources while learning how to reserve compute resources on a Node.
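For a quick comparison of the two blocks on a live node:
# Total resources on the node.
kubectl get node <insert-node-name-here> -o jsonpath='{.status.capacity}'
# Resources available to ordinary Pods after system reservations.
kubectl get node <insert-node-name-here> -o jsonpath='{.status.allocatable}'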
Info: describes general information about the node, such as kernel version, Kubernetes version (kubelet and kube-proxy version), Docker version (if used), and OS name. This information is gathered by the kubelet from the node.
The node controller is a Kubernetes control plane component that manages various aspects of nodes.
The node controller has multiple roles in a node's life. The first is assigning a CIDR block to the node when it is registered (if CIDR assignment is turned on).
The second is keeping the node controller's internal list of nodes up to date with the cloud provider's list of available machines. When running in a cloud environment, whenever a node is unhealthy, the node controller asks the cloud provider if the VM for that node is still available. If not, the node controller deletes the node from its list of nodes.
The third is monitoring the nodes' health. The node controller is
responsible for updating the NodeReady condition of NodeStatus to
ConditionUnknown when a node becomes unreachable (i.e. the node controller stops
receiving heartbeats for some reason, for example due to the node being down), and then later evicting
all the pods from the node (using graceful termination) if the node continues
to be unreachable. (The default timeouts are 40s to start reporting
ConditionUnknown and 5m after that to start evicting pods.) The node controller
checks the state of each node every --node-monitor-period
seconds.
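As a sketch of how those timings are configured (the values shown are the defaults; these flags normally live in your kube-controller-manager manifest rather than being typed by hand):
kube-controller-manager \
  --node-monitor-period=5s \
  --node-monitor-grace-period=40s \
  --pod-eviction-timeout=5m0s
# (combine with the rest of your kube-controller-manager flags)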
Heartbeats, sent by Kubernetes nodes, help determine the availability of a node.
There are two forms of heartbeats: updates of NodeStatus
and the
Lease object.
Each Node has an associated Lease object in the kube-node-lease namespace.
Lease is a lightweight resource, which improves the performance
of the node heartbeats as the cluster scales.
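You can list these per-node Lease objects directly:
kubectl get leases --namespace kube-node-lease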
The kubelet is responsible for creating and updating the NodeStatus and a Lease object.
- The kubelet updates the NodeStatus either when there is a change in status, or if there has been no update for a configured interval. The default interval for NodeStatus updates is 5 minutes (much longer than the 40 second default timeout for unreachable nodes).
- The kubelet creates and then updates its Lease object every 10 seconds (the default update interval). Lease updates occur independently of the NodeStatus updates. If the Lease update fails, the kubelet retries with exponential backoff starting at 200 milliseconds and capped at 7 seconds.
In most cases, the node controller limits the eviction rate to
--node-eviction-rate
(default 0.1) per second, meaning it won't evict pods
from more than 1 node per 10 seconds.
The node eviction behavior changes when a node in a given availability zone
becomes unhealthy. The node controller checks what percentage of nodes in the zone
are unhealthy (NodeReady condition is ConditionUnknown or ConditionFalse) at
the same time. If the fraction of unhealthy nodes is at least
--unhealthy-zone-threshold
(default 0.55) then the eviction rate is reduced:
if the cluster is small (i.e. has less than or equal to
--large-cluster-size-threshold
nodes - default 50) then evictions are
stopped, otherwise the eviction rate is reduced to
--secondary-node-eviction-rate
(default 0.01) per second. The reason these
policies are implemented per availability zone is because one availability zone
might become partitioned from the master while the others remain connected. If
your cluster does not span multiple cloud provider availability zones, then
there is only one availability zone (the whole cluster).
A key reason for spreading your nodes across availability zones is so that the
workload can be shifted to healthy zones when one entire zone goes down.
Therefore, if all nodes in a zone are unhealthy then the node controller evicts at
the normal rate of --node-eviction-rate
. The corner case is when all zones are
completely unhealthy (i.e. there are no healthy nodes in the cluster). In such a
case, the node controller assumes that there's some problem with master
connectivity and stops all evictions until some connectivity is restored.
The node controller is also responsible for evicting pods running on nodes with
NoExecute
taints, unless those pods tolerate that taint.
The node controller also adds taints
corresponding to node problems like node unreachable or not ready. This means
that the scheduler won't place Pods onto unhealthy nodes.
Caution: kubectl cordon
marks a node as 'unschedulable', which has the side effect of the service controller removing the node from any LoadBalancer node target lists it was previously eligible for, effectively removing incoming load balancer traffic from the cordoned node(s).
Node objects track information about the Node's resource capacity (for example: the amount of memory available, and the number of CPUs). Nodes that self register report their capacity during registration. If you manually add a Node, then you need to set the node's capacity information when you add it.
The Kubernetes scheduler ensures that there are enough resources for all the Pods on a Node. The scheduler checks that the sum of the requests of containers on the node is no greater than the node's capacity. That sum of requests includes all containers managed by the kubelet, but excludes any containers started directly by the container runtime, and also excludes any processes running outside of the kubelet's control.
Note: If you want to explicitly reserve resources for non-Pod processes, see reserve resources for system daemons.
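One way to see how much of a node's allocatable capacity is already requested is the "Allocated resources" section of the describe output (the exact layout varies between Kubernetes versions):
kubectl describe node <insert-node-name-here> | grep -A 10 "Allocated resources"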
FEATURE STATE: Kubernetes v1.16 [alpha]
If you have enabled the TopologyManager
feature gate, then
the kubelet can use topology hints when making resource assignment decisions.
See Control Topology Management Policies on a Node
for more information.
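Because this is an alpha feature, it sits behind a feature gate; the following is a minimal sketch of enabling it on the kubelet (combine with your normal kubelet flags, and pick a topology policy that suits your workloads):
kubelet \
  --feature-gates=TopologyManager=true \
  --topology-manager-policy=best-effort
# (combine with the rest of your kubelet flags)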