Black lives matter.
We stand in solidarity with the Black community.
Racism is unacceptable.
It conflicts with the core values of the Kubernetes project and our community does not tolerate it.
This page shows how to perform a rolling update on a DaemonSet.
DaemonSet has two update strategy types:

- OnDelete: With the OnDelete update strategy, after you update a DaemonSet template, new DaemonSet pods will only be created when you manually delete old DaemonSet pods. This is the same behavior of DaemonSet in Kubernetes version 1.5 or before.
- RollingUpdate: With the RollingUpdate update strategy, after you update a DaemonSet template, old DaemonSet pods will be killed, and new DaemonSet pods will be created automatically, in a controlled fashion. At most one pod of the DaemonSet will be running on each node during the whole update process.

To enable the rolling update feature of a DaemonSet, you must set its .spec.updateStrategy.type to RollingUpdate. You may also want to set .spec.updateStrategy.rollingUpdate.maxUnavailable (defaults to 1) and .spec.minReadySeconds (defaults to 0).
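Taken together, the relevant portion of a DaemonSet spec might look like the following sketch (the values shown are simply the defaults, for illustration):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # default; at most one pod may be unavailable per update step
  minReadySeconds: 0     # default; seconds a new pod must be ready before it counts as available
```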
Creating a DaemonSet with RollingUpdate update strategy

This YAML file specifies a DaemonSet with the RollingUpdate update strategy:
controllers/fluentd-daemonset.yaml
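The example file itself is not reproduced here. As a hedged sketch, assuming the names and labels used by the commands later on this page (the image tag, resource values, and volume layout are illustrative assumptions), controllers/fluentd-daemonset.yaml looks roughly like:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  updateStrategy:
    type: RollingUpdate       # enables automatic rolling updates
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2  # assumed initial tag
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```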
After verifying the update strategy of the DaemonSet manifest, create the DaemonSet:
kubectl create -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
Alternatively, use kubectl apply to create the same DaemonSet if you plan to update it with kubectl apply:
kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml
Checking DaemonSet RollingUpdate update strategy

Check the update strategy of your DaemonSet, and make sure it's set to RollingUpdate:
kubectl get ds/fluentd-elasticsearch -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}' -n kube-system
If you haven't created the DaemonSet in the system, check your DaemonSet manifest with the following command instead:
kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset.yaml --dry-run=client -o go-template='{{.spec.updateStrategy.type}}{{"\n"}}'
The output from both commands should be:
RollingUpdate
If the output isn't RollingUpdate, go back and modify the DaemonSet object or manifest accordingly.
Updating a DaemonSet template

Any updates to a RollingUpdate DaemonSet .spec.template will trigger a rolling update. Let's update the DaemonSet by applying a new YAML file. This can be done with several different kubectl commands.
controllers/fluentd-daemonset-update.yaml
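The updated manifest typically differs from the original only in the pod template, usually by bumping the container image; any change to .spec.template is what triggers the rolling update. A sketch of the changed portion (the v2.6.0 tag matches the kubectl set image command later on this page; everything else is assumed to be carried over unchanged):

```yaml
# Same DaemonSet as before; only the pod template's image changes.
spec:
  template:
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.6.0  # bumped tag (assumed)
```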
If you update DaemonSets using configuration files, use kubectl apply:
kubectl apply -f https://k8s.io/examples/controllers/fluentd-daemonset-update.yaml
If you update DaemonSets using imperative commands, use kubectl edit:
kubectl edit ds/fluentd-elasticsearch -n kube-system
If you just need to update the container image in the DaemonSet template, i.e. .spec.template.spec.containers[*].image, use kubectl set image:
kubectl set image ds/fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.6.0 -n kube-system
Finally, watch the rollout status of the latest DaemonSet rolling update:
kubectl rollout status ds/fluentd-elasticsearch -n kube-system
When the rollout is complete, the output is similar to this:
daemonset "fluentd-elasticsearch" successfully rolled out
Sometimes, a DaemonSet rolling update may be stuck. Here are some possible causes:
The rollout is stuck because new DaemonSet pods can't be scheduled on at least one node. This is possible when the node is running out of resources.
When this happens, find the nodes that don't have the DaemonSet pods scheduled on them by comparing the output of kubectl get nodes with the output of:
kubectl get pods -l name=fluentd-elasticsearch -o wide -n kube-system
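As a sketch of that comparison, you can take the set difference of node names with comm. The kubectl commands in the comments show how you would build the two lists on a real cluster; the sample data below (hypothetical node names) just demonstrates the mechanics:

```shell
# On a real cluster, you would build two sorted lists of node names like this:
#   kubectl get nodes -o name | sed 's|node/||' | sort > all-nodes.txt
#   kubectl get pods -l name=fluentd-elasticsearch -n kube-system \
#     -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' | sort > pod-nodes.txt
# Sample data standing in for the cluster output (hypothetical node names):
printf 'node-a\nnode-b\nnode-c\n' > all-nodes.txt
printf 'node-a\nnode-c\n' > pod-nodes.txt

# comm -23 prints lines that appear only in the first file:
# nodes with no fluentd-elasticsearch pod scheduled on them.
comm -23 all-nodes.txt pod-nodes.txt   # → node-b
```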
Once you've found those nodes, delete some non-DaemonSet pods from the node to make room for new DaemonSet pods.
Note: This can cause service disruption if the deleted pods are not controlled by any controller or are not replicated. It does not respect PodDisruptionBudget either.
If the recent DaemonSet template update is broken, for example because the container is crash looping or the container image doesn't exist (often due to a typo), the DaemonSet rollout won't progress.
To fix this, update the DaemonSet template again. A new rollout won't be blocked by previous unhealthy rollouts.
If .spec.minReadySeconds is specified in the DaemonSet, clock skew between the control plane and the nodes will make the DaemonSet unable to detect the right rollout progress.
Clean up

Delete the DaemonSet from a namespace:
kubectl delete ds fluentd-elasticsearch -n kube-system