Black lives matter.
We stand in solidarity with the Black community.
Racism is unacceptable.
It conflicts with the core values of the Kubernetes project and our community does not tolerate it.
Kubernetes v1.16 [alpha]
IPv4/IPv6 dual-stack enables the allocation of both IPv4 and IPv6 addresses to Pods and Services.
If you enable IPv4/IPv6 dual-stack networking for your Kubernetes cluster, the cluster will support the simultaneous assignment of both IPv4 and IPv6 addresses.
Enabling IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features:
- Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod)
- IPv4 and IPv6 enabled Services (each Service must be for a single address family)
- Pod off-cluster egress routing (eg. the Internet) via both IPv4 and IPv6 interfaces
The following prerequisites are needed in order to utilize IPv4/IPv6 dual-stack Kubernetes clusters:
- Kubernetes 1.16 or later
- Provider support for dual-stack networking (the cloud provider or otherwise must be able to provide Kubernetes nodes with routable IPv4/IPv6 network interfaces)
- A network plugin that supports dual-stack (such as Kubenet or Calico)
To enable IPv4/IPv6 dual-stack, enable the IPv6DualStack feature gate for the relevant components of your cluster, and set dual-stack cluster network assignments:

kube-apiserver:
- --feature-gates="IPv6DualStack=true"

kube-controller-manager:
- --feature-gates="IPv6DualStack=true"
- --cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>
- --service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>
- --node-cidr-mask-size-ipv4|--node-cidr-mask-size-ipv6 (defaults to /24 for IPv4 and /64 for IPv6)

kubelet:
- --feature-gates="IPv6DualStack=true"

kube-proxy:
- --cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>
- --feature-gates="IPv6DualStack=true"
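As an illustration, a dual-stack kube-controller-manager might combine these flags as sketched below. All CIDR values here are placeholders; substitute ranges appropriate to your own network plan.

```shell
# Illustrative flag set for kube-controller-manager; CIDR values are examples only.
kube-controller-manager \
  --feature-gates="IPv6DualStack=true" \
  --cluster-cidr=10.244.0.0/16,fd00:10:244::/56 \
  --service-cluster-ip-range=10.96.0.0/12,fd00:10:96::/112 \
  --node-cidr-mask-size-ipv4=24 \
  --node-cidr-mask-size-ipv6=64
```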
Note: An example of an IPv4 CIDR: 10.244.0.0/16 (though you would supply your own address range). An example of an IPv6 CIDR: fdXY:IJKL:MNOP:15::/64 (this shows the format but is not a valid address - see RFC 4193).
If your cluster has IPv4/IPv6 dual-stack networking enabled, you can create Services with either an IPv4 or an IPv6 address. You can choose the address family for the Service's cluster IP by setting a field, .spec.ipFamily, on that Service. You can only set this field when creating a new Service. Setting the .spec.ipFamily field is optional and should only be used if you plan to enable IPv4 and IPv6 Services and Ingresses on your cluster. The configuration of this field is not a requirement for egress traffic.
Note: The default address family for your cluster is the address family of the first service cluster IP range configured via the --service-cluster-ip-range flag to the kube-controller-manager.
You can set .spec.ipFamily to either:

- IPv4: The API server will assign an IP from a service-cluster-ip-range that is ipv4
- IPv6: The API server will assign an IP from a service-cluster-ip-range that is ipv6
The following Service specification does not include the ipFamily
field. Kubernetes will assign an IP address (also known as a "cluster IP") from the first configured service-cluster-ip-range
to this Service.
service/networking/dual-stack-default-svc.yaml
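For reference, the manifest at that path is, in the upstream examples, a minimal Service with no ipFamily field; the name, selector, and port below are illustrative values, not requirements.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
```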
The following Service specification includes the ipFamily
field. Kubernetes will assign an IPv6 address (also known as a "cluster IP") from the configured service-cluster-ip-range
to this Service.
service/networking/dual-stack-ipv6-svc.yaml
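For reference, that manifest is the same minimal Service with ipFamily set to IPv6; the name, selector, and port below are illustrative values.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamily: IPv6
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
```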
For comparison, the following Service specification sets ipFamily to IPv4 explicitly. Kubernetes will assign an IPv4 address (also known as a "cluster IP") from the configured service-cluster-ip-range to this Service.
service/networking/dual-stack-ipv4-svc.yaml
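For reference, that manifest mirrors the previous one with ipFamily set to IPv4; the name, selector, and port below are illustrative values.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamily: IPv4
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
```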
On cloud providers which support IPv6-enabled external load balancers, setting the type field to LoadBalancer in addition to setting the ipFamily field to IPv6 provisions a cloud load balancer for your Service.
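A sketch of such a Service, combining both fields (the name, selector, and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ipFamily: IPv6
  type: LoadBalancer
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
```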
The use of publicly routable and non-publicly routable IPv6 address blocks is acceptable provided the underlying CNI provider is able to implement the transport. If you have a Pod that uses a non-publicly routable IPv6 address and want that Pod to reach off-cluster destinations (eg. the public Internet), you must set up IP masquerading for the egress traffic and any replies. The ip-masq-agent is dual-stack aware, so you can use ip-masq-agent for IP masquerading on dual-stack clusters.
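The ip-masq-agent is driven by a small config file (typically mounted from a ConfigMap) listing the CIDRs that should not be masqueraded. A minimal sketch, assuming example Pod CIDRs for each family:

```yaml
# Illustrative ip-masq-agent config; CIDR values are examples only.
# Traffic to these ranges is NOT masqueraded; all other egress is.
nonMasqueradeCIDRs:
  - 10.244.0.0/16
  - fd00:10:244::/56
resyncInterval: 60s
```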