Customer Advisory
| Advisory ID: | D2IQ-2020-0009 |
| --- | --- |
| Severity: | Critical |
| Synopsis: | Docker Hub will begin rate limiting anonymous image pulls on Monday, 2020-11-02 at 9 AM PST. Without authenticating to Docker Hub, the limit is 100 pulls per 6 hours per source IP. With this limit in place, some users may experience service interruptions due to failures when pulling images. |
| Affected Products & Versions: | Mesosphere DC/OS (all versions), Kommander (all versions), Konvoy (all versions), Dispatch (all versions), Kaptain (all versions), Conductor (all versions), KUDO (all versions) |
| Issue date: | 10-30-2020 |
| Updated on: | 11-03-2020 |
Problem Description
Docker Hub announced an update to their image pull policies in August 2020:
- Free plan – anonymous users: 100 pulls per 6 hours
- Free plan – authenticated users: 200 pulls per 6 hours
- Pro plan – unlimited
- Team plan – unlimited
Rate limiting is applied per pull, regardless of whether the pulled image is owned by a paid user. This means that D2iQ, as the owner of most images used in Mesosphere® DC/OS® and D2iQ® Kubernetes Platform (DKP) products, has no influence over whether your current address is rate limited. Mesosphere DC/OS and DKP products do not have a strict dependency on Docker Hub accounts or plans.
Without any further configuration, your cluster is most likely on the “Free plan – anonymous users” tier. This means that if each of your nodes has its own public IP address, each node can perform 100 image pulls per 6 hours. While this should not be a problem for a few, usually healthy workloads, you may encounter unexpected issues if you:
- Have constantly failing tasks
- Run a large number of CI jobs
- Have Metronome tasks with different containers
- Use the docker.forcePullImage parameter
In the worst case, your cluster might not be able to reschedule a failed task for up to 6 hours, which could lead to unresponsive services or even data corruption, for example, when using clustered databases.
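To see how close you are to the limit, Docker Hub reports the current allowance in `ratelimit-limit` and `ratelimit-remaining` response headers on manifest requests, and provides the `ratelimitpreview/test` repository for checking them. Below is a minimal sketch of extracting the remaining pull count; it assumes `curl` and `jq` are available, and the live requests are shown as comments because they hit Docker Hub for real:

```shell
# Extract the remaining-pull count from a set of HTTP response headers
# read on stdin. A header looks like: "ratelimit-remaining: 96;w=21600"
ratelimit_remaining() {
  grep -i '^ratelimit-remaining:' | cut -d' ' -f2 | cut -d';' -f1 | tr -d '\r'
}

# Usage against Docker Hub (performs real requests; counts as a HEAD, not a pull):
#   TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
#   curl -sI -H "Authorization: Bearer $TOKEN" \
#     https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest \
#     | ratelimit_remaining
```

If you authenticate the token request with your Docker Hub credentials, the same check reports the limit for your account instead of your source IP.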
Solutions
DC/OS:
There are two options we suggest for avoiding problems on DC/OS as a result of this limitation.
Using Docker Hub credentials with a paid plan:
To avoid issues related to image pull rate limits, DC/OS nodes should use a Docker Hub account with a paid plan assigned. For more details, see: https://www.docker.com/pricing
The DC/OS software offers several ways to specify these credentials:
- Cluster-wide credentials using dcos-config.yml: https://docs.d2iq.com/mesosphere/dcos/2.1/deploying-services/private-docker-registry/#using-cluster-docker-credentials-to-set-cluster-wide-registry-credentials
- Task-specific credentials using secrets: https://docs.d2iq.com/mesosphere/dcos/2.1/deploying-services/private-docker-registry/#reference-private-docker-registry-credentials-in-dcos-secrets-enterprise
A non-DC/OS-specific way to specify the Docker credentials is to use the .docker/config.json file on each agent, as described here: https://docs.d2iq.com/mesosphere/dcos/2.1/deploying-services/private-docker-registry/#create-a-docker-credentials-configuration-file
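As a reference for the shape of that file, the sketch below builds a minimal `.docker/config.json` by hand; in practice `docker login` generates it for you. The `user`/`pass` values are placeholders for your Docker Hub credentials, and the `auth` field is the base64 encoding of `username:password`:

```shell
# Placeholder credentials -- substitute your own.
DOCKER_USER=user
DOCKER_PASS=pass

# The auth field is base64("username:password").
AUTH=$(printf '%s:%s' "$DOCKER_USER" "$DOCKER_PASS" | base64)

# Write a minimal Docker credentials file.
cat > config.json <<EOF
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "${AUTH}"
    }
  }
}
EOF
```

Place the resulting file at `.docker/config.json` on each agent as described in the linked documentation.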
Note: For MKE users, you can create a custom Docker daemon config.json file to be used by a Kubernetes cluster framework. To do so, you would create your config.json file and then save it as a file-based secret. After creating the secret, you can specify the secret name as the .kubernetes.docker_daemon_config value in your options.json file for the Kubernetes cluster. This feature is only supported in MKE versions 2.8.0-1.19.2 and newer. See the following documentation for more information: https://docs.d2iq.com/mesosphere/dcos/services/kubernetes/2.8.0-1.19.2/operations/docker-custom-config/
Using a private Docker registry:
Every image used by DC/OS must be copied to this registry, and whenever an image is specified in a DC/OS task or package, you should replace the reference with the URL of your private registry. Aside from self-hosted solutions such as Docker Registry v2, Artifactory, Harbor, or Pier One, most cloud providers offer their own solutions, such as AWS ECR, Google Container Registry, Azure Container Registry, and many more. For more details, please refer to the respective project's documentation.
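The renaming step amounts to prefixing each image reference with your registry's host. As a rough sketch (`registry.example.com` is a placeholder, and the image must already have been mirrored there, for example with `docker pull`, `docker tag`, and `docker push`):

```shell
# Rewrite a public image reference to point at a private mirror.
# $1 = registry host, $2 = original image reference.
mirror_ref() {
  local registry=$1 image=$2
  printf '%s/%s\n' "$registry" "$image"
}

# e.g. a task that used "nginx:1.19" would instead reference:
mirror_ref registry.example.com nginx:1.19   # registry.example.com/nginx:1.19
```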
DKP:
DKP customers can configure their cluster to authenticate with registries (such as Docker Hub), as well as to add additional registries. First, configure the spec.imageRegistries list of the ClusterConfiguration in the cluster.yaml file; an example is shown below. If the cluster has already been created, you must also manually edit /etc/containerd/config.toml on each control plane and worker node and then restart containerd. Please note that restarting containerd restarts all containers on the host.
```yaml
kind: ClusterConfiguration
apiVersion: konvoy.mesosphere.io/v1beta2
metadata:
  name:
  creationTimestamp:
spec:
  imageRegistries:
    - server: https://registry-1.docker.io
      username: ""
      password: ""
  autoProvisioning:
    config:
      webhook:
        extraArgs:
          konvoy.docker-registry-url: https://registry-1.docker.io
          konvoy.docker-registry-username: ""
          konvoy.docker-registry-password: ""
```
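For an already-created cluster, the manual containerd edit mentioned above boils down to adding an auth section for registry-1.docker.io on each node. The sketch below works on a local file named `config.toml` standing in for `/etc/containerd/config.toml`; the credentials are placeholders:

```shell
# Append a Docker Hub auth section (placeholder credentials) to a working
# copy of the containerd configuration.
cat >> config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry-1.docker.io".auth]
  username = "user"
  password = "pass"
EOF

# On the node itself, merge this under the existing
# [plugins."io.containerd.grpc.v1.cri".registry.configs] section of
# /etc/containerd/config.toml, then (as root) restart containerd.
# Remember: this restarts all containers on the host.
#   systemctl restart containerd
```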
If you are using the Kommander addon, you must also configure it with your Docker Hub credentials:
```yaml
- name: kommander
  enabled: true
  values: |
    kommander-federation:
      utilityApiserver:
        extraArgs:
          docker-registry-url: https://registry-1.docker.io
          docker-registry-username: ""
          docker-registry-password: ""
```
Note that you can also supply the imageRegistries values as environment variables. For example, when the file contains "password: ${REGISTRY_PASSWORD}", the password is set to the value of the REGISTRY_PASSWORD variable in your environment.
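As an illustrative sketch of what this substitution amounts to (konvoy performs it internally; the `sed` line below only mimics the effect, and `s3cret` is a placeholder credential):

```shell
# Export the credential before running konvoy, e.g.:
export REGISTRY_PASSWORD='s3cret'   # placeholder value

# What konvoy effectively does with "password: ${REGISTRY_PASSWORD}":
printf 'password: ${REGISTRY_PASSWORD}\n' \
  | sed "s/\${REGISTRY_PASSWORD}/$REGISTRY_PASSWORD/"
# -> password: s3cret
```

With the variable exported, run `konvoy up` as usual; the credential never needs to be committed to cluster.yaml.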
To apply the changes to your cluster, execute the `konvoy up` command. After it has finished, check the contents of the containerd configuration file located at /etc/containerd/config.toml on each host in your cluster. If you had previously run `konvoy up`, this file will not include your changes to cluster.yaml; in that case, use the snippet below as a reference to add your Docker Hub credentials manually:
```
$ cat /etc/containerd/config.toml
...
[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."registry-1.docker.io".auth]
      username = ""
      password = ""
      auth = ""
      identitytoken = ""
...
```
For Kommander, verify the Docker registry arguments on the kommander-federation-utility-apiserver pod:
```
kubectl get pods -n kommander
kubectl describe pod -n kommander kommander-federation-utility-apiserver-777c9ddf7d-86mqz
```

```
Containers:
  server:
    Container ID:  containerd://cf983ee2e19dea898d75a3bdaf120475de659f253332216ebeba9498dfbb7f5e
    Image:         mesosphere/kommander-federation-utility-apiserver:v0.6.9
    Image ID:      docker.io/mesosphere/kommander-federation-utility-apiserver@sha256:a880e3f7ef9fc721deeb1bdb76f97d43c4d9acb9985f4233da1786cbb1d1413f
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --port=8443
      --host=0.0.0.0
      --cert-dir=/tmp/kommander-federation-utility-apiserver/serving-certs
      --allow-unofficial-releases=false
      --minimum-kubernetes-version=1.16.0
      --docker-registry-password=
      --docker-registry-url=https://registry-1.docker.io
      --docker-registry-username=
```
For more information on configuring imageRegistries in the cluster.yaml file, refer to the following documentation: https://docs.d2iq.com/ksphere/konvoy/1.5/reference/cluster-configuration/v1beta2/