Overview
The default kubelet and API server flags deployed with DKP are sufficient in many environments. However, environments that require heavy customization or stricter security rules may need additional flags enabled. The easiest way to make these changes is to edit the cluster YAML generated by running dkp create cluster <provider> -c dkp-testing --dry-run -o yaml > cluster.yaml
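As a quick sketch of getting your bearings in the generated file, you can list the kinds of the objects it contains. The heredoc below is a trimmed stand-in for a real cluster.yaml (which typically contains many more objects, such as Cluster and MachineDeployment) so the command is runnable as written:

```shell
# Trimmed stand-in for the generated cluster.yaml; a real dry-run
# produces many more objects than the two shown here.
cat > cluster.yaml <<'EOF'
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
EOF

# List every object kind in the manifest to find what to edit
grep '^kind:' cluster.yaml
```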
Editing the cluster objects
After generating your cluster YAML, you will see that quite a few objects are created. For this example, we will focus on the KubeadmControlPlane and KubeadmConfigTemplate objects, which control the kubelet configuration for your control plane and worker nodes, respectively. The two objects are similar; the primary difference is that the KubeadmControlPlane object contains both init and join configurations for the kubelet, as well as the API server configuration, in its .spec. Snippets of both objects are included below for reference:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: dkp-testing-control-plane
  namespace: default
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          audit-log-maxage: "30"
          audit-log-maxbackup: "10"
          audit-log-maxsize: "100"
          audit-log-path: /var/log/audit/kube-apiserver-audit.log
          audit-policy-file: /etc/kubernetes/audit-policy/apiserver-audit-policy.yaml
          cloud-provider: aws
          encryption-provider-config: /etc/kubernetes/pki/encryption-config.yaml
      ...
    initConfiguration:
      localAPIEndpoint: {}
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: aws
    joinConfiguration:
      discovery: {}
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: aws
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: dkp-testing-md-0
  namespace: default
spec:
  template:
    spec:
      ...
      joinConfiguration:
        discovery: {}
        nodeRegistration:
          kubeletExtraArgs:
            cloud-provider: aws
Editing these objects is straightforward; you only need to drop your desired arguments into each section. In this case, we will alter the kube-api-burst setting for the kubelet and the http2-max-streams-per-connection setting for the API server. For a complete list of available arguments, please visit the Kubernetes documentation for the kubelet and the kube-apiserver. First, we will edit the kubelet configuration for the control plane and worker nodes. To do this, let's navigate to both objects in our YAML and drop in the kube-api-burst setting:
KubeadmControlPlane
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: dkp-testing-control-plane
  namespace: default
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      ...
    initConfiguration:
      localAPIEndpoint: {}
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: aws
          kube-api-burst: "99"
    joinConfiguration:
      discovery: {}
      nodeRegistration:
        kubeletExtraArgs:
          cloud-provider: aws
          kube-api-burst: "99"
KubeadmConfigTemplate
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: dkp-testing-md-0
  namespace: default
spec:
  template:
    spec:
      ...
      joinConfiguration:
        discovery: {}
        nodeRegistration:
          kubeletExtraArgs:
            cloud-provider: aws
            kube-api-burst: "99"
Now let's edit the API server's configuration. Similar to the above, all we need to do is add our flag to the extraArgs section under apiServer:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: dkp-testing-control-plane
  namespace: default
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          audit-log-maxage: "30"
          audit-log-maxbackup: "10"
          audit-log-maxsize: "100"
          audit-log-path: /var/log/audit/kube-apiserver-audit.log
          audit-policy-file: /etc/kubernetes/audit-policy/apiserver-audit-policy.yaml
          cloud-provider: aws
          encryption-provider-config: /etc/kubernetes/pki/encryption-config.yaml
          http2-max-streams-per-connection: "50"
    ...
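Before applying, it can help to confirm that both flags landed where intended. A minimal sketch follows; the heredoc stands in for the relevant fragments of your edited cluster.yaml so the check is runnable as-is:

```shell
# Stand-in for the edited fragments of cluster.yaml
cat > cluster-check.yaml <<'EOF'
        kubeletExtraArgs:
          cloud-provider: aws
          kube-api-burst: "99"
      apiServer:
        extraArgs:
          http2-max-streams-per-connection: "50"
EOF

# Both custom flags should appear in the manifest before you apply it
grep -E 'kube-api-burst|http2-max-streams-per-connection' cluster-check.yaml
```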
Now all that is left is to create the cluster! Apply the cluster.yaml by running kubectl create -f cluster.yaml against the bootstrap or management cluster, then validate the changes once the new cluster is fully deployed. After SSHing into a control plane node, we can see that the kube-api-burst flag was properly applied to the kubelet:
root@<IP>:/var/lib/kubelet# systemctl status kubelet --no-pager
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Fri 2023-02-03 19:12:08 UTC; 6min ago
       Docs: https://kubernetes.io/docs/home/
   Main PID: 1774 (kubelet)
      Tasks: 16 (limit: 18830)
     Memory: 49.3M
     CGroup: /system.slice/kubelet.service
             └─1774 /usr/bin/kubelet ... --cloud-provider=aws --kube-api-burst=99
...
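The same check can also be done from the process table rather than the unit status. The sketch below simulates it with the argument string shown above, since it cannot assume a live kubelet; on a real node you would read the actual command line with ps:

```shell
# On a live node: ps -o args= -C kubelet
# Simulated here with the argument string from the unit status above:
cmdline='/usr/bin/kubelet --cloud-provider=aws --kube-api-burst=99'

# Split the arguments onto separate lines and pull out the flag we set
printf '%s\n' "$cmdline" | tr ' ' '\n' | grep -- '--kube-api-burst'
```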
Likewise, let's validate that the API server has the http2-max-streams-per-connection flag we configured:
root@<IP>:/etc/kubernetes/manifests# cat kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: <IP>:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=<IP>
    - --allow-privileged=true
    - --http2-max-streams-per-connection=50
    ...
Editing the kubelet and API server configurations is straightforward and provides flexibility for environments that require it. One thing to note is that while most flags are supported, enabling feature gates is not.