You may find yourself wanting separate groups of worker nodes for your DKP Kubernetes cluster. Perhaps you would like to isolate a workload to a specific set of nodes, or you are planning to add a limited number of nodes with GPU resources to your cluster. You may also want a different set of override values for each set of worker nodes. The example below configures two worker node pools of different sizes, one of which has GPU resources.
We'll start by creating the environment variables for the resources that will be created, along with the SSH private key secret:
export CLUSTER_NAME="cluster-a"
kubectl create secret generic $CLUSTER_NAME-ssh-key --from-file=ssh-privatekey=id_rsa
export CONTROL_PLANE_LB_ADDRESS=10.4.6.40
export CONTROL_PLANE_1_ADDRESS=10.4.6.41
export CONTROL_PLANE_2_ADDRESS=10.4.6.42
export CONTROL_PLANE_3_ADDRESS=10.4.6.43
export WORKER_1_ADDRESS=10.4.6.44
export WORKER_2_ADDRESS=10.4.6.45
export WORKER_3_ADDRESS=10.4.6.46
export WORKER_4_ADDRESS=10.4.6.47
export WORKER_5_ADDRESS=10.4.6.48
export WORKER_6_ADDRESS=10.4.6.49
export WORKER_7_ADDRESS=10.4.6.50
export WORKER_8_ADDRESS=10.4.6.51
export WORKER_9_ADDRESS=10.4.6.52
export WORKER_10_ADDRESS=10.4.6.53
export SSH_USER="twindebank"
export SSH_PRIVATE_KEY_SECRET_NAME="$CLUSTER_NAME-ssh-key"
When defining your infrastructure, you must specify an additional PreprovisionedInventory object for every additional node pool you would like to deploy. Since this example has two worker node pools, an additional PreprovisionedInventory object must be included when creating preprovisioned_inventory.yaml:
cat <<EOF > preprovisioned_inventory.yaml
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: $CLUSTER_NAME-control-plane
  namespace: default
  labels:
    cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
    clusterctl.cluster.x-k8s.io/move: ""
spec:
  hosts:
    # Create as many of these as needed to match your infrastructure
    - address: $CONTROL_PLANE_1_ADDRESS
    - address: $CONTROL_PLANE_2_ADDRESS
    - address: $CONTROL_PLANE_3_ADDRESS
  sshConfig:
    port: 22
    # This is the username used to connect to your infrastructure. This user must be root or
    # have the ability to use sudo without a password
    user: $SSH_USER
    privateKeyRef:
      # This is the name of the secret you created in the previous step. It must exist in the same
      # namespace as this inventory object.
      name: $SSH_PRIVATE_KEY_SECRET_NAME
      namespace: default
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: $CLUSTER_NAME-md-0
  namespace: default
  labels:
    cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
    clusterctl.cluster.x-k8s.io/move: ""
spec:
  hosts:
    - address: $WORKER_1_ADDRESS
    - address: $WORKER_2_ADDRESS
    - address: $WORKER_3_ADDRESS
    - address: $WORKER_4_ADDRESS
    - address: $WORKER_5_ADDRESS
    - address: $WORKER_6_ADDRESS
    - address: $WORKER_7_ADDRESS
    - address: $WORKER_8_ADDRESS
  sshConfig:
    port: 22
    user: $SSH_USER
    privateKeyRef:
      name: $SSH_PRIVATE_KEY_SECRET_NAME
      namespace: default
---
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedInventory
metadata:
  name: $CLUSTER_NAME-md-1
  namespace: default
  labels:
    cluster.x-k8s.io/cluster-name: $CLUSTER_NAME
    clusterctl.cluster.x-k8s.io/move: ""
spec:
  hosts:
    - address: $WORKER_9_ADDRESS
    - address: $WORKER_10_ADDRESS
  sshConfig:
    port: 22
    user: $SSH_USER
    privateKeyRef:
      name: $SSH_PRIVATE_KEY_SECRET_NAME
      namespace: default
EOF
Now, apply the preprovisioned_inventory.yaml to the bootstrap cluster:
kubectl apply -f preprovisioned_inventory.yaml
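As an optional sanity check before applying, you can confirm the file defines all three inventory objects (this sketch only assumes the preprovisioned_inventory.yaml created above):

```shell
# With one control-plane pool and two worker pools, the file should define
# exactly three PreprovisionedInventory objects.
if [ -f preprovisioned_inventory.yaml ]; then
  grep -c 'kind: PreprovisionedInventory' preprovisioned_inventory.yaml
fi
```

After applying, you can also list the objects with kubectl to verify they were created (the resource name accepted by kubectl may vary by DKP version).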
At this point you should configure any overrides you require for your control plane and worker node pools; see this KB article for guidance on how to do so:
https://d2iqhelp.zendesk.com/knowledge/articles/4417088185236/en-us?brand_id=360005575372
The DKP CLI will not automatically create additional node pools from the PreprovisionedInventory objects defined earlier; they must be created manually.
To begin, generate a cluster.yaml via --dry-run:
./dkp create cluster preprovisioned --cluster-name ${CLUSTER_NAME} --control-plane-endpoint-host ${CONTROL_PLANE_LB_ADDRESS} --virtual-ip-interface ens192 --dry-run -o yaml > ${CLUSTER_NAME}.yaml
For each additional PreprovisionedInventory object you created, you must manually add the following objects to the generated ${CLUSTER_NAME}.yaml:
kind: KubeadmConfigTemplate
kind: PreprovisionedMachineTemplate
kind: MachineDeployment
You can copy the objects of these types that belong to the first worker group, which is normally named $CLUSTER_NAME-md-0 (cluster-a-md-0 in this example). One additional node pool is needed, so every reference to md-0 in the new objects must be edited to refer to md-1 instead:
1st Worker Group:
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedMachineTemplate
metadata:
  name: cluster-a-md-0
  namespace: default
2nd Worker Group:
apiVersion: infrastructure.cluster.konvoy.d2iq.io/v1alpha1
kind: PreprovisionedMachineTemplate
metadata:
  name: cluster-a-md-1
  namespace: default
As a tip, you can do this quickly by copying the three object types for cluster-a-md-0 (KubeadmConfigTemplate, PreprovisionedMachineTemplate, MachineDeployment) into a new text editor document, finding and replacing every mention of md-0 with md-1 so you do not miss any values, and then copying the objects back into the cluster YAML.
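The same find-and-replace can be scripted. A sketch, assuming you have copied the three cluster-a-md-0 objects into a scratch file named md-0-objects.yaml (a hypothetical name, not produced by any DKP command):

```shell
# md-0-objects.yaml is a hypothetical scratch file holding the copied
# KubeadmConfigTemplate, PreprovisionedMachineTemplate and MachineDeployment
# objects. Rewrite every md-0 reference to md-1 and append the result to the
# generated cluster manifest.
if [ -f md-0-objects.yaml ]; then
  printf -- '---\n' >> "${CLUSTER_NAME}.yaml"
  sed 's/md-0/md-1/g' md-0-objects.yaml >> "${CLUSTER_NAME}.yaml"
fi
```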
When editing the MachineDeployment, ensure the number of replicas is set to the number of workers in the node pool. This example adds only two nodes with GPU resources, so replicas for cluster-a-md-1 must be set to 2:
apiVersion: cluster.x-k8s.io/v1alpha4
kind: MachineDeployment
metadata:
  name: cluster-a-md-1
spec:
  replicas: 2
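Before applying, you can grep the manifest to confirm the names and replica counts took effect; a rough sketch (a YAML-aware tool such as yq would be more robust, if available):

```shell
# Print MachineDeployment-style names and replica counts from the generated
# manifest so both pools can be checked at a glance before applying.
if [ -f "${CLUSTER_NAME}.yaml" ]; then
  grep -nE 'name: .*-md-[0-9]|replicas:' "${CLUSTER_NAME}.yaml"
fi
```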
You can now apply your cluster.yaml and deploy your multi-nodepool cluster!