Issue:
Currently, when creating Kubernetes clusters with the DKP vSphere provider, the --virtual-ip-interface flag is a required parameter. It specifies the network interface for kube-vip to use, which means kube-vip is effectively the only option for configuring the control-plane endpoint.
--virtual-ip-interface string
The network interface, e.g., 'eth0' or 'ens5', to use for the built-in virtual IP
control plane endpoint. This interface must be available on every control plane
machine. If the value is empty, the flag does nothing. If the value is not empty,
the built-in virtual IP control plane endpoint is created, using values from
--control-plane-endpoint-host and --control-plane-endpoint-port.
When the flag is not included, the `dkp create cluster` command fails and reports the following error:
./dkp create cluster vsphere \
  --cluster-name sortega-vsphere-konvoy22 \
  --server 10.0.0.9 --network "VM Network" \
  --resource-pool ResourcePool-1A \
  --data-center DC-HomeLab \
  --data-store datastore-2TB-NVME-2 \
  --folder konvoy-capi-vsphere \
  --control-plane-replicas 3 \
  --worker-replicas 4 \
  --ssh-public-key-file /root/.ssh/id_rsa.pub \
  --vm-template konvoy-ova-vsphere-rhel-84-1.22.8-1656441018 \
  --control-plane-endpoint-host 10.0.0.129 \
  --tls-thumb-print=1F:76:9C:B2:93:70:51:34:9D:D2:67:10:96:FE:50:14:5E:3A:FC:A3 \
  --dry-run -o yaml

required flag(s) "virtual-ip-interface" not set
Workaround:
There is a workaround that makes it possible to use an external load balancer as the control-plane endpoint rather than kube-vip. The operator can generate cluster.yaml by setting the environment variables below and executing the dkp create command:
# VSPHERE CREDENTIALS
export VSPHERE_SERVER="<IPADDR>"
export VSPHERE_USERNAME="<Username>"
export VSPHERE_PASSWORD="<Password>"
# RED HAT SUBSCRIPTION CREDENTIALS
export RHSM_USER="<Username>"
export RHSM_PASS="<Password>"
export CLUSTERNAME="<Cluster Name>"
export SERVER="<IPADDR>"
export NETWORK="VM Network"
export RESOURCEPOOL="<ResourcePool Name>"
export DATACENTER="<DataCenter Name>"
export DATASTORE="<datastore-name>"
export FOLDER="<Folder>"
export SSHPUBLICKEY="/path/to/ssh-public-key"
export VMTEMPLATE="<Template Name>"
export CONTROLPLANEENDPOINT="<IPADDR>"
export VIRTUALIPINTERFACE="eth0"
export VCENTERTLSTHUMBPRINT="<TLS Thumbprint>"
export CONTROLPLANEREPLICAS=3
export WORKERNODESREPLICAS=4
# Create the cluster with the CAPI vSphere provider
./dkp create cluster vsphere \
  --cluster-name ${CLUSTERNAME} \
  --server ${SERVER} \
  --network "${NETWORK}" \
  --resource-pool "${RESOURCEPOOL}" \
  --data-center "${DATACENTER}" \
  --data-store "${DATASTORE}" \
  --folder "${FOLDER}" \
  --control-plane-replicas ${CONTROLPLANEREPLICAS} \
  --worker-replicas ${WORKERNODESREPLICAS} \
  --ssh-public-key-file ${SSHPUBLICKEY} \
  --vm-template "${VMTEMPLATE}" \
  --control-plane-endpoint-host ${CONTROLPLANEENDPOINT} \
  --virtual-ip-interface ${VIRTUALIPINTERFACE} \
  --tls-thumb-print=${VCENTERTLSTHUMBPRINT} \
  --dry-run -o yaml > cluster.yaml
Then, in cluster.yaml, point the control-plane endpoint at the load balancer or external IP address by setting spec.controlPlaneEndpoint.host in the VSphereCluster object:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereCluster
metadata:
  name: sortega-vsphere-konvoy22
  namespace: default
spec:
  controlPlaneEndpoint:
    host: <External IP Address/LB>
    port: 6443
  server: 10.0.0.9
  thumbprint: 1F:76:9C:B2:93:70:51:34:9D:D2:67:10:96:FE:50:14:5E:3A:FC:A3
In addition to the aforementioned change, modify the kube-vip entry under spec.kubeadmConfigSpec.files in the KubeadmControlPlane object:
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: <cluster-name>-control-plane
  namespace: default
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          audit-log-maxage: "30"
          audit-log-maxbackup: "10"
          audit-log-maxsize: "100"
          audit-log-path: /var/log/audit/kube-apiserver-audit.log
          audit-policy-file: /etc/kubernetes/audit-policy/apiserver-audit-policy.yaml
          encryption-provider-config: /etc/kubernetes/pki/encryption-config.yaml
        extraVolumes:
        - hostPath: /etc/kubernetes/audit-policy/
          mountPath: /etc/kubernetes/audit-policy/
          name: audit-policy
        - hostPath: /var/log/kubernetes/audit
          mountPath: /var/log/audit/
          name: audit-logs
      controllerManager: {}
      dns: {}
      etcd:
        local:
          imageTag: 3.4.13-0
      networking: {}
      scheduler: {}
    files:
    - content: |
        apiVersion: v1
        kind: Pod
        metadata:
          creationTimestamp: null
          name: kube-vip
          namespace: kube-system
        spec:
          containers:
          - args:
            - start
            env:
            - name: vip_arp
              value: "true"
            - name: vip_leaderelection
              value: "true"
            - name: vip_address
              value: "10.0.0.170"
            - name: vip_interface
              value: "eth0"
            - name: vip_leaseduration
              value: "15"
            - name: vip_renewdeadline
              value: "10"
            - name: vip_retryperiod
              value: "2"
            image: ghcr.io/kube-vip/kube-vip:v0.3.9
            imagePullPolicy: IfNotPresent
            name: kube-vip
            resources: {}
            securityContext:
              capabilities:
                add:
                - NET_ADMIN
                - SYS_TIME
            volumeMounts:
            - mountPath: /etc/kubernetes/admin.conf
              name: kubeconfig
          hostNetwork: true
          volumes:
          - hostPath:
              path: /etc/kubernetes/admin.conf
              type: FileOrCreate
            name: kubeconfig
        status: {}
      owner: root:root
      path: /etc/kubernetes/manifests/kube-vip.yaml
Specifically, remove the value for path, so that the kube-vip static Pod manifest is not placed on the control-plane nodes, changing:
path: /etc/kubernetes/manifests/kube-vip.yaml
to:
path: ""
The modified cluster.yaml can then be used to create the cluster:
kubectl create -f cluster.yaml
Please be mindful that, as of now, DHCP is the only way to provision IP addresses for the cluster nodes. You will therefore need to wait until the first control-plane node is provisioned, identify the IP address assigned to it, and update the backend instances in the load balancer with the control-plane IP addresses.
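As an illustration, the sketch below shows one way to watch for the DHCP-assigned control-plane addresses and then confirm the endpoint. It assumes the bootstrap/management cluster kubeconfig is active and that the Cluster API Machine objects report node addresses in .status.addresses; <LB-IPADDR> is a placeholder for your load-balancer address:

# List each Machine with its reported addresses; re-run until the
# control-plane machines show their DHCP-assigned IPs:
kubectl get machines -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[*].address}{"\n"}{end}'

# Once the load balancer backend is updated with those IPs, verify the
# API server answers through the endpoint (/healthz typically responds
# without authentication under default RBAC):
curl -k https://<LB-IPADDR>:6443/healthz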