When deploying Azure managed Kubernetes (AKS) clusters with DKP 2.X, the default VNET CIDR is 10.0.0.0/8. In some scenarios, operators need to use a VNET other than 10.0.0.0/8.
To specify a custom CIDR for the VNET in Azure, adjust the spec.virtualNetwork.cidrBlock field of the AzureManagedControlPlane object:
kubectl explain AzureManagedControlPlane.spec.virtualNetwork.cidrBlock
KIND: AzureManagedControlPlane
VERSION: infrastructure.cluster.x-k8s.io/v1beta1
FIELD: cidrBlock <string>
DESCRIPTION:
<empty>
Because the DKP CLI in versions 2.1 and 2.2 provides no flag to modify the VNET CIDR, the operator must generate a cluster.yaml:
./dkp create cluster aks --cluster-name=<CLUSTER_NAME> --dry-run --output=yaml > cluster.yaml
and manually edit the AzureManagedControlPlane.spec.virtualNetwork.cidrBlock field:
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
kind: AzureManagedControlPlane
metadata:
  name: aks-konvoy2-custom-vnet
  namespace: default
spec:
  additionalTags:
    konvoy.d2iq.io_cluster-name: aks-konvoy2-custom-vnet
    konvoy.d2iq.io_version: v2.1.1
  controlPlaneEndpoint:
    host: ""
    port: 0
  loadBalancerSKU: Standard
  location: westus
  networkPlugin: azure
  networkPolicy: calico
  nodeResourceGroupName: <MC_aks-konvoy2-custom-vnet_westus>
  resourceGroupName: aks-konvoy2-custom-vnet
  sshPublicKey: <KEY>
  subscriptionID: <subscriptionID>
  version: v1.21.9
  virtualNetwork:
    cidrBlock: 10.2.0.0/16
    name: <custom-vnet>
    subnet:
      cidrBlock: 10.2.1.0/24
      name: aks-konvoy2-custom-vnet
---
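Before creating the cluster from the modified manifest, it is worth confirming that the subnet CIDR falls within the VNET CIDR, since a subnet outside the VNET range will fail at provisioning time. A minimal sketch of that check using Python's standard ipaddress module (the two values are the ones from the example manifest above; this helper is illustrative, not part of DKP):

```python
import ipaddress

# Values taken from the example manifest:
#   spec.virtualNetwork.cidrBlock        -> VNET range
#   spec.virtualNetwork.subnet.cidrBlock -> subnet range
vnet = ipaddress.ip_network("10.2.0.0/16")
subnet = ipaddress.ip_network("10.2.1.0/24")

# subnet_of() returns True only if every address in `subnet`
# is contained in `vnet`.
assert subnet.subnet_of(vnet), "subnet CIDR must be inside the VNET CIDR"
print(f"OK: {subnet} is contained in {vnet}")
```

Once the values check out, the modified manifest can be applied with kubectl (e.g. kubectl apply -f cluster.yaml) against the bootstrap or management cluster, as with any other dry-run-generated DKP manifest.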