By default, DKP clusters created in AWS do not support logging in to the nodes via SSH. Instead, you would need to use the AWS Systems Manager (SSM) CLI.
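For example, assuming the AWS CLI and its Session Manager plugin are installed and you know the EC2 instance ID of the node, you can typically open a shell session like this (the instance ID below is only a placeholder):

# Replace with the actual EC2 instance ID of the node
aws ssm start-session --target i-0123456789abcdef0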
To enable SSH access when you create a new cluster, you first need an SSH key pair (either create a new one or use an existing one).
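If you need to generate a key pair, a standard ssh-keygen invocation is enough; the key type and file path below are only examples and can be adjusted to suit your environment:

# Creates ~/.ssh/id_rsa (private key) and ~/.ssh/id_rsa.pub (public key)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa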
Prepare a cluster configuration YAML file by performing a dry run, substituting your own options as needed. Adding the "--ssh-username" and "--ssh-public-key-file" flags enables SSH access on your cluster nodes. Make sure to specify the location of your public key file:
export CLUSTER_NAME=<desired-cluster-name>
./dkp create cluster aws --cluster-name=${CLUSTER_NAME} --ssh-public-key-file=/home/sly/.ssh/id_rsa.pub --ssh-username=sly --dry-run --output=yaml > ${CLUSTER_NAME}.yaml
Once this <cluster-name>.yaml file is created, edit it so that the AWSCluster resource includes a security group rule that opens port 22. Find the AWSCluster block and add the following entry under spec.network.cni.cniIngressRules:
- description: ssh
  fromPort: 22
  protocol: tcp
  toPort: 22
For context, the relevant portion of the AWSCluster block should end up looking roughly like this (the apiVersion and any existing cniIngressRules entries in your generated file may differ):
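apiVersion: infrastructure.cluster.x-k8s.io/v1beta1   # may differ by DKP release
kind: AWSCluster
metadata:
  name: <cluster-name>
spec:
  network:
    cni:
      cniIngressRules:
      # ... keep any rules already present in the generated file ...
      - description: ssh
        fromPort: 22
        protocol: tcp
        toPort: 22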
Once the file is edited, you can then follow the usual steps to create a bootstrap cluster and use it to create your workload cluster:
# Create the local bootstrap cluster
./dkp create bootstrap
# Create the workload cluster from the edited manifest
kubectl create -f ${CLUSTER_NAME}.yaml
watch ./dkp describe cluster --cluster-name $CLUSTER_NAME
# Wait for everything to be ready
# Retrieve the workload cluster's kubeconfig and make the cluster self-managed
./dkp get kubeconfig -c $CLUSTER_NAME > ${CLUSTER_NAME}.conf
./dkp create capi-components --with-aws-bootstrap-credentials=false --kubeconfig ${CLUSTER_NAME}.conf
./dkp move capi-resources --to-kubeconfig ${CLUSTER_NAME}.conf
watch ./dkp describe cluster --kubeconfig ${CLUSTER_NAME}.conf -c ${CLUSTER_NAME}
# Wait for everything to be ready
# Tear down the local bootstrap cluster
./dkp delete bootstrap
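Once the workload cluster is up, you should be able to SSH in to the nodes using the username and private key that match what you supplied at creation time. How you reach a node depends on your network layout (public IP, bastion host, VPN, and so on); assuming a node that is directly reachable, something like the following should work (the address is a placeholder):

# Replace <node-address> with the node's reachable IP or hostname
ssh -i ~/.ssh/id_rsa sly@<node-address>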