Many of Konvoy's kubeaddons create persistent volumes in AWS.
DKP 2 creates gp3-type volumes by default,
while Konvoy 1 creates gp2-type volumes by default.
Generally, gp3 volumes are cheaper and offer better I/O performance than gp2 volumes.
So, you may want to use gp3-type volumes for Konvoy 1 clusters instead of gp2 volumes.
When you initially deploy a Konvoy 1.8 cluster, you can configure the awsebscsiprovisioner addon to use gp3-type volumes instead of the default gp2.
To do this, add the following lines to your cluster.yaml:
- name: awsebscsiprovisioner
  enabled: true
  values: |
    storageclass:
      isDefault: true
      reclaimPolicy: Delete
      volumeBindingMode: WaitForFirstConsumer
      type: gp3
      fstype: ext4
      iopsPerGB: null
      encrypted: false
      kmsKeyId: null
      allowedTopologies: []
      allowVolumeExpansion: true
This approach works only once, during the initial deployment of the Konvoy cluster.
You can't change a storageclass in a running cluster later, since storageclass parameters are immutable.
If you want to use gp3 volumes for new deployments in an already running cluster, you have to create an additional storageclass with the gp3 type.
For example, you can create the storageclass like this (my-sc-aws-gp3.yaml):
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-sc-aws-gp3
parameters:
  type: gp3
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
and apply it with 'kubectl apply -f my-sc-aws-gp3.yaml'.
Then you may explicitly use the new storageclass for new deployments.
For example, you may create a PVC that uses this new storageclass, like this (my-pvc-aws-gp3.yaml):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-aws-gp3
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: my-sc-aws-gp3
  volumeMode: Filesystem
Then you're ready to use gp3 volumes.
For example, you may create a pod that uses the PVC mentioned above, like this (my-nginx.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypv
  volumes:
    - name: mypv
      persistentVolumeClaim:
        claimName: my-pvc-aws-gp3
After the pod starts, you can see in the output of 'kubectl get pv' that the PV is bound to the new storageclass:
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS
pvc-b5947eea-edec-47c0-a386-a85ed722a709   2Gi        RWO            Delete           Bound    default/my-pvc-aws-gp3   my-sc-aws-gp3
You may find the AWS volume ID in the field "spec.csi.volumeHandle" in the output of 'kubectl get pv <mypv> -o yaml',
and then check the type of the created volume in the AWS EC2 Console
to confirm that it's gp3 now.
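These two checks can also be scripted instead of using the EC2 Console. A minimal sketch, assuming the aws CLI is installed and configured for the same account and region as the cluster (the PV name is the one from the 'kubectl get pv' output above):

```shell
# Extract the EBS volume ID from the PV's spec.csi.volumeHandle field.
VOLUME_ID=$(kubectl get pv pvc-b5947eea-edec-47c0-a386-a85ed722a709 \
  -o jsonpath='{.spec.csi.volumeHandle}')

# Query the volume type directly from AWS; this should print "gp3".
aws ec2 describe-volumes --volume-ids "$VOLUME_ID" \
  --query 'Volumes[0].VolumeType' --output text
```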
Also, you may switch the default storageclass from "awsebscsiprovisioner" to the newly created "my-sc-aws-gp3", so that you don't have to specify the new storageclass explicitly.
This can be done by editing the "storageclass.kubernetes.io/is-default-class" annotation on both storageclasses:
kubectl patch storageclass awsebscsiprovisioner -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass my-sc-aws-gp3 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
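To verify that the change took effect, you can list the storageclasses (the default one is marked with "(default)" next to its name) or read the annotation directly:

```shell
# The default storageclass is marked "(default)" after its name.
kubectl get storageclass

# Or check the annotation directly; this should print "true".
kubectl get storageclass my-sc-aws-gp3 \
  -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'
```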
After that, if you deploy new, previously not installed addons with
'konvoy deploy addons',
the new gp3-type storageclass will be used automatically for addons that create additional volumes in AWS.
As said above, all of this applies to new deployments only;
already existing volumes can be converted to gp3 using the AWS CLI, like this:
aws ec2 modify-volume --volume-id vol-004ded8bb1fc4c14c --volume-type gp3
The 'aws ec2 modify-volume' command works online; the data on the volume is preserved.
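If there are many existing gp2 volumes, the conversion can be scripted. A minimal sketch, assuming the aws CLI is configured and that every gp2 volume in the region belongs to this cluster; in a shared account you should add a tag filter (for example, on your cluster name) so that you don't touch volumes belonging to other clusters:

```shell
# List all gp2 volumes in the current region...
for vol in $(aws ec2 describe-volumes \
    --filters Name=volume-type,Values=gp2 \
    --query 'Volumes[].VolumeId' --output text); do
  # ...and convert each of them to gp3 in place, without downtime.
  aws ec2 modify-volume --volume-id "$vol" --volume-type gp3
done
```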