Persistent Volume Cleanup on Konvoy
Warning: Konvoy uses persistent volumes to back up information important to the cluster, as well as to the services running on the cluster. Never manually delete the contents of a persistent volume unless you are sure you no longer need it, as it cannot be restored. Konvoy recommends 3 persistent volumes with at least 55 GB of storage for each worker node in your cluster; see https://docs.d2iq.com/ksphere/konvoy/latest/install/install-onprem/#worker-nodes.
If you are deploying Konvoy in an on-prem environment, you must ensure that Konvoy starts with clean persistent volumes each time you deploy, or you may experience issues with addons such as Velero.
How do you know if you have an issue with Velero if the deployment did not complete?
From the directory that you ran konvoy up from, you can run the following command to give kubectl access to your cluster:

```shell
export KUBECONFIG=$(pwd)/admin.conf
```

Then you can run kubectl commands against your cluster to list the Velero pods:
```shell
$ kubectl get pods -n velero
NAME                                 READY   STATUS             RESTARTS   AGE
minio-0                              0/1     Error              6          6m49s
minio-1                              0/1     CrashLoopBackOff   6          6m49s
minio-2                              0/1     CrashLoopBackOff   6          6m49s
minio-3                              1/1     Running            1          6m49s
velero-kubeaddons-5f9dc9c54b-pzml5   0/1     Init:0/1           1          7m53s
```

Once we know the names of all of the pods, we can use kubectl to check the logs for useful information:
```shell
$ kubectl logs -p minio-0 -n velero
You are running an older version of MinIO released 4 months ago
Update: https://docs.min.io/docs/deploy-minio-on-kubernetes
ERROR Unable to initialize backend: Disk https://minio-0.minio-hl-svc.velero.svc.cluster.local:9000/export: corrupted backend format, please join https://slack.min.io for assistance.
```

So we can see that MinIO was unable to initialize the disk on at least one of its pods. After checking the others and confirming that they show the same error, we know that our next steps are to run konvoy down and then clean the persistent volumes on our workers.
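When a deployment has many pods, the unhealthy ones can be picked out of the listing mechanically rather than by eye. The sketch below is not from the Konvoy docs; it filters a captured copy of the pod listing above on the STATUS column (on a live cluster you would pipe `kubectl get pods -n velero --no-headers` into the same awk filter):

```shell
# Filter a pod listing down to pods whose STATUS column is not Running.
# 'pods' is a captured copy of the earlier output; on a live cluster use:
#   kubectl get pods -n velero --no-headers | awk '$3 != "Running" { print $1, $3 }'
pods='NAME READY STATUS RESTARTS AGE
minio-0 0/1 Error 6 6m49s
minio-1 0/1 CrashLoopBackOff 6 6m49s
minio-2 0/1 CrashLoopBackOff 6 6m49s
minio-3 1/1 Running 1 6m49s
velero-kubeaddons-5f9dc9c54b-pzml5 0/1 Init:0/1 1 7m53s'

# NR > 1 skips the header row; $3 is the STATUS column.
printf '%s\n' "$pods" | awk 'NR > 1 && $3 != "Running" { print $1, $3 }'
```

This prints only the four failing pods, giving you the list of names to feed into kubectl logs.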
After konvoy down completes, ssh into each worker and run:
```shell
$ lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0  100G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   99G  0 part
  ├─centos-root 253:0    0   94G  0 lvm  /
  └─centos-home 253:1    0    5G  0 lvm  /home
sdb               8:16   0  100G  0 disk /mnt/disks/b2dd935b-189d-4f51-9f6b-a34e84ecc9dd
sdc               8:32   0  100G  0 disk /mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826
sdd               8:48   0  100G  0 disk /mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16
sr0              11:0    1 1024M  0 rom
```

On this host, there are 3 disks set up for Konvoy: sdb, sdc, and sdd. You will want to confirm the names of your disks and their configuration before continuing.
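If you have many workers to check, the devices mounted under /mnt/disks can be extracted from lsblk output automatically. This is a sketch of my own, not part of the Konvoy procedure; it runs against a captured sample of the listing above (on a worker you would run the `lsblk -nr` pipeline from the comment directly):

```shell
# Pick out the device names mounted under /mnt/disks.
# 'sample' is a captured copy of NAME/MOUNTPOINT pairs; on a worker run:
#   lsblk -nr -o NAME,MOUNTPOINT | awk '$2 ~ /^\/mnt\/disks\// { print $1 }'
sample='sda1 /boot
sdb /mnt/disks/b2dd935b-189d-4f51-9f6b-a34e84ecc9dd
sdc /mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826
sdd /mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16
sr0'

# Match only rows whose mountpoint starts with /mnt/disks/ and print the device.
printf '%s\n' "$sample" | awk '$2 ~ /^\/mnt\/disks\// { print $1 }'
```

Here it prints sdb, sdc, and sdd, matching the 3 Konvoy disks identified above.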
We can see that the only contents of the /mnt/disks directory are the 3 disks set up for konvoy:
```shell
$ cd /mnt/disks
$ ls -la
total 0
drwxr-xr-x. 5 root       root  138 Dec 22 05:36 .
drwxr-xr-x. 3 root       root   19 Dec 22 05:35 ..
drwxr-xr-x  4 root       root   38 Jan 13 16:07 a2dc2d04-f13a-404c-9ace-db1d83c3a826
drwxrwsr-x  3 twindebank 1000   19 Jan 13 17:09 b2dd935b-189d-4f51-9f6b-a34e84ecc9dd
drwxrwsr-x  4 twindebank 1000   40 Jan 13 16:58 fe17b104-1eb6-4bfc-b314-f1f9b0f81c16
```

This allows us to search all of the disks at the same time:
```shell
$ sudo find /mnt/disks -mindepth 2
/mnt/disks/b2dd935b-189d-4f51-9f6b-a34e84ecc9dd/nodes
/mnt/disks/b2dd935b-189d-4f51-9f6b-a34e84ecc9dd/nodes/0
/mnt/disks/b2dd935b-189d-4f51-9f6b-a34e84ecc9dd/nodes/0/node.lock
/mnt/disks/b2dd935b-189d-4f51-9f6b-a34e84ecc9dd/nodes/0/_state
/mnt/disks/b2dd935b-189d-4f51-9f6b-a34e84ecc9dd/nodes/0/_state/node-0.st
/mnt/disks/b2dd935b-189d-4f51-9f6b-a34e84ecc9dd/nodes/0/_state/global-0.st
/mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826/nodes
/mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826/nodes/0
/mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826/nodes/0/node.lock
/mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826/nodes/0/_state
/mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826/nodes/0/_state/node-0.st
/mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826/nodes/0/_state/global-1.st
/mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826/nodes/0/indices
/mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826/nodes/0/indices/1ZFzG_EpRhGdelbQC2fy-Q
/mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826/nodes/0/indices/1ZFzG_EpRhGdelbQC2fy-Q/_state
/mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826/nodes/0/indices/1ZFzG_EpRhGdelbQC2fy-Q/_state/state-22.st
/mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826/nodes/0/indices/zWbhF-fDTi69wgr8wBeo9A
/mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826/nodes/0/indices/zWbhF-fDTi69wgr8wBeo9A/_state
/mnt/disks/a2dc2d04-f13a-404c-9ace-db1d83c3a826/nodes/0/indices/zWbhF-fDTi69wgr8wBeo9A/_state/state-4.st
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/.minio.sys
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/.minio.sys/multipart
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/.minio.sys/format.json
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/.minio.sys/config
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/.minio.sys/config/config.json
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/.minio.sys/config/config.json/part.1
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/.minio.sys/config/config.json/xl.json
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/.minio.sys/config/iam
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/.minio.sys/config/iam/format.json
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/.minio.sys/config/iam/format.json/part.1
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/.minio.sys/config/iam/format.json/xl.json
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/.minio.sys/tmp
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups/velero-kubeaddons-default-20200113210831
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups/velero-kubeaddons-default-20200113210831/velero-kubeaddons-default-20200113210831-logs.gz
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups/velero-kubeaddons-default-20200113210831/velero-kubeaddons-default-20200113210831-logs.gz/part.1
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups/velero-kubeaddons-default-20200113210831/velero-kubeaddons-default-20200113210831-logs.gz/xl.json
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups/velero-kubeaddons-default-20200113210831/velero-backup.json
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups/velero-kubeaddons-default-20200113210831/velero-backup.json/part.1
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups/velero-kubeaddons-default-20200113210831/velero-backup.json/xl.json
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups/velero-kubeaddons-default-20200113210831/velero-kubeaddons-default-20200113210831.tar.gz
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups/velero-kubeaddons-default-20200113210831/velero-kubeaddons-default-20200113210831.tar.gz/part.1
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups/velero-kubeaddons-default-20200113210831/velero-kubeaddons-default-20200113210831.tar.gz/xl.json
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups/velero-kubeaddons-default-20200113210831/velero-kubeaddons-default-20200113210831-volumesnapshots.json.gz
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups/velero-kubeaddons-default-20200113210831/velero-kubeaddons-default-20200113210831-volumesnapshots.json.gz/part.1
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/backups/velero-kubeaddons-default-20200113210831/velero-kubeaddons-default-20200113210831-volumesnapshots.json.gz/xl.json
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/metadata
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/metadata/revision
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/metadata/revision/part.1
/mnt/disks/fe17b104-1eb6-4bfc-b314-f1f9b0f81c16/velero/metadata/revision/xl.json
```

Reviewing the contents, we see that there are still items left over from our recent konvoy down. These will block MinIO from successfully deploying to these disks should we want to redeploy Konvoy. To fix this, we can run:
```shell
sudo find /mnt/disks -mindepth 2 -delete
```

This cleans all of the disks in the /mnt/disks directory and makes them ready for use. Perform these steps on each worker to enable konvoy up to complete successfully.
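If you would like to rehearse the cleanup before pointing it at real disks, the same find invocation can be run against a throwaway directory tree. This sketch uses placeholder names (disk-a and disk-b stand in for the UUID-named mount directories) and shows that -mindepth 2 removes the contents while leaving the mount directories themselves in place:

```shell
# Rehearse the cleanup on a scratch directory that mimics /mnt/disks.
# disk-a and disk-b are placeholders for the UUID-named mount directories.
demo=$(mktemp -d)
mkdir -p "$demo/disk-a/nodes/0" "$demo/disk-b/.minio.sys/config"
touch "$demo/disk-a/nodes/0/node.lock" "$demo/disk-b/.minio.sys/format.json"

# Dry run: -mindepth 2 only matches paths below the top-level directories,
# so the mount points themselves are never listed (or deleted).
find "$demo" -mindepth 2

# The real cleanup: contents go, the two top-level directories stay.
find "$demo" -mindepth 2 -delete
find "$demo" -mindepth 1   # prints only disk-a and disk-b
```

Running the dry-run form first on a worker (sudo find /mnt/disks -mindepth 2) lets you review exactly what will be deleted before adding -delete.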