Overview

If you attempt to upgrade a V1.7.0 or V1.7.1 FIPS Mode Konvoy cluster to 1.7.2, the upgrade will fail with the following message:
[upgrade/apply] FATAL: failed comparing the current etcd version "v3.4.13.fips" to the desired one "3.4.13+fips": could not parse pre-release/metadata (.fips) in version "v3.4.13.fips"
This problem occurs because of a change in the naming conventions for the special etcd version used for FIPS mode. In a future release of Konvoy, the upgrade automation will be enhanced to handle these changes automatically, but for Konvoy 1.7.2 you must perform the steps below.
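The root cause is a version-parsing issue: "v3.4.13.fips" is not a valid semantic version, because the extra ".fips" segment is neither a pre-release nor build metadata, while "3.4.13+fips" attaches "fips" as legal build metadata. A minimal sketch using a simplified semver-style regular expression (illustrative only, not the exact pattern the upgrade tooling uses) shows the difference:

```shell
# Simplified semver shape: MAJOR.MINOR.PATCH with optional -prerelease and +buildmetadata
SEMVER='^v?[0-9]+\.[0-9]+\.[0-9]+(-[0-9A-Za-z.-]+)?(\+[0-9A-Za-z.-]+)?$'

echo "3.4.13+fips"  | grep -Eq "$SEMVER" && echo "3.4.13+fips: valid"    # "+fips" is build metadata
echo "v3.4.13.fips" | grep -Eq "$SEMVER" || echo "v3.4.13.fips: invalid" # trailing ".fips" cannot be parsed
```

Note that the replacement image tag uses an underscore ("v3.4.13_fips") because container image tags may not contain the "+" character.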
Note that you can perform these steps before attempting the upgrade, and avoid the failure mentioned above.
Solution

To complete the upgrade after experiencing this failure, you must perform some steps manually. First, edit your cluster.yaml file, and change the version/imageTag/configVersion values to match those specified in this snippet:
kind: ClusterProvisioner
...
spec:
  ...
  version: v1.7.2
---
kind: ClusterConfiguration
...
spec:
  ...
  kubernetes:
    version: 1.19.9+fips.1
    imageRepository: mesosphere
  ...
  etcd:
    imageRepository: mesosphere
    imageTag: v3.4.13_fips
  containerNetworking:
    calico:
      version: v3.17.3
  ...
  addons:
    - configRepository: https://github.com/mesosphere/kubernetes-base-addons
      configVersion: stable-1.19-3.4.1
      addonsList:
        ...
    - configRepository: https://github.com/mesosphere/kubeaddons-dispatch
      configVersion: stable-1.19-1.4.2
      addonsList:
        - name: dispatch
          enabled: false
    - configRepository: https://github.com/mesosphere/kubeaddons-kommander
      configVersion: stable-1.19-1.3.2
      addonsList:
        - name: kommander
          enabled: true
          version: v1.7.2
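After saving your edits, a quick grep (a hypothetical sanity check, assuming cluster.yaml is in your current directory) can confirm the version-related values took effect, in particular that the etcd imageTag now uses the underscore form:

```shell
# List the version-related lines in cluster.yaml; verify imageTag reads v3.4.13_fips
grep -nE 'version:|imageTag:|configVersion:' cluster.yaml
```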
After making those changes, you will need to perform some steps on each of the control plane nodes. If you have Ansible available on the machine you are using to upgrade Konvoy, follow the 'With Ansible' steps. If you do not (or cannot) use Ansible, skip to the 'Without Ansible' section below.
With Ansible

If you have Ansible in your environment, run the following commands from your Konvoy working directory:
ANSIBLE_HOST_KEY_CHECKING=false ansible -i inventory.yaml control-plane --become -m shell -a "crictl rmi mesosphere/etcd:v3.4.13.fips"
ANSIBLE_HOST_KEY_CHECKING=false ansible -i inventory.yaml control-plane --become -m shell -a "crictl pull mesosphere/etcd:v3.4.13_fips"
ANSIBLE_HOST_KEY_CHECKING=false ansible -i inventory.yaml control-plane --become -m shell -a "sed -i 's/v3.4.13.fips/v3.4.13_fips/g' /etc/kubernetes/manifests/etcd.yaml /etc/kubernetes/kubeadm-init-config.yaml"

If these commands fail, you may need to update your Ansible SSH private key file path. To update it, edit the `inventory.yaml` file and set the all.vars.ansible_ssh_private_key_file value to the correct path for your Konvoy SSH key:
all.vars.ansible_ssh_private_key_file: "/path/to/my_konvoy_ssh_key.pem"

Then try running the Ansible commands again.
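To see which key path is currently configured (a hypothetical check, assuming inventory.yaml is in your Konvoy working directory):

```shell
# Print the SSH private key path Ansible is configured to use
grep -n 'ansible_ssh_private_key_file' inventory.yaml
```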
Without Ansible

If you do not wish to use (or cannot use) Ansible, perform the following steps manually on each of your control plane nodes:
crictl rmi mesosphere/etcd:v3.4.13.fips
crictl pull mesosphere/etcd:v3.4.13_fips
sed -i 's/v3.4.13.fips/v3.4.13_fips/g' /etc/kubernetes/manifests/etcd.yaml /etc/kubernetes/kubeadm-init-config.yaml
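If you want to preview what the sed substitution does before touching the real manifests, this illustrative snippet applies it to a throwaway sample line. (The unescaped dots in the pattern match any character, so the pattern also matches the literal ".fips" suffix, which is the intent; it also makes the command idempotent if run twice.)

```shell
# Illustrative only: run the same substitution against a sample manifest line
sample=$(mktemp)
echo 'image: mesosphere/etcd:v3.4.13.fips' > "$sample"
sed -i 's/v3.4.13.fips/v3.4.13_fips/g' "$sample"
cat "$sample"    # prints: image: mesosphere/etcd:v3.4.13_fips
rm -f "$sample"
```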
After performing the above steps on the control plane nodes, you can complete the upgrade by following the documented V1.7.2 upgrade steps, with the exception of updating cluster.yaml, as that step was completed above.