When upgrading a Kommander cluster and waiting for each component to start up, you may notice that the 'kube-oidc-proxy' helmrelease is stuck in a "Not Ready" state. Listing the helmreleases shows output that resembles the following:
$ kubectl get helmreleases -n kommander
NAME                          AGE    READY   STATUS
cluster-observer-2360587938   268d   True    Release reconciliation succeeded
dex                           268d   True    Release reconciliation succeeded
dex-k8s-authenticator         268d   True    Release reconciliation succeeded
dkp-insights-management       268d   True    Release reconciliation succeeded
external-dns                  260d   True    Release reconciliation succeeded
gatekeeper                    268d   True    Release reconciliation succeeded
gatekeeper-proxy-mutations    268d   True    Release reconciliation succeeded
gitea                         268d   True    Release reconciliation succeeded
grafana-logging               268d   True    Release reconciliation succeeded
grafana-loki                  33m    False   Release reconciliation succeeded
karma-traefik-certs           268d   True    Release reconciliation succeeded
kommander                     268d   True    Release reconciliation succeeded
kommander-appmanagement       268d   True    Release reconciliation succeeded
kube-oidc-proxy               268d   False   could not find ConfigMap 'kommander/kube-oidc-proxy-overrides'
kube-prometheus-stack         268d   True    Release reconciliation succeeded
kubecost-traefik-certs        268d   True    Release reconciliation succeeded
kubefed                       268d   True    Release reconciliation succeeded
kubernetes-dashboard          268d   True    Release reconciliation succeeded
kubetunnel                    268d   True    Release reconciliation succeeded
logging-operator              268d   True    Release reconciliation succeeded
logging-operator-logging      268d   True    Release reconciliation succeeded
minio-operator                268d   True    Release reconciliation succeeded
minio-tenant                  268d   True    Release reconciliation succeeded
prometheus-adapter            268d   True    Release reconciliation succeeded
prometheus-traefik-certs      268d   True    Release reconciliation succeeded
reloader                      268d   True    Release reconciliation succeeded
traefik                       268d   True    Release reconciliation succeeded
traefik-forward-auth-mgmt     268d   True    Release reconciliation succeeded
velero                        268d   True    Release reconciliation succeeded
This is caused by a ConfigMap name change that is not accounted for on clusters that started on much older versions of Konvoy and were later upgraded to Kommander 2.3 or newer: the helmrelease still references the old ConfigMap name, which no longer exists.
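You can confirm that the ConfigMap named in the error message is indeed absent. This is a standard kubectl lookup; the NotFound error should resemble the following:

$ kubectl get configmap kube-oidc-proxy-overrides -n kommander
Error from server (NotFound): configmaps "kube-oidc-proxy-overrides" not found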
The workaround for this known issue is to patch the kube-oidc-proxy appdeployment so that it no longer references the obsolete "kube-oidc-proxy-overrides" ConfigMap:
kubectl -n kommander patch appdeployment kube-oidc-proxy --type=json -p '[{"op":"remove","path":"/spec/configOverrides"}]'
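If you want to see exactly what the patch removes before running it, you can print the field first. This sketch uses the same /spec/configOverrides path that the patch above targets; the output should show the reference to the obsolete ConfigMap:

$ kubectl -n kommander get appdeployment kube-oidc-proxy -o jsonpath='{.spec.configOverrides}'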
This command causes a new helmrelease to be generated, after which kube-oidc-proxy should start up successfully.
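To confirm the fix, watch the helmrelease until reconciliation completes; READY should flip to True with the same "Release reconciliation succeeded" status as the healthy releases above:

$ kubectl get helmreleases -n kommander kube-oidc-proxy --watch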