Problem
In some cases, you may notice that some of the DKP default jobs are missing from the Prometheus configuration after applying custom scrape configurations. This is caused by an upstream Helm behavior: when values files are merged, an array in an override replaces the entire array from the defaults rather than being merged with it. The intention is to allow users to clear arrays if needed, but it has the unintended consequence that arrays from multiple values files cannot be merged. In the case of Prometheus, additional scrape configurations are supplied as an array, so applying an override discards some of the default jobs, specifically the core Kubernetes component jobs that scrape the apiserver.
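For illustration, consider how Helm coalesces a defaults file with an override (these snippets are simplified examples, not the actual chart values):
# defaults.yaml (shipped with DKP)
additionalScrapeConfigs:
  - job_name: 'kubernetes-apiserver'
  - job_name: 'kubernetes-nodes'

# overrides.yaml (user-supplied)
additionalScrapeConfigs:
  - job_name: 'MyCustomJob'

# effective values: the override array replaces the default array outright
additionalScrapeConfigs:
  - job_name: 'MyCustomJob'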
Solution
If you notice that some of your jobs are missing, use the workaround below to ensure that the default jobs are not lost when using overrides:
1. Check whether your AppDeployment has an override configmap associated with it. If it does not, follow our documentation on adding configOverrides:
kubectl get appdeployments.apps.kommander.d2iq.io -o yaml -n kommander kube-prometheus-stack
apiVersion: apps.kommander.d2iq.io/v1alpha3
kind: AppDeployment
metadata:
  finalizers:
    - kommander.mesosphere.io/appdeployment
  name: kube-prometheus-stack
  namespace: kommander
spec:
  appRef:
    kind: ClusterApp
    name: kube-prometheus-stack-44.2.1
  configOverrides:
    name: kube-prometheus-stack-overrides
In this case, the configmap that contains our custom overrides is 'kube-prometheus-stack-overrides'.
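Before proceeding, you can confirm that the override configmap exists (substitute your own override name if it differs):
kubectl get cm -n kommander kube-prometheus-stack-overrides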
2. Next, find the DKP default configmap associated with the application by inspecting its HelmRelease:
kubectl get hr -n kommander kube-prometheus-stack -o yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kube-prometheus-stack
  namespace: kommander
...
spec:
  chart:
    spec:
      chart: kube-prometheus-stack
      reconcileStrategy: ChartVersion
      sourceRef:
        kind: HelmRepository
        name: mesosphere.github.io-charts-staging
        namespace: kommander-flux
  ...
  valuesFrom:
    - kind: ConfigMap
      name: kube-prometheus-stack-44.2.1-d2iq-defaults
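As a shortcut, you can extract the defaults configmap name with a jsonpath query. This is a convenience sketch that assumes the defaults entry is the first item in valuesFrom, as in the output above:
kubectl get hr -n kommander kube-prometheus-stack -o jsonpath='{.spec.valuesFrom[0].name}'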
3. Store the default configuration into a local file:
kubectl get cm -o yaml -n kommander DefaultConfigmapName > prometheus.yaml
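For example, using the defaults configmap name found in the previous step in place of DefaultConfigmapName:
kubectl get cm -o yaml -n kommander kube-prometheus-stack-44.2.1-d2iq-defaults > prometheus.yaml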
4. After doing so, remove the uid, resourceVersion, generation, and creationTimestamp fields from .metadata, and change the .metadata.name field to the override name specified in your AppDeployment resource (kube-prometheus-stack-overrides in this example). Once the unique identifiers from the original configmap are removed, add your additional scrape configs to the relevant section of the yaml (a scripted version of this cleanup follows the example below):
...
additionalScrapeConfigs:
  - job_name: "MyCustomJob"
    static_configs:
      - targets: ["localhost:9090"]
  # Kubernetes API
  - job_name: 'kubernetes-apiserver'
...
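If you prefer to script the cleanup described in step 4, the following sketch performs the same edits with yq (this assumes yq v4 is installed; it is not required for the workaround):
yq -i '
  del(.metadata.uid) |
  del(.metadata.resourceVersion) |
  del(.metadata.generation) |
  del(.metadata.creationTimestamp) |
  .metadata.name = "kube-prometheus-stack-overrides"
' prometheus.yaml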
5. Now that the configmap has been stripped of its previous identifiers and filled in with the custom override name and configs, reapply it:
kubectl apply -f prometheus.yaml
After some time, you will see the helmrelease reconcile. Once the helmrelease moves back into a Ready state, you should see both the kubernetes-apiserver job and your custom configuration in the Prometheus config tab in the UI. Please note: while the helmrelease may be Ready, in some cases it may take extra time for Prometheus to reload.
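To watch the reconciliation from the command line, you can use a standard watch on the HelmRelease (not specific to this workaround):
kubectl get hr -n kommander kube-prometheus-stack -w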