In some cases, you may notice that some of the D2iQ default jobs are missing from the Prometheus configuration after applying custom scrape configurations. This is due to upstream Helm behavior: overriding an array value replaces the entire array rather than merging it. The intention is to allow users to clear arrays when needed, but it has the unintended consequence of preventing merges across multiple arrays. In the case of Prometheus, additional scrape configurations are merged in this manner, so some of the default jobs are lost, specifically the core Kubernetes component jobs that scrape the apiserver.
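As a minimal illustration of this Helm behavior (the value name and jobs here are hypothetical), an override that sets an array replaces the base array wholesale rather than appending to it:

# base values.yaml
scrapeConfigs:
  - job_name: 'kubernetes-apiserver'
  - job_name: 'kubernetes-nodes'

# override.yaml
scrapeConfigs:
  - job_name: 'MyCustomJob'

# effective values after merging the override: the default jobs are gone
scrapeConfigs:
  - job_name: 'MyCustomJob'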
If you notice that some of your jobs are missing, use the workaround below to ensure that the default jobs are not lost when using overrides:
1. Validate that your AppDeployment has an override ConfigMap associated with it. If it does not, follow our documentation on adding configOverrides:
kubectl get appdeployments.apps.kommander.d2iq.io -o yaml -n kommander kube-prometheus-stack
In this case, the ConfigMap that contains our custom overrides is 'kube-prometheus-stack-overrides'.
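If the override is configured, the AppDeployment will reference the ConfigMap under spec.configOverrides, similar to the trimmed example below (the apiVersion and appRef values shown are illustrative and may differ in your environment):

apiVersion: apps.kommander.d2iq.io/v1alpha2
kind: AppDeployment
metadata:
  name: kube-prometheus-stack
  namespace: kommander
spec:
  appRef:
    kind: ClusterApp
    name: kube-prometheus-stack
  configOverrides:
    name: kube-prometheus-stack-overrides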
2. After doing this, find the D2iQ default ConfigMap associated with the application in question by inspecting its HelmRelease:
kubectl get hr -n kommander kube-prometheus-stack -o yaml
In the output, look under spec.valuesFrom for the entry with kind: ConfigMap; its name is the default ConfigMap you need.
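The relevant section of the HelmRelease output might look similar to this (DefaultConfigmapName is a placeholder; use whatever name appears in your output):

spec:
  valuesFrom:
  - kind: ConfigMap
    name: DefaultConfigmapName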
3. Store the default configuration into a local file:
kubectl get cm -o yaml -n kommander DefaultConfigmapName > prometheus.yaml
4. After doing so, remove the metadata.uid, metadata.resourceVersion, metadata.generation, and metadata.creationTimestamp fields, and change the metadata.name field to the override name you have specified in your AppDeployment resource. Once the unique identifiers from the original ConfigMap are removed, you can add your additional scrape configs to the relevant section of the YAML:
- job_name: "MyCustomJob"
- targets: ["localhost:9090"]
# Kubernetes API
- job_name: 'kubernetes-apiserver'
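Putting it together, the cleaned-up override ConfigMap might look like the sketch below. This assumes the scrape configs live under the chart's prometheus.prometheusSpec.additionalScrapeConfigs value (as in upstream kube-prometheus-stack) and that the override data key is values.yaml; your default file will contain many more values, all of which should be kept:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-prometheus-stack-overrides   # must match the override name in the AppDeployment
  namespace: kommander
data:
  values.yaml: |
    prometheus:
      prometheusSpec:
        additionalScrapeConfigs:
          # Custom job added alongside the preserved defaults
          - job_name: "MyCustomJob"
            static_configs:
              - targets: ["localhost:9090"]
          # Default job copied verbatim from the D2iQ default ConfigMap
          # (full body omitted here for brevity)
          - job_name: 'kubernetes-apiserver'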
5. Now that the ConfigMap has been stripped of its previous identifiers and filled in with the custom override name and configs, all that is left is to reapply it:
kubectl apply -f prometheus.yaml
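To watch the reconciliation progress, you can poll the HelmRelease status:

kubectl get hr -n kommander kube-prometheus-stack -w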
After some time, you will see the HelmRelease reconcile. Once the HelmRelease moves back into a Ready state, you should be able to see both the kubernetes-apiserver job and your custom configuration in the Prometheus config tab in the UI. Please note: while the HelmRelease may be Ready, in some cases it may take extra time for Prometheus to reload.