Issue Description:
Some users have reported that when they drain the worker node where Kibana is running, the Kibana pod starts crash looping and the following event is logged:
{"type":"log","@timestamp":"2021-03-11T21:59:56Z","tags":["warning","migrations"],"pid":7,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_2 and restarting Kibana."}
When Kibana detects that the saved object data needs to be transformed, it starts a migration: it reindexes into a new .kibana_N+1 index and then points the index aliases .kibana and .kibana_task_manager to the upgraded index. If more than one Kibana instance is running, each instance tries to obtain a migration lock by creating the new .kibana_N+1 index, but only one will succeed; the instances that fail to acquire the lock log the event above. The same symptom can appear when a migration is interrupted (for example, by draining the node mid-migration), leaving behind a stale .kibana_N+1 index that no instance is actually migrating.
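Before deleting anything, it is worth confirming that a stale migration index actually exists and checking where the aliases currently point. The commands below are a sketch run from inside the Elasticsearch pod; the index name .kibana_2 matches the log message above, but your suffix may differ:

```shell
# List all .kibana* indices; a .kibana_N+1 index (e.g. .kibana_2) with no
# alias pointing at it typically indicates an interrupted migration.
curl -s 'http://localhost:9200/_cat/indices/.kibana*?v'

# Show which concrete index each alias currently resolves to.
curl -s 'http://localhost:9200/_cat/aliases/.kibana*?v'
```

Only delete the index if no alias points at it and no other Kibana instance is mid-migration.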
Solution:
Open a shell in the Elasticsearch master pod:

> kubectl -n kubeaddons exec --stdin --tty elasticsearch-kubeaddons-master-0 -- /bin/bash

Then query the Elasticsearch API to delete the stale index:
> curl -X DELETE http://localhost:9200/.kibana_2

Next, restart Kibana by scaling the kibana-kubeaddons deployment down and back up:
> kubectl scale deployment kibana-kubeaddons -n kubeaddons --replicas 0
> kubectl scale deployment kibana-kubeaddons -n kubeaddons --replicas 1

In production environments, you should take a snapshot of all .kibana* indices before draining the worker node where Kibana is running, so that you can restore the Kibana indices and their aliases if this issue occurs.
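As a sketch of that precaution, a snapshot can be taken with the Elasticsearch snapshot API. The repository name kibana_backup, the fs repository type, and the /mnt/backups path below are illustrative assumptions; a repository must be registered first, and for the fs type its location must be listed under path.repo in elasticsearch.yml:

```shell
# Register a snapshot repository (assumed name and path).
curl -X PUT 'http://localhost:9200/_snapshot/kibana_backup' \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/mnt/backups"}}'

# Snapshot all .kibana* indices; aliases are restored with the indices.
curl -X PUT 'http://localhost:9200/_snapshot/kibana_backup/pre-drain?wait_for_completion=true' \
  -H 'Content-Type: application/json' \
  -d '{"indices": ".kibana*", "include_global_state": false}'

# To restore later, the affected indices must first be closed or deleted:
# curl -X POST 'http://localhost:9200/_snapshot/kibana_backup/pre-drain/_restore'
```

Cloud repository types (s3, gcs, azure) work the same way once the corresponding repository plugin is configured.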