A common tool for understanding resource usage in your cluster is "kubectl top node".
This command gives you an overview of each of your Kubernetes nodes and the CPU and memory usage on them.
Some users have reported seeing the MEMORY% value close to or even over 100% and have asked what that means. For example:
NAME                  CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kube-node-0-kubelet   1272m        18%    19012Mi         79%
kube-node-1-kubelet   1618m        23%    24518Mi         102%
kube-node-2-kubelet   1676m        23%    23121Mi         96%
kube-node-3-kubelet   1436m        20%    22058Mi         91%
kube-node-4-kubelet   1702m        24%    24100Mi         100%
kube-node-5-kubelet   1950m        27%    22887Mi         95%
The MEMORY% value reflects the memory usage reported by cAdvisor on the node: the node's working-set memory as a percentage of its allocatable memory (total capacity minus resources reserved for system components). Because allocatable memory is less than total capacity, this value can exceed 100%.
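To see how a node's allocatable memory compares to its total capacity, you can read both fields from the node object; a minimal sketch using kubectl:

  # Show each node's total memory capacity alongside its allocatable memory
  kubectl get nodes -o custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.memory,ALLOCATABLE:.status.allocatable.memory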
If you see this value climb to or over 100%, your workloads are trying to use more memory than the node can safely provide.
This can trigger the kernel's out-of-memory (OOM) killer, which terminates processes, and you may observe other unpredictable behavior.
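To check whether a container has already been OOM-killed, you can inspect its last terminated state; a minimal sketch, where <pod-name> is a placeholder for one of your pods:

  # Prints "OOMKilled" if the container's previous run was killed for exceeding its memory limit
  kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'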
Your options in that case are to reduce the amount of memory your workloads use, for example by setting appropriate memory requests and limits (sketched below), or to add memory or nodes to your cluster.
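Memory requests and limits are set per container in the pod spec; here is a minimal sketch, with placeholder names and values that you would tune for your own workload:

apiVersion: v1
kind: Pod
metadata:
  name: memory-limited-app   # placeholder name
spec:
  containers:
  - name: app
    image: nginx             # placeholder image
    resources:
      requests:
        memory: "256Mi"      # the scheduler reserves this much memory on the node
      limits:
        memory: "512Mi"      # the container is OOM-killed if it exceeds this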
For more information about these values, here are some links to discussions that might prove helpful:
https://stackoverflow.com/questions/45043489/kubernetes-understanding-memory-usage-for-kubectl-top-node
https://github.com/kubernetes-sigs/metrics-server/issues/193