If, after running 'kubectl delete pod', a pod is stuck in the Terminating state, and the kubelet on the node where the pod was running logs errors about unmounting secret or token volumes like this:
Jun 10 08:11:03 ehl-worker-3.edison kubelet[2227]: E0610 08:11:03.936999 2227 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/112295a6-27b6-4ae0-9443-863bfe00044f-post-delete-sa-token-rdzrj podName:112295a6-27b6-4ae0-9443-863bfe00044f nodeName:}" failed. No retries permitted until 2021-06-10 08:13:05.936960896 +0000 UTC m=+86972.055571503 (durationBeforeRetry 2m2s). Error: "UnmountVolume.TearDown failed for volume \"post-delete-sa-token-rdzrj\" (UniqueName: \"kubernetes.io/secret/112295a6-27b6-4ae0-9443-863bfe00044f-post-delete-sa-token-rdzrj\") pod \"112295a6-27b6-4ae0-9443-863bfe00044f\" (UID: \"112295a6-27b6-4ae0-9443-863bfe00044f\") : unlinkat /var/lib/kubelet/pods/112295a6-27b6-4ae0-9443-863bfe00044f/volumes/kubernetes.io~secret/post-delete-sa-token-rdzrj: device or resource busy"
then a possible fix on CentOS 7 / RHEL 7 systems running Linux kernel 3.10 is to set fs.may_detach_mounts to 1:
echo 1 > /proc/sys/fs/may_detach_mounts
echo fs.may_detach_mounts=1 >> /etc/sysctl.conf
sysctl -p
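Note that blindly appending to /etc/sysctl.conf duplicates the line if the commands are run more than once (for example, by a configuration-management tool). A minimal sketch of an idempotent variant, using a demo file path instead of the real /etc/sysctl.conf so it can be tried without root:

```shell
#!/bin/sh
# Sketch: persist fs.may_detach_mounts=1 without duplicating the entry.
# CONF is a stand-in path for demonstration; on a real node point it at
# /etc/sysctl.conf and run `sysctl -p` afterwards as root.
CONF="${CONF:-./sysctl.conf.demo}"
touch "$CONF"

# Append only if the exact setting is not already present.
grep -q '^fs.may_detach_mounts=1$' "$CONF" || echo 'fs.may_detach_mounts=1' >> "$CONF"
# Running the same guard again is a no-op.
grep -q '^fs.may_detach_mounts=1$' "$CONF" || echo 'fs.may_detach_mounts=1' >> "$CONF"

echo "entries: $(grep -c '^fs.may_detach_mounts=1$' "$CONF")"
```

On the actual node the runtime value can also be set with `sysctl -w fs.may_detach_mounts=1` instead of writing to /proc directly; both have the same effect until reboot.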
Details about the issue can be found here:
https://bugzilla.redhat.com/show_bug.cgi?id=1823374