Can't delete the VM backing a node in Kubernetes
I am running a three-node cluster on GCE. I want to drain one node and delete the VM backing it.
The documentation for the kubectl drain command says:
Once it returns (without giving an error), you can power down the node (or equivalently, if on a cloud platform, delete the virtual machine backing the node)
I am running the following commands:
- Get nodes:

$ kl get nodes
NAME                                      STATUS    AGE
gke-jcluster-default-pool-9cc4e660-6q21   Ready     43m
gke-jcluster-default-pool-9cc4e660-rx9p   Ready     6m
gke-jcluster-default-pool-9cc4e660-xr4z   Ready     23h
- Drain node rx9p:

$ kl drain gke-jcluster-default-pool-9cc4e660-rx9p --force
node "gke-jcluster-default-pool-9cc4e660-rx9p" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: fluentd-cloud-logging-gke-jcluster-default-pool-9cc4e660-rx9p, kube-proxy-gke-jcluster-default-pool-9cc4e660-rx9p
node "gke-jcluster-default-pool-9cc4e660-rx9p" drained
- Delete the VM with gcloud:

$ gcloud compute instances delete gke-jcluster-default-pool-9cc4e660-rx9p
- List the VMs:

$ gcloud compute instances list
In the result, I still see the VM I deleted above, rx9p. And if I run kubectl get nodes, I still see the rx9p node.

What's happening? Is something restarting the VM I'm deleting? Do I have to wait for some timeout between the commands?
You are on the right track by draining the node first.
Nodes (Compute Engine instances) in GKE belong to a managed instance group. If you delete one with a plain gcloud compute instances delete, the managed instance group notices the missing instance and recreates it.

To remove the node properly (after draining it), delete it through the instance group:
gcloud compute instance-groups managed delete-instances \
gke-jcluster-default-pool-9cc4e660-grp \
--instances=gke-jcluster-default-pool-9cc4e660-rx9p \
--zone=...
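
To verify the node is really gone, you can check both the instance group and the cluster. A minimal check, reusing the group name from the command above (the zone is left as a placeholder, as before):

$ gcloud compute instance-groups managed list-instances \
    gke-jcluster-default-pool-9cc4e660-grp \
    --zone=...
$ kubectl get nodes

rx9p should no longer appear in either list, and since delete-instances also reduces the group's target size by one, it will not be recreated.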
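Alternatively, since this is a GKE cluster, you can let GKE shrink the node pool instead of manipulating the instance group yourself. A rough sketch, assuming the cluster is named jcluster (inferred from the node names) and you want to go from three nodes to two:

$ gcloud container clusters resize jcluster \
    --node-pool=default-pool \
    --num-nodes=2 \
    --zone=...

Note that resizing does not let you choose which node is removed, so for deleting a specific instance such as rx9p the delete-instances command above is the right tool. Depending on your gcloud version the flag may be --size instead of --num-nodes.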