How to restart a single live node in a Cassandra multi-node cluster?

I have a production Cassandra cluster of 6 nodes. I made some changes to the cassandra.yaml file on each node and hence need to restart them. How can I do this without losing data or causing cluster-related issues? Can I just kill the Cassandra process on a specific node and start it again? Cluster information: 6 nodes, all active, using the AWS Ec2Snitch.

Thanks.





2 answers


If you are using a replication factor greater than 1 and are not using ALL as your read/write consistency level, you can follow the steps below without downtime or data loss. If you do have one of those limitations, you will need to increase the replication factor or change the consistency level before proceeding.
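A minimal sketch of that sequence, assuming a package install where Cassandra runs as a service (the service name is an assumption; adjust to your setup). Do one node at a time:

nodetool drain                  # flush memtables and stop accepting writes on this node
sudo service cassandra stop
# edit cassandra.yaml while the node is down
sudo service cassandra start
nodetool status                 # wait for UN (Up/Normal) before moving to the next node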



In Cassandra, if durable writes are enabled, you should not lose data even without these steps: there is a commit log replay mechanism for the case of an unclean restart, so nothing acknowledged should be lost if you just restart, but replaying the commit log can take a while.
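As a quick check, durable writes are a per-keyspace option; assuming Cassandra 3.x or later, where schema metadata lives in system_schema, you can verify them from any node:

cqlsh -e "SELECT keyspace_name, durable_writes FROM system_schema.keyspaces;"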

The steps above are part of the official upgrade process and should be the "safest" option. Alternatively, you can do nodetool flush + restart; this ensures minimal commit log replay and may be faster than draining.
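A sketch of that alternative, with the same service-name assumption as above:

nodetool flush                  # persist memtables so there is little commit log left to replay
sudo service cassandra restart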





Is it possible to just kill the cassandra process on that node and start it again?

Essentially, yes. I assume you have a 6-node cluster with RF 3, so this shouldn't be a big problem. If you want to do what I call a "clean shutdown", you can first run the following commands:

nodetool disablegossip          # stop gossiping so peers mark this node down cleanly
nodetool drain                  # flush memtables and stop accepting writes

And then (depending on your setup):



sudo service cassandra stop

Or:

kill `cat cassandra.pid`        # if you started Cassandra manually with -p to record its pid

Note that even if you don't follow these steps, you should still be fine: drain just flushes memtables to disk. If you skip it, the commit log will be replayed at startup to bring the node back to a consistent state anyway. These steps simply speed up the next boot.
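If you do skip the drain, you can watch the commit log replay in the system log after startup; the log path here is an assumption based on a typical package install:

grep -i replay /var/log/cassandra/system.log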









