Cassandra running out of heap memory

I have a 4-node Cassandra cluster and one column family that has 10 columns, where rows cannot grow very wide (maybe up to 1,000 columns).
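For reference, a hypothetical sketch of what such a table looks like (table and column names are made up for illustration, not my real schema):

    CREATE TABLE peak_records (
        record_key text,      -- partition key (the storage row that stays under ~1,000 cells wide)
        seq        timeuuid,  -- clustering column: up to ~1,000 entries per partition
        col1       text,
        col2       text,
        col3       int,       -- the real table has roughly 10 data columns like these
        PRIMARY KEY (record_key, seq)
    );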

I have a "peak" period during which I insert up to 500,000 records within 5-10 minutes.

I am using the node.js driver node-cassandra-cql.
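The inserts look roughly like this (a minimal sketch assuming typical node-cassandra-cql usage; host names, keyspace, table, and column names are placeholders):

    var cql = require('node-cassandra-cql');

    // Placeholder contact points and keyspace.
    var client = new cql.Client({
        hosts: ['node1:9042', 'node2:9042', 'node3:9042', 'node4:9042'],
        keyspace: 'my_keyspace'
    });

    // One INSERT per record; during a peak this runs up to ~500,000 times
    // within 5-10 minutes.
    function insertRecord(record, callback) {
        client.execute(
            'INSERT INTO peak_records (record_key, seq, col1, col2, col3) VALUES (?, ?, ?, ?, ?)',
            [record.key, record.seq, record.col1, record.col2, record.col3],
            cql.types.consistencies.one,
            callback);
    }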

Three nodes work fine, but one node crashes every time under heavy writes.

Currently the nodes hold around 1.5 GB of data each, and the problem node has a data size of 1.9 GB.

All nodes have a maximum heap size of 1 GB (the machines have 4 GB of RAM, so the Cassandra config (cassandra-env.sh) calculated this heap size by default).
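For reference, cassandra-env.sh derives the default heap from system RAM as roughly max(min(1/2 RAM, 1 GB), min(1/4 RAM, 8 GB)), which works out to 1 GB on a 4 GB machine. A manual override would look something like this (values here are only an example, not a recommendation):

    # conf/cassandra-env.sh -- both values must be set together to override
    # the calculated defaults
    MAX_HEAP_SIZE="2G"
    HEAP_NEWSIZE="200M"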

I am using the default Cassandra config, except that I increased the read/write timeouts.
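Concretely, those are the request timeout settings in cassandra.yaml (the values below are only an illustration of raised timeouts, not my exact numbers):

    # conf/cassandra.yaml -- request timeouts
    read_request_timeout_in_ms: 20000
    write_request_timeout_in_ms: 20000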

Question: Does anyone know what could be causing this? Is the heap size really too small? How should a Cassandra cluster be set up for this use case (heavy writing over a short time interval, while at other times it does almost nothing, or only light writes)?

I have not tried increasing the heap size manually yet; first I would like to know whether there is something else to tweak instead of just increasing it.

node.js cassandra nosql bigdata




