Cassandra TimeOut?

A few months ago (maybe two) I loaded a large dataset into a single Cassandra table (over 2 million rows, each with more than 100 columns). At the time I was able to run a simple command to keep track of the number of records in this table:

SELECT count(*) from mydata limit 11111111;


Several days ago I tried the same command and got the following error:

errors={}, last_host=168.176.61.25


The error itself doesn't say much. After some research on Google, I suspect a timeout: short queries still execute correctly, and the error always appears after about 10 seconds of processing.

From what I understand, the timeouts for Cassandra are set in cassandra.yaml, so I changed the following values:

read_request_timeout_in_ms: 25000
range_request_timeout_in_ms: 25000
request_timeout_in_ms: 25000

However, nothing changed: the query still fails after the same 10 seconds.
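One thing worth checking (an assumption on my part, since the question doesn't say how the query is run): if the command is executed from cqlsh, the 10-second wall may be cqlsh's own client-side timeout, which is separate from the server-side values in cassandra.yaml. In recent versions it can be raised in ~/.cassandra/cqlshrc (older cqlsh releases used a client_timeout setting in the same section instead); a sketch:

```ini
; ~/.cassandra/cqlshrc
[connection]
; client-side request timeout in seconds (the default is 10)
request_timeout = 60
```

If the query fails at exactly 10 seconds regardless of the cassandra.yaml values, a client-side limit like this is a likely culprit.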

Any ideas?

Thank you so much

Fuanka



1 answer


If you only want to count the number of records, don't use count(*); instead, add a counter column to your schema: http://www.datastax.com/documentation/cql/3.0/cql/cql_using/use_counter_t.html
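A minimal sketch of the counter approach (the table and column names here are made up for illustration; note that a counter table may only contain counter columns besides its primary key):

```cql
-- A separate table holding a running count for mydata
CREATE TABLE mydata_counts (
    table_name text PRIMARY KEY,
    row_count counter
);

-- Bump the counter each time a row is inserted into mydata
UPDATE mydata_counts SET row_count = row_count + 1
    WHERE table_name = 'mydata';

-- Reading the count is then a single-row lookup, not a full table scan
SELECT row_count FROM mydata_counts WHERE table_name = 'mydata';
```

The trade-off is that your application must update the counter on every insert and delete, but reads become constant-time instead of scanning 2 million rows.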



If you also need to retrieve all the data for another operation, there are several workarounds for this timeout (paging through the table in chunks, for example); I can provide some if you need.
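One such workaround is to page through the table in small chunks, so no single request runs long enough to hit the timeout. A sketch of the idea in Python, with a stubbed fetch_page standing in for the real driver call (all names here are illustrative; actual drivers such as the DataStax Python driver expose this through fetch_size and paging state):

```python
def fetch_page(paging_state, page_size):
    """Stand-in for a driver call like session.execute(statement,
    paging_state=...) -- here it just serves slices of a fake table."""
    fake_table = list(range(25))  # 25 fake rows for the demo
    start = paging_state or 0
    rows = fake_table[start:start + page_size]
    # The next paging state, or None when the table is exhausted
    next_state = start + page_size if start + page_size < len(fake_table) else None
    return rows, next_state

def count_all_rows(page_size=10):
    """Count rows by paging: each request touches only page_size rows,
    so no single call can exceed the server's request timeout."""
    total = 0
    state = None
    while True:
        rows, state = fetch_page(state, page_size)
        total += len(rows)
        if state is None:
            break
    return total

print(count_all_rows())  # 25 with the fake 25-row table above
```

The same loop works for any per-row operation, not just counting: process each page as it arrives instead of accumulating a total.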
