MongoDB - run in memory or rely on the cache?

I am going to set up a 5-node MongoDB cluster. It will be read-heavy rather than write-heavy, and the question is which design gives the best performance. These nodes will be dedicated exclusively to MongoDB. For example, say each node has 64 GB of RAM.

The MongoDB docs say:

MongoDB automatically uses all free memory on the computer as cache

Does this mean that as long as my data is smaller than the available RAM, it will be as if I had an in-memory database?

I also read that it is possible to run MongoDB entirely in memory:

http://edgystuff.tumblr.com/post/49304254688/how-to-use-mongodb-as-a-pure-in-memory-db-redis

If my data is fairly dynamic (it can vary from 50 to 75 GB every few hours), is it better in theory to let MongoDB manage its own cache (the default), or to put MongoDB in memory from the start and, if the data outgrows RAM, fall back on swap space (on SSD)?
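For context, the in-memory setup described in the linked article amounts to mounting a tmpfs filesystem and pointing mongod's dbpath at it. A minimal sketch, assuming root access; the mount path and size are illustrative:

```shell
# Create a RAM-backed filesystem, sized below the node's 64 GB of RAM.
mkdir -p /mnt/mongo-ram
mount -t tmpfs -o size=48g tmpfs /mnt/mongo-ram

# Point mongod's data directory at the tmpfs mount.
# --smallfiles and --noprealloc cut down on preallocated-file overhead.
mongod --dbpath /mnt/mongo-ram --smallfiles --noprealloc
```

Note that tmpfs pages can themselves be swapped out under memory pressure, which is also the mechanism by which data exceeding RAM would spill onto SSD swap.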

1 answer


MongoDB's default storage engine memory-maps its data files. This gives efficient access to the data while avoiding double caching (i.e. the MongoDB cache is effectively the OS page cache).

Does this mean that as long as my data is smaller than the available RAM, it will be as if I had an in-memory database?

For read traffic, yes. For write traffic it is different, since MongoDB may have to journal the write operation (depending on configuration) and maintain an oplog.
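That write-side cost shows up in the write concern: with `j: true` the client is acknowledged only after the operation has reached the on-disk journal. A hypothetical mongo-shell one-liner (database and collection names are illustrative):

```shell
# Insert that waits for the journal flush before acknowledging (j: true).
mongo mydb --eval '
  db.events.insert(
    { ts: new Date(), payload: "example" },
    { writeConcern: { w: 1, j: true } }
  )'
```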



Is it better to run MongoDB purely in memory (using tmpfs)?

For read traffic, it is not necessarily any better. Placing the files on tmpfs also avoids double caching (which is good), but the data can still be swapped out. Using a regular filesystem will be just as fast once the data has been loaded.
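One way to check whether the working set is actually cached is to compare resident memory against the mapped data size on a running mongod; if resident is close to mapped, reads are being served from RAM. A small sketch (field names are from the MMAPv1-era `serverStatus` output):

```shell
# serverStatus().mem reports resident, virtual, and (with MMAPv1) mapped sizes.
# resident close to mapped => the data files are effectively fully cached.
mongo --eval 'printjson(db.serverStatus().mem)'
```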

For write traffic, it is faster if the journal and oplog are also placed on tmpfs. Note that in that case a system crash means complete data loss. The performance gain is usually not worth the risk.







