MongoDB Performance Insert - Huge Table with Multiple Indexes

I am testing MongoDB for use with a huge collection of about 30 billion records of 200 bytes each. I understand that sharding is necessary at this size, so I am trying to get 1 to 2 billion records on one machine. I reached 1 billion records on a machine with dual CPUs / 6 cores and 64 GB of RAM. I ran mongoimport with no indexes and the speed was OK (14k records/s on average). I then added indexes, which took a very long time, but that's fine as it's a one-off task. Now, however, inserting new records into the database takes a very long time. As far as I can tell, the machine is not under load while inserting (CPU, RAM, and I/O all look fine). How can I speed up the insertion of new records?
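One thing worth checking regardless of the answer below: whether new records are being inserted one at a time or in batches. A minimal sketch of batching, in pure Python, with a hypothetical `insert_batch` callback standing in for a real driver-level bulk call (e.g. pymongo's `collection.insert_many(batch, ordered=False)`):

```python
def chunked(records, batch_size=1000):
    """Yield successive fixed-size batches so each round trip carries many documents."""
    batch = []
    for rec in records:
        batch.append(rec)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # trailing partial batch
        yield batch

def bulk_insert(records, insert_batch, batch_size=1000):
    """Feed batches to a driver-level bulk-insert callback; returns the count inserted.

    `insert_batch` is a placeholder for whatever your driver provides,
    e.g. lambda b: collection.insert_many(b, ordered=False).
    """
    inserted = 0
    for batch in chunked(records, batch_size):
        insert_batch(batch)
        inserted += len(batch)
    return inserted
```

Batching amortizes per-request overhead; it will not by itself fix index-update costs, but it removes one common source of slow single-document inserts.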



1 answer

I would recommend adding this host to MMS - make sure you install with munin-node support, which will give you more information. This will let you track down what might be slowing you down. Sorry I can't be more specific in the answer, but there are many, many possible explanations here. Some general points:

  • Adding indexes means that the indexes, as well as your working data set, should now be in RAM; this may strain your resources (look for page faults)
  • Now that you have the indexes, they must be updated on every insert - if everything fits in RAM, that should be OK; see the first point.
  • You should also check your disk I/O to see how it is performing - what does your background flush time look like?
  • Are you using an appropriate filesystem (XFS, ext4) and a kernel version later than 2.6.25? (earlier versions have problems with fallocate())

Some good general monitoring information is provided here:


