Very slow I/O requests in Laravel

I have a Laravel application that needs to insert/update thousands of records per second in a loop. My problem is that my database can only insert/update 100-150 records per second. I increased the amount of RAM allocated to my database, but it didn't help.


Is there a way to increase MySQL's write speed to thousands of records per second? Please provide optimal configurations for performance tuning.

And PLEASE do not question the code; it is correct. This is not a code issue, because I have no problem with MongoDB, but I have to use MySQL.

My storage engine is InnoDB.

+3




2 answers


For inserts, you can look at the INSERT DELAYED syntax. It will improve insert performance, but it will not help with updates, and the syntax has since been deprecated. This post offers an alternative for updates, but it involves custom replication.

At one point my company was able to speed up inserts by writing the SQL to a file and then loading it with MySQL's LOAD DATA INFILE, but I believe we found that it requires the mysql command-line client to be installed on the server.



I have also found that batching inserts and updates is often faster. So if you are calling INSERT 2,000 times, you may be better off doing 10 inserts of 200 rows each. This reduces locking requirements and the number of round trips over the wire.
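The batching advice above can be sketched as follows. This is a minimal illustration, not the asker's code: an in-memory SQLite database and a made-up table `t (a, b)` stand in for MySQL, but the same multi-row `INSERT ... VALUES (...), (...), ...` pattern works with any MySQL client library.

```python
import sqlite3

# In-memory database stands in for MySQL; the batching pattern is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")

rows = [(i, i * 2) for i in range(2000)]  # 2k rows to insert

BATCH = 200  # 10 multi-row inserts of 200 rows instead of 2000 single-row INSERTs
for start in range(0, len(rows), BATCH):
    chunk = rows[start:start + BATCH]
    # One "INSERT ... VALUES (?,?),(?,?),..." statement per batch
    placeholders = ",".join(["(?,?)"] * len(chunk))
    flat = [v for row in chunk for v in row]
    conn.execute(f"INSERT INTO t (a, b) VALUES {placeholders}", flat)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 2000
```

The win comes from amortizing per-statement parsing and per-round-trip latency across 200 rows at a time.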

+2




Inserting rows one at a time, with each statement autocommitted, has two sources of overhead.

Each transaction has overhead, possibly more than the insert itself. So the trick is to insert multiple rows in a single transaction. This requires a code change, not a configuration change.

Each INSERT statement has overhead. A single-row insert is roughly 90% overhead and 10% actual insert.

The optimum is 100-1000 rows inserted per transaction.



For fast inserts:

  • Best is LOAD DATA, if you are starting from a CSV file. If you have to build the CSV file first, it is debatable whether this approach gains you anything.
  • Second best is a multi-row INSERT: INSERT INTO t (a,b) VALUES (1,2), (2,3), (44,55), .... I recommend 1000 rows per statement, with a COMMIT after each statement. That will most likely let you insert well over 1000 rows per second.
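A sketch of the second approach, chunking into roughly 1000-row statements with a COMMIT after each. The table `t (a, b)` is hypothetical and sqlite3 stands in for a MySQL connection here, but the chunk-and-commit pattern is identical:

```python
import sqlite3

# sqlite3 stands in for a MySQL connection; the chunking pattern is identical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")

rows = [(i, i * i) for i in range(2500)]

CHUNK = 1000  # ~1000 rows per multi-row INSERT
inserted = 0
for start in range(0, len(rows), CHUNK):
    chunk = rows[start:start + CHUNK]
    # Integer literals keep the sketch simple; real code should use
    # placeholders or proper escaping instead of string formatting.
    values = ",".join(f"({a},{b})" for a, b in chunk)
    conn.execute(f"INSERT INTO t (a, b) VALUES {values}")
    conn.commit()  # one COMMIT per statement, as recommended above
    inserted += len(chunk)

print(inserted)  # 2500
```

Committing once per 1000-row statement keeps transactions in the 100-1000-row sweet spot described above, instead of paying transaction overhead on every row.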

Another problem... Since every index is updated as each row is added, you may hit an I/O bottleneck before reaching this goal. InnoDB automatically "delays" updates to non-unique secondary indexes (so INSERT DELAYED is not needed), but that work eventually has to be written out. (This is where the size of innodb_buffer_pool_size comes into play.)
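For the configuration side, a my.cnf fragment might look like the sketch below. The value is an assumption for illustration; MySQL's documentation allows the buffer pool to take a large share of RAM on a dedicated database server, so tune it to your machine:

```ini
[mysqld]
# Assumed value for a dedicated server with 8 GB of RAM; adjust for your hardware.
# A larger buffer pool lets InnoDB defer secondary-index writes longer.
innodb_buffer_pool_size = 5G
```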

If "thousands" of rows per second is a one-time task, you can stop here. If you expect it to continue "forever", there are other issues to deal with. See High Velocity Ingestion.

+1








