MySQL lock timeout and deadlock errors

I am developing a mobile application whose backend is written in Java, with MySQL as the database.

We have some insert and update operations on database tables with a lot of rows (between 400,000 and 3,000,000). Each operation does not usually need to touch every row in the table, but they can be called simultaneously and end up updating 20% of them.

Sometimes I get the following errors:

Deadlock found when trying to get lock; try restarting transaction

and

Lock wait timeout exceeded; try restarting transaction

I have improved my queries by making them smaller and faster, but I still have a big problem: some operations cannot be completed at all.

My solutions so far have been:

  • Increase server performance (AWS instance from m2.large to c3.2xlarge)
  • SET GLOBAL tx_isolation = 'READ-COMMITTED';

  • Avoid foreign key validation: SET FOREIGN_KEY_CHECKS = 0; (I know this is unsafe, but database locking is not my area of expertise)
  • Set these timeout variables (from SHOW VARIABLES LIKE '%timeout%';):
    • connect_timeout: 10
    • delayed_insert_timeout: 300
    • innodb_lock_wait_timeout: 50
    • innodb_rollback_on_timeout: OFF
    • interactive_timeout: 28800
    • lock_wait_timeout: 31536000
    • net_read_timeout: 30
    • net_write_timeout: 60
    • slave_net_timeout: 3600
    • wait_timeout: 28800
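
For reference, the same settings can also be applied per connection from the Java backend instead of globally. This is only a minimal JDBC sketch (the connection URL, user, and password are placeholders, not my real backend code):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SessionSettings {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password");
             Statement st = conn.createStatement()) {

            // READ COMMITTED for this connection only, instead of SET GLOBAL tx_isolation.
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

            // Per-session lock wait timeout (seconds), instead of changing it server-wide.
            st.execute("SET SESSION innodb_lock_wait_timeout = 50");

            // Inspect the current timeout-related variables, as above.
            try (ResultSet rs = st.executeQuery("SHOW VARIABLES LIKE '%timeout%'")) {
                while (rs.next()) {
                    System.out.println(rs.getString("Variable_name")
                            + " = " + rs.getString("Value"));
                }
            }
        }
    }
}
```

Setting these per session at least avoids changing the behaviour of every other connection on the server.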

But I'm not sure whether these changes have hurt overall performance.

Any idea on how to mitigate these errors?

Note: these other SO answers don't help me:

MySQL lock timeout exceeded

MySQL: "lock wait wait exceeded"

How to change the default Mysql connection timeout when connecting via python?

2 answers


Try to update fewer rows in one transaction.

Instead of updating 20% of the rows in one transaction, update 1% of the rows 20 times.



This will greatly speed up your operations and help you avoid the timeouts.

Note: an ORM is not a good fit for large bulk updates; plain JDBC works better here. Use the ORM for fetching, updating, or deleting a few records at a time. It speeds up the coding phase, not the execution time.
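
A minimal sketch of this idea with plain JDBC, assuming a hypothetical items table with an indexed numeric id column and a status column (the table name, columns, chunk size, id range, and connection details are placeholders, not from the question):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLTransactionRollbackException;

public class ChunkedUpdate {

    private static final int CHUNK_SIZE = 5_000;  // rows per transaction (tune for your workload)
    private static final int MAX_RETRIES = 3;     // retries per chunk on deadlock

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password")) {
            conn.setAutoCommit(false);

            // Walk the id range in small slices so each transaction locks only a few rows.
            long maxId = 3_000_000L; // upper bound of the id range being updated
            for (long start = 0; start < maxId; start += CHUNK_SIZE) {
                updateChunkWithRetry(conn, start, start + CHUNK_SIZE);
            }
        }
    }

    private static void updateChunkWithRetry(Connection conn, long fromId, long toId)
            throws Exception {
        String sql = "UPDATE items SET status = ? WHERE id >= ? AND id < ? AND status = ?";
        for (int attempt = 1; ; attempt++) {
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "PROCESSED");
                ps.setLong(2, fromId);
                ps.setLong(3, toId);
                ps.setString(4, "PENDING");
                ps.executeUpdate();
                conn.commit();  // commit each small chunk separately
                return;
            } catch (SQLTransactionRollbackException e) {
                // Deadlocks (SQLSTATE 40001) are typically reported as this exception:
                // roll back and retry the chunk, as the error message itself suggests.
                conn.rollback();
                if (attempt >= MAX_RETRIES) {
                    throw e;
                }
            }
        }
    }
}
```

Because each chunk commits on its own, a deadlock or lock wait timeout only forces a retry of that small slice instead of rolling back the whole 20% update.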



More of a comment than an answer: if you are in the early stages of development, you might want to ask whether you really need this particular data in a relational database at all. There are faster and more scalable alternatives for storing data from mobile applications, depending on how the data will be used (S3 for large files that are stored once and read frequently, and can be cached; NoSQL stores such as MongoDB for large volumes of unstructured data written once and read many times; etc.).









