Query update too slow on large PostgreSQL table

I'm trying to improve the speed of a simple UPDATE query, but it takes 0.7-1.5 s to update a single row, which is far too slow.

UPDATE users SET firstname = 'test' WHERE userid=2120;

      

Here is the EXPLAIN ANALYZE output:

Update on users  (cost=0.43..8.45 rows=1 width=331) (actual time=0.068..0.068 rows=0 loops=1)
->  Index Scan using users_pkey on users  (cost=0.43..8.45 rows=1 width=331) (actual time=0.039..0.040 rows=1 loops=1)
    Index Cond: (userid = 2120)
Trigger updatemv: time=727.372 calls=1
Total runtime: 727.487 ms
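The plan above is EXPLAIN ANALYZE output, and EXPLAIN ANALYZE actually executes the UPDATE, so a way to reproduce the timing without keeping the test change is to wrap it in a transaction (a minimal sketch, using only the statement already shown above):

BEGIN;
EXPLAIN ANALYZE UPDATE users SET firstname = 'test' WHERE userid = 2120;
ROLLBACK;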

      

The total database size is 20 GB across about 60 tables. The problem is with the "users" table, which has 1.36 million rows. The "users" table has 36 columns (4 bigint, 5 integer, 10 character varying (32 to 255), and the rest boolean), half of which are NULL for a large number of rows. There are also 6 indexes on the "users" table. The database is hosted on Amazon RDS (db.m4.2xlarge: 8 vCPUs, 32 GB RAM, 100 GB SSD). The PostgreSQL version is 9.3.
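To see how much of the 20 GB belongs to this table and its 6 indexes, a quick size check might look like the following (a minimal sketch; only the table name is taken from the question):

-- heap, index, and total on-disk size of the users table
SELECT pg_size_pretty(pg_relation_size('users'))       AS table_size,
       pg_size_pretty(pg_indexes_size('users'))        AS index_size,
       pg_size_pretty(pg_total_relation_size('users')) AS total_size;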

I tried running VACUUM ANALYZE on the tables; it helped somewhat, but the query is still too slow.
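For reference, the per-table form of that maintenance command looks like this (just the standard syntax, nothing specific to this setup):

-- reclaim dead rows and refresh planner statistics for the users table
VACUUM ANALYZE users;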

I have read about upgrading RAM/CPU, tuning settings in postgresql.conf, moving large tables to a separate tablespace, etc., but I'm not sure what the best approach is for handling tables with millions of rows.

At the current growth rate the table will reach about 20 million rows over the next 12 months, so I need a solution that will keep scaling.

Any tips for improving the speed of UPDATE queries on large tables would be appreciated.


2 answers


Thanks @joop, I solved my problem. I had a trigger that updated a materialized view on every change to "users". When I removed it, the UPDATE takes only 0.123 ms instead of 727.487 ms, roughly 6000 times faster.
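The trigger name (updatemv) is visible in the plan in the question. A minimal sketch of how one could list the triggers on the table and drop the offending one; the exact trigger definition is not shown in the post:

-- list user-defined (non-internal) triggers on the users table
SELECT tgname
FROM pg_trigger
WHERE tgrelid = 'users'::regclass
  AND NOT tgisinternal;

-- drop the per-row trigger that refreshed the materialized view
DROP TRIGGER updatemv ON users;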



I will now have to organize the refresh of the materialized view in a different way.
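One common alternative, which is my assumption rather than something stated in this answer, is to refresh the view on a schedule instead of from a per-row trigger; the view name users_mv below is hypothetical:

-- rebuild the whole view periodically (e.g. from cron) instead of on every write
REFRESH MATERIALIZED VIEW users_mv;

Note that this form takes an exclusive lock on the view while it rebuilds; REFRESH MATERIALIZED VIEW CONCURRENTLY avoids blocking readers, but it requires PostgreSQL 9.4 or later plus a unique index on the view, so it is not available on the 9.3 instance described in the question.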


Tuning the settings in postgresql.conf can have a huge impact, and it's free, so I would start there. The default values are too low.
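A minimal sketch of how to check the values currently in effect; the parameter names are standard PostgreSQL settings, and on Amazon RDS they are changed through a DB parameter group rather than by editing postgresql.conf directly:

-- show the memory-related settings that are usually tuned first
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'effective_cache_size',
               'work_mem', 'maintenance_work_mem');

Common rules of thumb are shared_buffers around 25% of RAM and effective_cache_size around 75% (roughly 8 GB and 24 GB on a 32 GB instance), but treat those as starting points to test rather than prescriptions.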


