Index does not improve performance

I have a simple table structure in my Postgres database:

CREATE TABLE device
(
  id bigint NOT NULL,
  version bigint NOT NULL,
  device_id character varying(255),
  date_created timestamp without time zone,
  last_updated timestamp without time zone,
  CONSTRAINT device_pkey PRIMARY KEY (id)
)

I often query data based on the device_id column. The table has 3.5 million rows, so this leads to performance problems:

"Seq Scan on device  (cost=0.00..71792.70 rows=109 width=8) (actual time=352.725..353.445 rows=2 loops=1)"
"  Filter: ((device_id)::text = '352184052470420'::text)"
"Total runtime: 353.463 ms"

Hence, I created an index on the device_id column:

CREATE INDEX device_device_id_idx
  ON device
  USING btree
  (device_id);
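
To confirm that the index actually exists on the table, one quick sanity check (a sketch using the standard pg_indexes catalog view) is:

-- List all indexes defined on the device table.
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'device';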

My problem, however, is that the database is still using sequential scans and not index scans. The query plan after index creation is the same:

"Seq Scan on device  (cost=0.00..71786.33 rows=109 width=8) (actual time=347.133..347.508 rows=2 loops=1)"
"  Filter: ((device_id)::text = '352184052470420'::text)"
"Total runtime: 347.538 ms"

The query result is 2 rows, so I am not selecting most of the table. I don't understand why the index is being ignored. What can I do to improve performance?

Edit:

My query:

select id from device where device_id ='357560051102491A';

I ran ANALYZE on the device table, which didn't help.

device_id values can also contain letters, not just digits.
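
A quick way to test whether the planner is able to use the index at all (a diagnostic sketch only; enable_seqscan should not stay off in production) is to discourage sequential scans for the current session and re-run the query:

-- Discourage seq scans for this session only, then inspect the plan.
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT id FROM device WHERE device_id = '357560051102491A';
-- Restore the default afterwards.
SET enable_seqscan = on;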



3 answers


It seems to have simply resolved itself. I'm not sure what happened, but it works fine now. I haven't changed anything since I posted this question, and now I am getting this query plan:

"Bitmap Heap Scan on device  (cost=5.49..426.77 rows=110 width=166)"
"  Recheck Cond: ((device_id)::text = '357560051102491'::text)"
"  ->  Bitmap Index Scan on device_device_id_idx  (cost=0.00..5.46 rows=110 width=0)"
"        Index Cond: ((device_id)::text = '357560051102491'::text)"

Timeline (GMT+2):

  • ~15:50 I created the index
  • ~16:00 I dropped and recreated the index several times since it didn't work
  • 16:05 I ran ANALYZE on the device table (did not help)
  • 16:44:49 from the application server's app_log I can see that queries still take 500 ms
  • 16:56:59 I see the first query that takes 23 ms (the index started working!)

The question remains: why did it take about an hour and ten minutes for the index to take effect? When I created indexes on the same database a few days ago, the changes were immediate.
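
One way to watch for the moment the planner starts choosing the index (a sketch based on the standard pg_stat_user_indexes statistics view; the idx_scan counter increments on every index scan) is to poll it while testing:

-- idx_scan stays at 0 until the planner actually uses the index.
SELECT relname, indexrelname, idx_scan, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE indexrelname = 'device_device_id_idx';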



You may need to review your queries. To use an index, a query has to be sargable, meaning that certain ways of building queries are better than others. I'm not familiar with Postgres, but on SQL Server this would include things like the following (a very small sampling of bad constructs):

  • Don't do data transformations in join conditions; store the data correctly instead
  • Don't use correlated subqueries; use views or temporary tables instead
  • Don't use OR conditions; use UNION ALL instead (see the sketch below)


Your first step should be to get a good performance-tuning book for your specific database. It will cover which constructs to avoid for your particular database engine.
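
As an illustration of the last point, here is roughly what the OR-to-UNION-ALL rewrite looks like in Postgres syntax (a sketch reusing the device table from the question; the last_updated condition is invented for the example):

-- An OR across different columns, which some planners handle poorly:
SELECT id FROM device
WHERE device_id = '352184052470420'
   OR last_updated > '2013-01-01';

-- The same result as two index-friendly branches. The second branch
-- excludes rows already returned by the first so UNION ALL stays correct:
SELECT id FROM device
WHERE device_id = '352184052470420'
UNION ALL
SELECT id FROM device
WHERE last_updated > '2013-01-01'
  AND (device_id IS NULL OR device_id <> '352184052470420');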



Indexes are not used when you cast a column to another type:

((device_id)::text = '352184052470420'::text)

Instead, you can do this:

(device_id = ('352184052470420'::character varying))

(or perhaps you can change device_id to TEXT in the original table if you like).

Also, don't forget to run ANALYZE device after the index is created, or it won't be used anyway.
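
If the cast in the predicate cannot be avoided, another option is an index on the casted expression itself, so it matches the query exactly (a sketch; Postgres supports indexes on expressions, which need the extra parentheses):

-- Matches predicates of the form (device_id)::text = '...'
CREATE INDEX device_device_id_text_idx
  ON device ((device_id::text));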
