Create a view or new table to cache records

I am experiencing a huge performance issue in a legacy application.

There is a search form in which the user can search for records containing a given value. Each result row has 10 columns, and the stored procedure (SP) returns any row that contains the value in any of those columns.

The SP uses 8 tables, some of which have about a million records, and a new record arrives every minute. The SP also does paging. It takes approximately 40 seconds to complete.

What I did: I created a new table and filled it with all the records, using the query from the SP but without the search conditions. Whenever there is an insert or update in one of the original tables, a trigger updates this new "cache" table. Now it takes only 1-3 seconds to get results from this new table.
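Roughly, each trigger looks something like the sketch below (the table and column names here are just placeholders, not my real schema; the real trigger re-runs the SP's join for the affected keys):

CREATE TRIGGER trg_table1_SyncCache
ON dbo.table1
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Refresh the affected rows in the cache table from the modified rows
    MERGE dbo.SearchCache AS c
    USING inserted AS i
        ON c.PrimaryKeyTable1 = i.PrimaryKeyTable1
    WHEN MATCHED THEN
        UPDATE SET c.Field1 = i.Field1, c.Field2 = i.Field2
    WHEN NOT MATCHED THEN
        INSERT (PrimaryKeyTable1, Field1, Field2)
        VALUES (i.PrimaryKeyTable1, i.Field1, i.Field2);
END;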

Has anyone come across something like this?

One of my colleagues said it would be better to use a view, but then the JOINs would run on every search.

What do you think? Is there another way?

+3




3 answers


Often, temporary tables can help you solve performance problems like this. One approach is to collect into temporary tables only the records that need to be considered, and then build your final select statement from those temporary tables joined to any other tables that you are not filtering on.

As an example, let's say that one of the fields you're searching on is Field1 in table1. Start by inserting into #table1 only the records whose Field1 matches the value you are looking for:

select PrimaryKeyTable1, Field1, Field2, Field3 -- and any other columns you need
into #table1
from table1
where Field1 = 'Whatever you are looking for'

This should be pretty fast even for large tables, especially if you have an index on Field1. Do this for each table that has one of your search fields, so that you collect all of the records that match what you are looking for.

Then you also need to insert into your temporary tables any records that are referenced by foreign keys from your other temporary tables. So, let's say you also built #table2 with the method above, and table2 has a foreign key to table1 called PrimaryKeyTable1. You would pull in those related records like this:

Insert into #table1
    (PrimaryKeyTable1, Field1, Field2, Field3) -- and any other columns you need
select table1.PrimaryKeyTable1, table1.Field1, table1.Field2, table1.Field3
from table1
join #table2
    on table1.PrimaryKeyTable1 = #table2.PrimaryKeyTable1
where table1.PrimaryKeyTable1 not in
    (Select PrimaryKeyTable1 from #table1)



You will now also have records in #table1 that are related to the records in #table2 that matched your search criteria. Do this for every pair of temporary tables that share a foreign key. The order of the inserts matters: don't reference a temporary table in one of these foreign-key inserts until its own filtered insert has already run, so you are never reading from a temporary table that isn't fully populated yet.

Then you can simply build your final select statement by replacing the actual tables with the temporary tables you created and removing any filters that were searching the field data. There may be other optimizations depending on the structure of your query, but this is the general idea.
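For example, the final query might end up looking something like this (table3 here is a hypothetical stand-in for any table you are not filtering on):

select t1.PrimaryKeyTable1, t1.Field1, t2.Field2, t3.Field3
from #table1 t1
join #table2 t2
    on t2.PrimaryKeyTable1 = t1.PrimaryKeyTable1
join table3 t3 -- a table you are not filtering on can stay as-is
    on t3.PrimaryKeyTable2 = t2.PrimaryKeyTable2
order by t1.Field1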

If you have already exhausted your indexing options and it still does not help, MS SQL Server has a "Change Tracking" feature that can be useful when building your cache table. You enable change tracking on the database and configure which tables you want to track. SQL Server then records every insert, update and delete on those tables, and lets you query for the changes made since the last time you checked. This is very useful for synchronizing changes and is more efficient than using triggers. It is also easier to manage than building your own tracking tables. This has been a feature since SQL Server 2008.
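Enabling it looks roughly like this (database and table names are placeholders for your own):

-- Enable change tracking at the database level...
ALTER DATABASE MyDatabase
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- ...and then for each table you want to track.
ALTER TABLE dbo.table1
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON);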

How to use SQL Server Change Tracking

Change tracking only tracks the primary keys of the tables and lets you query which rows have changed (and optionally which columns were updated). You then join back to the tables on those keys to get the current data. If you need the changed data itself to be captured as well, you can use Change Data Capture instead, but it has more overhead and requires at least the Enterprise edition of SQL Server 2008.
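For example, refreshing the cache rows for one tracked table might look something like this (names are placeholders; @last_sync_version is whatever version you saved after the previous sync):

DECLARE @last_sync_version bigint = 0; -- version saved from the previous sync

-- Rows in table1 that changed since the last sync, joined back for the current data
SELECT ct.PrimaryKeyTable1,
       ct.SYS_CHANGE_OPERATION,   -- I = insert, U = update, D = delete
       t.Field1, t.Field2
FROM CHANGETABLE(CHANGES dbo.table1, @last_sync_version) AS ct
LEFT JOIN dbo.table1 AS t
    ON t.PrimaryKeyTable1 = ct.PrimaryKeyTable1;

-- Save this value to use as @last_sync_version in the next sync
SELECT CHANGE_TRACKING_CURRENT_VERSION() AS current_version;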

Change Data Capture

+3




Your solution is essentially a hand-built version of what Microsoft SQL Server calls an "indexed view" (a "materialized view" in Oracle).

Basically you are right: it is faster to read from one indexed table than to join dozens of constantly updated ones.



You should really try creating an indexed view (a good place to start: https://technet.microsoft.com/en-us/library/dd171921(v=sql.100).aspx ) and this will probably solve your performance issues.
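The basic shape is something like this (table and column names are placeholders for your own schema):

-- The view must be schema-bound and use two-part table names.
CREATE VIEW dbo.SearchCacheView
WITH SCHEMABINDING
AS
SELECT t1.PrimaryKeyTable1, t2.PrimaryKeyTable2, t1.Field1, t2.Field2
FROM dbo.table1 AS t1
JOIN dbo.table2 AS t2
    ON t2.PrimaryKeyTable1 = t1.PrimaryKeyTable1;
GO

-- The unique clustered index is what actually materializes the view on disk.
CREATE UNIQUE CLUSTERED INDEX IX_SearchCacheView
ON dbo.SearchCacheView (PrimaryKeyTable2);

Note that on non-Enterprise editions you generally have to query the view WITH (NOEXPAND) for the index to be used.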

+2




You can use a schema-bound view and create a clustered index on it; this stores the view's data physically. But after creating the schema-bound view, you cannot change the schema of the underlying tables while the view references them.

+1

