SQL Server - Best Alternative to Improve Performance of a Long-Running Transaction with Many Inserts?

I have a scenario where a user action on the screen results in new records being created in 50 different tables, in real time. The use case is designed so that the records created by the user action are needed immediately for the user to make further changes, so there is no option for offline or delayed creation.

Having said that, the obvious problem is that the insert statements (along with some additional manipulation operations) are wrapped in a single transaction, which makes it very time-consuming. It takes about 30 seconds and often times out or blocks other requests.

Atomicity requires a transaction. Is there a better way to split the transaction while keeping it safe? Or are there other ways to improve the current situation?

+3




2 answers


Your insert queries are (most often) waiting on other queries that are running in parallel at that moment.

You should use a row-versioning-based isolation level, i.e. SNAPSHOT, because under row-versioning isolation levels reads do not block writes and writes do not block reads. I would start by enabling READ_COMMITTED_SNAPSHOT and testing with that:



ALTER DATABASE [...] SET READ_COMMITTED_SNAPSHOT ON;


I recommend reading the linked article for an explanation of the implications and trade-offs of row versioning.
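As a minimal sketch of what the row-versioning setup could look like (the database name [MyAppDb] and the table dbo.Orders are placeholders, not names from the question):

-- Sketch only: [MyAppDb] and dbo.Orders are hypothetical names.
-- Enable statement-level row versioning (READ COMMITTED SNAPSHOT)
-- and, optionally, transaction-level SNAPSHOT isolation.
ALTER DATABASE [MyAppDb] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [MyAppDb] SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A reader that should not be blocked by the long insert transaction
-- can opt into SNAPSHOT isolation explicitly:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = 42;
COMMIT TRANSACTION;

With READ_COMMITTED_SNAPSHOT alone, existing code keeps its default READ COMMITTED requests but reads row versions instead of taking shared locks, so the long insert transaction stops blocking readers.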

+4




Based on the comment exchange, I believe you should look at both the insert transaction and the parallel queries together. You want to accommodate your load without giving up transactional integrity. Available optimization techniques include:

  • Adding access indexes when you notice slow constructs (for example, nested loops over large data sets) in the execution plans of frequently executed or slow-running queries.

  • Adding covering indexes. These contain extra columns in addition to the lookup columns and allow a particular query to avoid touching the base table altogether. This is especially effective when the table is wide and the covering index is narrow, and it can also prevent locking conflicts between UPDATEs and SELECTs that touch different columns of the same rows (see the index sketch after this list).

  • Denormalization. For example, pointing some queries at indexed views rather than the physical tables, or at secondary tables that are kept up to date by triggers when the primary tables are updated (see the indexed-view sketch after this list). These are costly, double-edged techniques and should only be considered to address proven top bottlenecks.
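For the index suggestions above, a minimal sketch might look like this (the table dbo.Orders and its columns are hypothetical, not from the question):

-- Hypothetical table: dbo.Orders(OrderId, CustomerId, Status, Total, ...).
-- A covering index for a query that filters on CustomerId and Status and
-- only reads Total: the INCLUDE column lets the query be answered from
-- the index alone, with no key lookup into the base table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Status
    ON dbo.Orders (CustomerId, Status)
    INCLUDE (Total);

-- This query can now be satisfied entirely from the index:
SELECT Total
FROM dbo.Orders
WHERE CustomerId = 42 AND Status = 'Open';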
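And a sketch of the indexed-view variant of denormalization, again with hypothetical names (SQL Server indexed views require SCHEMABINDING, COUNT_BIG with GROUP BY, and non-nullable columns under SUM):

-- Hypothetical indexed view pre-aggregating order totals per customer.
-- Assumes dbo.Orders.Total is declared NOT NULL.
CREATE VIEW dbo.vCustomerOrderTotals
WITH SCHEMABINDING
AS
SELECT CustomerId,
       SUM(Total)   AS TotalAmount,
       COUNT_BIG(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerId;
GO

-- Materializing the view with a unique clustered index lets readers hit
-- this narrow structure instead of scanning the wide base table.
CREATE UNIQUE CLUSTERED INDEX IX_vCustomerOrderTotals
    ON dbo.vCustomerOrderTotals (CustomerId);

Keep in mind that the view is maintained on every write to dbo.Orders, which is exactly the cost/benefit trade-off mentioned above.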



Only make changes where the measured speedup is substantial, as none of these techniques is free from a performance standpoint. Never optimize without taking performance measurements at every step.

This is trivial, but let me mention it for completeness: keep your statistics up to date (ANALYZE, UPDATE STATISTICS, ..., whatever your database engine provides), both while analyzing execution plans and in production use.
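In SQL Server that could be, for example (the table name is hypothetical):

-- Refresh statistics for one table, or sweep all user tables as needed.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
EXEC sp_updatestats;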

+1








