How do I avoid this blocking situation in .NET 3.0 / WinForms / SQL Server 2005?
Application 1 -
- Opens a SqlConnection and a SqlTransaction against a SQL Server 2005 database.
- Inserts a record into Table1.
- Neither commits nor rolls back the SqlTransaction; this is deliberate, to demonstrate the problem.
Application 2 -
- Opens a SqlConnection and a SqlTransaction against the same SQL Server 2005 database.
- Tries to run this query: "SELECT COUNT(Id) FROM Table1"
Table1 - Id is the identity field, Name is a varchar field. There are no other fields in the table.
Application 2 cannot execute the "SELECT ..." query; Table1 appears to be locked by the uncommitted insert made in Application 1.
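For reference, a minimal sketch of what Application 1 does (the connection string and the Name value are placeholders):

    using System;
    using System.Data.SqlClient;

    class Application1
    {
        static void Main()
        {
            // Placeholder connection string; substitute your own.
            SqlConnection conn = new SqlConnection(
                "Data Source=.;Initial Catalog=MyDb;Integrated Security=True");
            conn.Open();
            SqlTransaction tx = conn.BeginTransaction();

            SqlCommand cmd = new SqlCommand(
                "INSERT INTO Table1 (Name) VALUES ('test')", conn, tx);
            cmd.ExecuteNonQuery();

            // Deliberately neither committed nor rolled back: the new row in
            // Table1 stays exclusively locked until this process exits, so
            // Application 2's SELECT COUNT(Id) blocks under READ COMMITTED.
            Console.ReadLine();
        }
    }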
While the above scenario is fictional, it adequately demonstrates the problem we are facing. We want to be able to run a long SqlTransaction (possibly lasting hours) and perform a lot of inserts / updates through that SqlTransaction.
We are developing a data transformation application that needs to do a lot of processing on a large amount of data before it can be inserted / updated in the database. The transformation needs to happen while our main WebForms-based application is running against the same SQL Server 2005 database on which we want to run the long transaction.
All tables in our application are segmented by the ClientID field from the ClientMaster table. For example, if we have a CollegeMaster table, it will have a ClientID field as part of its primary key and an ID field to identify each row. The data transformation begins by creating a new ClientID, and that new ClientID is then used in all the other tables.
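As an illustration of the segmentation scheme, a sketch with hypothetical columns (only ClientID / ID reflect the real layout):

    using System.Data.SqlClient;

    class SchemaSketch
    {
        static void CreateCollegeMaster(SqlConnection conn)
        {
            // Hypothetical DDL; the Name column is invented for illustration.
            string ddl =
                "CREATE TABLE CollegeMaster (" +
                "  ClientID int NOT NULL,         " + // segments every row by client
                "  ID int IDENTITY(1,1) NOT NULL, " + // identifies the row itself
                "  Name varchar(100) NOT NULL,    " +
                "  PRIMARY KEY (ClientID, ID))";
            new SqlCommand(ddl, conn).ExecuteNonQuery();
        }
    }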
Ideally, requests like the one in Application 2 should not depend on the long-running transaction. They should read only data that has already been committed, keep working, and never block because of the long-running transaction. What can Application 1 do to ensure this is achieved?
You might want to look into this SQL Server 2005 feature. It looks like it might help you. It's a new row versioning-based isolation mechanism that you have to enable in the database, but it involves much less locking.
http://msdn.microsoft.com/en-us/library/ms177404(SQL.90).aspx
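A sketch of how Application 2 could use it, assuming the feature has been enabled on the database (database and connection string are placeholders):

    using System.Data;
    using System.Data.SqlClient;

    class Application2Snapshot
    {
        static int CountRows(string connectionString)
        {
            // One-time setup, run by an administrator (e.g. in Management Studio):
            //   ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
                // Under snapshot isolation the SELECT reads the last committed
                // version of each row instead of waiting on Application 1's locks.
                using (SqlTransaction tx = conn.BeginTransaction(IsolationLevel.Snapshot))
                using (SqlCommand cmd = new SqlCommand(
                    "SELECT COUNT(Id) FROM Table1", conn, tx))
                {
                    int count = (int)cmd.ExecuteScalar();
                    tx.Commit();
                    return count;
                }
            }
        }
    }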
I recommend not having long transactions; however, that said:
You can lower your transaction isolation level using hints. I don't usually recommend this practice, but one option is:
SELECT COUNT(Id) FROM Table1 WITH (NOLOCK)
This essentially skips all locks on the table; however, be warned that you can end up with dirty reads and phantom reads (where data is there one minute but not the next). If your requests are segmented by client, you should be fine. There are other hints as well that you can look up in SQL Server Books Online.
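For example, a minimal sketch of the hint from ADO.NET (connection string is a placeholder):

    using System.Data.SqlClient;

    class Application2NoLock
    {
        static int CountRowsDirty(string connectionString)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
                // WITH (NOLOCK) ignores other transactions' locks, so the count
                // can include rows the long transaction later rolls back.
                using (SqlCommand cmd = new SqlCommand(
                    "SELECT COUNT(Id) FROM Table1 WITH (NOLOCK)", conn))
                {
                    return (int)cmd.ExecuteScalar();
                }
            }
        }
    }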
Another option is to do all of your lengthy processing in staging tables and then make one final copy from the staging tables into Table1. This keeps the transaction itself short.
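For instance, something along these lines, where Staging_Table1 is a hypothetical staging table with the same shape as Table1:

    using System.Data.SqlClient;

    class StagingCopy
    {
        static void PublishStagedRows(string connectionString)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
                // Hours of processing happen against Staging_Table1 outside any
                // transaction; only this final copy runs inside one, so Table1
                // is locked for seconds rather than hours.
                using (SqlTransaction tx = conn.BeginTransaction())
                {
                    SqlCommand cmd = new SqlCommand(
                        "INSERT INTO Table1 (Name) SELECT Name FROM Staging_Table1",
                        conn, tx);
                    cmd.ExecuteNonQuery();
                    tx.Commit();
                }
            }
        }
    }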
The rationale for a long-running transaction for the data transformation is that SQL Server 2005 already gives us rollback functionality. If the data transformation goes wrong, we can use that to roll back all the inserted / updated data.
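In other words, the pattern we rely on is roughly this (a sketch; the transformation work itself is elided):

    using System.Data.SqlClient;

    class Transformation
    {
        static void Run(SqlConnection conn)
        {
            SqlTransaction tx = conn.BeginTransaction();
            try
            {
                // ... hours of inserts / updates, all enlisted in tx ...
                tx.Commit();
            }
            catch
            {
                // Any failure undoes every insert / update made so far.
                tx.Rollback();
                throw;
            }
        }
    }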
Regarding staging tables: we have many identity fields that would be difficult to track in a parallel situation. By transforming from the actual source tables into staging tables, we will create one set of identity values in the "master" staging tables that is then used in the "child" staging tables. Then, when pushing the data from the staging tables into the target tables, we need to make sure that the new identity values generated for the "master" tables are mapped and applied appropriately to the "child" tables.
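One way to handle that mapping on SQL Server 2005 is to insert the master rows one at a time, capture SCOPE_IDENTITY() for each, and translate the child rows through the resulting map. A sketch (staging table and column names are hypothetical):

    using System.Collections.Generic;
    using System.Data.SqlClient;

    class IdentityMapper
    {
        static void PushMasters(SqlConnection conn, SqlTransaction tx)
        {
            // Maps each staging ID to the identity generated by the target table.
            Dictionary<int, int> idMap = new Dictionary<int, int>();

            // Read the staged master rows first (simplified to Name only).
            List<KeyValuePair<int, string>> staged = new List<KeyValuePair<int, string>>();
            using (SqlCommand read = new SqlCommand(
                "SELECT StagingId, Name FROM Staging_Master", conn, tx))
            using (SqlDataReader r = read.ExecuteReader())
            {
                while (r.Read())
                    staged.Add(new KeyValuePair<int, string>(r.GetInt32(0), r.GetString(1)));
            }

            foreach (KeyValuePair<int, string> row in staged)
            {
                // SCOPE_IDENTITY() returns the identity generated by this INSERT.
                SqlCommand insert = new SqlCommand(
                    "INSERT INTO Master (Name) VALUES (@name); " +
                    "SELECT CAST(SCOPE_IDENTITY() AS int)", conn, tx);
                insert.Parameters.AddWithValue("@name", row.Value);
                idMap[row.Key] = (int)insert.ExecuteScalar();
            }

            // Child rows are then inserted with idMap[stagingMasterId]
            // substituted for their staged foreign keys.
        }
    }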