Best way to design a reservation-based table

One of my clients has a reservation-based system, like airlines. It runs on MS SQL Server 2005.

The previous company designed it so that an allocation is stored as a set of rows.

Simple example:

AllocationId | SeatNumber | IsSold
-------------|------------|-------
1234         | A01        | 0
1234         | A02        | 0

In the process of selling a seat, the system takes an update lock on the table.
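As an illustration, a sell step like that might look like the following (the table name and the exact hints are my assumptions; the point is the update lock taken on the rows being sold):

```sql
-- Sketch of the sell step: take update locks on the candidate rows,
-- flip IsSold, and verify nobody beat us to them.
BEGIN TRANSACTION;

UPDATE dbo.Seats WITH (UPDLOCK, ROWLOCK)
SET IsSold = 1
WHERE AllocationId = 1234
  AND SeatNumber IN ('A01', 'A02')
  AND IsSold = 0;

IF @@ROWCOUNT <> 2          -- fewer rows than expected: a seat was
    ROLLBACK TRANSACTION;   -- already sold, so abort and retry
ELSE
    COMMIT TRANSACTION;
```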

The problem is that this locking process is slow at peak times, and we are looking for ways to speed it up.

The table is already indexed efficiently, so we are looking at a hardware solution to speed up the process. The table contains about 5 million active rows and sits on a SAS RAID 50 array.

My guess is that hard drive seek time will be the limiting factor when you have 5 million rows and update 2-5 rows at a time (I could be wrong).

I have heard of people partitioning the index across multiple disk arrays — has anyone had a similar experience trying to speed up locking? Can anyone advise on what hardware we could upgrade, or what technology we could use, to speed up the update locks (without going to a cluster)?



5 answers


I don't think you will get much from table partitioning — the only improvement would be fewer disk reads from a smaller (shallower) index tree (each read hits every index level at least once, so the fewer the levels, the faster the read). For comparison, I have a 4M+ row table indexed on 4 columns with a 10-byte key length. It fits in three index levels, with the top level only 42.6% full. Assuming you have something similar, it seems reasonable that partitioning would at best remove one level from the tree, and I doubt that buys much of an improvement.

A couple of more solid ideas to consider:

RAID 5 (and 50) can be slower on writes because of the parity computation. It is not a problem (or so I'm told) if the disk I/O cache is large enough to absorb the workload, but if that cache gets flooded, you might look at RAID 10.



Spread the table across multiple disk arrays. Take two (or more) RAID arrays, spread the table across the volumes (files/filegroups, with or without table partitioning), and you have doubled your disk I/O rate — depending on where the data lives relative to the queries that hit it. (If everything lands on array #1 and array #2 sits idle, you have gained nothing.)
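For what it's worth, a minimal sketch of spreading the table onto a second array in SQL Server 2005 (database, file, filegroup, and index names are hypothetical):

```sql
-- Add a filegroup backed by a file on the second RAID array.
ALTER DATABASE Reservations ADD FILEGROUP FG_Array2;

ALTER DATABASE Reservations
ADD FILE (NAME = 'Reservations_Array2',
          FILENAME = 'E:\Data\Reservations_Array2.ndf')
TO FILEGROUP FG_Array2;

-- Rebuild the clustered index onto the new filegroup to move the data.
CREATE CLUSTERED INDEX IX_Seats_Alloc
ON dbo.Seats (AllocationId, SeatNumber)
WITH (DROP_EXISTING = ON)
ON FG_Array2;
```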

Out at the cutting (or bleeding) edge there may be technology that will knock your socks off. If this is critical to your business and you have the budget, it could be worth some serious research.



How long is the update lock held for?

Why is the table being locked, and not just the rows?

If the lock is being held for more than a fraction of a second, that is likely your problem. SQL Server is not designed for you to hold locks while users fill out web forms, etc.



With SQL Server you have to implement the shopping cart yourself, temporarily reserving seats until the user pays for them. For example, add "IsReserved" and "ReservedAt" columns; any seat that has been reserved for more than n minutes is then automatically released.
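A sketch of that soft-reservation step (IsReserved/ReservedAt are the columns suggested above; the table name and the 10-minute timeout are assumptions):

```sql
-- Reserve a seat if it is unsold and either unreserved or its
-- reservation has expired; @@ROWCOUNT = 0 means the seat is taken.
UPDATE dbo.Seats WITH (UPDLOCK, ROWLOCK)
SET IsReserved = 1,
    ReservedAt = GETUTCDATE()
WHERE AllocationId = @AllocationId
  AND SeatNumber = @SeatNumber
  AND IsSold = 0
  AND (IsReserved = 0
       OR ReservedAt < DATEADD(minute, -10, GETUTCDATE()));
```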

This is a difficult problem, because a buyer does not expect the seat they are buying to be sold to someone else while they check out. On the other hand, you do not know whether the buyer will ever complete the checkout — so how do you show this in the UI? Consider looking at what other booking websites do, and copy the patterns your users already know how to use.

(Oracle can sometimes cope with locks held for a long time, but even Oracle is faster and happier if you keep your locks short.)



I would first try to figure out why you are locking the whole table and not just the rows. One thing to check is the execution plan of the UPDATE statement, to see which indexes it will update, and then make sure row-level and page-level locking are enabled on those indexes. You can check this with the following statement.

SELECT allow_row_locks, allow_page_locks
FROM sys.indexes
WHERE name = 'IndexNameHere'



One last try ...

It is clear that too many locks are held for too long.

Once the system starts to slow down because of too many locks, sending it even more transactions does not help.

Therefore, you should benchmark the system to find the optimal number of concurrent transactions, and then use some queuing mechanism (or similar) to limit the number of concurrent transactions. SQL Server may offer some tuning for this (number of active connections, etc.); otherwise you will have to build it into your application code.

Oracle is good at letting readers bypass writers; SQL Server does not do this by default...

So I would split the stored proc into two transactions. The first transaction should simply:

  • be a SNAPSHOT (or READ UNCOMMITTED) transaction
  • find the ids of the rows for the seats you want to sell
  • then complete (or abort) this transaction

followed by a second (hopefully very short) transaction which:

  • is most likely READ COMMITTED (or possibly SERIALIZABLE)
  • selects each row for update (using a lock hint)
  • checks that it has not been sold in the meantime (abort and start again if it has)
  • sets the "IsSold" flag on the row

(You can do the above in a single UPDATE statement using "IN", and then check that the expected number of rows was updated.)
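Sketching those two transactions (table names are assumptions; the UPDLOCK hint stands in for the "lock hint" mentioned above, and SNAPSHOT isolation must first be enabled on the database with ALLOW_SNAPSHOT_ISOLATION):

```sql
-- First transaction: check availability without blocking writers.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT SeatNumber
FROM dbo.Seats
WHERE AllocationId = 1234 AND IsSold = 0;
COMMIT TRANSACTION;

-- Second, short transaction: sell the chosen seats in one UPDATE.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;

UPDATE dbo.Seats WITH (UPDLOCK, ROWLOCK)
SET IsSold = 1
WHERE AllocationId = 1234
  AND SeatNumber IN ('A01', 'A02')
  AND IsSold = 0;

IF @@ROWCOUNT <> 2          -- sold in the meantime: abort, start again
    ROLLBACK TRANSACTION;
ELSE
    COMMIT TRANSACTION;
```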

Sorry, but sometimes you really do need to understand in detail what each transaction does and how the locking works.

If the table is smaller, then updates are faster and locks are held for less time.

Therefore, consider splitting the table:

  • so you have a table that contains JUST "AllocationId" and "IsSold"
  • this table can be stored as a single b-tree (a clustered index on AllocationId)
  • since all the other indexes will be on the table holding the seat details, no indexes should need to be locked by the update
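A sketch of that split, with hypothetical names (I have kept SeatNumber in the key so it lines up with the example schema from the question):

```sql
-- Narrow, hot table: just the allocation key and the IsSold flag,
-- stored as a single clustered b-tree.
CREATE TABLE dbo.SeatSold (
    AllocationId INT     NOT NULL,
    SeatNumber   CHAR(3) NOT NULL,
    IsSold       BIT     NOT NULL DEFAULT 0,
    PRIMARY KEY CLUSTERED (AllocationId, SeatNumber)
);

-- Wide table: all the other seat details and their indexes live
-- here, so the sell update never touches them.
CREATE TABLE dbo.SeatDetail (
    AllocationId INT     NOT NULL,
    SeatNumber   CHAR(3) NOT NULL,
    CabinClass   CHAR(1) NULL,   -- hypothetical detail column
    PRIMARY KEY (AllocationId, SeatNumber)
);

-- The hot update now locks only rows in the narrow b-tree:
UPDATE dbo.SeatSold
SET IsSold = 1
WHERE AllocationId = 1234 AND SeatNumber = 'A01' AND IsSold = 0;
```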


Here are some ideas:

  • Make sure your data and logs are on separate spindles to maximize write performance.
  • Configure your drives to use only the first 30% or so for data, and the rest for backups (minimize seek / random access times).
  • Use RAID 10 for the log volume; add more spindles as needed for performance (write performance depends on log speed).
  • Make sure your server has enough RAM. Ideally, everything needed for a transaction should be in memory before the transaction begins to minimize blocking time (see Pre-caching). There are tons of performance counters you can check for this.
  • Partitioning can help, but it depends on the details of your application and data...

I am assuming your T-SQL, indexes, transaction sizes, etc. are already optimized.

In case it helps, I cover this subject in detail in my book (including SSDs, disk array optimization, etc.): Ultra-Fast ASP.NET.







