Can SQL Server 2008 handle 300 transactions per second?
My current DB project runs on SQL Server 2005 and the load is about 35 transactions per second. The customer expects the business to grow and is planning for 300 transactions per second. Even with good infrastructure, databases can run into performance problems at that load. A typical transaction will contain at least one update or insert and a couple of selects.
Have any of you worked on systems that handled over 300 tx/s on SQL Server 2005 or 2008? If so, what infrastructure did you use, and how complex were the transactions? Please share your experience. Someone already suggested using Teradata, and I want to know whether that is really necessary. This isn't exactly my job, but I'm curious how much SQL Server can handle.
According to tpc.org, it's possible for SQL Server 2005 to reach 1,379 transactions per second. Here is a link to the system that did it. (There are SQL Server based systems on that site with far more transactions; the one I linked to was just the first one I looked at.)
Of course, as Kragen said, whether you can actually achieve these results is impossible for anyone here to say.
The infrastructure needed for a high-performance SQL Server can be very different from your current setup.
But if you are having problems, it is very possible that the bulk of them come from poor database design and poor query design. There are many ways to write badly performing queries, and in a highly transactional system you cannot afford any of them: no SELECT *, no cursors, no correlated subqueries, no badly performing functions, no non-sargable WHERE clauses that the optimizer cannot resolve to an index seek.
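To make those rules concrete, here is a small T-SQL sketch. The `Orders` table, its columns, and its indexes are hypothetical, invented purely for illustration; the point is the contrast between a non-sargable SELECT * query and its sargable, column-listed equivalent, and between row-by-row and set-based updates:

```sql
-- Hypothetical schema, for illustration only:
-- Orders(OrderID INT PRIMARY KEY, CustomerID INT, OrderDate DATETIME,
--        Status TINYINT, Total MONEY)
-- assumed indexes on (OrderDate) and (Status).

-- Bad: SELECT * returns columns you don't need, and wrapping the
-- indexed column in a function makes the predicate non-sargable,
-- so the optimizer is forced into a scan.
SELECT *
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2008;

-- Better: name only the columns you need and phrase the predicate
-- as a range, so the optimizer can seek on the OrderDate index.
SELECT OrderID, CustomerID, Total
FROM dbo.Orders
WHERE OrderDate >= '20080101'
  AND OrderDate <  '20090101';

-- Better than a cursor: one set-based statement does in a single
-- pass what a cursor would do row by row.
UPDATE dbo.Orders
SET Status = 2                  -- e.g. mark as archived
WHERE Status = 1
  AND OrderDate < '20080101';
```

On a high-throughput system, the difference between the cursor version and the single set-based UPDATE is usually dramatic, since the engine can optimize the whole operation at once instead of paying per-row overhead.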
The very first thing I would suggest is to get some books on SQL Server performance tuning and read them. Then you will know where your system's problems may lie and how to identify them.
Interesting article: http://sqlblog.com/blogs/paul_nielsen/archive/2007/12/12/10-lessons-from-35k-tps.aspx