New Azure SQL Database service tiers: how scalable are they, and what are DTUs?

The new Azure SQL Database service tiers look good. However, I am trying to figure out how scalable they are.

So, for example, suppose a system has 200 concurrent users.

For Standard:

Workgroup and cloud applications with "multiple" concurrent transactions

For Premium:

Mission-critical, high transactional volume with "many" concurrent users

What do "multiple" and "many" actually mean?

Also, Standard/S1 offers 15 DTUs, while Standard/S2 offers 50 DTUs. What does that actually mean?

Going back to my 200-user example, which option should I choose?

Azure SQL Database Link

Thanks.

EDIT

Helpful page on definitions

However, what are "maximum sessions"? Is this the number of concurrent connections?

+3




4 answers


It's hard to tell without running a test. By 200 users, I assume you mean 200 people sitting at their computers actively working, not 200 users who log in twice a day. S2 allows around 49 transactions per second, which sounds about right, but you need to test. Also, plenty of caching can't hurt.
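
If it helps, here is a minimal Python sketch of that kind of middle-tier caching using pyodbc; the connection string, query, and 30-second TTL are placeholders I've assumed, not anything prescribed by the service:

```python
# Minimal middle-tier cache sketch: serve repeated queries from memory for a
# short TTL instead of hitting the database every time.
import time
import pyodbc

# Placeholder connection string - replace with your own server and credentials.
CONN_STR = ("Driver={ODBC Driver 17 for SQL Server};"
            "Server=tcp:myserver.database.windows.net;Database=mydb;Uid=user;Pwd=secret")

_cache = {}                 # sql text -> (expiry time, cached rows)
CACHE_TTL_SECONDS = 30      # assumed TTL; tune to how stale the data may be

def cached_query(sql):
    """Return cached rows while fresh; otherwise run the query and cache it."""
    now = time.monotonic()
    hit = _cache.get(sql)
    if hit and hit[0] > now:
        return hit[1]
    with pyodbc.connect(CONN_STR) as conn:
        rows = conn.cursor().execute(sql).fetchall()
    _cache[sql] = (now + CACHE_TTL_SECONDS, rows)
    return rows

# Repeated calls within 30 seconds never reach the database.
products = cached_query("SELECT ProductId, Name FROM dbo.Products")
```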



+2




Azure SQL Database has some great MSDN articles; in particular, they are a great starting point for understanding DTUs: http://msdn.microsoft.com/en-us/library/azure/dn741336.aspx and http://channel9.msdn.com/Series/Windows-Azure-Storage-SQL-Database-Tutorials/Scott-Klein-Video-02

In short, a DTU is a way to understand the resources behind each performance level. One thing we know from talking to Azure SQL Database customers is that they are a diverse group. Some are most comfortable with absolute details such as cores, memory, and IOPS, while others prefer a much more summarized level of information. No one size fits all. DTUs are for this latter group.

Regardless, one of the benefits of the cloud is that it's easy to start with one service tier and performance level and iterate. In Azure SQL Database, you can change the performance level while the application is running. During the change, there is usually less than a second of elapsed time during which the DB connections are dropped. The internal workflow in our service for moving a DB across service/performance levels follows the same pattern as the workflow for handling node failures in our data centers, and nodes fail all the time regardless of service tier. In other words, you shouldn't notice any difference in this regard relative to your past experience.
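
As an illustration (my sketch, not something from the service docs), here is a minimal Python/pyodbc wrapper that retries that brief connection drop; the connection string and query are assumed placeholders:

```python
# Sketch of retrying the momentary connection drop an application may see while
# the database's performance level is being changed.
import time
import pyodbc

# Placeholder connection string - replace with your own server and credentials.
CONN_STR = ("Driver={ODBC Driver 17 for SQL Server};"
            "Server=tcp:myserver.database.windows.net;Database=mydb;Uid=user;Pwd=secret")

def execute_with_retry(sql, attempts=3, delay_seconds=2):
    """Open a fresh connection per attempt and retry if the connection is dropped."""
    for attempt in range(1, attempts + 1):
        try:
            with pyodbc.connect(CONN_STR) as conn:
                return conn.cursor().execute(sql).fetchall()
        except pyodbc.Error:
            if attempt == attempts:
                raise
            time.sleep(delay_seconds)

row_count = execute_with_retry("SELECT COUNT(*) FROM dbo.Orders")
```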



If DTUs aren't your thing, we also have a more detailed description of the benchmark workload behind them that you might prefer: http://msdn.microsoft.com/en-us/library/azure/dn741327.aspx

Thank you.

+4




Check out the new Elastic DB Offering (Preview) announced today at Build. The page has been updated with elastic pricing information.

0




DTUs are based on a blended measure of CPU, memory, reads, and writes. As the DTUs increase, the power offered by the performance level increases. Azure imposes different limits on concurrent connections, memory, I/O, and CPU usage at each level. Which level you should choose really depends on:

  • Number of concurrent users
  • Write rate
  • I/O rate
  • CPU usage
  • Database size

For example, if you are developing a system where many users are reading and only a few are writing, and if the middle tier of your application can cache data as much as possible so that only selected queries or application restarts actually hit the database, then you may not need to worry too much about I/O and CPU usage.

If many users hit the database at the same time, you can run into the concurrent connection limit and requests will be throttled. If you can control, in your application, how many user requests go to the database at once, this shouldn't be a problem.
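
For illustration, here is a minimal Python sketch of that kind of application-side limiting; the pyodbc connection string and the cap of 20 slots are assumptions to tune against your tier's limits, not values from the documentation:

```python
# Sketch of capping concurrent database work in the application tier so a burst
# of users does not run into the tier's concurrent-connection limit.
import threading
import pyodbc

# Placeholder connection string - replace with your own server and credentials.
CONN_STR = ("Driver={ODBC Driver 17 for SQL Server};"
            "Server=tcp:myserver.database.windows.net;Database=mydb;Uid=user;Pwd=secret")

MAX_DB_SLOTS = 20                               # assumed cap; tune per tier limits
_db_slots = threading.BoundedSemaphore(MAX_DB_SLOTS)

def run_query(sql):
    """Block until one of the slots is free, then open a connection and run the query."""
    with _db_slots:
        with pyodbc.connect(CONN_STR) as conn:
            return conn.cursor().execute(sql).fetchall()
```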

Write rate: this depends on the volume of data changes, including any additional data being pumped into the system. I have seen applications that pump data in constantly while that same data is being read at the same time. Choosing the right DTU level again depends on whether you can throttle on the application side and maintain a consistent write rate.
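
A rough sketch of one way to do that in Python, batching inserts so writes arrive at a steady pace; the table, columns, and batch size are placeholders I've assumed:

```python
# Sketch of batching writes so the write rate stays steady instead of spiking.
import pyodbc

# Placeholder connection string - replace with your own server and credentials.
CONN_STR = ("Driver={ODBC Driver 17 for SQL Server};"
            "Server=tcp:myserver.database.windows.net;Database=mydb;Uid=user;Pwd=secret")

def bulk_insert(rows, batch_size=500):
    """Insert (device_id, reading) tuples in fixed-size batches, committing per batch."""
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        cursor.fast_executemany = True          # fewer round trips per batch
        for start in range(0, len(rows), batch_size):
            cursor.executemany(
                "INSERT INTO dbo.Telemetry (DeviceId, Reading) VALUES (?, ?)",
                rows[start:start + batch_size])
            conn.commit()
```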

Database size: Basic, Standard, and Premium have different maximum size limits, which is another deciding factor. Using table compression helps reduce the overall size and therefore the overall I/O.
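
If your database version and tier support data compression, here is a minimal sketch of rebuilding a table with page compression from Python; the table name and connection string are placeholders:

```python
# Sketch of rebuilding a table with page compression to reduce size and I/O.
import pyodbc

# Placeholder connection string - replace with your own server and credentials.
CONN_STR = ("Driver={ODBC Driver 17 for SQL Server};"
            "Server=tcp:myserver.database.windows.net;Database=mydb;Uid=user;Pwd=secret")

with pyodbc.connect(CONN_STR, autocommit=True) as conn:
    # dbo.Orders is a placeholder table name.
    conn.cursor().execute("ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE)")
```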

Memory: tuning heavy queries (joins, sorts, etc.) and choosing appropriate lock/NOLOCK hints for scans helps manage memory usage.

The most common mistake people make with database systems is scaling up the database rather than tuning their queries and application logic. So testing, and measuring resource and request behavior under different DTU limits, is the best way to decide.

If you choose the wrong DTU level, don't worry: you can always scale up or down in SQL Database, and the change happens completely online.
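
For example, here is a sketch of checking each database's current service objective from the logical server's master database and scaling one up with T-SQL; the server, credentials, and database name are placeholders, and the portal works just as well:

```python
# Sketch: list each database's current service objective from master, then scale one up.
import pyodbc

# Placeholder connection string for the logical server's master database.
MASTER_CONN_STR = ("Driver={ODBC Driver 17 for SQL Server};"
                   "Server=tcp:myserver.database.windows.net;Database=master;Uid=user;Pwd=secret")

with pyodbc.connect(MASTER_CONN_STR, autocommit=True) as conn:
    cursor = conn.cursor()
    for name, objective in cursor.execute(
            "SELECT d.name, o.service_objective "
            "FROM sys.database_service_objectives o "
            "JOIN sys.databases d ON d.database_id = o.database_id"):
        print(name, objective)
    # Scale the (placeholder) database up to S2; the change completes online
    # with at most a brief reconnect.
    cursor.execute("ALTER DATABASE [mydb] MODIFY (SERVICE_OBJECTIVE = 'S2')")
```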

Also, unless there is a strong reason not to, move to V12 to get even better performance and features.

0








