Improving CPU utilization by reorganizing nodes

We have a database in the North Europe region and two Azure App Service nodes (West Europe and North Europe). We use Traffic Manager to route traffic.

Our database and SQL repositories are in North Europe.

When we launched the website, European locations were closest to our customers.

However, we have seen a shift and most of our clients are now from the US.

We are seeing high CPU usage, even though we run many instances of each node.

The question arises:

Since most of our clients are now in the US and it is difficult to move the database, is it better to keep the application structure as is (N. Europe and W. Europe), or to create a new node in the US? And wouldn't that new node still need to communicate with the database in North Europe?

Thanks.

2 answers


Running the application in a US region while the database stays in Europe is not recommended.

Here are some of the problems you will run into:

1) High latency, since every data request will have to cross the Atlantic to Europe and back.

2) Higher resource usage: each query that hits the DB will take longer, which increases memory usage while queries wait on data and makes load spikes hit the application much harder.

3) Cross-region egress charges: you will pay for all data moving out of Europe, on every request.

The best solution would be to do the following:

1) Set up a new database in the US region and enable active geo-replication.

At this point you will have a primary/secondary configuration where any instance can be used to read data from the database, but only the primary can be used for write operations.



2) Create a new App Service plan (and app) in the US region.

3) Adapt your code to be aware of your geo-distributed topology.

Your application should be able to send all reads to the "closest" region and all writes to the main database.

4) Deploy the code to all regions.

5) Add the new region to the Traffic Manager profile.
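Step 1 above (enabling active geo-replication) can be done with the Azure CLI, for example; the resource group, server, and database names below are placeholders:

```shell
# Create a readable secondary of the existing database on a server in the
# US region (active geo-replication). Assumes the partner server already
# exists; all names are hypothetical.
az sql db replica create \
  --resource-group myResourceGroup \
  --server neurope-sql-server \
  --name mydb \
  --partner-server us-sql-server
```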

While this is not ideal, since write operations may still need to cross the Atlantic, most applications have a read/write ratio heavily skewed toward reads (roughly 85% read / 15% write), so this solution works well and has the added benefit of giving you HA if one of the regions goes down.
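The routing described in step 3 can be sketched as follows; the server hostnames and region names are hypothetical placeholders, and a real application would wire this into its data-access layer:

```python
# Geo-aware connection routing: reads go to the nearest readable replica,
# writes always go to the primary. All names are made-up examples.

PRIMARY = "neurope-sql.example.net"  # writable primary (North Europe)

READ_REPLICAS = {
    "northeurope": "neurope-sql.example.net",
    "westeurope": "neurope-sql.example.net",  # reads served from N. Europe
    "eastus": "eastus-sql.example.net",       # new readable secondary (US)
}


def pick_server(region: str, is_write: bool) -> str:
    """Route writes to the primary and reads to the closest replica."""
    if is_write:
        return PRIMARY
    # Fall back to the primary for reads from an unknown region.
    return READ_REPLICAS.get(region, PRIMARY)
```

With this in place, a US node sends its reads to the local secondary and only its writes across the Atlantic.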

You may want to watch this talk, where I go over how to set up a geo-distributed application using App Service, SQL Azure, and the technique outlined above.


Have you considered sharding the data based on the location of your users? From a performance standpoint this would be better, and you could also schedule maintenance during each region's off-peak hours. Let me recommend this article to you.
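As a minimal sketch of sharding by user location (the shard map and hostnames below are made up for illustration): each user's data lives in the shard closest to them, so both reads and writes stay in-region.

```python
# Location-based sharding: map a user's region to the database shard
# that holds their data. The shard map is a hypothetical example.

SHARD_MAP = {
    "US": "us-sql.example.net",
    "EU": "eu-sql.example.net",
}

DEFAULT_SHARD = "eu-sql.example.net"  # fall back to the original EU database


def shard_for_user(user_region: str) -> str:
    """Return the database shard that holds this user's data."""
    return SHARD_MAP.get(user_region, DEFAULT_SHARD)
```

The trade-off versus geo-replication is that cross-shard queries (reports, global aggregates) become harder, while single-user operations never leave their region.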


