Performance difference between Azure Redis Cache and In-Role Cache

We are moving an ASP.NET site to an Azure Web Role and Azure SQL Database. The site uses an output cache and a regular object cache (i.e. HttpRuntime.Cache). Both are currently stored the classic way, in the memory of the web role instance.
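Simplified, the current setup looks roughly like this (the controller, key names and the 120-second duration are just illustrative):

```csharp
// Current in-process caching, simplified; names and the 120 s duration are illustrative.
using System;
using System.Web;
using System.Web.Caching;
using System.Web.Mvc;

public class ProductsController : Controller
{
    // ASP.NET output cache: the rendered page lives in this web role instance's memory.
    [OutputCache(Duration = 120, VaryByParam = "none")]
    public ActionResult Index()
    {
        return View(GetProducts());
    }

    private object GetProducts()
    {
        // Regular object cache (HttpRuntime.Cache), also per-instance memory.
        var products = HttpRuntime.Cache["products"];
        if (products == null)
        {
            products = LoadProductsFromDatabase(); // hits Azure SQL Database
            HttpRuntime.Cache.Insert(
                "products", products, null,
                DateTime.UtcNow.AddSeconds(120), Cache.NoSlidingExpiration);
        }
        return products;
    }

    private object LoadProductsFromDatabase() { /* ... */ return new object(); }
}
```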

The low-hanging fruit is to start with the distributed cache for output caching. I can use In-Role Cache, either co-located or on a dedicated cache role, or Azure Redis Cache. Ready-made output cache providers exist for both.

Are there performance differences between these two (co-located / dedicated) caching methods?

One thing to consider: will fetching a page from Redis on every pageload on every server be faster or slower than composing the page from scratch on every server every 120 seconds, but in between just serving it from local memory?

What will scale better if we want to start caching our own data (e.g. POCOs) in a distributed cache instead of HttpRuntime.Cache?

-Mathias



Answering each of your questions individually:

Are there performance differences between these two (co-located / dedicated) caching methods?

A co-located caching solution will be faster than a dedicated cache server, because a co-located / in-proc request is handled locally within the process, whereas the dedicated cache solution involves a fetch over the network. However, since the data sits in memory on the cache server, fetching it will still be faster than fetching it from the DB.

One thing to consider: will getting a page from Redis on every pageload on every server be faster or slower than composing the page from scratch on every server every 120 seconds, but in between just getting it from local memory?

It will depend on the number of objects on the page, i.e. the time it takes to create the page from scratch. Getting it from the cache does add network round-trip time, but that is mostly a fraction of a millisecond.
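If the 120-second local-memory window is what you are after, the two can also be combined: check a short-lived per-instance copy first and fall back to Redis only on a local miss. A rough sketch, assuming StackExchange.Redis and System.Runtime.Caching (connection string, key names and timings are placeholders):

```csharp
// Two-level read: per-instance memory first, Redis second, rebuild last.
// Connection string, key names and timings are illustrative placeholders.
using System;
using System.Runtime.Caching;
using StackExchange.Redis;

public class PageCache
{
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("yourcache.redis.cache.windows.net:6380,ssl=true,password=...");
    private static readonly MemoryCache Local = MemoryCache.Default;

    public string GetPageHtml(string key, Func<string> renderPage)
    {
        // 1. Per-instance memory: no network hop, but short-lived and not shared.
        var html = Local.Get(key) as string;
        if (html != null)
            return html;

        // 2. Redis: one network round trip, shared by all web role instances.
        var db = Redis.GetDatabase();
        html = db.StringGet(key);
        if (html == null)
        {
            // 3. Rebuild from scratch and publish for the other instances.
            html = renderPage();
            db.StringSet(key, html, TimeSpan.FromSeconds(120));
        }

        // Keep a short-lived local copy so most page loads skip the network entirely.
        Local.Set(key, html, DateTimeOffset.UtcNow.AddSeconds(10));
        return html;
    }
}
```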

What will scale better when we want to start caching our own data (e.g. POCOs) in a distributed cache instead of HttpRuntime.Cache?



Since HttpRuntime.Cache is an in-process cache, it is limited to a single process's memory, so it does not scale. A distributed cache, on the other hand, is a scalable solution: you can always add more servers to increase the cache's size and throughput. The out-of-process nature of a distributed cache also means the cached data lives outside the application process and can be accessed by any other process.
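As a rough illustration of what moving POCO caching out of process looks like, here is a minimal read-through sketch against Azure Redis Cache using StackExchange.Redis and JSON serialization (the connection string, key naming and Product type are assumptions, not part of the question):

```csharp
// Read-through cache for a POCO in Redis; every web role instance sees the same entry.
// Connection string, key naming and the Product type are illustrative assumptions.
using System;
using Newtonsoft.Json;
using StackExchange.Redis;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductCache
{
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("yourcache.redis.cache.windows.net:6380,ssl=true,password=...");

    public Product GetProduct(int id, Func<int, Product> loadFromDatabase)
    {
        var db = Redis.GetDatabase();
        var key = "product:" + id;

        // Try the distributed cache first; a hit costs one network round trip.
        string json = db.StringGet(key);
        if (json != null)
            return JsonConvert.DeserializeObject<Product>(json);

        // Miss: load from Azure SQL Database and cache the serialized POCO for everyone.
        var product = loadFromDatabase(id);
        db.StringSet(key, JsonConvert.SerializeObject(product), TimeSpan.FromMinutes(5));
        return product;
    }
}
```

Unlike HttpRuntime.Cache, such an entry survives an app-pool recycle and is shared across instances, at the cost of serialization and the network hop.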

You can also look at NCache for Azure as a distributed caching solution. NCache is a native .NET distributed cache.

Iqbal Khan's blog posts on distributed caching for ASP.NET applications explain the need for a distributed cache in more detail.

Hope this helps :-)

-Sameer
