Service-Oriented Architecture Caching

In a distributed systems environment, we have a RESTful service that needs to provide high read throughput at low latency. Due to limitations in the database technology, and given that this is a read-heavy system, we decided to use Memcached. Now, in an SOA there are at least two options for the cache location: either the client looks in the cache before calling the service, or the client always calls the service, which looks in the cache. In both cases, the caching itself is done in a distributed Memcached server.

Option 1: Client -> RESTful Service -> Memcached -> Database

OR

Option 2: Client -> Memcached -> RESTful Service -> Database

I have an opinion, but I would love to hear the pros and cons of each option from the SOA experts in the community. Assume that either option is feasible; this is an architecture question. I appreciate your experience.
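To make the two flows concrete, here is a minimal Python sketch of both read paths. It is only a sketch: pymemcache is one possible client library, the host name is a placeholder, and fetch_from_db() / call_rest_service() are hypothetical helpers (values are assumed to be plain strings; serialization is omitted).

    # Assumes: pip install pymemcache; a memcached node at memcached-host:11211.
    from pymemcache.client.base import Client

    cache = Client(("memcached-host", 11211))

    # Option 1: the service consults the cache; clients only ever call the service.
    def service_get_item(item_id):
        key = "item:" + item_id
        value = cache.get(key)
        if value is None:
            value = fetch_from_db(item_id)     # hypothetical DB helper
            cache.set(key, value, expire=300)  # populate for subsequent reads
        return value

    # Option 2: the client consults the cache before calling the service.
    def client_get_item(item_id):
        key = "item:" + item_id
        value = cache.get(key)
        if value is None:
            value = call_rest_service(item_id)  # hypothetical REST helper
            cache.set(key, value, expire=300)
        return value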

+3




5 answers


I have seen

Option 1: Client -> RESTful Service -> Cache Server -> Database



work very well. The pros, IMHO, are that you can use this layer to relieve some of the load on the DB, assuming your end users issue a lot of similar requests. After all, the client can decide how much storage to reserve for caching, and how often it should be cleaned up.
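A minimal sketch of the service-side clean-up this answer alludes to, again assuming pymemcache; update_db() is a hypothetical helper. The write path invalidates the entry so the next read repopulates it:

    from pymemcache.client.base import Client

    cache = Client(("memcached-host", 11211))

    def update_item(item_id, new_value):
        update_db(item_id, new_value)    # the database stays the source of truth
        cache.delete("item:" + item_id)  # clear the stale entry right away;
                                         # the next read will refill it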

+1




I prefer option 1 and I am currently using it. This way it is easier to control the load on the database (as @ekostatinov mentioned). I have a lot of data that is required for every user in the system but that never changes (such as system rules, item types, etc.). This really reduces the load on the DB. You can also control the behavior of the cache (for example, when to clear items).
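As an illustration of caching such rarely-changing reference data, here is a sketch under the same assumptions (pymemcache; load_item_types_from_db() and the key name are hypothetical). An expiry of 0 means "never expire" in memcached, so the entry lives until it is cleared explicitly:

    import json
    from pymemcache.client.base import Client

    cache = Client(("memcached-host", 11211))

    def get_item_types():
        raw = cache.get("ref:item_types")
        if raw is not None:
            return json.loads(raw)            # served without touching the DB
        types = load_item_types_from_db()     # hypothetical DB helper
        cache.set("ref:item_types", json.dumps(types), expire=0)  # 0 = no expiry
        return types

    def flush_item_types():
        cache.delete("ref:item_types")        # call when the data does change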



+1




Option 1 is the preferred option because it makes memcached an implementation detail of the service. The other option means that if the business changes and everything can no longer be stored in the cache (or whatever else), the clients will have to change. Option 1 hides all of that behind the service interface. Additionally, option 1 lets you evolve the service as you see fit: for example, maybe later you adopt a new technology, or you solve the performance problem in the DB itself. Again, option 1 lets you make all these changes without dragging the clients into the mess.
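One way to keep the cache an implementation detail of the service is a small abstraction layer; the names below are illustrative, not from the original post. Swapping memcached for Redis, an in-process dict, or no cache at all then means providing another backend, and the REST clients never notice:

    from abc import ABC, abstractmethod

    class CacheBackend(ABC):
        @abstractmethod
        def get(self, key): ...

        @abstractmethod
        def set(self, key, value, ttl): ...

    class MemcachedBackend(CacheBackend):
        def __init__(self, client):          # e.g. a pymemcache Client
            self._client = client

        def get(self, key):
            return self._client.get(key)

        def set(self, key, value, ttl):
            self._client.set(key, value, expire=ttl)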

+1




Is the REST API exposed to external consumers? In that case, each consumer must decide whether they want to use a cache and how much stale data they can tolerate.

As for the REST service itself, the service is the container of the business logic and the authority over the data, so it decides how much to cache, when the cache expires, when to flush it, and so on. A client consuming a REST service always assumes that the service is providing it with the latest data. Therefore, option 1 is preferred.

Who is the client in this case? Is it a wrapper around your REST API? Do you provide both the client and the service?
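If the API is exposed externally, one conventional way to hand the staleness decision to the consumer is standard HTTP caching headers. The sketch below uses Flask purely for illustration; load_item() is a hypothetical helper:

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/items/<item_id>")
    def get_item(item_id):
        resp = jsonify(load_item(item_id))   # hypothetical data lookup
        # Consumers (and intermediary HTTP caches) may reuse the response
        # for up to 60 seconds; acceptable staleness is now their decision.
        resp.headers["Cache-Control"] = "public, max-age=60"
        return resp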

+1




I can share my experience with the Enduro/X middleware. For local XATMI service calls, the client process connects to shared memory (LMDB) and checks for a cached result. If a response is present, it returns the data directly from shm; if not, the client process takes the longer path and performs the IPC call. In the case of REST access, network clients still make the HTTP call, but the HTTP server, acting as a XATMI client, returns the data from shared memory. In practice, this approach significantly sped up a web frontend application that consumed the middleware via REST calls.
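The pattern described here, local lookup first and IPC only on a miss, can be sketched in plain Python with an in-process dict standing in for the LMDB shared memory; perform_ipc_call() is a hypothetical helper:

    local_cache = {}  # stand-in for the LMDB-backed shared memory

    def cached_service_call(key):
        if key in local_cache:
            return local_cache[key]     # fast path: no IPC at all
        value = perform_ipc_call(key)   # slow path: hypothetical IPC call
        local_cache[key] = value        # keep the response for later callers
        return value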

0








