Redis cache design

I want to integrate a caching layer into my node.js API. I've never built one before, so I have a few questions.

I have objects called "containers".

I want to look these containers up by id. There are often several at a time, the lookups aren't consistent, and each user will request a different set of ids.

I don't need to query against the data at this point, so I started with a key/value approach where the key was something like `container_1` and the value was a serialized JSON representation of the container.

But I need to fetch multiple containers at once. Then I noticed hashes, so now I do `hmset containers [id] [serialized json]`, which lets me do `hmget containers 1 3 4` to return containers 1, 3, and 4.
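For reference, a minimal sketch of this hash approach, assuming the node-redis v4 client (`npm install redis`); the `client` argument stands in for a connected instance, and the function names are illustrative:

```javascript
// Store and fetch containers in a single Redis hash, as described above.
// Assumes a connected node-redis v4 client is passed in.
const serialize = (container) => JSON.stringify(container);
const deserialize = (raw) => (raw === null ? null : JSON.parse(raw));

async function saveContainer(client, container) {
  // HSET containers <id> <serialized json>
  await client.hSet('containers', String(container.id), serialize(container));
}

async function getContainers(client, ids) {
  // HMGET containers 1 3 4 -> one reply per id, null for misses
  const raw = await client.hmGet('containers', ids.map(String));
  return raw.map(deserialize);
}
```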

Would it be better to store each container as a real Redis hash, like `hmset containers:1 name test-container`? Is this an efficient or normal way to handle this data? How does this strategy scale to tens or hundreds of thousands of records in terms of time complexity? Can I still use key expiry?

Thanks!



1 answer


There are several questions here. I'll do my best to answer each of them.

It sounds like you're considering three possible storage scenarios. Here are a few notes on the implications of each.

Option #1: store each serialized container in its own string key

With MGET you can easily fetch several containers at once, and performance should be on par with storing all containers in one hash. This option takes a little more memory, because top-level keys have more overhead than hash fields. In exchange, each container gets the benefits of a top-level key: you can expire containers individually, and you can use other key commands like DUMP / RESTORE / OBJECT / MIGRATE on individual containers.
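A sketch of option #1, again assuming a node-redis v4 client; the key pattern and TTL are illustrative:

```javascript
// Option #1: one string key per container, so each can expire on its own.
const keyFor = (id) => `container:${id}`;

async function setContainer(client, container, ttlSeconds) {
  // SET container:<id> <json> EX <ttl>
  await client.set(keyFor(container.id), JSON.stringify(container), { EX: ttlSeconds });
}

async function mgetContainers(client, ids) {
  // MGET container:1 container:3 container:4
  const raw = await client.mGet(ids.map(keyFor));
  return raw.map((r) => (r === null ? null : JSON.parse(r)));
}
```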

Option #2: store all your serialized containers in one hash, each in its own field

As you mentioned, HMGET lets you load multiple containers at once. This option is slightly more memory efficient than option #1, and it keeps your top-level key space small, since it doesn't grow with every container. That advantage is negligible, but it is a small administrative convenience, since you can use commands like KEYS with less pain. It should be as fast as option #1.

Option #3: store each container as its own hash, with hash fields corresponding to the container's properties

If every container is a flat JSON object, this maps well. You still have to decide what to do when a value is not a simple string; you could store such values as JSON or in some other serialized format, and that decision is perhaps the hardest part. There can also be a performance hit when reconstructing the original container in JavaScript, since each property has to be parsed independently and assembled back into the final object, unless your driver does this for you somehow.
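One way to handle the non-string values mentioned above is to re-serialize just those fields as JSON; this layout is an assumption for illustration, not part of the answer:

```javascript
// Flatten a container into hash fields, serializing non-string values as JSON,
// and rebuild it on the way out. Values that fail to parse are kept as strings.
// Note: a string value that happens to be valid JSON (e.g. "3") would be
// mis-parsed here; a real implementation would need a type marker.
function toHashFields(container) {
  const fields = {};
  for (const [key, value] of Object.entries(container)) {
    fields[key] = typeof value === 'string' ? value : JSON.stringify(value);
  }
  return fields;
}

function fromHashFields(fields) {
  const container = {};
  for (const [key, value] of Object.entries(fields)) {
    try {
      container[key] = JSON.parse(value);
    } catch {
      container[key] = value; // plain string field
    }
  }
  return container;
}
// HSET containers:1 name test-container tags '["a","b"]' size 3
// HGETALL containers:1 -> fields object -> fromHashFields(fields)
```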



This approach can simplify fetching specific container fields, and can make those fetches faster.

Fetching multiple containers in one command is harder with this approach, since it requires pipelining or Lua scripting.
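With node-redis v4, which (as I understand it) automatically pipelines commands issued in the same tick, the multi-container fetch for option #3 might look like this; key names are illustrative:

```javascript
// Fetch several per-container hashes; with auto-pipelining these HGETALL
// calls should go out in a single round trip.
const keysFor = (ids) => ids.map((id) => `containers:${id}`);

async function fetchContainers(client, ids) {
  const replies = await Promise.all(keysFor(ids).map((key) => client.hGetAll(key)));
  return replies; // each reply is a field -> value object ({} for missing keys)
}
```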

Conclusion

The consistency, scalability, and time complexity of each approach depend heavily on your access pattern. Are you going to search on container properties? Then option #3 starts to look attractive. Otherwise, options #1 and #2 look the most attractive. Ideally, you would use additional keys to build indexes over your data for different use cases: for example, a set of container ids owned by a user, or a list of container ids ordered by last-updated time.
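The index idea can be sketched like this, assuming node-redis v4; I've used a sorted set rather than a plain list for the last-updated index, since the score keeps it ordered automatically, and all key names are made up:

```javascript
// Secondary indexes: a set of container ids per user, and a sorted set
// of container ids scored by last-updated time.
const userKey = (userId) => `user:${userId}:containers`;

async function indexContainer(client, userId, container, updatedAtMs) {
  await client.sAdd(userKey(userId), String(container.id));
  await client.zAdd('containers:by-updated', [
    { score: updatedAtMs, value: String(container.id) },
  ]);
}

async function recentContainerIds(client, n) {
  // Most recently updated first
  return client.zRange('containers:by-updated', 0, n - 1, { REV: true });
}
```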

All of these approaches are reasonable, and additional indexes can help any of them scale.

"Can I still use key expiry?"

Yes. You can set an expiry on any key, regardless of its type: strings, hashes, lists, sets, and so on.
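For instance, with a node-redis v4 client (the key names reference the options above and are illustrative):

```javascript
// EXPIRE works the same regardless of key type.
async function expireAll(client, seconds) {
  await client.expire('container:1', seconds);  // string key (option #1)
  await client.expire('containers', seconds);   // hash of all containers (option #2)
  await client.expire('containers:1', seconds); // per-container hash (option #3)
}
```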
