Two Redis instances: one as a cache and one as a persistent data store

I want to set up two instances of Redis because I have different requirements for the data I want to store. While I sometimes don't mind losing data that is primarily used as a cache, there is data I cannot afford to lose, for example the jobs that python-rq stores in Redis for workers to execute.
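To make the split concrete, here is a minimal client-side sketch, assuming the cache listens on port 6379 and the persistent store on 6380 (the ports and the task name are assumptions for illustration, not part of the configs below):

from redis import Redis
from rq import Queue

cache = Redis(port=6379)   # volatile instance: losing this data is acceptable
store = Redis(port=6380)   # durable instance: job data must survive restarts

# Cached value with a TTL; eviction or loss is tolerable here
cache.set("page:/home", "<html>...</html>", ex=300)

# python-rq keeps job payloads in whichever Redis you hand it,
# so point it at the durable instance ("myapp.tasks.send_report" is hypothetical)
q = Queue(connection=store)
q.enqueue("myapp.tasks.send_report", "user@example.com")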

Below are the basic settings I have in mind to achieve this goal.

What do you think?

Did I forget something important?

1) Redis as a cache

# Snapshotting, so the whole cache does not have to be rebuilt after a restart
# Keep the intervals reasonable so performance does not suffer
save 900 1
save 300 10
save 60 10000

# Define a max memory limit and evict the least recently used keys
# The value of X is to be defined according to needs
maxmemory X
maxmemory-policy allkeys-lru
maxmemory-samples 5

# The rdb file name
dbfilename dump.rdb

# The working directory.
dir ./

# Make sure appendonly is disabled
appendonly no


2) Redis as a persistent data store

# Disable snapshotting, since every write will be persisted through the AOF (see appendonly below)
save ""

# No memory limit
# How do I disable it? By not defining it in the config file?
maxmemory

# Enable appendonly
appendonly yes
appendfilename "redis-aof.aof"
# Fsync on every write so that no data is lost
appendfsync always
no-appendfsync-on-rewrite no

# Rewrite the AOF file; choose a good min size based on the approximate size of the DB?
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 32mb

aof-rewrite-incremental-fsync yes

aof-load-truncated yes
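As a sanity check for both configurations, a short redis-py sketch (ports are the same assumptions as above) can confirm that each instance picked up the settings it was supposed to:

from redis import Redis

cache = Redis(port=6379)
store = Redis(port=6380)

print(cache.config_get("maxmemory-policy"))      # expect {'maxmemory-policy': 'allkeys-lru'}
print(cache.info("persistence")["aof_enabled"])  # expect 0: AOF is off on the cache

print(store.config_get("appendfsync"))           # expect {'appendfsync': 'always'}
print(store.info("persistence")["aof_enabled"])  # expect 1: AOF is on for the store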


2 answers


I think your persistence options are too aggressive, but it mostly depends on the nature and volume of your data.

For cache, using an RDB is a good idea, but keep in mind that depending on the amount of data, flushing the contents of memory on disk has a cost. On my system, Redis can write memory data at 400MB / s, but note that the data may (or may not) be compressed, may (or may not) use dense data structures, so your mileage will vary. With your settings, the cache supporting heavy write will generate a dump every minute. You should check that with the volume you have, the duration of the dump is much less than this minute (roughly 6-10 seconds would be fine). In fact, I would recommend keeping only save 900 1 and deleting other saved rows. And even resetting every 15 minutes can be considered too frequent, especially if you have solid state equipment,which will gradually wear out.



For the persistent store, you need to define the dir parameter as well (since it also controls the location of the AOF file). The appendfsync always option is overkill and too slow for most purposes, unless your write throughput is very low; you should set it to everysec. If you cannot afford to lose a single bit of data even in the event of a system crash, then using Redis as a data store is not a good idea anyway. Finally, you will probably have to adjust auto-aof-rewrite-percentage and auto-aof-rewrite-min-size to the level of write throughput the Redis instance has to sustain.
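Note you can try everysec on a live instance without a restart, for example (the port is an assumption):

from redis import Redis

store = Redis(port=6380)

store.config_set("appendfsync", "everysec")
# Optionally write the running config back to redis.conf
# (only works when Redis was started with a config file)
store.config_rewrite()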



I totally agree with @Didier - this is more of an addition than a complete answer.

First, note that Redis offers tunable persistence - you can use RDB and/or AOF. While your choice of RDB for the cache makes sense, I would recommend considering using both for your persistent store. That would allow you to recover both from point-in-time snapshots (e.g. backups) and, for disaster recovery, up to the last recorded operation using the AOF.
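If you go that route, one possible sketch is to turn snapshots back on alongside the AOF on the persistent instance (the port and the save schedule are illustrative, not recommendations for your workload):

from redis import Redis

store = Redis(port=6380)

store.config_set("save", "900 1")      # periodic RDB snapshots, usable as backups
store.config_set("appendonly", "yes")  # AOF for recovery up to the last operation
store.config_rewrite()                 # persist the changes to redis.conf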



For the persistent store, you don't want maxmemory set to 0 (which is the default if it is commented out in the conf file). When set to 0, Redis will use as much memory as the OS will give it, so eventually, as your dataset grows, you will run into a situation where the OS kills it to free memory (and that often happens when you least expect it ;)). Instead, you should use a real value based on the amount of RAM your server has, with enough headroom left for the OS. For example, if your server has 16GB of RAM, as a rule of thumb I would limit Redis to no more than 14GB.

But there is a catch. Since you've read everything about Redis persistence, you probably remember that Redis forks to write data to disk. Forking can more than double memory consumption (forked copy + changes) while the child process is running, so you need to make sure your server has enough free RAM to accommodate that if you use data persistence. Also note that your maxmemory calculation should take into account other memory consumers such as replication and client buffers, depending on what/how you and the application are using Redis.
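Putting those two paragraphs together, a back-of-the-envelope sizing sketch (the 16GB figure, the headroom, and the port are the assumptions from the example above):

from redis import Redis

store = Redis(port=6380)

total_ram_gb = 16   # assumed server RAM
headroom_gb = 2     # left for the OS, replication and client buffers
limit_gb = total_ram_gb - headroom_gb

store.config_set("maxmemory", f"{limit_gb}gb")  # Redis accepts values like '14gb'
# Because RDB saves and AOF rewrites fork, keep actual usage well below this limit
print(store.info("memory")["used_memory_human"])  # current usage, for comparison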
