Redis Memory Optimization and Eviction Policies
Problem Description: As a high-performance in-memory database, Redis's memory efficiency directly impacts system performance and cost. When memory is insufficient, how does Redis choose which data to delete? What are the differences between various memory eviction policies? How should they be selected and configured in practical applications?
Detailed Explanation:
Redis memory optimization is a systematic effort, and eviction policies are its core mechanism: they decide which data to delete to free up space when memory runs low.
Step 1: Understanding Redis Memory Usage Monitoring
Before discussing eviction policies, it's essential to learn how to view Redis's memory usage.
- Use the `INFO MEMORY` command: This is the most direct diagnostic tool. Key metrics include:
  - `used_memory`: The total memory allocated by the Redis allocator, i.e., the actual memory used to store data.
  - `used_memory_human`: `used_memory` displayed in a human-readable format.
  - `used_memory_rss`: The physical memory occupied by the Redis process, from the operating system's perspective. This value is usually larger than `used_memory` because it includes memory fragmentation and other overhead required by the process itself.
  - `mem_fragmentation_ratio`: The memory fragmentation ratio, calculated as `used_memory_rss / used_memory`. A value greater than 1 indicates fragmentation; around 1.5 is generally acceptable. Values significantly greater than 1 (e.g., > 2) or less than 1 (indicating swap usage) require attention.
  - `maxmemory`: The maximum memory limit set in the configuration file. Eviction policies take effect when `used_memory` approaches this value.
- Set the memory limit: In the Redis configuration file (`redis.conf`), set the maximum memory with the `maxmemory <bytes>` parameter, for example `maxmemory 2gb`. It is strongly recommended to set this value in production environments to prevent Redis from consuming unbounded memory and destabilizing the system.
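As a concrete illustration, the sketch below parses raw `INFO MEMORY` output and computes the fragmentation ratio by hand. The sample text is made up for the example, not taken from a live server:

```python
# Sketch: parse "key:value" lines from INFO MEMORY output and compute
# mem_fragmentation_ratio manually. The sample text is illustrative only.

def parse_info_memory(raw: str) -> dict:
    """Turn 'key:value' lines from INFO MEMORY into a dict of strings."""
    fields = {}
    for line in raw.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and ":" in line:
            key, _, value = line.partition(":")
            fields[key] = value
    return fields

sample = """\
# Memory
used_memory:1048576
used_memory_human:1.00M
used_memory_rss:1572864
maxmemory:2147483648
"""

info = parse_info_memory(sample)
ratio = int(info["used_memory_rss"]) / int(info["used_memory"])
print(f"mem_fragmentation_ratio = {ratio:.2f}")  # 1572864 / 1048576 = 1.50
```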
Step 2: Understanding Redis Eviction Policies
When used_memory reaches maxmemory, Redis deletes data according to the configured eviction policy, set via maxmemory-policy.
Policies are mainly divided into three categories:
1. No Eviction, Return Error
- `noeviction` (default policy): When memory is insufficient to accommodate new writes, write operations return an error (e.g., `SET` returns `(error) OOM command not allowed when used memory > 'maxmemory'`). Read requests and `DEL` requests continue to execute.
- Applicable scenarios: Workloads with extremely high data-consistency requirements where no data loss is allowed. This essentially shifts the memory pressure to the application.
2. Evict from All Keys (Ignoring TTL)
These policies evict keys from the entire keyspace (including keys with and without expiration times) based on specific rules.
- `allkeys-lru`: Evicts data using the LRU (Least Recently Used) algorithm, removing the key that has gone unused the longest.
- `allkeys-lfu`: Evicts data using the LFU (Least Frequently Used) algorithm, removing the key with the lowest access frequency over a period. (Introduced in Redis 4.0.)
- `allkeys-random`: Evicts a random key.
3. Evict Only from Keys with Expiration Time Set
These policies only evict data from keys that have a Time-To-Live (TTL) set via commands like EXPIRE. They never evict keys without an expiration time set.
- `volatile-lru`: Evicts from keys with an expiration time set, using the LRU algorithm.
- `volatile-lfu`: Evicts from keys with an expiration time set, using the LFU algorithm.
- `volatile-random`: Evicts a random key from those with an expiration time set.
- `volatile-ttl`: Evicts from keys with an expiration time set, choosing the key with the shortest remaining TTL, i.e., the one that will expire soonest.
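To make the `volatile-*` restriction concrete, here is a minimal sketch (not Redis's actual code) of `volatile-ttl`-style candidate selection: keys without a TTL are never candidates, and among the rest the key closest to expiry is chosen:

```python
# Sketch of volatile-ttl-style victim selection: keys without a TTL
# (ttl=None) are never eviction candidates; among keys with a TTL,
# the one with the smallest remaining TTL is chosen.

def pick_volatile_ttl_victim(keys: dict):
    """keys maps key name -> remaining TTL in seconds, or None for no TTL."""
    candidates = {k: ttl for k, ttl in keys.items() if ttl is not None}
    if not candidates:
        return None  # nothing evictable; Redis would return an OOM error
    return min(candidates, key=candidates.get)

keys = {"session:1": 30, "session:2": 5, "config:site": None}
print(pick_volatile_ttl_victim(keys))  # -> session:2 (expires soonest)
```

Note the failure mode this exposes: if no key has a TTL, a `volatile-*` policy has nothing to evict and writes fail just as under `noeviction`.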
Step 3: Deep Dive into LRU and LFU Algorithms
Redis's LRU/LFU are not strict implementations but approximate algorithms based on sampling, balancing performance and accuracy.
- Approximate LRU:
  - Problem: A strict LRU requires maintaining a linked list of all keys and moving a key on every access, which is costly.
  - Redis implementation: When eviction is needed, Redis randomly samples a batch of keys (5 by default, configurable via `maxmemory-samples`) and places them in a candidate pool, then evicts the least recently used key from that pool.
  - Effect: Larger sample sizes yield results closer to strict LRU but consume more CPU. The default of 5 usually works well.
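The sampling idea can be sketched in a few lines of Python. This is a simplification, not Redis's implementation: `last_access` stands in for the per-key idle clock that Redis tracks internally:

```python
import random

# Sketch of approximate LRU: instead of maintaining a full LRU list,
# sample a few keys at random and evict the least recently used of
# the sample. 'samples' plays the role of maxmemory-samples.

def approx_lru_evict(last_access: dict, samples: int = 5) -> str:
    """Pick an eviction victim from a random sample of keys."""
    pool = random.sample(list(last_access), min(samples, len(last_access)))
    return min(pool, key=last_access.get)  # oldest access time in the pool

# Hypothetical logical clock: key:0 was accessed longest ago (coldest).
last_access = {f"key:{i}": i for i in range(100)}
victim = approx_lru_evict(last_access, samples=10)
print(victim)
```

With `samples` equal to the keyspace size this degenerates to exact LRU, which is why larger sample sizes approach strict LRU behavior.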
- LFU:
  - Idea: LRU only considers access time, but a key may have been accessed heavily long ago yet be inactive now. LFU focuses on access frequency instead, so it can evict keys whose "hotness" is insufficient.
  - Redis implementation: Each key carries a small counter. The counter is incremented probabilistically on access (so it grows logarithmically rather than spiking on bursts) and decays over time (preventing old data from dominating forever). Eviction targets are again selected by sampling.
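A rough sketch of this counter behavior, with `LFU_LOG_FACTOR` and `LFU_DECAY_TIME` mirroring the `lfu-log-factor` and `lfu-decay-time` settings; the formulas here are simplified approximations of Redis's actual ones:

```python
import random

# Simplified sketch of Redis-style LFU counting: an 8-bit counter that
# grows logarithmically (higher counts are harder to bump) and decays
# with idle time, so stale hot keys eventually become evictable.

LFU_LOG_FACTOR = 10   # mirrors lfu-log-factor
LFU_DECAY_TIME = 1    # mirrors lfu-decay-time (minutes per decrement)

def lfu_increment(counter: int) -> int:
    """Probabilistically bump the counter; saturates at 255."""
    if counter >= 255:
        return 255
    if random.random() < 1.0 / (counter * LFU_LOG_FACTOR + 1):
        counter += 1
    return counter

def lfu_decay(counter: int, idle_minutes: int) -> int:
    """Decrement the counter once per elapsed decay period."""
    return max(0, counter - idle_minutes // LFU_DECAY_TIME)

c = 0
for _ in range(1000):  # 1000 accesses still yield a small counter value
    c = lfu_increment(c)
print(c, lfu_decay(c, idle_minutes=5))
```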
Step 4: How to Choose an Eviction Policy? – Decision Flow
Selecting a policy involves trade-offs. Follow this decision tree:
- Can data loss be tolerated?
  - No -> Choose `noeviction`. This ensures data safety but requires the application layer to handle memory monitoring and write-operation exceptions.
- If partial data loss is acceptable, ask: Is there critical permanent data that must not be lost?
  - Yes -> That critical data should not have an expiration time set; then choose a `volatile-*` policy. Eviction only occurs among keys with a TTL, protecting the permanent keys.
  - No (all data is of similar importance, or all of it may be lost) -> Choose an `allkeys-*` policy. The entire keyspace becomes the eviction pool, maximizing memory utilization.
- Within `volatile-*` or `allkeys-*`, how to choose between LRU/LFU/TTL/Random?
  - Access pattern shows clear hotspots (e.g., the 80/20 rule) -> Prefer `allkeys-lru` or `volatile-lru`. This is the most common choice.
  - Access frequency reflects data value better than access recency (e.g., you need to evict data that is accessed occasionally but at very low frequency) -> Choose `allkeys-lfu` or `volatile-lfu`.
  - Data lifecycle is very clear and soon-to-expire data should go first -> Choose `volatile-ttl`.
  - Access is distributed uniformly with no obvious pattern -> Choose `allkeys-random` or `volatile-random`.
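The decision tree above can be condensed into a small helper. The question names and argument values are illustrative, not part of Redis; the return values are real `maxmemory-policy` settings:

```python
# Sketch: the eviction-policy decision tree as a function.
# Arguments are hypothetical; returns are valid maxmemory-policy values.

def choose_policy(can_lose_data: bool,
                  has_protected_permanent_keys: bool,
                  access_pattern: str) -> str:
    """access_pattern: one of 'hotspot', 'frequency', 'ttl', 'uniform'."""
    if not can_lose_data:
        return "noeviction"
    # Protected permanent keys -> restrict eviction to keys with a TTL.
    prefix = "volatile" if has_protected_permanent_keys else "allkeys"
    if access_pattern == "hotspot":
        return f"{prefix}-lru"
    if access_pattern == "frequency":
        return f"{prefix}-lfu"
    if access_pattern == "ttl" and prefix == "volatile":
        return "volatile-ttl"
    return f"{prefix}-random"

print(choose_policy(True, False, "hotspot"))  # -> allkeys-lru
print(choose_policy(True, True, "ttl"))       # -> volatile-ttl
```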
Common Production Environment Combinations:
- General scenario: `maxmemory-policy allkeys-lru`
- Permanent keys + hotspot access: `maxmemory-policy volatile-lru`, ensuring permanent keys have no TTL set.
Step 5: Configuration and Verification
- Configuration: Modify the following two lines in the `redis.conf` file and restart Redis, or set them dynamically with the `CONFIG SET` command: `maxmemory 2gb` and `maxmemory-policy allkeys-lru`.
- Verification: Use the `INFO MEMORY` command to check the values of `maxmemory` and `maxmemory_policy` and confirm the configuration is active. You can also view the current policy directly with `CONFIG GET maxmemory-policy`.
Summary
Eviction policies are a crucial design choice in Redis memory optimization. You need to:
- Clarify the business's tolerance for data loss.
- Analyze the data access pattern (presence of hotspots, dependency on frequency).
- Distinguish data importance (need to protect permanent data).
- Correctly configure the `maxmemory` and `maxmemory-policy` parameters.
By following these steps, you can select the most suitable eviction policy for your application, finding the optimal balance between performance and cost.