One of the key aspects of running Redis smoothly is managing memory usage through effective Redis eviction policies. Since Redis stores data in memory, it needs smart ways to remove older data once memory limits are hit.
This comprehensive guide will cover everything you need to know about Redis eviction including:
- The basics of eviction and maximum memory
- Redis eviction algorithms like LRU, LFU, and TTL
- Configuring and selecting eviction policies
- Tuning for different data access patterns
- Potential pitfalls and mistakes to avoid
- Simulating evictions with debug commands
- Extending eviction with Active Expiry
- Testing eviction behavior and performance
- Optimizing memory for your workload
- Alternative approaches beyond eviction
Learning to maximize memory utilization through tailored eviction strategies unlocks the full power of Redis!
Understanding Redis Eviction Basics
Since Redis is an in-memory data store, the OS will start swapping once Redis consumes all available system memory. This leads to severe performance degradation, negating Redis' speed advantage.
That's why Redis allows setting a maxmemory limit to stay within a memory budget. Once this limit is reached, Redis cannot allocate more memory and must make space by removing existing keys. This automatic removal of keys is called eviction. (It is distinct from expiry, where a key is removed because its TTL has elapsed.)
Redis provides several smart eviction algorithms to serve different use cases. But before we cover them, understanding how maxmemory works is crucial.
Setting the maxmemory Parameter
The core config that controls Redis memory usage is the maxmemory directive. It specifies the maximum memory Redis can use before eviction starts. Set it below total system memory to leave headroom and avoid swapping.
Two other related configurations help complete memory management:
maxmemory-policy - The eviction algorithm, such as allkeys-lru
maxmemory-samples - Sample size for the approximated LRU/LFU algorithms (default 5)
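Taken together, a minimal memory section of redis.conf might look like this (the values are illustrative, not recommendations):

```conf
# Cap Redis at 2GB of data memory, below total system RAM
maxmemory 2gb

# Evict least-recently-used keys across the whole keyspace
maxmemory-policy allkeys-lru

# Sample 5 keys per eviction (the default)
maxmemory-samples 5
```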
Now let’s explore various algorithms Redis provides to evict keys.
Redis Eviction Policies
LRU: Least Recently Used eviction (allkeys-lru, volatile-lru) removes the keys with the oldest access time first. Redis approximates ideal LRU by sampling keys.
LFU: Least Frequently Used eviction (allkeys-lfu, volatile-lfu) removes the keys with the lowest read/write frequency. Good when older but still-popular data should survive.
TTL: volatile-ttl evicts keys with the nearest expiry first. Works best for time-bound cached data.
Random: allkeys-random and volatile-random evict random keys, ignoring access patterns.
NoEviction: noeviction rejects writes with an error once memory is full.
The volatile-* variants only consider keys that have a TTL set; the allkeys-* variants consider the whole keyspace. noeviction is the default policy, while allkeys-lru is a common choice for generic caching workloads with fairly regular access patterns. But always test eviction efficacy against real-world usage before settling on a policy.
Now let’s understand these eviction algorithms in more detail.
Least Recently Used (LRU) Eviction
LRU eviction, as the name suggests, removes the least recently used keys first once maxmemory is reached. It performs well in most scenarios because recency of access correlates strongly with what applications will need next.
However, maintaining an exact LRU-ordered list of all keys would be expensive. So Redis samples a subset of keys and evicts the best candidate from that sample, approximating true LRU.
The maxmemory-samples config controls the sampling precision:
maxmemory-samples 10
# Check 10 keys per eviction
# Higher values approximate true LRU more closely, at some CPU cost
Downsides of LRU include discarding older keys that are still read regularly, since recency alone does not capture access frequency.
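To build intuition, here is a minimal Python sketch of sampled LRU (the SampledLRUCache class and its logical clock are illustrative, not Redis internals): instead of tracking a full recency list, it samples a few keys at eviction time and drops the one touched longest ago.

```python
import random

class SampledLRUCache:
    def __init__(self, max_keys, samples=5):
        self.max_keys = max_keys      # stand-in for maxmemory
        self.samples = samples        # stand-in for maxmemory-samples
        self.store = {}               # key -> value
        self.last_access = {}         # key -> logical access clock
        self.clock = 0

    def _touch(self, key):
        self.clock += 1
        self.last_access[key] = self.clock

    def get(self, key):
        if key in self.store:
            self._touch(key)
            return self.store[key]
        return None

    def set(self, key, value):
        if key not in self.store and len(self.store) >= self.max_keys:
            self._evict_one()
        self.store[key] = value
        self._touch(key)

    def _evict_one(self):
        # Sample up to `samples` random keys; evict the least recently used one.
        candidates = random.sample(list(self.store),
                                   min(self.samples, len(self.store)))
        victim = min(candidates, key=lambda k: self.last_access[k])
        del self.store[victim]
        del self.last_access[victim]

cache = SampledLRUCache(max_keys=3, samples=3)
for k in ("a", "b", "c"):
    cache.set(k, k.upper())
cache.get("a")        # refresh "a" so it is most recently used
cache.set("d", "D")   # forces an eviction; "b" is the oldest access
print(sorted(cache.store))  # → ['a', 'c', 'd']
```

With samples smaller than the keyspace, the victim is only probably the globally oldest key, which is exactly the trade-off maxmemory-samples tunes.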
Least Frequently Used (LFU) Eviction
In contrast to LRU, LFU evicts keys with the lowest access frequency first. So infrequently read keys get removed first regardless of how recently they were accessed.
LFU works better than LRU for workloads where older data is still frequently accessed, like cached aggregates or summed values. But it requires maintaining access counters on all keys which incurs memory overhead.
The lfu-decay-time setting controls the counter half-life used in the frequency calculation, and lfu-log-factor controls how quickly counters saturate.
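A rough Python sketch of the idea (the DecayingLFUCache class and its injectable clock are hypothetical, not Redis's implementation): each key carries a counter that is halved for every elapsed decay period, so stale popularity fades over time.

```python
import time

class DecayingLFUCache:
    def __init__(self, max_keys, decay_seconds=60.0, now=time.monotonic):
        self.max_keys = max_keys
        self.decay_seconds = decay_seconds  # stand-in for lfu-decay-time
        self.now = now                      # injectable clock for testing
        self.store = {}
        self.counters = {}                  # key -> (count, last_touch_time)

    def _effective_count(self, key, t):
        # Halve the counter once per elapsed decay period.
        count, last = self.counters[key]
        periods = int((t - last) / self.decay_seconds)
        return count / (2 ** periods) if periods > 0 else count

    def _touch(self, key):
        t = self.now()
        fresh = self._effective_count(key, t) if key in self.counters else 0
        self.counters[key] = (fresh + 1, t)

    def get(self, key):
        if key in self.store:
            self._touch(key)
            return self.store[key]
        return None

    def set(self, key, value):
        if key not in self.store and len(self.store) >= self.max_keys:
            # Evict the key with the lowest decayed frequency.
            t = self.now()
            victim = min(self.store, key=lambda k: self._effective_count(k, t))
            del self.store[victim]
            del self.counters[victim]
        self.store[key] = value
        self._touch(key)

clock = [0.0]  # fake clock so the example is deterministic
cache = DecayingLFUCache(max_keys=2, decay_seconds=60.0, now=lambda: clock[0])
cache.set("hot", 1)
for _ in range(5):
    cache.get("hot")      # build up frequency on "hot"
cache.set("cold", 2)
cache.set("new", 3)       # evicts "cold" (lowest counter), not "hot"
print(sorted(cache.store))  # → ['hot', 'new']
```

Note how "hot" survives even though "cold" was written more recently, which is precisely where LFU diverges from LRU.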
TTL Eviction Policy
TTL-based eviction (volatile-ttl) picks the keys with the nearest expiry time for removal first. Only keys that have a TTL set are candidates, so it works best for data explicitly time-bound via Redis TTLs.
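The selection rule is simple enough to sketch in a few lines of Python (the TTLCache class is illustrative only): when full, drop the key whose deadline is nearest.

```python
class TTLCache:
    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.store = {}
        self.expires_at = {}   # key -> logical expiry deadline

    def set(self, key, value, ttl):
        if key not in self.store and len(self.store) >= self.max_keys:
            # Evict the key with the nearest expiry time.
            victim = min(self.expires_at, key=self.expires_at.get)
            del self.store[victim]
            del self.expires_at[victim]
        self.store[key] = value
        self.expires_at[key] = ttl  # ttl used directly as a logical deadline

cache = TTLCache(max_keys=2)
cache.set("session", "abc", ttl=30)    # expires soonest
cache.set("report", "xyz", ttl=3600)
cache.set("config", "{}", ttl=86400)   # triggers eviction of "session"
print(sorted(cache.store))  # → ['config', 'report']
```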
NoEviction and AllKeys Policies
NoEviction returns errors on writes when maxmemory is reached rather than evicting keys. This prevents silent data loss but can cause availability issues during traffic spikes.
The random policies (allkeys-random, volatile-random) evict random keys, ignoring access patterns. They can evict recently used keys, making them unsuitable for most workloads.
Selecting the Optimal Eviction Policy
So which eviction strategy should you choose? There are no absolute rules, but some guidance:
- Generic caching – use allkeys-lru
- Time-bound ephemeral data – prefer volatile-ttl
- Older but still-popular data – give allkeys-lfu a try
- Uniformly accessed keys – allkeys-random is cheap and adequate
- Data that must not be silently dropped – noeviction, with clients handling write errors
Understand your access patterns, data lifetimes and performance needs. Then run representative A/B tests comparing eviction options. The best policy preserves the keys that match real-world usage.
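Because maxmemory-policy can be changed at runtime, comparing policies does not require a restart. For example, from redis-cli:

```
redis> CONFIG SET maxmemory-policy allkeys-lfu
OK
redis> CONFIG GET maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lfu"
```

Follow up with CONFIG REWRITE if you want the change persisted to redis.conf.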
Tuning and Optimizations
Some techniques help further improve eviction efficacy and memory utilization:
- Set sampled LRU precision higher for better approximation
- Adjust LFU counter decay time based on access intervals
- Assign TTLs aligned with business logic and use patterns
- Profile memory breakdown across data structures
- Track the memory of costly structures like sorted sets and cap their growth in application logic
- Use compact encodings (bitmaps, integer sets) for savings – but balance the CPU cost
- Offload less critical data to disk or Redis on Flash
Combining eviction policies with fine-tuned configs makes Redis retain most relevant data in memory.
Pitfalls and Bad Practices
Some common eviction issues that can degrade Redis performance:
- Not specifying maxmemory leading to swapping
- Keeping the default noeviction policy when a caching policy would serve better
- Setting sampling too low impacting LRU resolution
- Forgetting to assign TTLs aligned to access intervals
- Failing to expire or trim keys, leading to large memory waste
- Excessive expiry churn hurting performance through memory fragmentation
- Not testing real-world patterns before picking policy
Leveraging eviction well requires forethought, testing and active tuning.
Debugging Eviction Behavior
To understand and debug eviction behavior, Redis provides a few useful commands:
- CONFIG GET maxmemory* – View the current memory-related configs
- MEMORY USAGE <key> – Estimate the memory held by a key and its value
- MEMORY STATS – Detailed breakdown of server memory metrics
- MEMORY DOCTOR – Get advice on potential memory problems
Changing maxmemory at runtime is an easy way to observe eviction under different limits:
redis> CONFIG SET maxmemory 128mb # Apply a 128MB limit
redis> CONFIG SET maxmemory 100mb # Tighten the limit
Setting maxmemory back to 0 removes the limit again.
These help build mental models of eviction allowing tweaks tailored to application data and usage.
Extending Eviction with Active Expiry
Independent of eviction, Redis removes keys whose TTL has elapsed through two mechanisms: lazy expiry, where an expired key is deleted the next time a client touches it, and active expiry, where a background cycle periodically samples keys with TTLs and deletes any that have already expired.
For testing, the background cycle can be toggled with a debug command:
redis> DEBUG SET-ACTIVE-EXPIRE 0 # Disable the active expiry cycle
redis> DEBUG SET-ACTIVE-EXPIRE 1 # Re-enable it
Assigning sensible TTLs lets active expiry reclaim memory continuously, reducing how often the eviction path has to run at all.
Testing Eviction Performance
Load testing Redis configurations helps validate eviction efficacy for real-world conditions:
- Write performance – Measure ops/sec, latency distribution, errors as memory fills
- Memory utilization – Profile usage, breakdown, churn during test
- Eviction flow – Chart evicted keys over time by policy
- Replication impact – Account for traffic multipliers from replicas
- Failure handling – Induce failures and measure recovery
Isolating the variable being tested and profiling data from multiple perspectives provides actionable performance insights on eviction.
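One practical way to drive the write-performance scenario above is the redis-benchmark tool that ships with Redis, while watching the evicted_keys counter from INFO stats climb as memory fills:

```
# Issue 1M SET commands over a 500k-key random keyspace with 1KB values
redis-benchmark -t set -n 1000000 -r 500000 -d 1024

# Check how many keys have been evicted so far
redis-cli INFO stats | grep evicted_keys
```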
Optimizing Memory Management
Beyond eviction policies, additional ways to optimize Redis memory utilization include:
- Carefully sizing data types avoiding waste
- Compressing large string values client-side before storing
- Keeping small hashes, lists and sorted sets within their compact ziplist/listpack encodings
- Tuning thresholds like hash-max-ziplist-entries where it pays off
- Structuring related fields into hashes
- Reducing duplication through smart key design
- Budgeting memory for costly data types at the application level
- Periodically trimming, deleting old stream data
Holistic memory optimization combines both storage optimization and eviction for best data retention.
Alternatives to Eviction
Sometimes, eviction may not suffice especially on memory-starved systems. Alternatives worth considering are:
- Upgrading system memory for more headroom
- Offloading less critical data to disk or SSD
- Using Redis Cluster to split memory across nodes
- Redis on Flash provides hybrid memory tiering
- Archiving cold, less active data to a cheaper secondary store
With visibility into which keys consume memory and how valuable that data is, these approaches can reduce reliance on eviction.
Eviction strategies allow Redis to deliver in-memory performance while staying within system memory limits. Different data and access patterns warrant tailored policies that retain the most valuable data in memory. Blindly relying on the default policy is suboptimal. With care taken to monitor actively, test realistically and tune configurations using tools like TTLs, sampling and active expiry, eviction provides a reliable bulwark against memory exhaustion. Combined with data modeling and sizing techniques, it unlocks Redis' true potential.
Q1: How does an eviction policy differ from expiring keys manually?
Eviction policies remove keys automatically, based on criteria like LRU or LFU, when the memory limit is hit. Manual expiration requires proactively setting TTLs in anticipation of usage.
Q2: When should I avoid using LRU for eviction?
Avoid LRU if data shows access patterns not correlating with most recent activity. LRU would unjustly evict frequently accessed but older keys.
Q3: How will NoEviction policy behave if Redis memory is full?
NoEviction will deny new writes with an error instead of evicting existing keys. So clients must handle write errors gracefully.
Q4: Is there overhead to increasing the LRU precision sampling rate?
Yes, checking more keys per eviction costs slightly more CPU, but it improves the approximation of true LRU ordering.