What are Redis memory management and eviction policies?
Detailed Explanation
Redis keeps the entire dataset in memory, so it relies on a configured memory limit (maxmemory) and an eviction policy that decides what happens when that limit is reached:
**Memory Limit Configuration:**
```
# redis.conf
# maxmemory 0 (the default on 64-bit builds) means no limit
maxmemory 2gb
maxmemory-policy allkeys-lru
```
**Eviction Policies:**
• **allkeys-lru**: Evict the least recently used keys across the whole keyspace
• **allkeys-lfu**: Evict the least frequently used keys (Redis 4.0+)
• **allkeys-random**: Evict random keys
• **volatile-lru**: Evict the least recently used keys among those with an expire set
• **volatile-lfu**: Evict the least frequently used keys among those with an expire set (Redis 4.0+)
• **volatile-random**: Evict random keys among those with an expire set
• **volatile-ttl**: Evict the keys with the shortest remaining TTL
• **noeviction**: Reject writes with an error when the limit is reached (the default)

Note that the volatile-* policies fall back to returning errors, like noeviction, if no keys have an expire set. Both settings can also be changed at runtime, as shown below.
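A minimal redis-py sketch of changing the limit and policy at runtime (assuming a local server on the default port):

```python
import redis

r = redis.Redis()

# Inspect the current policy, then switch it without a restart.
print(r.config_get('maxmemory-policy'))   # e.g. {'maxmemory-policy': 'noeviction'}
r.config_set('maxmemory', '2gb')
r.config_set('maxmemory-policy', 'allkeys-lru')

# CONFIG SET changes are not written back to redis.conf automatically;
# CONFIG REWRITE persists them if a config file is in use.
```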
**Memory Monitoring:**
```
# Check memory usage
INFO memory
MEMORY USAGE key
MEMORY STATS

# Find memory-hungry keys
redis-cli --bigkeys
redis-cli --memkeys
```
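The same numbers are available programmatically; a minimal redis-py sketch (assuming a local server, with 'user:1000' as a stand-in key name):

```python
import redis

r = redis.Redis()

# Programmatic equivalent of INFO memory: returns a parsed dict.
info = r.info('memory')
print(info['used_memory_human'], info['mem_fragmentation_ratio'])

# Approximate per-key footprint in bytes (None if the key does not exist).
print(r.memory_usage('user:1000'))
```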
**Memory Optimization Techniques:**
```python
import gzip
import json

import redis

r = redis.Redis()

# 1. Use appropriate data structures: a hash is more memory-efficient
# than separate string keys for small objects (hmset is deprecated,
# use hset with mapping=).
r.hset('user:1000', mapping={
    'name': 'John',
    'email': 'john@example.com',
    'age': 30,
})

# 2. Set expiration on temporary data.
session_data = '{"user_id": 1000}'            # placeholder payload
r.setex('session:abc123', 3600, session_data)
r.expire('cache:product:456', 1800)

# 3. Compress large values before storing them.
large_object = {'items': list(range(10000))}  # placeholder payload
data = json.dumps(large_object)
compressed = gzip.compress(data.encode())
r.set('large:object:1', compressed)

# 4. Keep key names short; long prefixes add up across millions of keys.
# Long:  "user:profile:personal:information:1000"
# Short: "u:p:1000"
```
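Reading a compressed value back reverses the write path; a short sketch continuing the example above:

```python
import gzip
import json

import redis

r = redis.Redis()

# Fetch raw bytes, decompress, then parse the JSON payload.
raw = r.get('large:object:1')
if raw is not None:
    restored = json.loads(gzip.decompress(raw).decode())
    print(len(restored['items']))
```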
**Memory Configuration:**
```
# Compact-encoding thresholds (not compression): small collections below
# these limits use a memory-efficient encoding. Renamed *-ziplist-* to
# *-listpack-* in Redis 7; the old names still work as aliases.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2   # negative values cap node size (-2 = 8 KB)
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# Keys sampled per eviction for the approximated LRU/LFU
# (higher = more accurate, more CPU)
maxmemory-samples 5
```
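The effect of these thresholds can be verified with OBJECT ENCODING; a minimal sketch (assuming a local server and default thresholds; 'enc:demo' is an arbitrary demo key):

```python
import redis

r = redis.Redis()

# A small hash stays in the compact encoding
# ('listpack' in Redis 7+, 'ziplist' in older versions).
r.delete('enc:demo')
r.hset('enc:demo', mapping={'a': 1, 'b': 2})
print(r.object('encoding', 'enc:demo'))   # b'listpack' or b'ziplist'

# Crossing hash-max-ziplist-entries converts it to a real hash table.
r.hset('enc:demo', mapping={f'f{i}': i for i in range(1000)})
print(r.object('encoding', 'enc:demo'))   # b'hashtable'
```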
**Monitoring Script:**
```python
import redis

r = redis.Redis()

def monitor_memory():
    info = r.info('memory')
    used_memory = info['used_memory']
    max_memory = info.get('maxmemory', 0)
    fragmentation = info['mem_fragmentation_ratio']

    # Warn when usage approaches the configured limit.
    if max_memory > 0:
        usage_percent = (used_memory / max_memory) * 100
        if usage_percent > 80:
            print(f"High memory usage: {usage_percent:.1f}%")

    # A ratio well above 1.0 means RSS far exceeds logical usage.
    if fragmentation > 1.5:
        print(f"High fragmentation: {fragmentation:.2f}")

    # Check for large keys (SCAN avoids blocking the server like KEYS would).
    large_keys = []
    for key in r.scan_iter(count=1000):
        size = r.memory_usage(key)
        if size and size > 1024 * 1024:  # 1 MB
            large_keys.append((key, size))
    if large_keys:
        print(f"Found {len(large_keys)} large keys")
```
**Best Practices:**
• Set an appropriate maxmemory limit so Redis never competes with the OS for RAM
• Choose an eviction policy that matches the access pattern (e.g. allkeys-lru for a pure cache)
• Monitor memory usage and the fragmentation ratio
• Put a TTL on temporary data such as sessions and caches
• Prefer memory-efficient data structures (hashes for small objects)
• Run regular memory analysis with --bigkeys / --memkeys