Implementing Caching Strategies with Redis

Caching is a fundamental technique for improving application performance. Redis, an in-memory data structure store, offers powerful caching capabilities that can dramatically speed up your applications. In this post, I’ll explore effective Redis caching strategies and implementations.

Why Redis for Caching?

Redis excels as a caching solution because it (a brief sketch of these features follows the list):

  • Provides in-memory storage with persistence options
  • Supports diverse data structures (strings, lists, sets, hashes)
  • Offers built-in expiration policies
  • Enables atomic operations
  • Scales through clustering and replication
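
A minimal sketch of a few of these features, assuming the phpredis extension and a Redis server on localhost (the key names are purely illustrative):

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Built-in expiration: the value disappears automatically after 60 seconds
$redis->setex('session:abc123', 60, 'user-42');
echo $redis->ttl('session:abc123'); // remaining lifetime in seconds

// Diverse data structures: keep a capped list of recently viewed pages
$redis->lpush('recent:views', '/pricing');
$redis->ltrim('recent:views', 0, 99); // retain only the 100 most recent entries

// Atomic operations: a concurrency-safe counter, e.g. for rate limiting
$hits = $redis->incr('hits:' . date('YmdHi'));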

Basic Caching Patterns

Cache-Aside (Lazy Loading)

The most common caching pattern: the application checks the cache first and, on a miss, loads the value from the database and stores it for subsequent requests. In the code samples throughout this post, $redis is a connected phpredis client and $database is a thin wrapper exposing a parameterized query() helper; both are assumed to be in scope (in real code they would be injected dependencies or class properties):

function getData($key) {
    // Try to get the data from the cache
    $cachedData = $redis->get($key);

    // phpredis returns false on a miss, so compare strictly
    if ($cachedData !== false) {
        return json_decode($cachedData);
    }

    // Not in the cache, so load it from the database
    $data = $database->query("SELECT * FROM data WHERE id = ?", $key);

    // Store it for next time (expire after 1 hour)
    $redis->setex($key, 3600, json_encode($data));

    return $data;
}

Write-Through

Update the cache whenever the database is updated:

function updateData($key, $value) {
    // Update the database first
    $database->query("UPDATE data SET value = ? WHERE id = ?", $value, $key);

    // Then update the cache so reads see the new value immediately
    $redis->setex($key, 3600, json_encode($value));

    return true;
}

Write-Behind (Write-Back)

Write to the cache immediately and queue the database update to be applied asynchronously, accepting a small window in which a crash could lose queued writes:

function updateData($key, $value) {
    // Update the cache immediately
    $redis->setex($key, 3600, json_encode($value));

    // Queue the database write for a background worker
    $redis->rpush('db_write_queue', json_encode(['key' => $key, 'value' => $value]));

    return true;
}

// In a background process:
function processWriteQueue() {
    while (true) {
        // Block for up to 5 seconds waiting for the next queued write
        $item = $redis->blpop('db_write_queue', 5);
        if ($item) {
            $data = json_decode($item[1]);
            $database->query("UPDATE data SET value = ? WHERE id = ?",
                $data->value, $data->key);
        }
    }
}

Advanced Caching Strategies

Cache Stampede Prevention

Cache stampedes occur when many requests miss the same cache key at once and all of them regenerate the value, hammering the database. A short-lived lock lets one request rebuild the cache while the others wait:

function getDataWithLock($key) {
    $cachedData = $redis->get($key);

    if ($cachedData !== false) {
        return json_decode($cachedData);
    }

    // Try to acquire a lock; NX + EX sets the value and the 10-second
    // expiry atomically, so a crashed worker cannot leave a stale lock
    $lockKey = "lock:{$key}";
    $gotLock = $redis->set($lockKey, 1, ['nx', 'ex' => 10]);

    if ($gotLock) {
        try {
            // This process regenerates the cache entry
            $data = $database->query("SELECT * FROM data WHERE id = ?", $key);
            $redis->setex($key, 3600, json_encode($data));
            return $data;
        } finally {
            // Release the lock
            $redis->del($lockKey);
        }
    } else {
        // Another process is rebuilding the cache; wait briefly and retry
        usleep(100000); // 100 ms (sleep() only accepts whole seconds)
        return getDataWithLock($key);
    }
}

Sliding Expiration

Extend the TTL when an item is accessed:

function getDataWithSlidingExpiration($key) {
    $cachedData = $redis->get($key);

    if ($cachedData !== false) {
        // Reset the expiration time on every hit
        $redis->expire($key, 3600);
        return json_decode($cachedData);
    }

    $data = $database->query("SELECT * FROM data WHERE id = ?", $key);
    $redis->setex($key, 3600, json_encode($data));

    return $data;
}

Hierarchical Caching

Cache data at several levels of granularity so that individual components can be cached, reused, and expired independently:

function getUserData($userId) {
    // Try to get the complete user record
    $userData = $redis->get("user:{$userId}");

    if ($userData !== false) {
        return json_decode($userData);
    }

    // Build the user record from individually cached components
    $basicInfo = getUserBasicInfo($userId);
    $preferences = getUserPreferences($userId);
    $history = getUserHistory($userId);

    $data = [
        'basic' => $basicInfo,
        'preferences' => $preferences,
        'history' => $history
    ];

    // Cache the assembled record
    $redis->setex("user:{$userId}", 3600, json_encode($data));

    return $data;
}

function getUserBasicInfo($userId) {
    // Try the cache first
    $info = $redis->get("user:{$userId}:basic");

    if ($info !== false) {
        return json_decode($info);
    }

    // Get it from the database
    $info = $database->query("SELECT * FROM users WHERE id = ?", $userId);

    // Cache with a longer TTL since this data changes less frequently
    $redis->setex("user:{$userId}:basic", 7200, json_encode($info));

    return $info;
}

// Similar functions for preferences and history

Cache Invalidation Strategies

Time-Based Expiration

The simplest approach:

// Cache for 1 hour
$redis->setex($key, 3600, $value);

Manual Invalidation

Explicitly remove items when data changes:

function updateUser($userId, $data) {
    // Update the database
    $database->query("UPDATE users SET name = ? WHERE id = ?",
        $data['name'], $userId);

    // Invalidate both the assembled record and the component cache
    $redis->del("user:{$userId}");
    $redis->del("user:{$userId}:basic");

    return true;
}

Version/Tag-Based Invalidation

Embed a version number in each cache key; bumping the version makes every key built from the old version unreachable, invalidating them all at once:

function getUserWithVersion($userId) {
    // Get the current version for this user (default to 1)
    $version = $redis->get("version:user:{$userId}") ?: 1;

    // Try to get the cache entry for this version
    $key = "user:{$userId}:v{$version}";
    $userData = $redis->get($key);

    if ($userData !== false) {
        return json_decode($userData);
    }

    // Get it from the database
    $data = $database->query("SELECT * FROM users WHERE id = ?", $userId);

    // Cache under the versioned key
    $redis->setex($key, 3600, json_encode($data));

    return $data;
}

function invalidateUser($userId) {
    // Increment the version, making all previously cached versions obsolete
    // (the old keys simply age out via their TTL)
    $redis->incr("version:user:{$userId}");
}

Monitoring and Optimization

To ensure your Redis cache performs optimally:

  1. Monitor memory usage: Use redis-cli info memory
  2. Set appropriate maxmemory policies: Configure maxmemory-policy based on your needs
  3. Use pipelining for bulk operations: Reduce network overhead (see the sketch after this list)
  4. Consider data compression: For large values, compress before storing
  5. Implement metrics: Track hit/miss ratios to evaluate effectiveness
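
To illustrate points 3 and 5, here is a minimal sketch, again assuming the phpredis extension: pipelining batches many commands into a single round trip, and the counters in INFO stats yield a hit/miss ratio. (The eviction policy from point 2 is configured server-side, e.g. maxmemory-policy allkeys-lru in redis.conf.)

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Pipelining: queue many commands locally and send them in one round trip
$pipe = $redis->multi(Redis::PIPELINE);
for ($i = 0; $i < 1000; $i++) {
    $pipe->setex("item:{$i}", 3600, json_encode(['id' => $i]));
}
$pipe->exec();

// Hit/miss ratio derived from the server-side counters in INFO stats
$stats  = $redis->info('stats');
$hits   = $stats['keyspace_hits'];
$misses = $stats['keyspace_misses'];
$ratio  = ($hits + $misses) > 0 ? $hits / ($hits + $misses) : 0;
echo "Cache hit ratio: " . round($ratio * 100, 2) . "%\n";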

Conclusion

Redis offers powerful and flexible caching capabilities that can significantly improve application performance. By implementing appropriate caching strategies—like cache-aside, write-through, and hierarchical caching—and carefully managing cache invalidation, you can build highly responsive applications that reduce database load and deliver better user experiences.

When implementing Redis caching, always consider your specific workload patterns and data access requirements to choose the most effective strategy.

This post is licensed under CC BY 4.0 by the author.