Distributed Caching — Redis with IDistributedCache and StackExchange.Redis

Distributed caching stores cached data in an external service shared by all application instances. When Server A caches a post and Server B receives the next request for the same post, Server B finds it in Redis — no database query. Redis is the standard distributed cache for ASP.NET Core: sub-millisecond read/write, rich data structures, pub/sub for cache invalidation signals, and atomic operations for distributed locks. The IDistributedCache abstraction works with Redis, SQL Server, and NCache — swappable without changing application code.

Redis Distributed Cache

// dotnet add package Microsoft.Extensions.Caching.StackExchangeRedis

// ── Registration ──────────────────────────────────────────────────────────
builder.Services.AddStackExchangeRedisCache(opts =>
{
    opts.Configuration = builder.Configuration.GetConnectionString("Redis");
    opts.InstanceName  = "BlogApp:";   // prefix all keys — isolates from other apps
});

// ── Typed distributed cache service ──────────────────────────────────────
public class RedisCacheService(IDistributedCache cache) : IRedisCacheService
{
    // One shared options instance — allocating a new JsonSerializerOptions per
    // call defeats its internal metadata caching and hurts throughput.
    private static readonly JsonSerializerOptions JsonOpts =
        new() { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };

    public async Task<T?> GetAsync<T>(string key, CancellationToken ct = default)
    {
        var bytes = await cache.GetAsync(key, ct);
        if (bytes is null) return default;
        return JsonSerializer.Deserialize<T>(bytes, JsonOpts);
    }

    public async Task SetAsync<T>(string key, T value,
        TimeSpan? absoluteExpiry = null,
        TimeSpan? slidingExpiry  = null,
        CancellationToken ct     = default)
    {
        var bytes = JsonSerializer.SerializeToUtf8Bytes(value, JsonOpts);

        await cache.SetAsync(key, bytes, new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = absoluteExpiry ?? TimeSpan.FromMinutes(5),
            SlidingExpiration               = slidingExpiry,
        }, ct);
    }

    public async Task RemoveAsync(string key, CancellationToken ct = default)
        => await cache.RemoveAsync(key, ct);

    // GetOrCreateAsync — cache-aside for distributed cache
    public async Task<T?> GetOrCreateAsync<T>(
        string key, Func<Task<T?>> factory,
        TimeSpan? expiry = null, CancellationToken ct = default)
    {
        var cached = await GetAsync<T>(key, ct);
        if (cached is not null) return cached;

        var value = await factory();
        if (value is not null)
            await SetAsync(key, value, expiry ?? TimeSpan.FromMinutes(5), null, ct);
        return value;
    }
}

// ── Usage in PostService ──────────────────────────────────────────────────
public async Task<PostDto?> GetByIdAsync(int id, CancellationToken ct)
{
    var key = $"post:{id}";
    return await _redis.GetOrCreateAsync(key,
        async () => (await _repo.GetByIdAsync(id, ct))?.ToDto(),
        expiry: TimeSpan.FromMinutes(10),
        ct: ct);
}
Note: The InstanceName = "BlogApp:" setting prefixes all keys stored by this application: post:42 becomes BlogApp:post:42 in Redis. This prevents key collisions when multiple applications share one Redis instance — a common scenario in development where one Redis serves multiple projects. In production, dedicated Redis instances per environment (staging, production) are recommended, but key prefixing is still a good practice as a secondary isolation layer.
Tip: Use cache stampede prevention for high-traffic endpoints: when a cache entry expires, only one request should rebuild it while the others wait. Implement with SemaphoreSlim: the first thread that misses the cache acquires the semaphore and rebuilds; concurrent threads wait on the semaphore (queue behind the rebuilder) and then re-read the fresh entry. Two related techniques help: serving a briefly stale value while one request refreshes in the background (stale-while-revalidate), and adding random "jitter" to expiry times so that many keys do not expire at the same instant. Without stampede prevention, 100 concurrent requests all missing the same cache key simultaneously hit the database 100 times.
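The SemaphoreSlim approach above can be sketched as follows. This is a minimal in-process guard layered over the IRedisCacheService from earlier; the class name and per-key lock dictionary are illustrative, and note that it only serializes rebuilds within one server instance (a cross-instance lock would need something like Redis SET NX).

```csharp
using System.Collections.Concurrent;

// Sketch: per-key stampede guard. Only one caller rebuilds an expired entry;
// concurrent callers wait on the same semaphore, then re-check the cache.
public class StampedeGuard(IRedisCacheService cache)
{
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Locks = new();

    public async Task<T?> GetOrCreateAsync<T>(
        string key, Func<Task<T?>> factory, TimeSpan expiry, CancellationToken ct = default)
    {
        var cached = await cache.GetAsync<T>(key, ct);
        if (cached is not null) return cached;

        var gate = Locks.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
        await gate.WaitAsync(ct);
        try
        {
            // Double-check: another caller may have rebuilt while we waited.
            cached = await cache.GetAsync<T>(key, ct);
            if (cached is not null) return cached;

            var value = await factory();
            if (value is not null)
                await cache.SetAsync(key, value, expiry, null, ct);
            return value;
        }
        finally
        {
            gate.Release();
        }
    }
}
```

The double-check after acquiring the semaphore is what turns 100 concurrent misses into one database query: the 99 waiters find the entry the rebuilder just wrote.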
Warning: Redis is an external dependency — it can be slow, down, or unreachable. Always implement a fallback when Redis is unavailable: catch RedisException or SocketException, log the error, and fall back to the database. Never let a Redis failure bring down the entire API. Implement a circuit breaker (Polly) around Redis calls: after N failures in a time window, open the circuit and bypass Redis temporarily, falling back to the database until Redis recovers.
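One way to sketch the circuit breaker the warning describes, using the Polly v7-style API (thresholds, names, and the `_redis`/`_repo`/`_logger` members are assumptions for illustration):

```csharp
using System.Net.Sockets;
using Polly;
using Polly.CircuitBreaker;
using StackExchange.Redis;

// Sketch: after 5 consecutive Redis failures the circuit opens for 30 s
// and calls skip Redis entirely, going straight to the database fallback.
private static readonly IAsyncPolicy RedisBreaker = Policy
    .Handle<RedisException>()
    .Or<SocketException>()
    .CircuitBreakerAsync(exceptionsAllowedBeforeBreaking: 5,
                         durationOfBreak: TimeSpan.FromSeconds(30));

public async Task<PostDto?> GetByIdAsync(int id, CancellationToken ct)
{
    try
    {
        return await RedisBreaker.ExecuteAsync(() =>
            _redis.GetOrCreateAsync(CacheKeys.Post(id),
                async () => (await _repo.GetByIdAsync(id, ct))?.ToDto(),
                TimeSpan.FromMinutes(10), ct));
    }
    catch (Exception ex) when (ex is BrokenCircuitException or RedisException or SocketException)
    {
        _logger.LogWarning(ex, "Redis unavailable; falling back to database");
        return (await _repo.GetByIdAsync(id, ct))?.ToDto();
    }
}
```

While the circuit is open, ExecuteAsync throws BrokenCircuitException immediately instead of waiting on a dead connection, so the fallback path stays fast.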

Cache Key Conventions

// ── Consistent key naming — hierarchical, colon-separated ─────────────────
// Single entity:       post:{id}                       → post:42
// User-specific:       user:{userId}:profile           → user:abc123:profile
// Collection page:     posts:published:page:{n}:{size} → posts:published:page:1:10
// Collection by tag:   posts:tag:{slug}:page:{n}       → posts:tag:dotnet:page:1
// Search result:       posts:search:{hash(query)}      → posts:search:a4f3...
// Config/lookup:       config:categories               → config:categories

public static class CacheKeys
{
    public static string Post(int id)             => $"post:{id}";
    public static string PostBySlug(string slug)  => $"post:slug:{slug}";
    public static string PublishedPage(int p, int s) => $"posts:published:page:{p}:{s}";
    public static string UserProfile(string uid)  => $"user:{uid}:profile";
    public static string Categories()             => "config:categories";
}
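The posts:search:{hash(query)} convention needs a stable hashing helper. One possible sketch (the method name and the 16-character truncation are assumptions, not part of the original conventions):

```csharp
using System.Security.Cryptography;
using System.Text;

public static class CacheKeys
{
    // Sketch: normalize the query, then hash it so long or unsafe query
    // strings become fixed-length, collision-resistant cache keys.
    public static string Search(string query)
    {
        var normalized = query.Trim().ToLowerInvariant();
        var hash = SHA256.HashData(Encoding.UTF8.GetBytes(normalized));
        return $"posts:search:{Convert.ToHexString(hash)[..16].ToLowerInvariant()}";
    }
}
```

Normalizing before hashing means "DotNet Tips" and "  dotnet tips " share one cache entry instead of two.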

Common Mistakes

Mistake 1 — No fallback when Redis is down (entire API fails)

❌ Wrong — Redis exception propagates; all API endpoints return 500.

✅ Correct — wrap Redis calls in try/catch; log and fall back to database on Redis unavailability.
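A sketch of the correct shape: a Redis failure is treated as a cache miss so the caller falls through to the database. The `cache` and `logger` primary-constructor parameters and the shared JsonSerializerOptions field are assumed members of the surrounding service.

```csharp
using System.Net.Sockets;
using System.Text.Json;
using StackExchange.Redis;

public async Task<T?> GetAsync<T>(string key, CancellationToken ct = default)
{
    try
    {
        var bytes = await cache.GetAsync(key, ct);
        if (bytes is null) return default;
        return JsonSerializer.Deserialize<T>(bytes,
            new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase });
    }
    catch (Exception ex) when (ex is RedisException or SocketException)
    {
        // Redis down or unreachable: log and report a miss, never throw.
        logger.LogWarning(ex, "Redis read failed for {Key}; treating as cache miss", key);
        return default;
    }
}
```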

Mistake 2 — Deserialising stale/corrupted cached bytes without error handling

❌ Wrong — cached bytes from an old schema version; deserialization throws; 500 error.

✅ Correct — catch JsonException on deserialization; treat as cache miss and reload from database.
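This correction can be sketched as a guarded read in which corrupt or schema-incompatible bytes become a cache miss rather than a 500; evicting the bad entry on the way out is an added assumption that prevents the same failure on every subsequent request.

```csharp
using System.Text.Json;

public async Task<T?> GetAsync<T>(string key, CancellationToken ct = default)
{
    var bytes = await cache.GetAsync(key, ct);
    if (bytes is null) return default;

    try
    {
        return JsonSerializer.Deserialize<T>(bytes,
            new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase });
    }
    catch (JsonException ex)
    {
        // Old schema version or corrupt payload: evict and report a miss.
        logger.LogWarning(ex, "Corrupt cache entry for {Key}; evicting", key);
        await cache.RemoveAsync(key, ct);
        return default;  // caller reloads from the database and re-caches
    }
}
```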

🧠 Test Yourself

A Web API has 5 replicas. Post #42 is cached in Redis. The post is updated. How do all 5 replicas serve the fresh data immediately?