Cache invalidation — knowing when to remove or update cached data — is one of the hardest problems in software engineering. The challenge: cached data represents a point-in-time snapshot that becomes stale when the underlying data changes. Too-aggressive invalidation loses the performance benefit of caching; too-conservative invalidation serves stale data. Effective cache invalidation requires designing invalidation alongside cache population, not as an afterthought.
Invalidation Strategies
// ── Strategy 1: TTL-Based Expiry (simplest, eventual consistency) ──────────
// Accept that data may be stale for up to TTL duration
// No code needed — just set an appropriate expiration time
// Best for: reference data, configuration, low-write data
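A read path under TTL-only expiry might look like the sketch below. `_redis`, `_repo`, and `CacheKeys` mirror the helpers used in the write-through example that follows; `GetAsync<PostDto>` is an assumed typed-read method on the Redis wrapper, not a confirmed API:

```csharp
// Sketch: cache-aside read with TTL-only expiry. Staleness window = TTL.
public async Task<PostDto?> GetPostAsync(int id, CancellationToken ct)
{
    var cacheKey = CacheKeys.Post(id);                    // e.g. "post:42"
    var cached = await _redis.GetAsync<PostDto>(cacheKey, ct);
    if (cached is not null)
        return cached;                                    // may be up to 10 minutes stale

    var post = await _repo.GetByIdAsync(id, ct);
    if (post is null)
        return null;

    // No invalidation hook anywhere — the entry simply ages out after the TTL.
    await _redis.SetAsync(cacheKey, post.ToDto(),
        expiry: TimeSpan.FromMinutes(10), ct: ct);
    return post.ToDto();
}
```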
// ── Strategy 2: Write-Through (update cache on write) ─────────────────────
public async Task UpdatePostAsync(int id, UpdatePostRequest request, CancellationToken ct)
{
    // Update the database first — the cache must never get ahead of the source of truth
    var updatedPost = await _repo.UpdateAsync(id, request, ct)
        ?? throw new NotFoundException("Post", id);

    // Update the cache immediately (write-through)
    var cacheKey = CacheKeys.Post(id);
    await _redis.SetAsync(cacheKey, updatedPost.ToDto(),
        expiry: TimeSpan.FromMinutes(10), ct: ct);

    // Remove the slug-keyed entry (use the pre-update slug if slugs can change)
    await _redis.RemoveAsync(CacheKeys.PostBySlug(updatedPost.Slug), ct);

    // Evict tagged output-cache entries — the list pages that might contain the old post
    await _outputCacheStore.EvictByTagAsync("posts", ct);
}
// ── Strategy 3: Event-Driven Invalidation (pub/sub) ───────────────────────
// When a post is published, broadcast an invalidation message via Redis pub/sub
// All app instances subscribe and invalidate their local IMemoryCache
public class CacheInvalidationService(IConnectionMultiplexer redis)
{
    private const string Channel = "cache:invalidate:posts";

    public async Task PublishInvalidationAsync(int postId)
    {
        var subscriber = redis.GetSubscriber();
        await subscriber.PublishAsync(Channel, postId.ToString());
    }

    public void SubscribeToInvalidations(Action<int> onInvalidate)
    {
        var subscriber = redis.GetSubscriber();
        subscriber.Subscribe(Channel, (channel, message) =>
        {
            if (int.TryParse(message, out int postId))
                onInvalidate(postId);
        });
    }
}
// Register at startup: subscribe so each instance drops its local IMemoryCache entry.
// CacheInvalidationService and IMemoryCache are singletons, so resolve them from the
// root provider — no scope needed (and no risk of subscribing from a disposed scope).
var svc = app.Services.GetRequiredService<CacheInvalidationService>();
var mCache = app.Services.GetRequiredService<IMemoryCache>();
svc.SubscribeToInvalidations(postId => mCache.Remove(CacheKeys.Post(postId)));
Cache Key Patterns for Invalidation
Invalidation is only practical when cache keys follow predictable, enumerable patterns — for example posts:published:page:1, posts:published:page:2, posts:category:dotnet:page:1. When a post is published, you know you need to invalidate all posts:* keys. With Redis, you can use SCAN to find and delete them without blocking the server (KEYS posts:* also works, but never in production), or use the output cache tag system, which does this tracking automatically. Ad-hoc key patterns that cannot be enumerated make invalidation nearly impossible.
// ── When post #42 is updated: invalidate these keys ────────────────────────
await _redis.RemoveAsync("post:42");
await _redis.RemoveAsync($"post:slug:{post.Slug}");
// Invalidate all paginated list pages (they might contain post #42)
await _outputCacheStore.EvictByTagAsync("posts");

// ── When post #42 is published: invalidate more ────────────────────────────
await _redis.RemoveAsync("post:42");
await _redis.RemoveAsync($"post:slug:{post.Slug}");
await _outputCacheStore.EvictByTagAsync("posts");     // all post lists
await _outputCacheStore.EvictByTagAsync("home-feed"); // home page feed
// Publish invalidation to all app instances (for local IMemoryCache)
await _cacheInvalidation.PublishInvalidationAsync(42);
Common Mistakes
Mistake 1 — Invalidating too little (stale data served until TTL)
❌ Wrong — updating the post but invalidating only post:42; every paginated list still serves the old excerpt.
✅ Correct — when an entity changes, invalidate ALL cache entries that contain that entity’s data.
Mistake 2 — Using Redis KEYS pattern in production (blocks Redis while scanning)
❌ Wrong — KEYS posts:* blocks Redis for the entire keyspace scan; on a large dataset that is a production outage.
✅ Correct — use SCAN with cursor for non-blocking key discovery, or better: use output cache tags which track entries automatically.
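With StackExchange.Redis, the non-blocking variant can be sketched as below: `IServer.KeysAsync` pages through the keyspace with SCAN and a cursor internally, so Redis stays responsive throughout. The method name, page size, and the `_redisConnection` name in the usage line are illustrative:

```csharp
// Sketch: bulk invalidation by pattern using SCAN (via IServer.KeysAsync),
// never the blocking KEYS command.
public static async Task InvalidateByPatternAsync(
    IConnectionMultiplexer redis, string pattern)
{
    var db = redis.GetDatabase();
    foreach (var endpoint in redis.GetEndPoints())
    {
        var server = redis.GetServer(endpoint);
        if (server.IsReplica)
            continue; // deletes must go to primaries

        // KeysAsync issues SCAN under the hood; each page is small and non-blocking
        await foreach (var key in server.KeysAsync(pattern: pattern, pageSize: 250))
        {
            await db.KeyDeleteAsync(key);
        }
    }
}

// Usage: await InvalidateByPatternAsync(_redisConnection, "posts:*");
```

Prefer output cache tags when they cover the case — tag-based eviction tracks entries for you, with no keyspace scan at all.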