Data Protection API — Encrypting Sensitive Data

The ASP.NET Core Data Protection API is a cryptography subsystem for protecting sensitive data — encrypting payloads that must be stored or transmitted and decrypted later. It is used internally by ASP.NET Core for anti-forgery tokens, cookie encryption, and temp data. You can use it directly to protect any sensitive string or byte array: password reset tokens, email confirmation tokens, two-factor codes, or any data that needs to be encrypted at rest or in transit. In a multi-instance deployment (multiple web servers), key storage must be shared — each server must be able to decrypt data protected by any other server.

Using IDataProtector

// ── Registration — done automatically by ASP.NET Core ─────────────────────
// AddDataProtection is called by WebApplication.CreateBuilder automatically.
// You only need explicit configuration for key storage and key lifetime.

// ── Inject and use IDataProtectionProvider ────────────────────────────────
public class TokenService(
    IDataProtectionProvider protectionProvider,
    ILogger<TokenService> logger)
{
    // Purpose string isolates this protector from others — tokens from one
    // purpose cannot be decrypted by a protector with a different purpose
    private readonly IDataProtector _protector =
        protectionProvider.CreateProtector("BlogApp.TokenService.EmailConfirmation");

    public string CreateEmailConfirmationToken(string userId, string email)
    {
        var payload = $"{userId}:{email}:{DateTime.UtcNow:O}";
        string token = _protector.Protect(payload);
        logger.LogDebug("Created email confirmation token for {UserId}", userId);
        return token;   // URL-safe base64 encoded encrypted string
    }

    public (string UserId, string Email, DateTime CreatedAt) ValidateEmailToken(string token)
    {
        try
        {
            string payload = _protector.Unprotect(token);
            // The ISO 8601 timestamp itself contains colons, so cap the split
            // at three parts instead of splitting on every ':'
            var parts = payload.Split(':', 3);
            return (parts[0], parts[1], DateTime.Parse(parts[2]));
        }
        catch (CryptographicException ex)
        {
            // Token is malformed, was tampered with, or was protected with a
            // key that has since been revoked or removed from the key ring
            logger.LogWarning(ex, "Invalid email confirmation token.");
            throw new SecurityTokenException("Invalid or expired token.");
        }
    }
}
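
The service above can be registered with DI and consumed from an endpoint. A minimal sketch, assuming a minimal-API app (`builder`/`app`) and the TokenService class above — the route and response shapes are illustrative, not prescribed:

```csharp
// ── Hypothetical registration and endpoint usage ──────────────────────────
builder.Services.AddSingleton<TokenService>();

app.MapPost("/account/confirm-email", (string token, TokenService tokens) =>
{
    try
    {
        var (userId, email, createdAt) = tokens.ValidateEmailToken(token);
        // ... mark the user's email address as confirmed ...
        return Results.Ok();
    }
    catch (SecurityTokenException)
    {
        // Deliberately vague response — don't reveal why validation failed
        return Results.BadRequest("Invalid or expired confirmation link.");
    }
});
```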

// ── Time-limited protector — token expires after 24 hours ─────────────────
private readonly ITimeLimitedDataProtector _timeLimitedProtector =
    protectionProvider
        .CreateProtector("BlogApp.PasswordReset")
        .ToTimeLimitedDataProtector();

public string CreatePasswordResetToken(string userId)
    => _timeLimitedProtector.Protect(userId, lifetime: TimeSpan.FromHours(24));

public string ValidatePasswordResetToken(string token)
    => _timeLimitedProtector.Unprotect(token);   // throws if expired

Note: The purpose string is critical — it provides cryptographic isolation between different uses of the Data Protection API. Data protected with purpose “EmailConfirmation” cannot be decrypted by a protector with purpose “PasswordReset”, even though both derive from the same underlying key ring. This prevents a password reset token from being replayed as an email confirmation token. Always use specific, descriptive purpose strings that include your application name and the use case; avoid generic purposes like “auth” or “token”.

Tip: For multi-server deployments (horizontal scaling, Docker Swarm, Kubernetes), configure a shared key storage location so all instances share the same key ring. Without shared key storage, each server generates its own keys — a user’s session cookie encrypted on Server A cannot be decrypted on Server B after a load balancer routes them differently, causing mysterious logouts and anti-forgery failures. Options: Azure Blob Storage with Azure Key Vault key encryption, Redis, or a shared database table.

Warning: Data Protection keys expire by default after 90 days. When a key expires, new payloads are protected with a newly generated key, but payloads protected with the expired key can still be decrypted — expired keys remain in the key ring for exactly this reason. If you manually delete old keys from the key ring, tokens and cookies protected with them become permanently undecryptable, and all affected users are effectively logged out. Never delete keys from the key ring by hand; let the system manage expiry.
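
The isolation described in the note can be observed directly. A small sketch using the standalone `DataProtectionProvider.Create` factory (from the Microsoft.AspNetCore.DataProtection.Extensions package), runnable as a console app — names are illustrative:

```csharp
using System.Security.Cryptography;
using Microsoft.AspNetCore.DataProtection;

var provider = DataProtectionProvider.Create("BlogApp");

var emailProtector = provider.CreateProtector("BlogApp.EmailConfirmation");
var resetProtector = provider.CreateProtector("BlogApp.PasswordReset");

string token = emailProtector.Protect("user-42");

// Same key ring, same payload — but a different purpose, so Unprotect
// throws a CryptographicException rather than revealing the plaintext
try
{
    resetProtector.Unprotect(token);
}
catch (CryptographicException)
{
    Console.WriteLine("Cross-purpose Unprotect rejected, as expected.");
}
```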

Key Storage Configuration

// ── File system key storage (single server or shared network path) ─────────
builder.Services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo("/var/keys"))
    .SetApplicationName("BlogApp")        // must be the same across all instances
    .SetDefaultKeyLifetime(TimeSpan.FromDays(90));   // 90 days is also the default

// ── Azure Blob Storage + Key Vault (multi-server, production) ─────────────
// dotnet add package Azure.Extensions.AspNetCore.DataProtection.Blobs
// dotnet add package Azure.Extensions.AspNetCore.DataProtection.Keys
builder.Services.AddDataProtection()
    .PersistKeysToAzureBlobStorage(
        new Uri("https://blogappstorage.blob.core.windows.net/keys/keys.xml"),
        new DefaultAzureCredential())
    .ProtectKeysWithAzureKeyVault(
        new Uri("https://blogappvault.vault.azure.net/keys/data-protection"),
        new DefaultAzureCredential())
    .SetApplicationName("BlogApp");

// ── Redis key storage (for Redis-based session/cache setups) ──────────────
// dotnet add package Microsoft.AspNetCore.DataProtection.StackExchangeRedis
// "redis" below is an existing StackExchange.Redis IConnectionMultiplexer
// builder.Services.AddDataProtection()
//     .PersistKeysToStackExchangeRedis(redis, "DataProtection-Keys")
//     .SetApplicationName("BlogApp");
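
The shared-database option mentioned earlier can be sketched with the Microsoft.AspNetCore.DataProtection.EntityFrameworkCore package. The context name here is illustrative; any EF Core DbContext that implements IDataProtectionKeyContext works:

```csharp
// ── Database key storage via EF Core (shared table across instances) ───────
// dotnet add package Microsoft.AspNetCore.DataProtection.EntityFrameworkCore
using Microsoft.AspNetCore.DataProtection.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;

// Implementing IDataProtectionKeyContext exposes the DataProtectionKeys
// DbSet; an EF migration then creates the shared key table
public class BlogDbContext(DbContextOptions<BlogDbContext> options)
    : DbContext(options), IDataProtectionKeyContext
{
    public DbSet<DataProtectionKey> DataProtectionKeys => Set<DataProtectionKey>();
}

// builder.Services.AddDataProtection()
//     .PersistKeysToDbContext<BlogDbContext>()
//     .SetApplicationName("BlogApp");
```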

Common Mistakes

Mistake 1 — Not configuring shared key storage in multi-server deployments

❌ Wrong — each server has its own key ring; session cookies, anti-forgery tokens fail across servers.

✅ Correct — configure Azure Blob, Redis, or database key storage shared across all instances.

Mistake 2 — Using the same purpose string for different token types

❌ Wrong — with a shared purpose string, a password reset token can be unprotected as an email confirmation token.

✅ Correct — always use specific purpose strings: “BlogApp.EmailConfirmation”, “BlogApp.PasswordReset”.

🧠 Test Yourself

In a Kubernetes deployment with 5 replicas, a user logs in on Pod 1. On the next request, the load balancer routes them to Pod 3. Without shared key storage, what happens to their session cookie?