Hangfire is a production-grade background job library for .NET with persistent job storage in SQL Server (or Redis). Unlike Channel-based queues, Hangfire jobs survive application restarts — they are stored in the database before execution begins. Built-in features include automatic retry on failure, job continuation (run B after A succeeds), recurring jobs with cron expressions, and a web dashboard for monitoring and managing jobs. Hangfire is the right choice when job durability, retry management, and operational visibility are required.
## Hangfire Setup and Job Types
```csharp
// dotnet add package Hangfire.AspNetCore
// dotnet add package Hangfire.SqlServer

// ── Program.cs — Hangfire configuration ───────────────────────────────────
builder.Services.AddHangfire(config => config
    .SetDataCompatibilityLevel(CompatibilityLevel.Version_180)
    .UseSimpleAssemblyNameTypeSerializer()
    .UseRecommendedSerializerSettings()
    .UseSqlServerStorage(builder.Configuration.GetConnectionString("Default"),
        new SqlServerStorageOptions
        {
            CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
            SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
            QueuePollInterval = TimeSpan.Zero,  // long polling
            UseRecommendedIsolationLevel = true,
            DisableGlobalLocks = true,          // requires schema 7+ (Hangfire 1.7+)
        }));

builder.Services.AddHangfireServer(opts =>
{
    opts.WorkerCount = 10;  // concurrent job workers
    opts.Queues = ["critical", "default", "low"];
});

// Dashboard — secure with auth in production
app.MapHangfireDashboard("/hangfire", new DashboardOptions
{
    // HangfireAdminAuthorization is a custom IDashboardAuthorizationFilter
    Authorization = new[] { new HangfireAdminAuthorization() }
});
```
```csharp
// ── DI-compatible job classes ─────────────────────────────────────────────
public class EmailJobs(
    IEmailService emailService,
    AppDbContext db,
    ILogger<EmailJobs> logger)
{
    // Fire-and-forget job (enqueued to run ASAP)
    public async Task SendConfirmationEmailAsync(string userId, string email, string confirmUrl)
    {
        await emailService.SendConfirmationEmailAsync(email, confirmUrl);
        logger.LogInformation("Confirmation email sent to {Email}", email);
    }

    // Recurring job — runs on schedule
    public async Task CleanExpiredTokensAsync()
    {
        // Injected DbContext works fine — Hangfire creates a DI scope per job
        await db.RefreshTokens
            .Where(t => t.ExpiresAt < DateTime.UtcNow)
            .ExecuteDeleteAsync();
    }
}
```
```csharp
// ── Registering jobs ──────────────────────────────────────────────────────
// Fire-and-forget (runs immediately, once)
BackgroundJob.Enqueue<EmailJobs>(j =>
    j.SendConfirmationEmailAsync(user.Id, user.Email, confirmUrl));

// Delayed job (runs after 5 minutes)
BackgroundJob.Schedule<EmailJobs>(j =>
    j.SendPasswordResetReminderAsync(user.Email),
    TimeSpan.FromMinutes(5));

// Recurring job (cron — every day at 2 AM)
RecurringJob.AddOrUpdate<EmailJobs>("cleanup-tokens",
    j => j.CleanExpiredTokensAsync(),
    Cron.Daily(hour: 2));

// Continuation (run B after A completes)
var jobId = BackgroundJob.Enqueue<PostJobs>(j => j.IndexForSearchAsync(postId));
BackgroundJob.ContinueJobWith<NotificationJobs>(jobId,
    j => j.NotifySubscribersAsync(postId));
```
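Queue routing and retry behaviour can also be tuned per job with Hangfire's built-in attributes. A short sketch, assuming a hypothetical `PaymentJobs` class and `IPaymentGateway` service (both illustrative, not from the examples above):

```csharp
// [Queue] routes this job to the "critical" queue; [AutomaticRetry]
// lowers the default of 10 attempts to 5 and marks the job as failed
// (rather than deleting it) once attempts are exhausted.
public class PaymentJobs(IPaymentGateway gateway)  // IPaymentGateway is illustrative
{
    [Queue("critical")]
    [AutomaticRetry(Attempts = 5, OnAttemptsExceeded = AttemptsExceededAction.Fail)]
    public async Task CapturePaymentAsync(Guid orderId)
    {
        await gateway.CaptureAsync(orderId);
    }
}
```

Setting `Attempts = 0` disables automatic retries entirely, which is appropriate for jobs that are not safe to re-run.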
When you call `BackgroundJob.Enqueue()`, the job is written to the Hangfire tables in the same SQL Server as your application data. If the application crashes after enqueuing but before execution, the job is still in the database — Hangfire picks it up on the next restart. If the job fails during execution, Hangfire retries it up to 10 times by default with exponential backoff. This durability makes Hangfire the right choice for critical operations like payment processing, report generation, and data exports.

Applying `[Queue("critical")]` to job classes or methods routes them to the critical queue, which can be served by workers that process only critical jobs. Normal email notifications go to the "default" queue; report generation goes to "low". Configure separate worker pools per queue via `opts.Queues = ["critical", "default", "low"]`. Critical jobs are then never delayed by a backlog of low-priority report generation jobs.

The dashboard must be protected with an `IDashboardAuthorizationFilter` that requires the Admin role. Without authentication, the dashboard is a goldmine of operational information for attackers and violates data protection regulations by exposing PII in job arguments to unauthorised viewers.

## Channel vs Hangfire — When to Use Each
| Concern | Channel (BackgroundService) | Hangfire |
|---|---|---|
| Job durability (survives restart) | No | ✅ Yes (database) |
| Retry on failure | Manual | ✅ Automatic (10 retries) |
| Recurring jobs (cron) | Manual loop | ✅ Built-in |
| Job visibility/monitoring | Logs only | ✅ Web dashboard |
| Distributed (multi-instance) | Per-process only | ✅ Shared SQL storage |
| Overhead | Minimal | SQL queries per job |
| Use for | Non-critical, high-throughput | Critical, low-volume, durable |
## Common Mistakes
Mistake 1 — Not securing the Hangfire dashboard in production (exposes job details)
❌ Wrong — dashboard accessible at /hangfire without authentication; job arguments with PII visible to anyone.
✅ Correct — implement IDashboardAuthorizationFilter requiring Admin role before serving the dashboard.
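A minimal sketch of such a filter; `IDashboardAuthorizationFilter` and `DashboardContext.GetHttpContext()` are Hangfire's extension points, while the class name and the "Admin" role are assumptions of this example:

```csharp
using Hangfire.Dashboard;

// Only authenticated users in the Admin role may view /hangfire.
// Register via DashboardOptions.Authorization when mapping the dashboard.
public class HangfireAdminAuthorization : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context)
    {
        var httpContext = context.GetHttpContext();
        return httpContext.User.Identity?.IsAuthenticated == true
            && httpContext.User.IsInRole("Admin");
    }
}
```

Note that the dashboard defaults to allowing only local requests, so an unsecured deployment behind a reverse proxy can still expose it publicly — an explicit filter is the safe option.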
Mistake 2 — Using Hangfire for high-throughput simple work (SQL overhead per job)
❌ Wrong — Hangfire for 10,000 search indexing events per minute; SQL inserts/selects per job add up.
✅ Correct — use Channel for high-throughput non-critical work; Hangfire for critical low-volume durable jobs.
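For the high-throughput case, the Channel-based alternative is only a few lines. A sketch, assuming hypothetical `SearchIndexQueue` and `ISearchIndexer` types (the pattern — bounded channel plus `BackgroundService` consumer — is standard .NET):

```csharp
using System.Threading.Channels;

// Bounded in-memory queue: no SQL round-trips, but jobs are lost on
// restart — acceptable for non-critical work like search indexing.
public class SearchIndexQueue
{
    private readonly Channel<int> _channel =
        Channel.CreateBounded<int>(new BoundedChannelOptions(10_000)
        {
            FullMode = BoundedChannelFullMode.Wait  // backpressure when full
        });

    public ValueTask EnqueueAsync(int postId, CancellationToken ct = default)
        => _channel.Writer.WriteAsync(postId, ct);

    public IAsyncEnumerable<int> DequeueAllAsync(CancellationToken ct)
        => _channel.Reader.ReadAllAsync(ct);
}

// In-process consumer: drains the channel until shutdown.
public class SearchIndexWorker(SearchIndexQueue queue, ISearchIndexer indexer)
    : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (var postId in queue.DequeueAllAsync(stoppingToken))
            await indexer.IndexAsync(postId);  // no automatic retry here
    }
}
```

Register both as singletons (`AddSingleton<SearchIndexQueue>()`, `AddHostedService<SearchIndexWorker>()`) and the producer simply awaits `EnqueueAsync` — no database involved.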