Rate Limiting and Brute-Force Protection

Rate limiting controls how many requests a client can make to your API within a time window. Without it, a single client (or an attacker) can hammer your API with thousands of requests per second, exhausting server resources, overloading your database, and making the service unavailable for legitimate users. Brute-force attacks on login endpoints systematically try thousands of passwords; credential-stuffing attacks replay leaked password databases. Rate limiting is the primary defence against all of these. This lesson builds a layered rate-limiting strategy using express-rate-limit, with different limits for different route categories.

Rate Limiting Strategies

| Strategy | How It Works | Best For |
|---|---|---|
| Fixed Window | Count resets at fixed intervals (e.g. every minute) | Simple cases; easy to implement |
| Sliding Window | Rolling time window; smoother limiting | Preventing burst traffic at window boundaries |
| Token Bucket | Tokens refill at a fixed rate; allows bursts | APIs that should allow short bursts |
| Leaky Bucket | Requests processed at a fixed rate regardless of burst | Smoothing bursty traffic |
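
The token-bucket strategy above can be modelled in a few lines of plain Node. This is an illustrative sketch of the algorithm only, not the internals of express-rate-limit or any other library:

```javascript
// Token bucket: tokens refill continuously at refillPerSec, up to capacity.
// Each request spends one token, so bursts up to `capacity` are allowed
// while the long-term average rate stays bounded by the refill rate.
class TokenBucket {
    constructor(capacity, refillPerSec, now = Date.now()) {
        this.capacity     = capacity;
        this.refillPerSec = refillPerSec;
        this.tokens       = capacity;   // start full: an initial burst is allowed
        this.last         = now;
    }

    // Returns true if the request may proceed, false if it should be limited.
    tryRemove(now = Date.now()) {
        const elapsed = Math.max(0, (now - this.last) / 1000);
        this.tokens   = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSec);
        this.last     = now;
        if (this.tokens >= 1) {
            this.tokens -= 1;
            return true;
        }
        return false;
    }
}
```

A bucket with capacity 3 and one token per second admits a burst of three requests immediately, then roughly one request per second after that.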

express-rate-limit Options

| Option | Type | Default | Description |
|---|---|---|---|
| windowMs | number | 60000 | Time window in milliseconds |
| max | number | 5 | Max requests per window per IP |
| message | string/object | default text | Response body when the limit is exceeded |
| statusCode | number | 429 | HTTP status when the limit is exceeded |
| standardHeaders | boolean/string | 'draft-7' | Send standard RateLimit response headers |
| legacyHeaders | boolean | false | Send legacy X-RateLimit-* headers |
| keyGenerator | function | req.ip | How to identify the client |
| skip | function | (none) | Return true to skip rate limiting for a request |
| skipSuccessfulRequests | boolean | false | Count only failed requests toward the limit |
| skipFailedRequests | boolean | false | Count only successful requests toward the limit |
| store | Store | MemoryStore | Where counts are stored; use Redis for multi-instance |
Note: The default MemoryStore keeps rate limit counters in Node.js process memory. This works for a single-instance deployment but fails for multi-instance setups (multiple Node.js processes, Docker replicas, load-balanced servers) because each instance has its own counter; a client can exceed the limit by routing requests to different instances. For production multi-instance deployments, use rate-limit-redis or rate-limit-mongo as the shared store.
Tip: Apply different rate limits to different route categories. Authentication endpoints (login, register, forgot-password) should have the strictest limits (5-10 requests per 15 minutes) because they are the primary brute-force targets. General API endpoints can be more permissive (100-200 requests per minute). File upload endpoints should be limited by both request count and total bytes. This layered approach gives maximum protection where it matters without blocking legitimate API usage.
Warning: IP-based rate limiting can be ineffective behind a NAT or corporate proxy where many users share a single IP address. Blocking that IP blocks all users behind it. Consider user-based rate limiting for authenticated routes (key by req.user.id) in addition to IP-based limits. Also ensure app.set('trust proxy', 1) is configured so req.ip returns the real client IP from the X-Forwarded-For header when behind a load balancer, not the load balancer’s IP.

Complete Rate Limiting Setup

// npm install express-rate-limit
// npm install rate-limit-redis ioredis  (for multi-instance production)

const rateLimit      = require('express-rate-limit');
const { RedisStore } = require('rate-limit-redis');   // v4+ uses a named export
const Redis          = require('ioredis');

// ── Redis client for shared store (production) ──────────────────────────────
const redis = process.env.REDIS_URL
    ? new Redis(process.env.REDIS_URL)
    : null;

function createStore() {
    if (!redis) return undefined;   // fallback to MemoryStore in development
    return new RedisStore({
        sendCommand: (...args) => redis.call(...args),
        prefix:      'rl:',
    });
}

// ── Rate limit factory ──────────────────────────────────────────────────────
function createRateLimit({ windowMs, max, message, skipSuccessfulRequests = false }) {
    return rateLimit({
        windowMs,
        max,
        message: {
            success: false,
            message,
            retryAfter: Math.ceil(windowMs / 1000),
        },
        statusCode:             429,
        standardHeaders:        'draft-7',  // combined RateLimit header (draft 7)
        legacyHeaders:          false,
        store:                  createStore(),
        skipSuccessfulRequests,
    });
}

// ── Auth limiter (very strict) ──────────────────────────────────────────────
// 5 login attempts per 15 minutes per IP
const authLimiter = createRateLimit({
    windowMs: 15 * 60 * 1000,
    max:      5,
    message:  'Too many login attempts. Please wait 15 minutes before trying again.',
    skipSuccessfulRequests: true,  // don't count successful logins
});

// ── Registration limiter (strict) ───────────────────────────────────────────
const registerLimiter = createRateLimit({
    windowMs: 60 * 60 * 1000,   // 1 hour
    max:      3,
    message:  'Too many accounts created from this IP. Please try again later.',
});

// ── Password reset limiter ──────────────────────────────────────────────────
const passwordResetLimiter = createRateLimit({
    windowMs: 60 * 60 * 1000,   // 1 hour
    max:      3,
    message:  'Too many password reset requests. Please try again in an hour.',
});

// ── General API limiter (moderate) ──────────────────────────────────────────
const apiLimiter = createRateLimit({
    windowMs: 60 * 1000,   // 1 minute
    max:      100,
    message:  'Too many requests from this IP. Please slow down.',
});

// ── Upload limiter ──────────────────────────────────────────────────────────
const uploadLimiter = rateLimit({
    windowMs: 60 * 60 * 1000,  // 1 hour
    max:      20,
    message:  { success: false, message: 'Upload limit reached. Try again in an hour.' },
    statusCode: 429,
    keyGenerator: req => req.user?.id || req.ip,  // limit per user when authenticated
});

// ── User-based limiter for authenticated routes ─────────────────────────────
const userLimiter = rateLimit({
    windowMs: 60 * 1000,
    max:      200,
    keyGenerator: req => req.user?.id || req.ip,  // per user, not per IP
    message:     { success: false, message: 'Rate limit exceeded. Please slow down.' },
    skip:         req => process.env.NODE_ENV === 'test',  // skip in tests
});

// ── Applying limiters in routes ─────────────────────────────────────────────
// app.js
app.use('/api/v1', apiLimiter);             // apply to all API routes

// auth.routes.js
router.post('/login',           authLimiter,          authController.login);
router.post('/register',        registerLimiter,      authController.register);
router.post('/forgot-password', passwordResetLimiter, authController.forgotPassword);

// task.routes.js
router.use(userLimiter);                    // per-user limit on task routes
router.post('/:id/attachments', uploadLimiter, taskController.uploadAttachment);

Account-Level Lockout for Stronger Brute-Force Protection

// For even stronger brute-force protection: track failed attempts per account
// middleware/loginProtection.js

const User       = require('../models/user.model');
const asyncHandler = require('../utils/asyncHandler');

const MAX_ATTEMPTS   = 5;
const LOCKOUT_HOURS  = 2;

exports.checkLoginAttempts = asyncHandler(async (req, res, next) => {
    const { email } = req.body;
    if (!email) return next();

    const user = await User.findOne({ email }).select('+loginAttempts +lockUntil');
    if (!user) return next();   // don't leak whether account exists

    // Account is locked
    if (user.lockUntil && user.lockUntil > Date.now()) {
        const minutesLeft = Math.ceil((user.lockUntil - Date.now()) / 60000);
        return res.status(429).json({
            success: false,
            message: `Account locked. Try again in ${minutesLeft} minutes.`,
        });
    }

    // Reset expired lockout
    if (user.lockUntil && user.lockUntil <= Date.now()) {
        await User.updateOne({ _id: user._id }, {
            $set:   { loginAttempts: 0 },
            $unset: { lockUntil: '' },
        });
    }

    next();
});

exports.recordFailedLogin = asyncHandler(async (req, res, next) => {
    const { email } = req.body;
    if (!email) return next();

    const user = await User.findOne({ email }).select('+loginAttempts +lockUntil');
    if (!user) return next();

    const attempts = (user.loginAttempts || 0) + 1;
    const update   = { loginAttempts: attempts };

    if (attempts >= MAX_ATTEMPTS) {
        update.lockUntil = new Date(Date.now() + LOCKOUT_HOURS * 60 * 60 * 1000);
    }

    await User.updateOne({ _id: user._id }, { $set: update });
    next();
});
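
The lockout bookkeeping above can also be factored into a pure helper that computes the MongoDB update for either outcome, which makes the rules easy to unit-test. The function name and shape here are illustrative, not part of the lesson's middleware:

```javascript
// Compute the update document for a login attempt (hypothetical helper).
// Success clears the counters; failure increments loginAttempts and sets
// lockUntil once maxAttempts is reached (mirroring MAX_ATTEMPTS/LOCKOUT_HOURS).
function loginAttemptUpdate({ success, currentAttempts, now,
                              maxAttempts = 5,
                              lockoutMs = 2 * 60 * 60 * 1000 }) {
    if (success) {
        return { $set: { loginAttempts: 0 }, $unset: { lockUntil: '' } };
    }
    const attempts = currentAttempts + 1;
    const update   = { $set: { loginAttempts: attempts } };
    if (attempts >= maxAttempts) {
        update.$set.lockUntil = now + lockoutMs;   // lock the account
    }
    return update;
}
```

Resetting the counter on success matters: without it, a legitimate user slowly accumulates failures over months and eventually locks themselves out.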

// auth.routes.js
const loginProtection = require('../middleware/loginProtection');

router.post('/login',
    authLimiter,
    loginProtection.checkLoginAttempts,
    authController.login
);
// Note: call loginProtection.recordFailedLogin from the login controller's
// failure path, and reset loginAttempts to 0 after a successful login, so
// the lockout counter stays accurate.

How It Works

Step 1: Rate Limiter Counts Requests per Key per Window

The middleware maintains a counter for each key (by default, the client’s IP address). For every incoming request, it increments the counter and compares it to the max limit. If the counter exceeds the limit, the middleware returns a 429 response and does not call next(), so the route handler never runs. When the time window expires, the counter resets to zero.
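
This counting logic can be sketched as a tiny fixed-window counter. It is an illustrative model of the behaviour described above, not express-rate-limit's actual store implementation:

```javascript
// Fixed-window counter: one { count, windowStart } entry per key.
// When a window expires the counter resets; within a window, requests
// beyond `max` are rejected (the real middleware responds with 429 there).
function makeFixedWindowLimiter({ windowMs, max }) {
    const hits = new Map();   // key (e.g. client IP) -> { count, windowStart }
    return function isAllowed(key, now = Date.now()) {
        const entry = hits.get(key);
        if (!entry || now - entry.windowStart >= windowMs) {
            hits.set(key, { count: 1, windowStart: now });   // fresh window
            return true;
        }
        entry.count += 1;
        return entry.count <= max;
    };
}
```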

Step 2: RateLimit Headers Signal Limits to Clients

With standardHeaders: 'draft-7', the middleware adds a single combined RateLimit header (carrying limit, remaining, and reset fields) to every response; the 'draft-6' setting sends separate RateLimit-Limit, RateLimit-Remaining, and RateLimit-Reset headers instead. Well-behaved API clients (like Angular interceptors) can read these values and back off before hitting the limit rather than receiving a 429. The reset value tells the client exactly when to retry.
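
Assuming the draft-7 combined format (a single RateLimit header whose value looks like limit=100, remaining=2, reset=30, with reset in seconds), a hypothetical client-side helper could parse it and compute a back-off delay:

```javascript
// Parse a draft-7 style combined RateLimit header value into numeric fields.
// (Hypothetical client-side helper; header format per the IETF draft.)
function parseRateLimitHeader(value) {
    const fields = {};
    for (const part of value.split(',')) {
        const [k, v] = part.trim().split('=');
        fields[k] = Number(v);
    }
    return fields;
}

// How long the client should wait before its next request: nothing while
// requests remain, otherwise until the window resets (reset is in seconds).
function backoffMs(fields) {
    return fields.remaining > 0 ? 0 : fields.reset * 1000;
}
```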

Step 3: skipSuccessfulRequests Focuses on Failed Attempts

Setting skipSuccessfulRequests: true on the login limiter means only failed login attempts count toward the limit. A user who logs in successfully on the first try does not consume their limit at all. This is appropriate for login protection: you want to block attackers who are guessing passwords (which fail), not penalise users who log in normally.

Step 4: Redis Store Enables Multi-Instance Rate Limiting

With a Redis store, all Node.js instances share the same rate limit counters. A client routed to instance A and then instance B accumulates a combined count. Without shared storage, each instance has its own memory counter, so a client can send 100 requests to instance A and 100 to instance B, effectively bypassing a 100-request limit. Redis makes the limit effective across the entire cluster.
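
The effect of a shared store can be shown with a toy model: two limiter "instances" that write to one shared map behave as a single limiter, while instances with private maps each grant the full quota. (Illustrative only; a real shared store uses Redis INCR with a TTL, not a JavaScript Map.)

```javascript
// Each call to makeLimiter represents one Node.js instance. The store
// (a Map here, Redis in production) determines whether counts are
// shared across instances or kept per instance.
function makeLimiter(store, max) {
    return function allow(key) {
        const count = (store.get(key) || 0) + 1;
        store.set(key, count);
        return count <= max;
    };
}
```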

Step 5: Account-Level Lockout Complements IP Rate Limiting

IP-based rate limiting blocks requests from a specific IP, but attackers can rotate IPs (botnets, VPNs, proxies). Account-level lockout tracks failed attempts per username and locks the account regardless of which IP the attempts come from. The two approaches complement each other: IP rate limiting stops volume attacks from a single source; account lockout stops distributed attacks targeting one account.

Common Mistakes

Mistake 1: Not configuring trust proxy, so req.ip is always the load balancer IP

โŒ Wrong โ€” rate limit applies to load balancer, not actual clients:

// Without trust proxy, behind nginx: req.ip = '10.0.0.1' (nginx internal IP)
// All users share the same "IP", so one client can block everyone
const limiter = rateLimit({ windowMs: 60000, max: 100 });

✅ Correct: trust the proxy so the real client IP is read from X-Forwarded-For:

app.set('trust proxy', 1);  // trust first hop (nginx/load balancer)
// Now req.ip = actual client IP from X-Forwarded-For

Mistake 2: Using MemoryStore in a multi-instance deployment

โŒ Wrong โ€” each instance tracks counts separately:

const limiter = rateLimit({ windowMs: 60000, max: 100 });
// 3 instances → a client can actually make 300 requests (100 per instance)

✅ Correct: use a Redis store for shared counters:

const limiter = rateLimit({ windowMs: 60000, max: 100, store: redisStore });
// All instances share the counter, giving a true 100 req/min limit

Mistake 3: The same limit for all routes, blocking legitimate API users

โŒ Wrong โ€” same strict login limit applied to all endpoints:

app.use(rateLimit({ windowMs: 15 * 60 * 1000, max: 5 }));
// 5 requests per 15 min for EVERYTHING; users can barely browse the app

✅ Correct: differentiated limits per route category:

app.use('/api/v1', rateLimit({ windowMs: 60000, max: 100 }));   // general API
router.post('/login', rateLimit({ windowMs: 900000, max: 5 }));  // auth only

Quick Reference

| Route Category | Suggested Limit | Window |
|---|---|---|
| Login / Auth | 5 requests | 15 minutes |
| Registration | 3 requests | 1 hour |
| Password reset | 3 requests | 1 hour |
| File upload | 20 requests | 1 hour |
| General API | 100 requests | 1 minute |
| Public search | 30 requests | 1 minute |
| Admin API | 500 requests | 1 minute |
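
The table can also be expressed as a policy map feeding a limiter factory such as the createRateLimit helper earlier in this lesson (the object name and key names here are illustrative):

```javascript
// Suggested rate-limit policies from the table, as one config object.
const LIMIT_POLICIES = {
    auth:          { windowMs: 15 * 60 * 1000, max: 5   },   // login / auth
    registration:  { windowMs: 60 * 60 * 1000, max: 3   },
    passwordReset: { windowMs: 60 * 60 * 1000, max: 3   },
    upload:        { windowMs: 60 * 60 * 1000, max: 20  },   // file upload
    api:           { windowMs: 60 * 1000,      max: 100 },   // general API
    search:        { windowMs: 60 * 1000,      max: 30  },   // public search
    admin:         { windowMs: 60 * 1000,      max: 500 },
};
```

Keeping all the numbers in one place makes it easy to review the whole policy at a glance and to tune limits per environment.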

🧠 Test Yourself

Your Express API is deployed with 3 Node.js instances behind a load balancer. You set a rate limit of 100 requests per minute using the default MemoryStore. What is the actual maximum number of requests a single client can make per minute?