⚡ Expert Express.js Interview Questions
This lesson targets senior engineers and architects. Topics include the Node.js event loop, clustering, OpenTelemetry, Server-Sent Events, microservices patterns, Express internals, TypeScript with Express, performance profiling, circuit breakers, and Express 5. These questions reveal whether you understand Express deeply or just use it.
Questions & Answers
01 How does the Node.js event loop affect Express performance? ►
Internals Node.js uses a single-threaded event loop. Express runs on top of this event loop, meaning a single CPU-intensive or blocking synchronous operation in a route handler blocks the entire server from processing other requests.
The event loop phases (simplified):
- Timers → I/O callbacks → Poll (wait for I/O) → Check (setImmediate) → Close callbacks → repeat
// ❌ BLOCKING: halts the entire server for 2 seconds
app.get('/blocking', (req, res) => {
const start = Date.now();
while (Date.now() - start < 2000) {} // busy wait โ blocks event loop
res.send('done');
});
// ❌ CPU-intensive: synchronous bcrypt hash with high rounds in a route handler
app.post('/register', (req, res) => {
const hash = bcrypt.hashSync(req.body.password, 14); // blocks the event loop for the entire hash
res.status(201).json({ ok: true });
});
// ✅ Offload CPU-heavy work to worker threads
const { Worker } = require('worker_threads');
app.post('/process', (req, res, next) => {
const worker = new Worker('./workers/imageProcessor.js', { workerData: req.body });
worker.on('message', result => res.json(result));
worker.on('error', next);
});
// ✅ Use async I/O: file reads and DB queries are non-blocking
app.get('/data', async (req, res) => {
const data = await db.find(); // releases event loop while waiting for DB
res.json(data);
});
Rule: Keep synchronous code in route handlers as fast as possible. Offload CPU-intensive work to worker threads or a separate queue-based service.
02 What is Node.js clustering and how do you scale an Express app with it? ►
Scaling A Node.js process executes JavaScript on a single thread, effectively using one CPU core. The built-in cluster module spawns multiple worker processes (one per CPU core), each running the Express app, multiplying throughput on multi-core machines.
const cluster = require('cluster');
const os = require('os');
if (cluster.isPrimary) {
const numCPUs = os.cpus().length;
console.log(`Primary ${process.pid} starting ${numCPUs} workers`);
for (let i = 0; i < numCPUs; i++) cluster.fork();
cluster.on('exit', (worker, code) => {
console.log(`Worker ${worker.process.pid} died. Restarting...`);
cluster.fork(); // auto-restart on crash
});
} else {
// Each worker runs the full Express app
const app = require('./app');
app.listen(3000, () => console.log(`Worker ${process.pid} listening`));
}
// Production alternative: PM2 (handles clustering, restarts, monitoring)
// pm2 start app.js -i max # spin up one process per CPU
Clustering considerations:
- Workers don’t share memory, so in-memory state (sessions, rate-limit counters) must live in a shared store such as Redis
- Sticky sessions required if using WebSockets or session-based auth without Redis
- In containerised environments (Docker/Kubernetes), horizontal pod scaling is preferred over clustering: run one process per container and let Kubernetes manage replicas
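The first consideration above is the one that bites most often: a per-process counter silently under-counts once you have multiple workers. A sketch of a fixed-window rate limiter backed by a shared store, with the client injected so the exact command style is swappable (the `rl:` key prefix and ioredis-style `incr`/`expire` calls are assumptions):

```javascript
// Fixed-window rate limiter backed by a shared store, so every cluster
// worker sees the same counts. `redis` is any client exposing incr/expire.
function rateLimit({ redis, limit = 100, windowSec = 60 }) {
  return async (req, res, next) => {
    // One counter key per client per time window
    const window = Math.floor(Date.now() / 1000 / windowSec);
    const key = `rl:${req.ip}:${window}`;
    const count = await redis.incr(key);              // atomic across all workers
    if (count === 1) await redis.expire(key, windowSec); // expire with the window
    if (count > limit) return res.status(429).json({ error: 'Too many requests' });
    next();
  };
}

module.exports = rateLimit;
```

Because `INCR` is atomic on the Redis server, no worker coordination is needed; the same middleware works unchanged under cluster, PM2, or Kubernetes replicas.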
03 What are Server-Sent Events (SSE) and how do you implement them in Express? ►
Realtime Server-Sent Events (SSE) are a web standard (originally W3C, now part of the WHATWG HTML spec) for one-way, server-to-client real-time data streaming over HTTP. Unlike WebSockets, SSE is unidirectional and uses plain HTTP, making it simpler to implement, proxy-friendly, and auto-reconnecting.
// Express SSE endpoint
app.get('/events', (req, res) => {
// Required SSE headers
res.set({
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive',
'X-Accel-Buffering': 'no' // disable Nginx buffering
});
res.flushHeaders();
// Send a comment to keep the connection alive (every 30s)
const heartbeat = setInterval(() => res.write(': ping\n\n'), 30000);
// Send a named event with JSON data
const sendEvent = (event, data) => {
res.write(`event: ${event}\n`);
res.write(`data: ${JSON.stringify(data)}\n\n`);
};
// Example: push real-time notifications
const interval = setInterval(() => {
sendEvent('notification', { id: Date.now(), message: 'New order placed' });
}, 5000);
// Clean up when client disconnects
req.on('close', () => {
clearInterval(interval);
clearInterval(heartbeat);
console.log('SSE client disconnected');
});
});
// Client-side (browser)
// const es = new EventSource('/events');
// es.addEventListener('notification', e => console.log(JSON.parse(e.data)));
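The per-request interval above simulates events. In practice you usually keep a registry of connected clients and broadcast application events to all of them; a minimal sketch of that pattern (the `clients` registry and function names are invented for illustration):

```javascript
// Registry of live SSE connections; broadcast() writes one event frame to each.
const clients = new Set();

function addClient(res) {
  clients.add(res);
  // Drop the connection from the registry when the client disconnects
  res.on?.('close', () => clients.delete(res));
}

function broadcast(event, data) {
  // SSE wire format: "event: <name>\ndata: <json>\n\n"
  const frame = `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
  for (const res of clients) res.write(frame);
  return clients.size; // number of clients written to
}

module.exports = { addClient, broadcast };
```

Inside the `/events` handler you would call `addClient(res)` after `res.flushHeaders()`, and call `broadcast(...)` from wherever events originate (a message-bus subscriber, a DB change stream, etc.).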
04 What is OpenTelemetry and how do you instrument an Express app? ►
Observability OpenTelemetry (OTel) is the CNCF standard for distributed tracing, metrics, and logs. It provides vendor-neutral instrumentation: collect once, export to any backend (Jaeger, Datadog, Honeycomb, New Relic).
npm install @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node \
@opentelemetry/exporter-trace-otlp-grpc
// instrumentation.js: MUST be loaded BEFORE app.js
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-grpc');
const sdk = new NodeSDK({
traceExporter: new OTLPTraceExporter({ url: 'http://otel-collector:4317' }),
instrumentations: [getNodeAutoInstrumentations()] // auto-instruments Express, HTTP, MongoDB, Redis
});
sdk.start();
// Start with: node -r ./instrumentation.js server.js
// Or: NODE_OPTIONS="--require ./instrumentation.js" node server.js
// Manual spans for custom business logic
const { trace, SpanStatusCode } = require('@opentelemetry/api');
const tracer = trace.getTracer('order-service');
app.post('/orders', authenticate, async (req, res, next) => {
const span = tracer.startSpan('create-order');
try {
span.setAttributes({ 'user.id': req.user.id, 'order.items': req.body.items.length });
const order = await orderService.create(req.body, req.user.id);
span.setStatus({ code: SpanStatusCode.OK });
res.status(201).json(order);
} catch (err) {
span.recordException(err);
span.setStatus({ code: SpanStatusCode.ERROR });
next(err);
} finally { span.end(); }
});
05 What is Express 5 and what are its major changes from Express 4? ►
Express 5 Express 5 (released as stable in 2024 after many years in alpha) introduces several breaking changes and improvements over Express 4.
Key changes in Express 5:
- Native async error handling: rejected Promises and thrown errors in async route handlers are automatically forwarded to the error handler. No more try/catch + next(err) boilerplate.
- Path matching changes: the path-to-regexp library was updated, so ?, +, and * behave differently in route patterns.
- req.query is now a getter: the query string is parsed on access, not upfront.
- Removed deprecated APIs: res.json(obj, status) removed (use res.status(n).json(obj)); app.param(callback) removed; req.param() removed.
- res.redirect('back') removed: use res.redirect(req.get('Referer') || '/').
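The path-matching change is the one most likely to break an upgrade. Per the Express 5 migration guide (verify against your installed version, since the exact syntax depends on the bundled path-to-regexp), the common pattern rewrites look like this:

```javascript
// Express 4 pattern → Express 5 equivalent (from the v5 migration guide;
// check these against the path-to-regexp version your install ships with).
const patternMigrations = [
  { v4: '/users/:id?', v5: '/users{/:id}' },  // optional segments use braces
  { v4: '/files/*',    v5: '/files/*splat' }  // wildcards must now be named
];

module.exports = patternMigrations;
```

Unnamed wildcards and regex-style modifiers in route strings throw at startup in Express 5, so these surface immediately rather than failing silently at request time.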
// Express 4: async errors NOT caught automatically
app.get('/users', async (req, res, next) => {
try {
const users = await User.find();
res.json(users);
} catch (err) { next(err); } // required in Express 4
});
// Express 5: async errors caught AUTOMATICALLY
app.get('/users', async (req, res) => {
const users = await User.find(); // if this throws, Express 5 catches it
res.json(users);
// No try/catch or next(err) needed
});
Migrate with: npm install express@5. Most Express 4 apps require only minor changes.
06 How do you build Express APIs with TypeScript? ►
TypeScript TypeScript adds type safety to Express, catching mistyped property names, missing middleware, and incorrect response shapes at compile time.
npm install express @types/express typescript ts-node-dev
// Typed route handler
import { Request, Response, NextFunction, RequestHandler } from 'express';
interface CreateUserBody { name: string; email: string; age?: number; }
interface UserParams { id: string; }
// Typed request generics: Request<Params, ResBody, ReqBody, Query>
const createUser: RequestHandler<{}, User, CreateUserBody> = async (req, res, next) => {
const { name, email } = req.body; // TypeScript knows these are strings
try {
const user = await User.create({ name, email });
res.status(201).json(user);
} catch (err) { next(err); }
};
// Augment the Request type to include custom properties
declare global {
namespace Express {
interface Request {
user?: { id: string; role: 'admin' | 'user' };
requestId?: string;
}
}
}
// Now req.user is typed everywhere after authenticate middleware sets it
const authenticate: RequestHandler = (req, res, next) => {
const payload = jwt.verify(getToken(req), secret) as JwtPayload;
req.user = { id: payload.sub as string, role: payload.role };
next();
};
app.get('/profile', authenticate, (req, res) => {
res.json({ id: req.user!.id }); // TypeScript knows req.user is set
});
07 What is the circuit breaker pattern and how do you implement it in Express? ►
Resilience The circuit breaker pattern prevents cascading failures in distributed systems. When a downstream service is failing, the circuit “opens” and immediately rejects calls instead of waiting for timeouts. This protects the calling service and gives the downstream service time to recover.
npm install opossum
const CircuitBreaker = require('opossum');
// Wrap a potentially failing function
async function callPaymentService(payload) {
const response = await fetch('http://payment-service/charge', {
method: 'POST',
body: JSON.stringify(payload),
signal: AbortSignal.timeout(3000) // 3s timeout
});
if (!response.ok) throw new Error(`Payment service error: ${response.status}`);
return response.json();
}
const breaker = new CircuitBreaker(callPaymentService, {
timeout: 3000, // fail if function takes longer than 3s
errorThresholdPercentage: 50, // open circuit if 50% of calls fail
resetTimeout: 10000, // after 10s, try again (half-open state)
volumeThreshold: 5 // min 5 calls before tripping
});
// Fallback when circuit is open
breaker.fallback(() => ({ success: false, error: 'Payment service unavailable' }));
// Circuit state events
breaker.on('open', () => logger.warn('Payment circuit OPEN'));
breaker.on('halfOpen', () => logger.info('Payment circuit HALF-OPEN: testing'));
breaker.on('close', () => logger.info('Payment circuit CLOSED: recovered'));
// Use in a route
app.post('/checkout', authenticate, async (req, res, next) => {
try {
const result = await breaker.fire(req.body.payment);
res.json(result);
} catch (err) { next(err); }
});
08 How do you implement idempotency for POST requests in Express? ►
API Design Idempotency ensures that sending the same request multiple times produces the same result as sending it once, preventing duplicate orders, payments, or records when clients retry due to network failures.
// Client sends a unique Idempotency-Key header with each POST
// If the key is seen again, return the original response
const idempotencyMiddleware = async (req, res, next) => {
if (req.method !== 'POST') return next();
const key = req.headers['idempotency-key'];
if (!key) return next(); // no key: proceed normally
// Check Redis for an existing response
const cached = await redis.get(`idempotency:${key}`);
if (cached) {
const { status, body } = JSON.parse(cached);
return res.status(status).json(body); // return cached response
}
// Intercept the response to cache it
const originalJson = res.json.bind(res);
res.json = (body) => {
// Cache for 24 hours only if the response was successful
if (res.statusCode < 400) {
redis.setex(`idempotency:${key}`, 86400, JSON.stringify({ status: res.statusCode, body }));
}
return originalJson(body);
};
next();
};
app.use('/api/payments', idempotencyMiddleware);
app.use('/api/orders', idempotencyMiddleware);
// Client usage:
// POST /api/payments
// Idempotency-Key: a4e3f9b2-5c1d-4a8e-b3f7-2d9e1c4a7b8f
// (retry this exact request safely if network fails)
09 What is the Saga pattern and how does it apply to Express microservices? ►
Microservices The Saga pattern manages distributed transactions across multiple microservices where ACID transactions are not available. Each service completes its local transaction and publishes an event. If any step fails, compensating transactions are triggered to undo the completed steps.
Choreography-based Saga (event-driven, no central coordinator):
// Order service: step 1
app.post('/orders', async (req, res) => {
const order = await Order.create({ ...req.body, status: 'pending' });
await messageBus.publish('order.created', { orderId: order._id, userId: req.user.id, items: order.items });
res.status(202).json({ orderId: order._id, status: 'processing' });
});
// Inventory service: step 2, listens for order.created
messageBus.subscribe('order.created', async ({ orderId, items }) => {
try {
await Inventory.reserve(items);
await messageBus.publish('inventory.reserved', { orderId });
} catch (err) {
// Compensating transaction: cancel the order
await messageBus.publish('inventory.failed', { orderId, reason: err.message });
}
});
// Order service: handles the failure with a compensating transaction
messageBus.subscribe('inventory.failed', async ({ orderId }) => {
await Order.updateOne({ _id: orderId }, { $set: { status: 'cancelled' } });
await messageBus.publish('order.cancelled', { orderId });
});
Sagas trade atomicity for availability. Each step must be idempotent and each service must handle compensating transactions. Use a message broker (RabbitMQ, Kafka, Redis Streams) for reliable event delivery.
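"Each step must be idempotent" usually means tracking processed event IDs, since brokers deliver at-least-once. A sketch with the dedup store injected (the `seen` interface and `eventId` field are assumptions about your event envelope):

```javascript
// Wraps an event handler so redelivered events (at-least-once brokers) are
// processed once per eventId. `seen` is any store exposing has/add.
function idempotentConsumer(seen, handler) {
  return async (event) => {
    if (await seen.has(event.eventId)) return 'skipped'; // duplicate delivery
    const result = await handler(event);
    await seen.add(event.eventId); // record only after the handler succeeds
    return result;
  };
}

module.exports = idempotentConsumer;
```

Recording after success means a crash between the handler and the `add` causes one reprocessing, which is why in production the dedup record and the handler's own write should share a transaction.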
10 How do you profile and debug performance bottlenecks in an Express application? ►
Performance
1. Identify slow routes with response time logging:
app.use((req, res, next) => {
const start = process.hrtime.bigint();
res.on('finish', () => {
const ms = Number(process.hrtime.bigint() - start) / 1e6;
if (ms > 500) logger.warn({ method: req.method, url: req.url, ms }, 'Slow request');
});
next();
});
2. CPU profiling with Node’s built-in profiler:
# Start with the V8 profiler
node --prof server.js
# Run a load test:
autocannon -c 100 -d 30 http://localhost:3000/api/endpoint
# Process the profile
node --prof-process isolate-*.log > profile.txt
3. Heap memory profiling:
node --inspect server.js
# Open chrome://inspect in Chrome
# Take heap snapshots before/after load to find memory leaks
# Look for growing object counts between snapshots
4. Load testing with autocannon:
npx autocannon -c 100 -d 30 -p 10 http://localhost:3000/api/users
# -c 100 concurrent connections, -d 30s duration, -p 10 pipelining
# Reports: req/sec, latency percentiles (p50, p97.5, p99), errors
5. Common culprits: missing database indexes, N+1 query patterns, synchronous file reads in hot paths, missing caching on expensive computations, large JSON payloads without compression.
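The N+1 culprit from the list above, shown concretely. The `Order`/`User` model shapes are hypothetical; the point is the query count:

```javascript
// N+1: one query for the orders, then one user query PER order.
async function enrichOrdersNPlusOne(Order, User) {
  const orders = await Order.find();
  for (const order of orders) {
    order.user = await User.findById(order.userId); // ❌ one query per order
  }
  return orders;
}

// Fixed: batch the lookups into a single $in query (2 queries total).
async function enrichOrdersBatched(Order, User) {
  const orders = await Order.find();
  const ids = [...new Set(orders.map(o => o.userId))];
  const users = await User.find({ _id: { $in: ids } }); // ✅ one batched query
  const byId = new Map(users.map(u => [u._id, u]));
  for (const order of orders) order.user = byId.get(order.userId);
  return orders;
}

module.exports = { enrichOrdersNPlusOne, enrichOrdersBatched };
```

The slow-request logger from step 1 usually surfaces these routes; the fix turns O(N) round-trips into a constant two.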
11 What is the BFF (Backend For Frontend) pattern and how do you implement it with Express? ►
Architecture The Backend For Frontend pattern creates a dedicated backend service for each client type (web, mobile, TV). Each BFF aggregates and transforms data from multiple microservices into the exact shape each client needs, reducing over-fetching and under-fetching.
// Web BFF: aggregates data from 3 microservices for the dashboard page
app.get('/dashboard', authenticate, async (req, res, next) => {
try {
// Parallel requests to downstream services
const [user, orders, recommendations] = await Promise.all([
fetch(`http://user-service/users/${req.user.id}`).then(r => r.json()),
fetch(`http://order-service/orders?userId=${req.user.id}&limit=5`).then(r => r.json()),
fetch(`http://recommendation-service/recs/${req.user.id}`).then(r => r.json())
]);
// Shape the response specifically for the web dashboard; no unused fields
res.json({
profile: { name: user.name, avatar: user.avatarUrl },
recentOrders: orders.items.map(o => ({ id: o._id, status: o.status, total: o.total })),
forYou: recommendations.slice(0, 6).map(r => ({ id: r.productId, title: r.title }))
});
} catch (err) { next(err); }
});
// Mobile BFF: smaller payload, different fields, pagination
app.get('/mobile/dashboard', authenticate, async (req, res, next) => {
// Returns a leaner payload optimised for mobile bandwidth
});
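One caveat with `Promise.all` in the handler above: a single downstream outage fails the entire dashboard. `Promise.allSettled` lets the BFF degrade per section instead. A sketch with the section fetchers injected (the function and field names are invented):

```javascript
// Aggregates independent dashboard sections; a failed section degrades to
// null (and is listed in `errors`) instead of failing the whole response.
async function aggregateDashboard(fetchers) {
  const names = Object.keys(fetchers);
  const settled = await Promise.allSettled(names.map(n => fetchers[n]()));
  const out = { errors: [] };
  settled.forEach((result, i) => {
    if (result.status === 'fulfilled') {
      out[names[i]] = result.value;
    } else {
      out[names[i]] = null;          // degrade this section only
      out.errors.push(names[i]);     // let the client render a fallback
    }
  });
  return out;
}

module.exports = aggregateDashboard;
```

Whether partial degradation is acceptable is a product decision; payments-critical aggregations often should fail whole, while a recommendations panel rarely should take the page down with it.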
12 How do you implement request deduplication in Express? ►
Patterns Request deduplication prevents the same expensive request from being processed multiple times concurrently. When N clients request the same resource simultaneously, only one database/API call is made and the result is shared with all waiters.
// In-flight request cache (in-memory: works for a single process)
const inflight = new Map();
function deduplicateMiddleware(keyFn) {
return async (req, res, next) => {
const key = keyFn(req); // e.g., req.url, or `user:${req.params.id}`
if (inflight.has(key)) {
// Another request for the same key is in flight: wait for its result
try {
const cached = await inflight.get(key);
return res.json(cached);
} catch (err) { return next(err); }
}
// First request: create a promise and store it
let resolve, reject;
const promise = new Promise((res, rej) => { resolve = res; reject = rej; });
inflight.set(key, promise);
// Intercept the response
const originalJson = res.json.bind(res);
res.json = (body) => {
resolve(body); // wake up all waiters
inflight.delete(key);
return originalJson(body);
};
// Clean up if the response ends without res.json (error path, client abort)
res.on('close', () => { if (inflight.get(key) === promise) { inflight.delete(key); reject(new Error('request ended without a JSON response')); } });
next();
};
}
app.get('/products/:id', deduplicateMiddleware(req => `product:${req.params.id}`), getProduct);
13 What are Express internals, and how does routing work under the hood? ►
Internals Understanding Express internals helps debug complex routing issues and build better frameworks on top of it.
Core Express objects:
- Application (app): a function (!) that is itself a valid http.RequestListener. app.listen() is shorthand for http.createServer(app).listen().
- Router: the core routing object. app has its own main Router instance; express.Router() creates a sub-Router.
- Layer: internal wrapper for each middleware/route. Stores the path pattern (compiled to a regex), the handler function, and matching metadata.
- Route: a special Layer that contains a stack of handlers for a specific path, one per HTTP method.
// What app.get('/users', handler) does internally:
// 1. Creates a new Route for path '/users'
// 2. Wraps Route in a Layer with regex = /^\/users\/?$/i
// 3. Pushes Layer onto router.stack[]
// 4. When a request arrives, router processes stack top to bottom
// 5. For each Layer: test regex against req.path
// 6. If match: call the handler(s) via Layer.handle()
// Inspect Express's internal stack (Express 4; Express 5 exposes app.router)
app._router.stack.forEach(layer => {
if (layer.route) {
const methods = Object.keys(layer.route.methods).join(', ').toUpperCase();
console.log(`${methods} ${layer.route.path}`);
} else {
console.log(`Middleware: ${layer.name}`);
}
});
14 How do you implement multi-tenancy in an Express API? ►
Architecture Multi-tenancy serves multiple customers (tenants) from a shared codebase and infrastructure, with strict data isolation between tenants.
Tenant identification strategies:
// 1. Subdomain: acme.myapi.com, globex.myapi.com
const tenantMiddleware = async (req, res, next) => {
const host = req.hostname; // "acme.myapi.com"
const slug = host.split('.')[0]; // "acme"
const tenant = await Tenant.findOne({ slug });
if (!tenant) return res.status(404).json({ error: 'Tenant not found' });
req.tenant = tenant;
next();
};
// 2. Path prefix: /api/acme/users, /api/globex/users
app.use('/api/:tenantSlug', async (req, res, next) => {
req.tenant = await Tenant.findOne({ slug: req.params.tenantSlug });
if (!req.tenant) return res.status(404).json({ error: 'Not found' });
next();
});
// 3. JWT claim: tenant ID embedded in the token
const authenticate = (req, res, next) => {
const payload = jwt.verify(getToken(req), process.env.JWT_SECRET);
req.user = payload;
req.tenant = { id: payload.tenantId }; // from token
next();
};
// Data access: always filter by tenantId
const getUsersForTenant = (tenantId) =>
User.find({ tenantId }); // shared collection, tenant-filtered
// Or: separate database per tenant (highest isolation)
const getDb = (tenantId) =>
mongoose.connection.useDb(`tenant_${tenantId}`);
15 How do you implement content negotiation in Express? ►
API Design Content negotiation allows a single endpoint to respond in different formats (JSON, XML, CSV) based on the client’s Accept header, enabling the same API to serve web clients, mobile apps, and data pipelines.
const js2xmlparser = require('js2xmlparser');
const { stringify: csvStringify } = require('csv-stringify/sync');
app.get('/api/reports', authenticate, async (req, res, next) => {
try {
const data = await Report.find({ userId: req.user.id }).lean();
// res.format() selects handler based on Accept header
res.format({
'application/json': () => {
res.json(data);
},
'application/xml': () => {
res.set('Content-Type', 'application/xml');
res.send(js2xmlparser.parse('reports', data));
},
'text/csv': () => {
const csv = csvStringify(data, { header: true });
res.set({ 'Content-Type': 'text/csv', 'Content-Disposition': 'attachment; filename="reports.csv"' });
res.send(csv);
},
default: () => {
res.status(406).json({ error: 'Not Acceptable. Supported: application/json, application/xml, text/csv' });
}
});
} catch (err) { next(err); }
});
// Client examples:
// curl -H "Accept: application/json" /api/reports
// curl -H "Accept: text/csv" /api/reports
// curl -H "Accept: application/xml" /api/reports
16 What is tRPC and how does it compare to REST with Express? ►
API Design tRPC (TypeScript Remote Procedure Call) allows you to build fully type-safe APIs without code generation or schemas: the server’s TypeScript types are directly inferred by the client. It integrates with Express via an adapter.
npm install @trpc/server @trpc/client zod
// tRPC router (server)
import { initTRPC } from '@trpc/server';
import { z } from 'zod';
const t = initTRPC.create();
export const appRouter = t.router({
getUser: t.procedure
.input(z.object({ id: z.string() }))
.query(async ({ input }) => {
return User.findById(input.id); // return type is inferred by TypeScript
}),
createUser: t.procedure
.input(z.object({ name: z.string(), email: z.string().email() }))
.mutation(async ({ input }) => {
return User.create(input);
}),
});
// Mount on Express
import * as trpcExpress from '@trpc/server/adapters/express';
app.use('/trpc', trpcExpress.createExpressMiddleware({ router: appRouter }));
// Client: types are AUTOMATICALLY inferred from the server router
const user = await trpc.getUser.query({ id: '123' });
// TypeScript knows the exact shape of 'user' without any extra work
REST vs tRPC: REST is the standard for public APIs (language-agnostic, mature documentation tooling). tRPC is ideal for internal TypeScript monorepos where client and server share a codebase, eliminating the entire API contract definition layer.
17 How do you handle distributed tracing across microservices in Express? ►
Observability Distributed tracing follows a request as it flows through multiple microservices. Each service propagates trace context headers so the entire journey can be visualised in a tracing tool (Jaeger, Zipkin, Datadog).
// Propagate trace context manually (W3C Trace Context standard)
// Request flow: Service A → Service B → Service C
const { randomUUID } = require('crypto');
const generateTraceId = () => randomUUID(); // simplified stand-in for a real traceparent value
// Service A: generate a trace ID and pass it forward
app.use((req, res, next) => {
req.traceId = req.headers['traceparent'] || generateTraceId();
res.set('X-Trace-Id', req.traceId);
next();
});
// When calling downstream services, propagate the context
async function callServiceB(path, body, req) {
return fetch(`http://service-b${path}`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'traceparent': req.traceId, // W3C Trace Context
'X-Request-Id': req.id, // Request correlation ID
'Authorization': req.headers.authorization // forward auth token
},
body: JSON.stringify(body)
});
}
// With OpenTelemetry (recommended), context propagation is automatic:
// the auto-instrumentation injects and reads traceparent headers for you.
// Just initialise the OTel SDK in each service.
Use correlation IDs (simpler) when you need to link logs across services. Use OpenTelemetry (full solution) when you need visualised call graphs, latency breakdowns, and error attribution across your service mesh.
18 What are the differences between REST, GraphQL, and gRPC for Express-based APIs? ►
API Design
- REST (express + JSON): resource-based, HTTP verbs, human-readable JSON. Best for: public APIs, simple CRUD, teams unfamiliar with the alternatives. Disadvantages: over-fetching (too many fields), under-fetching (multiple requests needed), no strict contract.
- GraphQL (@apollo/server with Express): the client specifies exactly what data it needs in a single request. Best for: complex frontends with varied data needs, mobile apps (bandwidth-sensitive), rapid iteration without API versioning. Disadvantages: complex caching, N+1 query problem (mitigate with DataLoader), learning curve.
- gRPC (@grpc/grpc-js, typically run alongside Express): binary Protocol Buffers, strongly typed contracts, HTTP/2, bi-directional streaming. Best for: internal microservice communication where performance is critical, strict API contracts across teams, polyglot environments. Disadvantages: not browser-native (requires a gRPC-Web proxy), less human-readable than JSON.
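The DataLoader mitigation mentioned above works by collecting every `.load(id)` call made in the same tick and resolving them with one batched query. A micro version of the idea, hand-rolled here to show the mechanism (the real library adds per-request caching, error handling, and key ordering guarantees):

```javascript
// Collects keys requested within one tick and resolves them with a single
// batch call: the core trick behind DataLoader.
function makeLoader(batchFn) {
  let queue = [];
  return {
    load(key) {
      return new Promise((resolve, reject) => {
        queue.push({ key, resolve, reject });
        // First key in the tick schedules the flush; later keys piggyback
        if (queue.length === 1) {
          process.nextTick(async () => {
            const batch = queue;
            queue = [];
            try {
              const results = await batchFn(batch.map(b => b.key));
              batch.forEach((b, i) => b.resolve(results[i]));
            } catch (err) {
              batch.forEach(b => b.reject(err));
            }
          });
        }
      });
    }
  };
}

module.exports = makeLoader;
```

In a GraphQL resolver tree, every `user` field resolver calling `loader.load(id)` during one request collapses into a single `WHERE id IN (...)` query.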
// GraphQL with Express
const { ApolloServer } = require('@apollo/server');
const { expressMiddleware } = require('@apollo/server/express4');
const server = new ApolloServer({ typeDefs, resolvers });
await server.start();
app.use('/graphql', cors(), express.json(), expressMiddleware(server, {
context: async ({ req }) => ({ user: await getUser(req.headers.authorization) })
}));
19 How do you implement an outbox pattern for reliable event publishing in Express? ►
Patterns The Transactional Outbox pattern ensures that database changes and message publishing succeed or fail together, preventing lost events when the message broker is unavailable or the service crashes between the DB write and the publish.
// Without outbox (NOT atomic): the DB write and the message publish can diverge
app.post('/orders', async (req, res) => {
const order = await Order.create(req.body); // DB write
await messageBus.publish('order.created', order); // ❌ what if this fails?
res.status(201).json(order);
});
// WITH outbox (atomic): write the event to the DB in the same transaction
app.post('/orders', async (req, res, next) => {
const session = await mongoose.startSession();
session.startTransaction();
try {
const order = await Order.create([req.body], { session });
// Write the event to the outbox collection in the SAME transaction
await OutboxEvent.create([{
type: 'order.created',
payload: order[0].toObject(),
status: 'pending',
createdAt: new Date()
}], { session });
await session.commitTransaction();
res.status(201).json(order[0]);
} catch (err) {
await session.abortTransaction();
next(err);
} finally { session.endSession(); }
});
// Outbox poller: a separate process publishes pending events to the message broker
// Uses Change Streams or a polling loop to detect new outbox entries
// Marks them as 'published' after successful delivery
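The poller described in those comments could be sketched as follows, with its dependencies injected; the `findPending`/`markPublished` method names are hypothetical stand-ins for your outbox collection's queries:

```javascript
// Drains the outbox: publishes pending events in order and marks them
// published. A crash between publish and markPublished means the event is
// re-sent on the next poll, so delivery is at-least-once and consumers
// must be idempotent.
async function drainOutbox({ outbox, publish }) {
  const pending = await outbox.findPending(); // oldest first
  let delivered = 0;
  for (const event of pending) {
    try {
      await publish(event.type, event.payload);
      await outbox.markPublished(event.id);
      delivered++;
    } catch (err) {
      break; // broker unavailable: stop and retry on the next poll
    }
  }
  return delivered;
}

module.exports = drainOutbox;
```

Run this on an interval (or trigger it from a Change Stream) in a process separate from the request path, so broker outages never slow down HTTP responses.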
20 How do you architect an Express application for zero-downtime deployments? ►
Ops Zero-downtime deployment means users see no errors or interruptions when a new version of the application is deployed.
Key techniques:
- Graceful shutdown: stop accepting new connections, complete in-flight requests, then exit. (Covered in the Advanced lesson; an essential foundation.)
- Rolling deployments: Kubernetes gradually replaces old pods with new ones, so at no point are all instances down simultaneously. Requires readiness probes.
- Health endpoints: required for load balancers and Kubernetes to route traffic correctly.
- Feature flags: deploy code dark (disabled), then enable it for specific users or percentages. Allows instant rollback without redeployment.
- Database migration compatibility: deploy the new schema in phases. Add columns without removing old ones, deploy code that works with both schemas, then remove the old columns in a second deployment.
// Health check endpoints (required for Kubernetes)
app.get('/health/live', (req, res) => {
res.json({ status: 'ok', uptime: process.uptime() });
});
app.get('/health/ready', async (req, res) => {
try {
await mongoose.connection.db.admin().ping(); // verify DB connection
await redis.ping(); // verify Redis
res.json({ status: 'ready' });
} catch (err) {
res.status(503).json({ status: 'not ready', error: err.message });
}
});
// Kubernetes liveness probe: GET /health/live (restart if fails)
// Kubernetes readiness probe: GET /health/ready (stop sending traffic if fails)
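The graceful shutdown from the first bullet pairs with these probes: Kubernetes sends SIGTERM, stops routing traffic, and the process drains. A minimal sketch (the 10-second deadline is an assumption; align it with your pod's terminationGracePeriodSeconds):

```javascript
// Graceful shutdown: on SIGTERM/SIGINT, stop accepting new connections,
// let in-flight requests finish, then exit, with a hard deadline backstop.
function setupGracefulShutdown(server, { timeoutMs = 10000 } = {}) {
  const shutdown = () => {
    server.close(() => process.exit(0)); // waits for in-flight requests
    // Backstop: force-exit if draining takes too long; unref() so this
    // timer never keeps an otherwise-finished process alive.
    setTimeout(() => process.exit(1), timeoutMs).unref();
  };
  process.once('SIGTERM', shutdown);
  process.once('SIGINT', shutdown);
  return shutdown; // returned for testing / manual invocation
}

module.exports = setupGracefulShutdown;
```

Call it once with the value returned by `app.listen(...)`. Together with a readiness probe that starts failing during drain, in-flight users never see a dropped connection during a rollout.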
21 What is the strangler fig pattern for migrating from a monolith to microservices? ►
Architecture The Strangler Fig pattern gradually migrates a monolith to microservices by routing specific endpoints to new services while the monolith still handles everything else. The new services “strangle” the monolith over time until it can be retired.
// Express API gateway: routes migrated endpoints to new services, proxies the rest to the monolith
const { createProxyMiddleware } = require('http-proxy-middleware');
// New microservices: fully migrated endpoints
app.use('/api/v2/payments', authenticate, paymentServiceProxy);
app.use('/api/v2/inventory', authenticate, inventoryServiceProxy);
// Partially migrated: the new service handles writes, the monolith handles reads
app.post('/api/users', authenticate, createProxyMiddleware({ target: 'http://user-service:3001' }));
app.get('/api/users', authenticate, createProxyMiddleware({ target: 'http://legacy-monolith:8080' }));
// Everything else: still on the monolith
app.use('/api', createProxyMiddleware({
target: 'http://legacy-monolith:8080',
changeOrigin: true,
on: {
error: (err, req, res) => res.status(502).json({ error: 'Gateway error' })
}
}));
Migration order (safest first):
- Extract stateless, well-isolated services first (auth, email, notifications)
- Extract read-heavy services (product catalogue, search) before write-heavy ones
- Extract services with the most independent teams last (core order management)
- Keep the database split decoupled from the service split: migrate code first, then extract the database schema
📝 Knowledge Check
These questions mirror real senior-level Express.js architecture and internals interview scenarios.