V8 Internals — Hidden Classes, Inline Caches, and GC Optimisation

The V8 JavaScript engine compiles JavaScript to machine code using an optimising JIT compiler. The quality of that machine code — and how fast your application runs — depends on patterns in your code that V8 can and cannot optimise. Understanding hidden classes, inline caches, deoptimisation triggers, and the garbage collector’s behaviour allows you to write JavaScript that V8 compiles to near-native speed rather than interpreted fallback paths. This is not micro-optimisation folklore — these patterns have measurable impact on hot code paths in production Node.js servers.

V8 Optimisation Concepts

| Concept | What V8 Does | What Breaks It |
|---|---|---|
| Hidden classes (shapes) | Objects with the same property assignment order share a hidden class — property access is an offset lookup, not a hash map lookup | Adding properties in different orders, deleting properties, mixing types |
| Inline caches (ICs) | Property access sites are patched with the offset from the hidden class — subsequent accesses skip the lookup | Polymorphic call sites (more than ~4 shapes hitting the same IC) |
| TurboFan compilation | Hot functions are compiled to optimised machine code | Deoptimisation triggers: the arguments object and try/catch in hot paths (older V8), changing property types at a call site |
| Orinoco GC | Generational, incremental, concurrent garbage collection | Allocating many short-lived objects in tight loops, large retained heaps |
| Array packing | Packed arrays (all same element type, no holes) use fast element handling | Holes in arrays ([1,,3]), mixing element types, out-of-bounds writes |
Note: Use node --trace-deopt app.js to log every deoptimisation event to the console, and node --trace-ic app.js to log inline cache state changes (monomorphic → polymorphic → megamorphic). In production, use the V8 sampling profiler (node --prof) and process the output with node --prof-process to find functions spending time in “bailout” (deoptimised) code. These are the highest-value targets for refactoring.
Tip: Initialise all object properties in the constructor in the same order, and set them to their final types immediately. A Task object that sometimes has dueDate: null and sometimes dueDate: Date creates two different hidden classes, preventing the inline cache from caching the property offset. Use dueDate: null always in the constructor and assign a Date later — V8 tracks the transition and can still optimise the monomorphic access if all objects follow the same transition path.
Warning: The delete operator degrades objects from fast properties to “slow” (dictionary mode). Once an object enters dictionary mode, every property access is a hash map lookup rather than a fast offset — this is 10–100x slower. Instead of deleting properties, set them to null or undefined: obj.field = null preserves the hidden class while communicating absence. Only use delete in non-hot code paths where the performance cost is acceptable.

Complete V8 Optimisation Examples

// ── Hidden classes — always initialise in the same order ──────────────────

// ❌ Creates two different hidden classes — polymorphic, harder to optimise
function createTaskBad(title, withDueDate) {
    const task = { title };
    if (withDueDate) {
        task.dueDate = new Date();  // added conditionally — different shape!
    }
    task.status = 'pending';
    return task;
}

// ✅ All tasks have the same hidden class — monomorphic, fast
function createTask(title, dueDate = null) {
    return {
        title,        // always present
        dueDate,      // always present (null if absent)
        status: 'pending',
        priority: 'medium',
    };
}

// ── Avoid delete — use null instead ──────────────────────────────────────
const task = createTask('Write tests');

// ❌ Degrades to dictionary mode
delete task.dueDate;

// ✅ Preserves hidden class
task.dueDate = null;

// ── Typed arrays for numeric data — much faster than JS arrays ────────────
// For operations on large numeric datasets (coordinates, timestamps, scores):

// ❌ Regular JS array — each element is a heap-allocated boxed double
const timestamps = new Array(100000).fill(Date.now());

// ✅ Float64Array — dense, unboxed, CPU-cacheable
const timestampsTyped = new Float64Array(100000);
for (let i = 0; i < 100000; i++) timestampsTyped[i] = Date.now();

// ✅ Int32Array for counters
const counters = new Int32Array(1024);  // 1024 int32 values in contiguous memory

// ── Avoiding deoptimisation in hot paths ─────────────────────────────────

// ❌ 'arguments' object prevents optimisation in older V8
function sumBad() {
    let total = 0;
    for (let i = 0; i < arguments.length; i++) total += arguments[i];
    return total;
}

// ✅ Rest parameters — optimisable
function sumGood(...nums) {
    return nums.reduce((a, b) => a + b, 0);
}

// ❌ try/catch in a hot function prevented optimisation under V8's older
// Crankshaft compiler (modern TurboFan can optimise try/catch, but isolating
// error handling still keeps the hot path simple)
function parseJSONHot(str) {
    try {
        return JSON.parse(str);  // in older V8, this blocked optimisation of the whole function
    } catch {
        return null;
    }
}

// ✅ Isolate try/catch to a separate helper function
function safeParseJSON(str) {
    return tryParseJSON(str);  // hot caller stays optimisable
}
function tryParseJSON(str) {   // only this function is deoptimised — cold path
    try { return JSON.parse(str); } catch { return null; }
}

// ── Object pooling to reduce GC pressure ─────────────────────────────────
// For high-frequency short-lived objects (e.g. per-request context objects)
class ObjectPool {
    constructor(factory, resetFn, size = 100) {
        this._factory = factory;
        this._reset   = resetFn;
        this._pool    = Array.from({ length: size }, factory);
    }

    acquire() {
        return this._pool.pop() ?? this._factory();
    }

    release(obj) {
        this._reset(obj);
        this._pool.push(obj);
    }
}

const bufferPool = new ObjectPool(
    () => Buffer.allocUnsafe(1024),
    buf => buf.fill(0),
    50
);

// Use pooled buffer
const buf = bufferPool.acquire();
try {
    // use buf...
} finally {
    bufferPool.release(buf);
}

// ── Monomorphic call sites ─────────────────────────────────────────────────
// ❌ Polymorphic — different shapes passed to the same function (beyond ~4 shapes → megamorphic)
function getTitle(item) { return item.title; }
getTitle({ title: 'a', status: 'pending' });        // shape 1
getTitle({ title: 'b', user: '123', tags: [] });    // shape 2
getTitle({ title: 'c', priority: 'high' });          // shape 3
// With several shapes the IC is polymorphic — slower; beyond ~4 shapes it
// goes megamorphic and every call falls back to a generic property lookup

// ✅ Monomorphic — always called with same shape
class TaskView {
    constructor(title, status, priority) {
        this.title    = title;
        this.status   = status;
        this.priority = priority;
    }
}
function getTaskTitle(task) { return task.title; }
// All TaskView instances have same hidden class — IC always hits

How It Works

Step 1 — Hidden Classes Turn Property Access into Offset Arithmetic

V8 assigns a hidden class (internal “shape”) to each object based on which properties were added and in what order. Objects with the same hidden class store their properties at the same memory offsets. When V8 JIT-compiles a property access like task.title, it can compile it as a direct memory offset read — one instruction — rather than a hash map lookup. This only works if the object consistently has the same hidden class at that call site (monomorphic).
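One transition-order effect is observable from plain JavaScript: property insertion order, which Object.keys reflects, is exactly what determines the hidden-class transition chain. A minimal sketch (makeA/makeB are illustrative names; true shape identity can only be confirmed with %HaveSameMap under node --allow-natives-syntax):

```javascript
// Two construction paths ending with the same properties but different
// insertion orders — V8 gives the resulting objects different hidden classes.
function makeA() {
    const o = {};
    o.title = 'a';
    o.status = 'pending';
    return o;
}
function makeB() {
    const o = {};
    o.status = 'pending';   // reversed order — different transition chain
    o.title = 'b';
    return o;
}

// Insertion order (and therefore the transition chain) differs:
console.log(Object.keys(makeA()).join(','));  // "title,status"
console.log(Object.keys(makeB()).join(','));  // "status,title"

// Under `node --allow-natives-syntax` you could confirm directly:
//   %HaveSameMap(makeA(), makeA())  → true
//   %HaveSameMap(makeA(), makeB())  → false
```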

Step 2 — Inline Caches Cache the Last Seen Shape

An inline cache (IC) is a small piece of machine code patched into each property access site by the JIT compiler. On first access, V8 records the hidden class and the offset. On subsequent accesses with the same hidden class, the cache hits and the access is a direct offset load. If the call site sees multiple shapes (polymorphic), V8 tries a few shapes; beyond ~4 shapes (megamorphic), it falls back to a general hash map lookup and stops trying to cache.
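One practical way to keep a hot access site monomorphic is to normalise heterogeneous input into a single class at the boundary, so the inner loop only ever sees one shape. A sketch under that assumption (TaskRecord and collectTitles are hypothetical names):

```javascript
// Heterogeneous input — three different shapes, which would make the
// IC at the hot access site polymorphic.
const rawItems = [
    { title: 'a', status: 'pending' },
    { title: 'b', user: '123', tags: [] },
    { title: 'c', priority: 'high' },
];

// Normalise once at the boundary: every record shares one hidden class.
class TaskRecord {
    constructor(raw) {
        this.title    = raw.title ?? '';
        this.status   = raw.status ?? 'unknown';
        this.priority = raw.priority ?? 'medium';
    }
}
const records = rawItems.map(raw => new TaskRecord(raw));

// The property access in this loop only ever sees TaskRecord — monomorphic IC.
function collectTitles(items) {
    const titles = [];
    for (const item of items) titles.push(item.title);
    return titles;
}
console.log(collectTitles(records));  // → ['a', 'b', 'c']
```

The normalisation pass pays one allocation per record up front in exchange for a stable shape in every downstream hot loop.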

Step 3 — TurboFan Compiles Hot Functions to Machine Code

Functions executed frequently (“hot”) are optimised by TurboFan, V8’s optimising JIT compiler. TurboFan produces highly optimised machine code based on type feedback — if a function has always received integers, it compiles integer-specific machine code. If a call later arrives with a float, V8 “deoptimises” back to the interpreter and collects new type feedback. Deoptimisation is expensive; avoid it by keeping argument types consistent across all calls to hot functions.
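The same boundary-normalisation idea applies to argument types: coercing once at the edge keeps a hot function's type feedback stable. A minimal sketch (scaleScore and the input data are illustrative):

```javascript
// Hot arithmetic function — TurboFan specialises on the number types it has seen.
function scaleScore(score, factor) {
    return score * factor;
}

// ❌ Mixed inputs straight from an API: strings, ints, floats — each new
// type invalidates the collected feedback.
const rawScores = ['10', 25, 3.5];

// ✅ Coerce once at the boundary so scaleScore only ever sees numbers.
const scores = rawScores.map(Number);
const scaled = scores.map(s => scaleScore(s, 1.5));
console.log(scaled);  // → [15, 37.5, 5.25]
```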

Step 4 — TypedArrays Use Unboxed Storage

Numbers in regular JavaScript arrays can end up “boxed” — values outside the small-integer (Smi) range are stored as heap-allocated numbers referenced by pointer (packed-double arrays are the exception). TypedArrays (Float64Array, Int32Array) always store values as raw unboxed memory — a Float64Array of 1000 elements is a contiguous 8000-byte block. Operations on TypedArrays are more CPU-cache-friendly (linear memory layout) and easier for the compiler to vectorise. For numeric-heavy code (statistics, signal processing, physics simulations), TypedArrays can provide 2–10x speedups.
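As a usage sketch of the numeric-heavy case the paragraph mentions, a statistics pass over a Float64Array walks one contiguous block of unboxed doubles:

```javascript
// Mean and variance over a contiguous, unboxed buffer of doubles.
// Both loops are simple linear scans — CPU-cache-friendly by construction.
function stats(values /* Float64Array */) {
    let sum = 0;
    for (let i = 0; i < values.length; i++) sum += values[i];
    const mean = sum / values.length;

    let sq = 0;
    for (let i = 0; i < values.length; i++) {
        const d = values[i] - mean;
        sq += d * d;
    }
    return { mean, variance: sq / values.length };
}

const samples = Float64Array.from([2, 4, 4, 4, 5, 5, 7, 9]);
console.log(stats(samples));  // → { mean: 5, variance: 4 }
```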

Step 5 — Object Pooling Reduces GC Pause Frequency

Creating and discarding many objects per request generates “garbage” that the GC must collect. Even Node.js’s incremental GC pauses event loop processing during major GCs. Pooling reusable objects (Buffers, context objects, temporary data structures) means fewer allocations — fewer GC cycles. The tradeoff is code complexity; pools are worth it for objects that are: frequently allocated, short-lived, and expensive to allocate (like Buffers).

Quick Reference

| Optimisation | Rule |
|---|---|
| Hidden classes | Initialise all properties in the constructor, in the same order |
| Avoid delete | Set to null instead of delete obj.prop |
| Monomorphic functions | Always call a function with objects of the same shape |
| Numeric data | Use Float64Array / Int32Array instead of Array |
| No arguments object | Use rest parameters ...args instead |
| Isolate try/catch | Put try/catch in a separate helper, not in the hot function |
| Trace deopts | node --trace-deopt app.js |
| Object pooling | Pool frequently allocated, short-lived objects |

🧠 Test Yourself

A hot route handler uses delete req.body.password after logging in. After profiling, property accesses on req.body in downstream middleware are 50x slower than expected. What is the cause and fix?