Expert Python Interview Questions and Answers

📋 Table of Contents
  1. Questions & Answers
  2. 📝 Knowledge Check

🐍 Expert Python Interview Questions

This lesson targets senior Python engineers and architects. Topics include CPython internals, the import system, coroutine internals, C extensions, Protocols (structural subtyping), design patterns, performance profiling, the Python Data Model in depth, Pydantic, and production architecture patterns. These questions separate Python developers from Python experts.

Questions & Answers

01 How does Python’s import system work internally?

Internals When you write import foo, Python follows a specific resolution process:

  1. sys.modules cache — Python first checks whether the module is already imported. If found, the cached module is returned immediately. This is why import is idempotent.
  2. Finders — Python's import machinery asks each finder in sys.meta_path whether it can locate the module. The built-in finders handle frozen modules, built-in extensions, and file-based modules.
  3. Loaders — once a finder locates the module, a loader reads and executes the module code; the resulting module object is cached in sys.modules.
  4. sys.path — the PathFinder searches each directory in sys.path for the module.
import sys

# Check what's already imported
print(list(sys.modules.keys())[:5])

# sys.path — where Python looks for modules
print(sys.path)

# Manipulate sys.path at runtime (use sparingly)
sys.path.insert(0, "/path/to/my/modules")

# Custom importer — intercept all imports
class AuditImporter:
    def find_spec(self, fullname, path, target=None):
        print(f"Importing: {fullname}")
        return None  # return None to let normal import proceed

sys.meta_path.insert(0, AuditImporter())
import json  # prints "Importing: json"

# importlib — the public import API
import importlib
mod = importlib.import_module("json")
importlib.reload(mod)  # re-execute module code (rarely needed)

# __init__.py controls package initialisation
# __all__ controls what 'from package import *' exports
# __path__ is the list of directories a package searches
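The finder/loader split is also exposed through importlib.util, which lets you load a module from an explicit file path. A minimal sketch (the module name greeting and its temp-file path are invented for illustration):

```python
import importlib.util
import pathlib
import sys
import tempfile

# Write a throwaway module to disk (stand-in for any .py file you want to load)
mod_path = pathlib.Path(tempfile.mkdtemp()) / "greeting.py"
mod_path.write_text("def hello():\n    return 'hi'\n")

# spec_from_file_location + module_from_spec: the public API behind file imports
spec = importlib.util.spec_from_file_location("greeting", mod_path)
module = importlib.util.module_from_spec(spec)
sys.modules["greeting"] = module   # register first, as the import system does
spec.loader.exec_module(module)    # execute the module body

print(module.hello())  # hi
```

Registering in sys.modules before exec_module mirrors what the import system does, so circular imports of the half-initialised module resolve correctly.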

02 What are coroutines in depth? How do they differ from generators?

Async Internals Both coroutines and generators use yield under the hood, but they serve different purposes. A generator produces values; a coroutine suspends execution to wait for something.

# Generator — produces values with yield
def gen():
    yield 1
    yield 2

# Coroutine (PEP 342) — can both receive and send values
def accumulator():
    total = 0
    while True:
        x = yield total   # suspends AND receives
        total += x

acc = accumulator()
next(acc)      # prime it — runs to first yield, returns 0
acc.send(10)   # resumes, x=10, loops, returns 10
acc.send(20)   # returns 30

# Native coroutine (async def) — Python 3.5+
# Uses a different code object (CO_COROUTINE flag)
# Cannot be used with next() — must be awaited
import asyncio

async def fetch():
    await asyncio.sleep(1)
    return "data"

# Under the hood: asyncio's event loop calls coroutine.send(None)
# When the coroutine yields (via await), control returns to the loop
# The loop schedules the coroutine to resume when its awaitable is ready

# Inspect coroutine state
import inspect
coro = fetch()
inspect.getcoroutinestate(coro)  # CORO_CREATED, CORO_RUNNING, CORO_SUSPENDED, CORO_CLOSED

# __await__ protocol — any object with __await__ can be awaited
class Awaitable:
    def __await__(self):
        yield  # suspends; return value becomes the await result
        return 42
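The send(None) mechanics described above can be watched without an event loop by driving a coroutine by hand. A sketch (suspend and work are illustrative names; types.coroutine turns a generator into an awaitable):

```python
import types

@types.coroutine
def suspend():
    # A generator-based awaitable: the bare yield is what an event loop intercepts
    value = yield "suspended"
    return value

async def work():
    got = await suspend()
    return got * 2

coro = work()
print(coro.send(None))   # 'suspended': ran to the first suspension point
try:
    coro.send(21)        # resume with a value, exactly as a loop would
except StopIteration as done:
    print(done.value)    # 42: the coroutine's return value
```

This is precisely the contract asyncio relies on: the loop calls send(), the coroutine suspends by yielding, and StopIteration.value carries the result back.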

03 What is Python’s data model? Explain how operators and built-ins work.

Data Model Python’s data model defines how objects interact with the language’s built-in features. Every operator, function, and statement ultimately calls a dunder method on an object.

# Attribute access protocol
obj.attr        # type(obj).__getattribute__(obj, "attr")
obj.attr = val  # type(obj).__setattr__(obj, "attr", val)
del obj.attr    # type(obj).__delattr__(obj, "attr")

# Numeric coercion — Python tries both sides
class Money:
    def __init__(self, amount):
        self.amount = amount

    def __add__(self, other):         # self + other
        if isinstance(other, Money):
            return Money(self.amount + other.amount)
        return NotImplemented          # signals Python to try reflected op

    def __radd__(self, other):        # other + self (reflected)
        return self.__add__(Money(other))

    def __iadd__(self, other):        # self += other (in-place)
        self.amount += other.amount if isinstance(other, Money) else other
        return self

# Context manager protocol
import time

class Timed:
    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.duration = time.perf_counter() - self.start
        # return True to suppress the exception; False/None to propagate

# Iterator protocol — makes ANY object iterable
class Range:
    def __init__(self, stop):
        self.current = 0
        self.stop = stop

    def __iter__(self): return self     # return the iterator (self or a new one)
    def __next__(self):                 # return next value or raise StopIteration
        if self.current >= self.stop:
            raise StopIteration
        val = self.current
        self.current += 1
        return val
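The NotImplemented / reflected-operator dance is easy to observe in isolation. A self-contained sketch showing why both sum() and int + Money work:

```python
class Money:
    def __init__(self, amount):
        self.amount = amount

    def __add__(self, other):
        if isinstance(other, Money):
            return Money(self.amount + other.amount)
        if isinstance(other, (int, float)):
            return Money(self.amount + other)
        return NotImplemented   # tells Python to try the other operand

    __radd__ = __add__          # reflected add: runs when int.__add__ gives up

total = sum([Money(5), Money(10)], Money(0))
print(total.amount)            # 15

mixed = 1 + Money(2)           # int.__add__ returns NotImplemented, so __radd__ runs
print(mixed.amount)            # 3
```

sum() starts from Money(0) and repeatedly calls __add__; the mixed-type case succeeds only because int.__add__ signals NotImplemented, prompting Python to try Money.__radd__.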

04 How do you write a C extension for Python?

C Extensions C extensions let you write performance-critical code in C that is callable from Python. NumPy, Pandas, and many scientific libraries are C extensions.

/* fastmath.c — simple C extension */
#define PY_SSIZE_T_CLEAN
#include <Python.h>

/* C implementation of dot product */
static PyObject* dot_product(PyObject* self, PyObject* args) {
    PyObject *list_a, *list_b;
    if (!PyArg_ParseTuple(args, "OO", &list_a, &list_b))
        return NULL;

    Py_ssize_t n = PyList_Size(list_a);
    double result = 0.0;
    for (Py_ssize_t i = 0; i < n; i++) {
        double a = PyFloat_AsDouble(PyList_GetItem(list_a, i));
        double b = PyFloat_AsDouble(PyList_GetItem(list_b, i));
        result += a * b;
    }
    return PyFloat_FromDouble(result);
}

static PyMethodDef FastMathMethods[] = {
    {"dot_product", dot_product, METH_VARARGS, "Compute dot product"},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef fastmathmodule = {
    PyModuleDef_HEAD_INIT, "fastmath", NULL, -1, FastMathMethods
};

PyMODINIT_FUNC PyInit_fastmath(void) {
    return PyModule_Create(&fastmathmodule);
}
# setup.py
from setuptools import setup, Extension
setup(ext_modules=[Extension("fastmath", sources=["fastmath.c"])])

# Build: python setup.py build_ext --inplace
# Use:   import fastmath; fastmath.dot_product([1,2,3], [4,5,6])  # 32.0

Modern alternatives to raw C extensions:

  • Cython — write Python-like code that compiles to C; gradual typing for performance gains
  • pybind11 / nanobind — expose C++ classes to Python cleanly
  • ctypes / cffi — call existing C/C++ libraries without writing extension modules
  • Numba — JIT-compile numerical Python functions to LLVM without C code
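Of these, ctypes needs no compiler at all. A minimal sketch calling sqrt from the system C math library (the library name and lookup are platform-dependent, so treat this as illustrative):

```python
import ctypes
import ctypes.util

# Locate the C math library; the resolved name/path differs per platform
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes converts arguments correctly: double sqrt(double)
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))
```

Without the argtypes/restype declarations ctypes would default to int conversions and silently return garbage, which is the most common ctypes pitfall.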

05 What is Pydantic and how does it improve data validation?

Libraries Pydantic is a data validation library using Python type annotations. It is the foundation of FastAPI and is widely used for config parsing, API request/response validation, and data pipelines.

from pydantic import BaseModel, Field, field_validator, model_validator, EmailStr
from typing import Annotated
from datetime import datetime

class Address(BaseModel):
    street: str
    city:   str
    country: str = "UK"

class User(BaseModel):
    id:       int
    name:     str = Field(min_length=2, max_length=50)
    email:    EmailStr
    age:      Annotated[int, Field(ge=0, le=150)]   # ge=greater-equal, le=less-equal
    address:  Address
    tags:     list[str] = []
    created:  datetime = Field(default_factory=datetime.utcnow)

    @field_validator("name")
    @classmethod
    def name_must_not_be_profane(cls, v: str) -> str:
        if v.lower() in BLOCKED_WORDS:  # BLOCKED_WORDS: a set defined elsewhere
            raise ValueError("Name contains disallowed word")
        return v.title()

    @model_validator(mode="after")
    def check_age_matches_date(self) -> "User":
        expected_year = datetime.utcnow().year - self.age
        if abs(self.created.year - expected_year) > 2:
            raise ValueError("Age and creation date mismatch")
        return self

# Automatic coercion — "42" -> 42, "2026-01-01" -> datetime
user = User(id="42", name="alice", email="a@example.com", age="30",
            address={"street": "123 Main St", "city": "London"})

print(user.name)      # "Alice" (title-cased by validator)
print(user.id)        # 42 (int, not "42")
print(user.model_dump())  # dict
print(user.model_dump_json())  # JSON string

# Pydantic v2 — written in Rust (via pydantic-core) — 5-50x faster than v1

06 What are Python design patterns? Show the most important ones.

Architecture

from collections import defaultdict
from typing import Protocol

# Singleton — ensure only one instance exists
class DatabasePool:
    _instance = None

    def __new__(cls, *args, **kwargs):
        if not cls._instance:
            cls._instance = super().__new__(cls)
        return cls._instance

# Observer — notify multiple subscribers of changes
class EventBus:
    def __init__(self):
        self._subscribers: dict[str, list] = defaultdict(list)

    def subscribe(self, event: str, callback):
        self._subscribers[event].append(callback)

    def publish(self, event: str, data=None):
        for cb in self._subscribers[event]:
            cb(data)

bus = EventBus()
bus.subscribe("user.created", send_welcome_email)
bus.subscribe("user.created", update_analytics)
bus.publish("user.created", user)

# Strategy — swap algorithms at runtime
class Sorter:
    def __init__(self, strategy):
        self._strategy = strategy

    def sort(self, data):
        return self._strategy(data)

sorter = Sorter(sorted)
sorter.sort([3,1,2])

# Factory — create objects without specifying exact class
def get_storage(backend: str):
    backends = {"s3": S3Storage, "local": LocalStorage, "gcs": GCSStorage}
    cls = backends.get(backend)
    if not cls: raise ValueError(f"Unknown backend: {backend}")
    return cls()

# Command — encapsulate requests as objects (undo/redo)
class Command(Protocol):
    def execute(self) -> None: ...
    def undo(self) -> None: ...
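The Command protocol becomes concrete with a tiny undo stack. A self-contained sketch (AppendCommand and the list-based "document" are invented for illustration):

```python
from typing import Protocol

class Command(Protocol):
    def execute(self) -> None: ...
    def undo(self) -> None: ...

class AppendCommand:
    """Concrete command: append a value to a list, with an inverse operation."""
    def __init__(self, target: list, value: str):
        self.target, self.value = target, value

    def execute(self) -> None:
        self.target.append(self.value)

    def undo(self) -> None:
        self.target.remove(self.value)

doc: list[str] = []
history: list[Command] = []
for cmd in (AppendCommand(doc, "a"), AppendCommand(doc, "b")):
    cmd.execute()
    history.append(cmd)

history.pop().undo()   # undo the most recent command
print(doc)             # ['a']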

07 How do you profile and optimise Python performance?

Performance

Step 1 — Measure, then optimise (never guess):

import cProfile, pstats, timeit

# timeit — measure small code snippets
timeit.timeit("sorted([3,1,2])", number=100_000)

# cProfile — function-level profiling
cProfile.run("my_function()", sort="cumulative")

# From the command line:
#   python -m cProfile -s cumtime my_script.py

# pstats — analyse cProfile output
profiler = cProfile.Profile()
profiler.enable()
run_expensive_code()
profiler.disable()
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative")
stats.print_stats(20)  # top 20 slowest functions

# line_profiler — line-by-line timing (pip install line-profiler)
# @profile decorator on the function, then: kernprof -l -v script.py

Step 2 — Common optimisations:

  • Use set for membership testing instead of list (O(1) vs O(n))
  • Use local variables inside tight loops (LOAD_FAST is faster than LOAD_GLOBAL)
  • Use str.join() instead of += for string concatenation in loops
  • Use collections.deque for queues (O(1) from both ends)
  • Use numpy for numerical operations (vectorised C code)
  • Cache expensive computations with functools.lru_cache / functools.cache
  • Use generators instead of lists for single-pass iteration
  • Avoid repeated attribute lookups in loops — cache method = obj.method
  • Use PyPy for CPU-bound code (2-10x faster than CPython for pure Python loops)
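The first bullet is easy to verify with timeit. Exact numbers vary by machine, but the ordering should not:

```python
import timeit

data = list(range(10_000))
data_set = set(data)
needle = 9_999                      # worst case for the linear list scan

t_list = timeit.timeit(lambda: needle in data, number=1_000)
t_set = timeit.timeit(lambda: needle in data_set, number=1_000)
print(f"list: {t_list:.4f}s  set: {t_set:.4f}s  speedup: {t_list / t_set:.0f}x")
```

The list scan is O(n) per lookup while the set hash probe is O(1), so the gap widens linearly with the container size.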

08 What is Python’s __init_subclass__ and how is it used?

Advanced OOP __init_subclass__ is called on a class whenever it is subclassed. It is a class method hook that runs at class-definition time — a lighter alternative to metaclasses for customising subclass creation.

import json

class Plugin:
    """Auto-registry pattern — subclasses register themselves automatically."""
    _registry: dict[str, type] = {}

    def __init_subclass__(cls, plugin_name: str | None = None, **kwargs):
        super().__init_subclass__(**kwargs)
        name = plugin_name or cls.__name__.lower()
        Plugin._registry[name] = cls
        print(f"Registered plugin: {name}")

    @classmethod
    def get(cls, name: str) -> type:
        if name not in cls._registry:
            raise ValueError(f"Unknown plugin: {name!r}")
        return cls._registry[name]

class JSONPlugin(Plugin, plugin_name="json"):
    def process(self, data): return json.dumps(data)

class CSVPlugin(Plugin, plugin_name="csv"):
    def process(self, data): return ",".join(str(v) for v in data)

# JSONPlugin and CSVPlugin are registered automatically at class definition time
plugin_class = Plugin.get("json")
plugin = plugin_class()

# __init_subclass__ vs metaclass:
# Use __init_subclass__ when you only need to react to subclass creation
# Use metaclass when you need to modify the class itself (change __dict__, attributes)
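Beyond registries, __init_subclass__ can enforce an interface at class-definition time rather than at first use. A sketch with a hypothetical Serializer base:

```python
import json

class Serializer:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Fail fast: refuse to define subclasses missing the required method
        if not callable(getattr(cls, "dumps", None)):
            raise TypeError(f"{cls.__name__} must define a dumps() method")

class JSONSerializer(Serializer):
    def dumps(self, data) -> str:
        return json.dumps(data)

try:
    class Broken(Serializer):   # no dumps(): rejected at definition time
        pass
except TypeError as exc:
    print(exc)
```

Moving the check to class-definition time means a broken subclass fails at import, not deep inside a request handler.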

09 What are structural pattern matching (match/case) statements?

Python 3.10+ Structural pattern matching (PEP 634, Python 3.10+) is a powerful control flow feature that matches values against patterns — far more expressive than if/elif chains.

command = {"action": "create", "type": "user", "data": {"name": "Alice"}}

match command:
    case {"action": "create", "type": "user", "data": {"name": str(name)}}:
        print(f"Creating user: {name}")

    case {"action": "delete", "id": int(uid)}:
        print(f"Deleting user ID: {uid}")

    case {"action": action} if action not in ("create", "delete"):
        raise ValueError(f"Unknown action: {action}")

    case _:    # wildcard — matches anything
        print("Unrecognised command")

# Match on types and structures
def process(point):
    match point:
        case (0, 0):
            return "origin"
        case (x, 0):
            return f"on x-axis at {x}"
        case (0, y):
            return f"on y-axis at {y}"
        case (x, y):
            return f"at ({x}, {y})"

# Match on class instances (uses __match_args__)
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def classify(p):
    match p:
        case Point(x=0, y=0):    return "origin"
        case Point(x=x, y=0):    return f"on x-axis at {x}"
        case Point(x=0, y=y):    return f"on y-axis at {y}"
        case Point(x=x, y=y) if x == y:  return f"on diagonal at {x}"
        case Point():            return "general point"

10 What is Python’s Protocol and structural subtyping?

Type System Protocol (PEP 544, Python 3.8+) enables structural subtyping — a class satisfies a Protocol if it has the required methods/attributes, regardless of inheritance. This is static duck typing.

from typing import Protocol, runtime_checkable

@runtime_checkable  # enables isinstance() checks
class Drawable(Protocol):
    def draw(self) -> None: ...
    def resize(self, factor: float) -> None: ...

class Circle:
    def __init__(self, radius: float = 1.0): self.radius = radius
    def draw(self) -> None: print("Drawing circle")
    def resize(self, factor: float) -> None: self.radius *= factor

class Square:
    def __init__(self, side: float = 1.0): self.side = side
    def draw(self) -> None: print("Drawing square")
    def resize(self, factor: float) -> None: self.side *= factor

# Circle and Square don't inherit from Drawable — but they satisfy it
def render_all(shapes: list[Drawable]) -> None:
    for shape in shapes:
        shape.draw()

shapes = [Circle(), Square()]  # no ABC, no inheritance — just duck typing
render_all(shapes)             # ✅ type-checker is satisfied

isinstance(Circle(), Drawable)  # True (with @runtime_checkable)

# Protocol with class variables
class Configurable(Protocol):
    config_key: str  # class variable required
    def validate(self) -> bool: ...

# Generic Protocol
from typing import Generic, TypeVar
T_co = TypeVar("T_co", covariant=True)

class SupportsRead(Protocol[T_co]):
    def read(self) -> T_co: ...
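One caveat worth knowing: @runtime_checkable isinstance() checks only verify that the methods exist, never their signatures or return types. A sketch (Closable and FakeFile are invented names):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closable(Protocol):
    def close(self) -> None: ...

class FakeFile:
    def close(self, force: bool = False) -> str:   # entirely different signature
        return "closed"

# Passes: the runtime check only looks for a 'close' attribute
print(isinstance(FakeFile(), Closable))  # True
```

Full signature checking is the type checker's job; the runtime check is a presence test only.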

11 What is Python’s TypeVar, Generic, and covariance/contravariance?

Type System

from typing import TypeVar, Generic, TypeVarTuple, ParamSpec, Callable
import functools

T = TypeVar("T")                  # basic TypeVar — any type
T_num = TypeVar("T_num", int, float)  # constrained — only int or float
T_co = TypeVar("T_co", covariant=True)   # covariant — safe for read-only
T_contra = TypeVar("T_contra", contravariant=True)  # for write-only

# Generic class
class Stack(Generic[T]):
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

stack: Stack[int] = Stack()
stack.push(42)        # ✅
stack.push("hello")   # type error — mypy catches this

# Covariance — if Dog <: Animal, then Box[Dog] <: Box[Animal]
# Safe for producers (read-only containers): ImmutableList[Dog] can be used as ImmutableList[Animal]
class ImmutableList(Generic[T_co]):
    def get(self, index: int) -> T_co: ...

# Contravariance — if Dog <: Animal, then Sink[Animal] <: Sink[Dog]
# Safe for consumers (write-only): a function accepting Animal can accept Dog
class Sink(Generic[T_contra]):
    def consume(self, item: T_contra) -> None: ...

# ParamSpec — capture function parameter types (Python 3.10+)
P = ParamSpec("P")
def logged(func: Callable[P, T]) -> Callable[P, T]:
    @functools.wraps(func)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

12 What is CPython bytecode? How do you inspect it?

Internals CPython compiles Python source to bytecode (stored in .pyc files in __pycache__). The bytecode is executed by the CPython virtual machine — a simple stack-based interpreter.

import dis

def factorial(n):
    if n <= 1: return 1
    return n * factorial(n - 1)

dis.dis(factorial)
# Output (Python 3.12, abridged):
#  RESUME           0
#  LOAD_FAST        0 (n)
#  LOAD_CONST       1 (1)
#  COMPARE_OP       1 (<=)
#  POP_JUMP_IF_FALSE
#  LOAD_CONST       1 (1)
#  RETURN_VALUE
#  LOAD_FAST        0 (n)
#  LOAD_GLOBAL      1 (factorial)
#  LOAD_FAST        0 (n)
#  LOAD_CONST       1 (1)
#  BINARY_OP        23 (-)
#  CALL             1
#  BINARY_OP        5  (*)
#  RETURN_VALUE

# Inspect the code object
code = factorial.__code__
print(code.co_varnames)   # ('n',) — local variable names
print(code.co_consts)     # (None, 1) — constants
print(code.co_argcount)   # 1 — number of arguments
print(code.co_filename)   # source file

# Compile source to code object
code = compile("x = 1 + 2", "<string>", "exec")
exec(code)

# .pyc files — bytecode cache
# Location: __pycache__/module.cpython-312.pyc
# Format: magic number + flags + source metadata (timestamp and size, or hash) + marshalled code object
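dis also has a programmatic API, get_instructions(), which is what tooling typically uses. Opcode names vary slightly across CPython versions, so treat the exact list as version-specific:

```python
import dis

def double_plus_one(x):
    return x * 2 + 1

# Iterate over Instruction namedtuples instead of printing a listing
ops = [ins.opname for ins in dis.get_instructions(double_plus_one)]
print(ops)
```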

13 What are Python’s new features in 3.10, 3.11, 3.12, and 3.13?

Modern Python

  • Python 3.10 — structural pattern matching (match/case), union types with X | Y syntax, better error messages with exact error locations, itertools.pairwise()
  • Python 3.11 — major performance release (~25% faster); fine-grained error locations in tracebacks (^ points to the exact expression); tomllib for TOML parsing; TaskGroup for structured asyncio concurrency; ExceptionGroup + except*
  • Python 3.12 — type parameter syntax (type Alias = ..., class Foo[T]:, def func[T]()); @override decorator; **kwargs type hints with TypedDict; f-string improvements (multi-line, nested quotes, any expression); itertools.batched()
  • Python 3.13 — experimental free-threaded CPython (no GIL, --disable-gil build); experimental JIT compiler; improved REPL with multi-line editing; improved error messages
# Python 3.12 โ€” new type syntax
type Vector = list[float]          # type alias
type Point[T] = tuple[T, T]        # generic type alias

class Stack[T]:                    # generic class, no TypeVar needed
    def push(self, item: T) -> None: ...

def first[T](lst: list[T]) -> T:  # generic function
    return lst[0]

# Python 3.11 — TaskGroup (structured concurrency)
async with asyncio.TaskGroup() as tg:
    t1 = tg.create_task(fetch_users())
    t2 = tg.create_task(fetch_orders())
# All tasks complete (or fail) before exiting the block
# If any task fails, all others are cancelled

# Python 3.11 — ExceptionGroup
try:
    raise ExceptionGroup("multiple errors", [ValueError("v"), TypeError("t")])
except* ValueError as eg:   # except* — catch specific types from the group
    print("ValueError(s):", eg.exceptions)

14 What is the difference between Pydantic, attrs, and dataclasses?

Libraries

  • dataclasses (stdlib) — generates __init__, __repr__, __eq__. No validation, no coercion. Lightweight. Best for pure data containers.
  • attrs (third-party) — more powerful than dataclasses: validators, converters (coercion), __slots__ support, on-setattr hooks, good performance. Pre-dates dataclasses.
  • Pydantic (third-party) — full data validation and parsing. Coerces input types, raises detailed validation errors, serialises to/from JSON, and integrates deeply with FastAPI. Written in Rust (v2) for speed.
# dataclass — no validation
from dataclasses import dataclass

@dataclass
class Config:
    port: int = 8080
    host: str = "localhost"

Config(port="8080")  # port stays the str "8080" — no coercion, no error!

# attrs — validation + conversion
import attrs
@attrs.define
class Config:
    port: int = attrs.field(converter=int, validator=attrs.validators.gt(0))
    host: str = "localhost"

Config(port="8080")  # port=8080 (converted to int)
Config(port=-1)      # raises ValueError

# Pydantic — validation + coercion + serialisation
from pydantic import BaseModel, Field
class Config(BaseModel):
    port: int = Field(gt=0, default=8080)
    host: str = "localhost"

c = Config(port="8080")  # port=8080 (coerced)
c.model_dump()           # {"port": 8080, "host": "localhost"}
c.model_dump_json()      # '{"port":8080,"host":"localhost"}'
Config.model_validate({"port": -1})  # raises ValidationError with details
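If you must stay in the stdlib, __post_init__ lets a dataclass validate by hand, which also makes the gap Pydantic fills obvious. A sketch (StrictConfig is an invented name):

```python
from dataclasses import dataclass

@dataclass
class StrictConfig:
    port: int = 8080
    host: str = "localhost"

    def __post_init__(self):
        # dataclasses perform no validation, so do it by hand
        if not isinstance(self.port, int) or self.port <= 0:
            raise ValueError(f"port must be a positive int, got {self.port!r}")

try:
    StrictConfig(port="8080")   # rejected; a plain dataclass would accept it silently
except ValueError as exc:
    print(exc)
```

Note this rejects rather than coerces; coercion ("8080" becoming 8080) is exactly what attrs converters and Pydantic add on top.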

15 What is Python’s ContextVar and its use in async applications?

Async contextvars.ContextVar (Python 3.7+) provides context-local storage for async code — like thread-local storage but for coroutines and tasks. Each asyncio.Task gets its own context copy.

from contextvars import ContextVar, copy_context
import asyncio
import uuid

# Create a context variable
request_id: ContextVar[str] = ContextVar("request_id", default="none")
current_user: ContextVar[dict] = ContextVar("current_user")

# Middleware sets the context variable per request
async def request_middleware(request, call_next):
    token = request_id.set(str(uuid.uuid4()))  # set for this request's context
    try:
        return await call_next(request)
    finally:
        request_id.reset(token)  # restore previous value

# Any coroutine in the call chain can access it — no parameter passing needed
async def save_to_db(data):
    rid = request_id.get()  # e.g. "abc-123" — unique per request, not per coroutine
    logger.info(f"[{rid}] Saving: {data}")
    await db.insert(data)

# asyncio tasks inherit context from their creation point
async def parent_task():
    request_id.set("req-001")
    task = asyncio.create_task(child_task())  # child inherits "req-001"
    await task

async def child_task():
    print(request_id.get())  # "req-001" — inherited from parent

# copy_context — run code in an explicit context copy
ctx = copy_context()
ctx.run(some_function)  # some_function sees the current context
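A runnable sketch of the per-task isolation described above (handler is an invented stand-in for a request handler):

```python
import asyncio
from contextvars import ContextVar

request_id: ContextVar[str] = ContextVar("request_id", default="none")

async def handler(rid: str) -> str:
    request_id.set(rid)      # set in this task's own context copy
    await asyncio.sleep(0)   # yield to the loop; the other task runs in between
    return request_id.get()  # still this task's value

async def main() -> list[str]:
    return await asyncio.gather(handler("req-1"), handler("req-2"))

print(asyncio.run(main()))  # ['req-1', 'req-2']
```

Even though both handlers interleave on one event loop, each task sees only its own value, which is exactly what thread-locals fail to provide under asyncio.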

16 How do you use Python’s multiprocessing.Pool efficiently?

Concurrency

from multiprocessing import Pool, cpu_count
import os

def process_chunk(chunk):
    """Pure function โ€” must be picklable (no lambdas, closures, or bound methods at module level)"""
    return [x ** 2 for x in chunk]

def main():
    data = list(range(1_000_000))
    n_workers = cpu_count()   # use all CPU cores
    chunk_size = len(data) // n_workers

    # Chunk the data
    chunks = [data[i:i+chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool(processes=n_workers) as pool:
        # map — blocks until all results are ready (ordered)
        results = pool.map(process_chunk, chunks)

        # imap — lazy iterator (good for large datasets)
        for result in pool.imap(process_chunk, chunks, chunksize=100):
            store(result)

        # starmap — for functions with multiple arguments
        pool.starmap(process_pair, [(1,2), (3,4), (5,6)])

        # apply_async — submit without blocking; collect results via .get() later
        async_results = [pool.apply_async(process_chunk, (c,)) for c in chunks]
        results = [r.get(timeout=30) for r in async_results]

if __name__ == "__main__":
    main()  # IMPORTANT: guard required on Windows (spawn start method)

Pitfalls:

  • Worker functions must be defined at module level (not inside functions) to be picklable
  • Lambdas and closures cannot be pickled — use functools.partial instead
  • Process startup overhead is significant — the pool reuses workers; create the pool once
  • For shared state, use multiprocessing.Manager or shared_memory, not regular Python objects
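The functools.partial workaround from the second pitfall looks like this (scale and double are illustrative names; the pickle round-trip stands in for what multiprocessing does when shipping work to a child process):

```python
import pickle
from functools import partial

def scale(factor, x):
    return factor * x

double = partial(scale, 2)   # picklable, unlike lambda x: 2 * x

# Round-trip through pickle, as multiprocessing does when sending work to a worker
restored = pickle.loads(pickle.dumps(double))
print(restored(21))  # 42
```

partial pickles because it references a module-level function by name plus its bound arguments; a lambda has no importable name, so pickling it fails.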

17 What is Python’s ast module and how do you use it?

Metaprogramming The ast module lets you parse Python source code into an Abstract Syntax Tree, inspect it, transform it, and compile it back to executable code. Used by linters (flake8, pylint), formatters (black), type checkers (mypy), and transpilers.

import ast

source = """
def add(a: int, b: int) -> int:
    return a + b

result = add(2, 3)
"""

# Parse to AST
tree = ast.parse(source)

# Pretty-print the AST
print(ast.dump(tree, indent=2))

# Walk the AST — find all function definitions
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        args = [arg.arg for arg in node.args.args]
        print(f"Function: {node.name}({', '.join(args)})")

# Custom AST transformer — instrument all function calls
class CallCounter(ast.NodeTransformer):
    def visit_Call(self, node):
        self.generic_visit(node)  # visit children first
        # Wrap original call: __counter__.record(original_call)
        return ast.Call(
            func=ast.Attribute(value=ast.Name(id="__counter__", ctx=ast.Load()),
                               attr="record", ctx=ast.Load()),
            args=[node], keywords=[]
        )

transformed = CallCounter().visit(tree)
ast.fix_missing_locations(transformed)
code = compile(transformed, "<string>", "exec")
exec(code, {"__counter__": counter})  # counter: any object with a .record() method

# NodeVisitor — collect info without modifying
class FunctionLister(ast.NodeVisitor):
    def visit_FunctionDef(self, node):
        print(node.name, node.lineno)
        self.generic_visit(node)

FunctionLister().visit(tree)
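A complete end-to-end transform, small enough to run: a toy NodeTransformer that doubles every numeric literal, then recompiles and executes the result:

```python
import ast

class DoubleNumbers(ast.NodeTransformer):
    """Toy transform: double every numeric literal in the tree."""
    def visit_Constant(self, node):
        if isinstance(node.value, (int, float)) and not isinstance(node.value, bool):
            # Replace the literal, copying source positions from the old node
            return ast.copy_location(ast.Constant(node.value * 2), node)
        return node

source_tree = ast.parse("result = 1 + 2")
source_tree = DoubleNumbers().visit(source_tree)
ast.fix_missing_locations(source_tree)

namespace = {}
exec(compile(source_tree, "<ast>", "exec"), namespace)
print(namespace["result"])  # 6, because 1 and 2 became 2 and 4
```

The copy_location/fix_missing_locations pair matters: compile() rejects nodes without line numbers, which is the most common stumbling block in AST rewriting.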

18 What is Python’s __new__ and how does it differ from __init__?

Internals

  • __new__ — creates and returns a new instance. Called before __init__. Takes cls (the class) as its first argument. Controls object creation.
  • __init__ — initialises an already-created instance. Called after __new__. Takes self. Controls object initialisation.
# obj = MyClass()  internally:
# 1. instance = MyClass.__new__(MyClass)    — allocates the object
# 2. MyClass.__init__(instance)             — initialises it
# 3. return instance

# Singleton via __new__
class Singleton:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

# Immutable value object via __new__ (like int, str, tuple)
class PositiveInt(int):
    def __new__(cls, value):
        if value <= 0:
            raise ValueError(f"Must be positive, got {value}")
        return super().__new__(cls, value)  # int.__new__ sets the value

n = PositiveInt(5)     # 5
PositiveInt(-1)        # ValueError

# __new__ returns a DIFFERENT type — __init__ is NOT called
class StrangeFactory:
    def __new__(cls, value):
        if value > 0:
            return str(value)   # returns a str, not a StrangeFactory!
        return super().__new__(cls)  # only this case calls __init__

    def __init__(self, value):
        self.value = value

StrangeFactory(5)   # "5" — a str (no __init__ called!)
StrangeFactory(-1)  # StrangeFactory instance (__init__ IS called)

19 What is the Python packaging ecosystem? pyproject.toml, hatch, uv, ruff.

Tooling The Python packaging ecosystem has modernised significantly since 2022. The old setup.py / setup.cfg + pip + virtualenv toolchain is being replaced by faster, more ergonomic alternatives.

# pyproject.toml — the modern standard (PEP 517/518/621)
[build-system]
requires      = ["hatchling"]
build-backend = "hatchling.build"

[project]
name            = "my-package"
version         = "1.0.0"
requires-python = ">=3.11"
dependencies    = ["fastapi>=0.110", "pydantic>=2.0", "sqlalchemy>=2.0"]

[project.optional-dependencies]
dev  = ["pytest", "pytest-cov", "ruff", "mypy"]
docs = ["mkdocs", "mkdocs-material"]

[tool.ruff]                          # fast Python linter (replaces flake8 + isort + more)
line-length = 100
target-version = "py311"
select = ["E", "F", "I", "UP", "B"] # error, pyflakes, isort, pyupgrade, bugbear

[tool.mypy]
strict = true
python_version = "3.11"

[tool.pytest.ini_options]
testpaths = ["tests"]
addopts   = "--cov=src --cov-report=term-missing"
# uv — extremely fast package manager written in Rust (replaces pip + venv)
uv init my-project       # create project
uv add fastapi pydantic  # install + update pyproject.toml
uv sync                  # install all deps from lockfile
uv run pytest            # run in the project's environment

# ruff — fast all-in-one linter + formatter (replaces flake8, black, isort)
ruff check .             # lint
ruff format .            # format
ruff check --fix .       # auto-fix linting issues

20 How would you architect a large Python codebase for maintainability?

Architecture

Technology decisions for a large FastAPI/Django project:

  • Framework: FastAPI (async, high performance, auto-docs) or Django (batteries included, admin, ORM)
  • Validation: Pydantic v2 for all data contracts
  • Database: SQLAlchemy 2.0 (ORM + Core) or Django ORM with Alembic/Django migrations
  • Testing: pytest with fixtures, pytest-asyncio, factory_boy for test data, pytest-cov
  • Code quality: ruff (lint + format), mypy (type checking), pre-commit hooks
my_service/
  src/
    config/          # Settings (pydantic-settings BaseSettings), environment
    domain/          # Pure Python business logic — no frameworks, no I/O
      user/
        models.py    # Pydantic models / dataclasses
        service.py   # Business logic (pure functions)
        repository.py # Abstract interface (Protocol)
    infrastructure/  # DB, cache, external APIs — implements domain protocols
      database/
        user_repo.py # Concrete SQLAlchemy implementation
      cache/
        redis.py
    api/             # HTTP layer — thin, delegates to domain
      v1/
        users/
          router.py  # FastAPI routes
          schemas.py # Request/Response Pydantic models
  tests/
    unit/            # Test domain logic in isolation (no DB, no HTTP)
    integration/     # Test with real DB (testcontainers)
    e2e/             # Test full HTTP flow

Key principles: Domain logic has no framework imports. API layer has no business logic. Dependency inversion via Protocol interfaces. Type hints everywhere. 100% test coverage for domain layer.

21 What is Python’s free-threaded mode (Python 3.13+)?

Python 3.13+ Python 3.13 introduced experimental free-threaded CPython (PEP 703) — a build of CPython that removes the GIL, enabling truly parallel execution of multiple threads on multiple CPU cores.

# Install the free-threaded build (separate from the standard CPython build)
# Download a "free-threaded" installer from python.org,
# or build CPython from source with: ./configure --disable-gil

# Check if running in free-threaded mode
import sys
print(sys._is_gil_enabled())  # False in free-threaded build

# With free-threaded Python — threads CAN run in true parallel
import threading
import time

def cpu_task(n):
    total = 0
    for i in range(n): total += i ** 2
    return total

# On standard CPython: both threads interleave, total time ≈ 2x single-thread
# On free-threaded CPython: both threads run in parallel, total time ≈ 1x single-thread!
t1 = threading.Thread(target=cpu_task, args=(10_000_000,))
t2 = threading.Thread(target=cpu_task, args=(10_000_000,))
start = time.perf_counter()
t1.start(); t2.start()
t1.join(); t2.join()
print(f"Time: {time.perf_counter() - start:.2f}s")

# Caveats (2025 state):
# - Many C extensions are not yet thread-safe for free-threaded mode
# - Slightly slower for single-threaded code (per-object locking overhead)
# - Experimental โ€” not recommended for production yet
# - NumPy 2.1+ and Pandas 2.2+ have preliminary free-threaded support

Free-threaded Python is the most significant change to the Python runtime in decades. When stable, it will make Python competitive with Go and Java for CPU-bound parallel workloads without needing multiprocessing.

📝 Knowledge Check

These questions mirror real senior-level Python architecture and internals interview scenarios.

🧠 Quiz Question 1 of 5

What does sys.modules provide and why does Python check it first during import?





🧠 Quiz Question 2 of 5

How does structural pattern matching (match/case) differ from a chain of if/elif statements?





🧠 Quiz Question 3 of 5

What is the key difference between typing.Protocol and ABC (Abstract Base Class) inheritance?





🧠 Quiz Question 4 of 5

What does __new__ do that __init__ cannot, and when would you use it?





🧠 Quiz Question 5 of 5

What is the primary significance of Python 3.13’s free-threaded CPython mode?





Tip: Expert Python interviews focus on depth and reasoning. For the GIL, explain reference counting first (why the GIL exists) before explaining its concurrency implications. For descriptors, explain how @property works before writing a custom one. For metaclasses, explain why __init_subclass__ is often a better choice. Framing your answer as: why the feature exists → how it works → when to use it → what the tradeoffs are demonstrates true mastery.