Functional programming patterns treat functions as data — functions are values that can be passed around, stored, and combined just like numbers or strings. Python is a multi-paradigm language that supports these patterns without requiring you to commit to a purely functional style. The most useful functional patterns for FastAPI development are partial application (creating specialised versions of generic functions), function composition (building pipelines), and the operator module (replacing trivial lambdas with standard function objects). These patterns reduce boilerplate, improve readability, and make code easier to test in isolation.
Partial Application with functools.partial
from functools import partial
# partial(func, *args, **kwargs) creates a new function with some arguments pre-filled
def power(base: float, exponent: float) -> float:
    return base ** exponent
# Create specialised versions by pre-filling arguments
square = partial(power, exponent=2)
cube = partial(power, exponent=3)
print(square(4)) # 16
print(cube(3)) # 27
print(square(10)) # 100
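partial also accepts positional arguments, which fill parameters from the left. A small sketch (power repeated so it runs standalone) showing how positional and keyword pre-filling differ:

```python
from functools import partial

def power(base: float, exponent: float) -> float:
    return base ** exponent

# A positional argument fills from the left: here 2 becomes `base`, not `exponent`.
two_to_the = partial(power, 2)       # two_to_the(n) == 2 ** n
square = partial(power, exponent=2)  # square(n) == n ** 2

print(two_to_the(10))  # 1024
print(square(10))      # 100
```

Prefer keyword pre-filling when the parameter name matters; positional partials silently bind to whichever parameter comes first.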
# ── Database query factory ────────────────────────────────────────────────────
def query_posts(db, *, published: bool | None = None, author_id: int | None = None,
                limit: int = 10, offset: int = 0) -> list:
    q = db.query(Post)
    if published is not None:
        q = q.filter(Post.published == published)
    if author_id is not None:
        q = q.filter(Post.author_id == author_id)
    return q.offset(offset).limit(limit).all()
# Create specialised query functions
get_published = partial(query_posts, published=True)
get_drafts = partial(query_posts, published=False)
get_user_posts = lambda db, user_id: query_posts(db, author_id=user_id)  # lambda here renames the parameter, which partial cannot do
# Use them cleanly
published_posts = get_published(db, limit=20)
draft_posts = get_drafts(db)
# ── Partial with validators ───────────────────────────────────────────────────
def validate_length(value: str, min_len: int, max_len: int, field: str) -> str:
    if len(value) < min_len:
        raise ValueError(f"{field} must be at least {min_len} chars")
    if len(value) > max_len:
        raise ValueError(f"{field} must be at most {max_len} chars")
    return value.strip()
validate_title = partial(validate_length, min_len=3, max_len=200, field="title")
validate_body = partial(validate_length, min_len=10, max_len=50000, field="body")
validate_name = partial(validate_length, min_len=2, max_len=100, field="name")
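The validator partials above lose the underlying function's name, which makes logging which validator failed awkward. A minimal sketch of restoring it (named_partial is an illustrative helper, not a stdlib function):

```python
from functools import partial, update_wrapper

def validate_length(value: str, min_len: int, max_len: int, field: str) -> str:
    if len(value) < min_len:
        raise ValueError(f"{field} must be at least {min_len} chars")
    if len(value) > max_len:
        raise ValueError(f"{field} must be at most {max_len} chars")
    return value.strip()

def named_partial(func, name: str, /, **kwargs):
    """Create a partial that carries a __name__ for error messages and logs."""
    p = partial(func, **kwargs)
    update_wrapper(p, func)  # copies __doc__, __module__, etc. from func
    p.__name__ = name        # override with the specialised name
    return p

validate_title = named_partial(validate_length, "validate_title",
                               min_len=3, max_len=200, field="title")
print(validate_title.__name__)  # validate_title
```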
functools.partial creates a new callable, not a new function — isinstance(square, partial) is True, but square is not an instance of types.FunctionType. Partial objects do not have a __name__ attribute by default, which can make error messages and logging less clear. Add one with square.__name__ = "square", or use functools.update_wrapper(square, power) if you need the full function metadata.
Use partial to configure a function for a specific context without writing a wrapper function. It is especially useful in FastAPI for configuring dependency functions with environment-specific settings: get_test_db = partial(get_db, url=TEST_DATABASE_URL). This lets you reuse the same dependency function with different configurations for testing and production without duplicating code.
Avoid partial for pre-filling self in class methods — use bound methods instead. partial(MyClass.method, instance) is valid but unconventional; the standard way is instance.method. Also note that a partial object pickles only if its underlying function does (a partial wrapping a lambda will not, which matters for multiprocessing), and tools that rely on __name__ or __doc__ see less information unless you copy it across with update_wrapper.
Function Composition
from functools import reduce
from typing import Callable, TypeVar
T = TypeVar("T")
# ── compose: apply functions right-to-left (mathematical convention) ──────────
def compose(*funcs: Callable) -> Callable:
    """compose(f, g, h)(x) = f(g(h(x)))"""
    def composed(x):
        return reduce(lambda v, f: f(v), reversed(funcs), x)
    return composed
# ── pipe: apply functions left-to-right (data pipeline convention) ─────────────
def pipe(*funcs: Callable) -> Callable:
    """pipe(f, g, h)(x) = h(g(f(x)))"""
    def piped(x):
        return reduce(lambda v, f: f(v), funcs, x)
    return piped
# ── Data transformation pipeline ─────────────────────────────────────────────
def normalise_email(email: str) -> str:
    return email.strip().lower()
def remove_subaddress(email: str) -> str:
    """alice+newsletter@gmail.com → alice@gmail.com"""
    local, _, domain = email.partition("@")
    local_clean = local.split("+")[0]
    return f"{local_clean}@{domain}"
def validate_email_format(email: str) -> str:
    if "@" not in email or "." not in email.split("@")[-1]:
        raise ValueError(f"Invalid email: {email}")
    return email
# Compose into a single transform function
process_email = compose(
validate_email_format, # runs last
remove_subaddress, # runs second
normalise_email, # runs first (closest to input)
)
# Left-to-right with pipe is more readable for pipelines
process_email_pipe = pipe(
normalise_email, # runs first
remove_subaddress, # runs second
validate_email_format, # runs last
)
print(process_email_pipe(" Alice+News@GMAIL.COM "))
# "alice@gmail.com"
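Because pipe returns an ordinary callable, a pipeline slots straight into map for batch processing. A self-contained sketch (the helpers are repeated so it runs standalone):

```python
from functools import reduce

def pipe(*funcs):
    """pipe(f, g, h)(x) = h(g(f(x)))"""
    return lambda x: reduce(lambda v, f: f(v), funcs, x)

def normalise_email(email: str) -> str:
    return email.strip().lower()

def remove_subaddress(email: str) -> str:
    local, _, domain = email.partition("@")
    return f"{local.split('+')[0]}@{domain}"

def validate_email_format(email: str) -> str:
    if "@" not in email or "." not in email.split("@")[-1]:
        raise ValueError(f"Invalid email: {email}")
    return email

process_email = pipe(normalise_email, remove_subaddress, validate_email_format)

raw = [" Alice+News@GMAIL.COM ", "bob@example.org "]
clean = list(map(process_email, raw))
print(clean)  # ['alice@gmail.com', 'bob@example.org']
```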
The operator Module
import operator
from functools import reduce
# operator module provides function versions of Python operators
# Use instead of trivial lambdas for clarity and slight performance benefit
# ── Common operator functions ─────────────────────────────────────────────────
operator.add(3, 4) # 7 same as lambda a, b: a + b
operator.mul(3, 4) # 12 same as lambda a, b: a * b
operator.lt(3, 4) # True same as lambda a, b: a < b
operator.eq("a", "a") # True same as lambda a, b: a == b
operator.not_(False) # True same as lambda x: not x
operator.getitem([1,2,3], 1) # 2 same as lambda x, i: x[i]
# ── attrgetter, itemgetter ────────────────────────────────────────────────────
from operator import attrgetter, itemgetter
# itemgetter — get a key from a dict (more efficient than lambda)
get_name = itemgetter("name")
get_views = itemgetter("views")
posts = [{"name": "B", "views": 50}, {"name": "A", "views": 100}]
sorted_by_name = sorted(posts, key=get_name) # cleaner than key=lambda p: p["name"]
sorted_by_views = sorted(posts, key=get_views, reverse=True)
# attrgetter — get an attribute from an object
get_title = attrgetter("title")
sorted_posts = sorted(db_posts, key=get_title) # cleaner than key=lambda p: p.title
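attrgetter also handles dotted paths and multiple attributes. A small sketch with illustrative dataclasses (Author and Post here are stand-ins, not the ORM models used elsewhere):

```python
from dataclasses import dataclass
from operator import attrgetter

@dataclass
class Author:
    name: str

@dataclass
class Post:
    title: str
    views: int
    author: Author

posts = [
    Post("Pipelines", 120, Author("bea")),
    Post("Partials", 300, Author("avi")),
]

# A dotted path traverses nested attributes: equivalent to lambda p: p.author.name
by_author = sorted(posts, key=attrgetter("author.name"))

# Multiple names return a tuple: equivalent to lambda p: (p.views, p.title)
summary = attrgetter("views", "title")(posts[0])
print([p.author.name for p in by_author])  # ['avi', 'bea']
print(summary)                             # (120, 'Pipelines')
```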
# ── methodcaller ──────────────────────────────────────────────────────────────
from operator import methodcaller
lower = methodcaller("lower")
strip = methodcaller("strip")
split = methodcaller("split", ",") # args passed to the method
tags = [" Python ", " FastAPI ", " PostgreSQL "]
clean_tags = list(map(methodcaller("strip"), tags))
# ["Python", "FastAPI", "PostgreSQL"]
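The extra arguments accepted by methodcaller make it handy for one-line parsing; a small sketch splitting comma-separated strings (the sample data is made up):

```python
from operator import methodcaller

split_commas = methodcaller("split", ",")  # args after the name are passed to the method

rows = ["python,fastapi", "postgres,redis"]
pairs = list(map(split_commas, rows))
print(pairs)  # [['python', 'fastapi'], ['postgres', 'redis']]
```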
# ── reduce with operator ──────────────────────────────────────────────────────
numbers = [1, 2, 3, 4, 5]
product = reduce(operator.mul, numbers) # 120 — cleaner than lambda a, b: a * b
total = reduce(operator.add, numbers) # 15
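reduce pairs with the other operator functions too. A sketch using operator.or_ with an explicit initial value, so an empty input folds safely instead of raising TypeError (the role sets are made up):

```python
import operator
from functools import reduce

role_permissions = [
    {"read"},
    {"read", "write"},
    {"admin"},
]

# Fold set-union over the list; the third argument seeds the fold,
# so reducing an empty list returns set() rather than raising.
effective = reduce(operator.or_, role_permissions, set())
print(sorted(effective))  # ['admin', 'read', 'write']
```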
Applying These Patterns in FastAPI
from functools import partial
from operator import itemgetter
# ── Dependency configuration via partial ─────────────────────────────────────
def get_paginated(
    db,
    model,
    *,
    page: int = 1,
    limit: int = 10,
    order_by: str = "created_at",
    published_only: bool = False,
) -> list:
    q = db.query(model)
    if published_only:
        q = q.filter(model.published == True)  # noqa: E712 (SQLAlchemy overloads ==)
    return q.order_by(getattr(model, order_by)).offset((page - 1) * limit).limit(limit).all()
# Create route-specific partial
get_published_posts = partial(get_paginated, model=Post, published_only=True)
@app.get("/posts")
async def list_posts(page: int = 1, limit: int = 10, db = Depends(get_db)):
    return get_published_posts(db, page=page, limit=limit)
# ── Sorting response data with operator ──────────────────────────────────────
def sort_and_paginate(items: list, sort_by: str, desc: bool, page: int, size: int):
    sort_key = itemgetter(sort_by)  # clean, no lambda needed
    sorted_items = sorted(items, key=sort_key, reverse=desc)
    start = (page - 1) * size
    return sorted_items[start:start + size]
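A quick in-memory check of sort_and_paginate (the function is repeated so the sketch runs standalone, with made-up rows):

```python
from operator import itemgetter

def sort_and_paginate(items: list, sort_by: str, desc: bool, page: int, size: int):
    sorted_items = sorted(items, key=itemgetter(sort_by), reverse=desc)
    start = (page - 1) * size
    return sorted_items[start:start + size]

rows = [{"id": i, "views": i * 10} for i in range(1, 6)]  # ids 1..5, views 10..50

page_1 = sort_and_paginate(rows, "views", desc=True, page=1, size=2)
print([r["id"] for r in page_1])  # [5, 4]
page_2 = sort_and_paginate(rows, "views", desc=True, page=2, size=2)
print([r["id"] for r in page_2])  # [3, 2]
```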
Common Mistakes
Mistake 1 — Using lambda where operator functions are cleaner
❌ Wrong — verbose lambda for a standard operation:
sorted(posts, key=lambda p: p["views"])
all_ids = [p["id"] for p in posts]
✅ Correct — operator module:
sorted(posts, key=itemgetter("views")) # ✓ cleaner
all_ids = list(map(itemgetter("id"), posts)) # ✓ functional
Mistake 2 — Composing functions in the wrong order
❌ Wrong — math convention (right-to-left) confused with pipeline (left-to-right):
process = compose(normalise, validate, strip)
# normalise runs last, strip runs first — confusing order for data pipelines
✅ Correct — use pipe() for left-to-right data pipelines:
process = pipe(strip, normalise, validate) # ✓ reads in execution order
Mistake 3 — Overusing functional patterns for simple imperative code
❌ Wrong — functional gymnastics that obscures intent:
result = list(map(compose(str.strip, str.lower), filter(partial(operator.contains, valid_set), items)))
✅ Correct — simple list comprehension is clearer:
result = [item.strip().lower() for item in items if item in valid_set] # ✓
Quick Reference
| Pattern | Code | Use When |
|---|---|---|
| Partial application | partial(func, kwarg=val) | Creating specialised versions of generic functions |
| Compose (R→L) | compose(f, g, h)(x) = f(g(h(x))) | Mathematical function composition |
| Pipe (L→R) | pipe(f, g, h)(x) = h(g(f(x))) | Data transformation pipelines |
| Item access | itemgetter("key") | Replace lambda x: x["key"] |
| Attr access | attrgetter("attr") | Replace lambda x: x.attr |
| Method call | methodcaller("lower") | Replace lambda x: x.lower() |
| Reduce | reduce(operator.mul, items) | Fold a list to a single value |