Docker Volumes and Networking — Data Persistence and Service Isolation

Docker volumes and networking are the mechanisms that make containerised applications both persistent and interconnected. Volumes ensure that MongoDB data survives container restarts, that uploaded files are not lost when a container is replaced, and that logs are accessible from outside the container. Networking controls which containers can communicate with each other — isolating the database from the public internet while allowing the API to reach it. Understanding these two systems is essential for building MEAN Stack deployments that are both secure and operationally reliable.

Volume Types

| Type | Syntax | Use for | Managed by |
| --- | --- | --- | --- |
| Named volume | mongodb_data:/data/db | Database files, Redis data — persisted by Docker | Docker (stored in /var/lib/docker) |
| Bind mount | ./api:/app | Source code for hot reload in development | Host filesystem |
| Anonymous volume | /app/node_modules | Container-generated files (e.g. node_modules) that a bind mount should not overwrite | Docker (no name, hard to manage) |
| tmpfs mount | type: tmpfs, target: /tmp | Temporary data that must not persist (test data, caches) | Memory — not disk |
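Each table row corresponds to one line in a Compose file. The following sketch shows all four mount types together (the service and volume names are illustrative, not taken from a real project):

```yaml
services:
  api:
    build: ./api
    volumes:
      - ./api:/app              # bind mount: host source code, dev hot reload
      - /app/node_modules       # anonymous volume: keeps the container's node_modules
      - type: tmpfs             # tmpfs mount: in-memory only, never written to disk
        target: /tmp
  mongodb:
    image: mongo:7
    volumes:
      - mongodb_data:/data/db   # named volume: survives container recreation

volumes:
  mongodb_data:                 # declared here so Docker creates and manages it
```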

Docker Network Driver Types

| Driver | Use case | Default |
| --- | --- | --- |
| bridge | Isolated virtual network — most common for Docker Compose | Yes — default for Compose |
| host | Container shares the host network — no isolation, no port mapping needed | No |
| none | No network — completely isolated container | No |
| overlay | Multi-host networking for Docker Swarm | No — Swarm only |

Note: Named volumes persist data across container recreations. When you run docker compose down and docker compose up again, named volumes survive — your MongoDB data is intact. When you run docker compose down -v, named volumes are deleted — use this only to start fresh (e.g. to re-seed the database). In production, never run down -v — it destroys your database. Always back up MongoDB data before any volume operations.
Tip: Use multiple networks to segment your services by access level. Put MongoDB and Redis on a private backend network that only the API can access. Put the API and nginx on a frontend network. The public internet reaches nginx; nginx reaches the API; only the API reaches the databases. MongoDB and Redis are never directly reachable from the public internet — even if someone gains access to the nginx or Angular containers, they cannot reach the database.
Warning: Bind mounts in production create a coupling between the container and the specific host filesystem path. If the container is moved to a different host or the path changes, the container breaks. Use named volumes for production data — they are portable and managed by Docker. Bind mounts are a development-only pattern for live reloading. The only exception is configuration files that need to be updated without rebuilding the image.

Complete Volume and Networking Configuration

# Network segmentation — security through isolation
services:

  nginx:
    image: nginx:alpine
    networks:
      - frontend    # public-facing — accessible from outside
    ports:
      - "80:80"
      - "443:443"

  api:
    build: ./api
    networks:
      - frontend    # nginx can reach api
      - backend     # api can reach mongodb and redis

  mongodb:
    image: mongo:7
    networks:
      - backend     # ONLY accessible from backend network
    # No ports — not reachable from outside Docker network

  redis:
    image: redis:7-alpine
    networks:
      - backend

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true  # backend network has no external internet access at all
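The listing above covers the networking half; a matching volumes declaration for the same file might look like this sketch (the names mongodb_data and redis_data are assumptions, not part of the original configuration):

```yaml
services:
  mongodb:
    volumes:
      - mongodb_data:/data/db   # named volume: MongoDB data survives recreation
  redis:
    volumes:
      - redis_data:/data        # named volume: Redis RDB/AOF files

volumes:
  mongodb_data:
  redis_data:
```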

# Volume management commands
docker volume ls                              # list all named volumes
docker volume inspect taskmanager_mongodb_data # inspect volume metadata (location)
docker volume prune                           # remove all unused volumes — CAREFUL!
docker volume rm volume_name                  # remove a specific volume

# MongoDB backup — copy data out of running container
docker exec taskmanager_mongo \
    mongodump --out /tmp/backup --uri "mongodb://admin:secret@localhost:27017/taskmanager"
docker cp taskmanager_mongo:/tmp/backup ./backup-$(date +%Y%m%d)

# MongoDB restore
docker cp ./backup taskmanager_mongo:/tmp/restore
docker exec taskmanager_mongo \
    mongorestore --uri "mongodb://admin:secret@localhost:27017" /tmp/restore

# Inspect what volumes a container has mounted
docker inspect taskmanager_api --format '{{json .Mounts}}'

# Check container network connectivity
docker exec taskmanager_api ping mongodb       # resolves the service name via Docker's embedded DNS (requires ping in the image)
docker exec taskmanager_api wget -qO- http://mongodb:27017  # HTTP probe — confirms TCP connectivity to the MongoDB port

// health-check route — verifies DB and Redis connectivity
// src/routes/health.routes.js
const router = require('express').Router();
const mongoose = require('mongoose');

router.get('/health', async (req, res) => {
    const health = {
        status:    'ok',
        timestamp: new Date().toISOString(),
        uptime:    process.uptime(),
        services:  {},
    };

    // Check MongoDB
    const mongoState = mongoose.connection.readyState;
    health.services.mongodb = {
        status: mongoState === 1 ? 'connected' : 'disconnected',
        state:  ['disconnected','connected','connecting','disconnecting'][mongoState],
    };

    // Check Redis (if available)
    try {
        const redis = require('../config/redis');
        await redis.ping();
        health.services.redis = { status: 'connected' };
    } catch {
        health.services.redis = { status: 'disconnected' };
    }

    const allHealthy = Object.values(health.services).every(s => s.status === 'connected');
    health.status = allHealthy ? 'ok' : 'degraded';

    res.status(allHealthy ? 200 : 503).json(health);
});

module.exports = router;

How It Works

Step 1 — Named Volumes Are Managed by the Docker Daemon

Named volumes like mongodb_data are stored by Docker in /var/lib/docker/volumes/ on the host. Docker manages their lifecycle independently of containers — creating, mounting, and unmounting them as containers start and stop. Unlike bind mounts, named volumes work on any OS (Linux, macOS, Windows) without path translation issues. When a MongoDB container is replaced (during an update), the new container mounts the same volume and accesses the same data.

Step 2 — Bridge Networks Create Isolated DNS Domains

Each bridge network has its own DNS namespace. Containers on the same network resolve each other by service name. Containers on different networks cannot communicate unless a container is connected to both networks (like the api service, which is on both frontend and backend). The internal: true flag on the backend network prevents containers in that network from accessing external internet — useful for databases that should never initiate outbound connections.
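Because each bridge network runs Docker's embedded DNS, the API connects to MongoDB by service name rather than by IP address or localhost. A minimal sketch of building the connection string (the MONGO_HOST and MONGO_DB environment variable names are illustrative assumptions):

```javascript
// "mongodb" is the Compose service name, resolved by Docker's embedded DNS.
// Never use localhost here: inside a container it refers to the container itself.
const host = process.env.MONGO_HOST || 'mongodb';     // hypothetical env var
const db   = process.env.MONGO_DB   || 'taskmanager'; // hypothetical env var
const uri  = `mongodb://${host}:27017/${db}`;
console.log(uri);                                     // mongodb://mongodb:27017/taskmanager
```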

Step 3 — Health Route Exposes Service Dependency Status

The /health endpoint checks not just that the Express server is running, but that its dependencies are reachable. A 200 response means everything is healthy; a 503 response means the API is running but a dependency is unavailable. Docker’s HEALTHCHECK calls this endpoint. Container orchestrators use the health status for routing decisions — an unhealthy container does not receive traffic and is eventually replaced.
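The Docker HEALTHCHECK mentioned above can be declared in Compose. A sketch, assuming the API listens on port 3000 inside the container, mounts the router under /api/v1, and uses a base image that ships wget (all three are assumptions):

```yaml
services:
  api:
    healthcheck:
      # wget exits non-zero on a 503 response, so a degraded API is marked unhealthy
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/api/v1/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s   # grace period while Mongo/Redis connections come up
```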

Step 4 — Volume Backups Require exec + cp

Named volumes are not directly accessible on the host filesystem (on macOS and Windows, Docker volumes exist inside the Docker VM). To back up MongoDB data, use docker exec to run mongodump inside the container and write the output to a temporary path, then use docker cp to copy the backup from the container to the host. Automate this with a cron job or as part of your deployment pipeline.
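The backup steps above can be folded into a single date-stamped script suitable for cron. This is a sketch using the container name and credentials from the earlier commands; the RUN guard is an illustrative addition so the script prints its commands by default instead of requiring a live Docker daemon:

```shell
#!/bin/sh
# MongoDB backup sketch. By default it only prints the commands (RUN defaults
# to echo); invoke with RUN= (empty) to execute them for real.
CONTAINER="taskmanager_mongo"            # container name from the examples above
DEST="./backup-$(date +%Y%m%d)"          # date-stamped target, e.g. ./backup-20250101
RUN="${RUN-echo}"

$RUN docker exec "$CONTAINER" mongodump --out /tmp/backup \
    --uri "mongodb://admin:secret@localhost:27017/taskmanager"
$RUN docker cp "$CONTAINER:/tmp/backup" "$DEST"
$RUN docker exec "$CONTAINER" rm -rf /tmp/backup   # clean up inside the container
```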

Step 5 — Network Isolation Limits Breach Impact

If the nginx container is compromised (through a vulnerability in nginx itself), the attacker is on the frontend network and can reach the api container. But they cannot reach MongoDB or Redis — those are only on the backend network, which the nginx container is not a member of. Network segmentation creates defence-in-depth — a compromise in one layer does not automatically grant access to all layers.

Common Mistakes

Mistake 1 — Running docker compose down -v in production

❌ Wrong — deletes all named volumes including the database:

docker compose down -v   # DESTROYS mongodb_data volume — all data gone!

✅ Correct — use down without -v to preserve data:

docker compose down      # containers removed, volumes preserved
docker compose up -d     # containers recreated, existing volumes remounted

Mistake 2 — All services on the same network (no isolation)

❌ Wrong — nginx can directly reach MongoDB:

services:
    nginx:    { networks: [app] }
    api:      { networks: [app] }
    mongodb:  { networks: [app] }   # nginx can reach mongodb:27017!

✅ Correct — separate networks with controlled access:

nginx:   { networks: [frontend] }
api:     { networks: [frontend, backend] }
mongodb: { networks: [backend] }   # nginx cannot reach mongodb

Mistake 3 — Using bind mount for production database data

❌ Wrong — data tied to specific host path, breaks on host migration:

mongodb:
    volumes:
        - /opt/mongodb/data:/data/db   # host-specific path — fragile in production

✅ Correct — use named volumes (portable, managed by Docker):

mongodb:
    volumes:
        - mongodb_data:/data/db   # named volume — works on any Docker host

Quick Reference

| Task | Code |
| --- | --- |
| Named volume for data | mongodb_data:/data/db in the volumes section |
| Bind mount source | ./api:/app — dev only |
| Isolate backend services | networks: [backend] + internal: true on the backend network |
| API on both networks | networks: [frontend, backend] |
| List volumes | docker volume ls |
| Backup MongoDB | docker exec mongo mongodump --out /tmp/bk + docker cp mongo:/tmp/bk ./bk |
| Health endpoint | GET /api/v1/health — 200 OK or 503 Degraded |
| Down without data loss | docker compose down (no -v) |

🧠 Test Yourself

The MongoDB container is restarted to apply a configuration change. The application data must be preserved. Which volume type should be used and why?