Node.js with npm
Dockerfile best practices for Node.js with npm
🐳 Annotated Dockerfile for Node.js with npm:
# Use Node.js LTS as the base image for consistency and long-term support
FROM node:lts-jod AS base
# Stage 1: Install production dependencies using npm
FROM base AS deps
# Set working directory
WORKDIR /app
# Copy only package definition files first
COPY package.json package-lock.json* ./
# Install only production dependencies
# Use npm ci for faster, reliable installations from lockfile
RUN --mount=type=cache,id=npm,target=/root/.npm \
    npm ci --omit=dev
# Stage 2: Build the application
FROM base AS build
WORKDIR /app
# Copy package definitions to maintain consistent build context
COPY package.json package-lock.json* ./
# Install all dependencies (dev + prod) for building the app
RUN --mount=type=cache,id=npm,target=/root/.npm \
    npm ci
# Copy entire source code
COPY . .
# Run build script defined in your package.json (generally builds into a "dist" directory)
RUN npm run build
# Stage 3: Create the final lightweight production image
FROM base
# Set working directory
WORKDIR /app
# Copy only production dependencies (no dev dependencies)
COPY --from=deps /app/node_modules /app/node_modules
# Copy compiled application output (dist directory)
COPY --from=build /app/dist /app/dist
# Explicitly set environment to production
ENV NODE_ENV=production
# Default command to run your Node.js application
CMD ["node", "./dist/index.js"]
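Note that the RUN --mount=type=cache instructions above require BuildKit. Recent Docker releases use BuildKit by default; on older versions you may need to enable it explicitly (the image name myapp below is a placeholder):

```shell
# Enable BuildKit explicitly on older Docker versions
DOCKER_BUILDKIT=1 docker build -t myapp .

# Or use buildx, which always builds with BuildKit
docker buildx build -t myapp .
```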
Why these are best practices:
✅ Multi-stage builds
- Smaller final images: Dependencies and build tools are discarded after use, reducing container size.
- Security: Fewer files and tools mean a smaller attack surface.
✅ Using npm ci instead of npm install
- Deterministic builds: Ensures exact versions from package-lock.json are used.
- Faster in clean environments: Skips dependency resolution and installs exact versions straight from the lockfile.
- CI-friendly: Designed specifically for automated environments.
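As a quick sketch of the difference between the two commands (run locally in a project with a lockfile; output depends on your project):

```shell
# npm ci requires an existing package-lock.json and fails fast if it
# disagrees with package.json, instead of silently updating it
npm ci        # clean install, exact versions from the lockfile

# npm install may re-resolve versions and rewrite package-lock.json
npm install
```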
✅ Caching npm modules
- Faster builds: Reusing the npm cache reduces install times significantly.
- Lower CI/CD overhead: Speeds up continuous integration and deployment workflows.
✅ Separating dependencies and build stages
- Clear separation of concerns: Each stage serves a single purpose, making it easier to debug and optimize.
- Improved cache efficiency: Changes in code don't trigger unnecessary reinstallation of unchanged dependencies.
✅ Minimal runtime image
- Performance and security: Only the essential runtime code is present, limiting potential vulnerabilities.
- Lower resource consumption: Optimized resource usage in production deployments.
Additional Dockerfile best practices you can adopt:
Use a non-root user
For enhanced security, run your app as a non-root user:
FROM base
# Create a non-root user
RUN useradd -m appuser
WORKDIR /app
COPY --from=deps /app/node_modules /app/node_modules
COPY --from=build /app/dist /app/dist
ENV NODE_ENV=production
# Switch to non-root user
USER appuser
CMD ["node", "./dist/index.js"]
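Alternatively, the official node images already ship with a built-in unprivileged user named node, so a sketch of the same stage can skip useradd and hand over file ownership at copy time with --chown:

```dockerfile
FROM base
WORKDIR /app
# The official node images include an unprivileged "node" user
COPY --from=deps --chown=node:node /app/node_modules /app/node_modules
COPY --from=build --chown=node:node /app/dist /app/dist
ENV NODE_ENV=production
USER node
CMD ["node", "./dist/index.js"]
```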
Use HEALTHCHECK directive
Allows Docker to monitor container health automatically. The example below assumes your app exposes a /health endpoint on port 3000 and that curl is available in the image (it is not in slim or alpine variants).
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost:3000/health || exit 1
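If your image lacks curl, a shell-free alternative is to probe with node itself. This sketch assumes the same hypothetical /health endpoint on port 3000:

```dockerfile
HEALTHCHECK --interval=30s --timeout=3s \
  CMD node -e "require('http').get('http://localhost:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"
```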
Use explicit .dockerignore
Prevent copying unnecessary files into your image.
Example .dockerignore
node_modules
dist
coverage
.git
Dockerfile
docker-compose.yml
README.md
*.log
Set resource limits explicitly
When deploying containers, always set CPU and memory limits to avoid resource starvation or instability.
Example Kubernetes container spec (outside the Dockerfile); Docker Compose uses a slightly different syntax:
resources:
  limits:
    cpu: 1000m
    memory: 1Gi
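In Docker Compose, the equivalent limits live under deploy.resources; the service name web and image name myapp below are placeholders:

```yaml
services:
  web:
    image: myapp
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 1G
```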
Consider using Distroless or Alpine images
Switch to even lighter-weight base images if you're comfortable handling potential compatibility issues:
FROM node:22-alpine AS base
Or distroless:
FROM gcr.io/distroless/nodejs22-debian12 AS final
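Keep in mind that distroless images contain no shell and no npm, so they only work as the final runtime stage; the deps and build stages must keep a full node base. The distroless Node.js image also sets node as its ENTRYPOINT, so the CMD changes shape. A sketch of the final stage, reusing the earlier stages:

```dockerfile
# Final stage only; deps and build stages still use a full node image
FROM gcr.io/distroless/nodejs22-debian12
WORKDIR /app
COPY --from=deps /app/node_modules /app/node_modules
COPY --from=build /app/dist /app/dist
ENV NODE_ENV=production
# ENTRYPOINT is already "node", so pass just the script path
CMD ["./dist/index.js"]
```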
By following these annotations and best practices, your Docker images become faster to build, more secure, smaller, and easier to maintain: ideal for modern production workflows.