Node.js with Bun
Dockerfile best practices for Node.js with Bun
Annotated Dockerfile for Node.js with Bun:
# Use Bun's official image as the base
FROM oven/bun:1 AS base
# Stage 1: Install production dependencies
FROM base AS deps
# Set working directory
WORKDIR /app
# Copy only package definition files first
COPY package.json bun.lock ./
# Install only production dependencies
# Using --frozen-lockfile ensures the exact versions from the lockfile are used
RUN --mount=type=cache,id=bun,target=/root/.bun/install/cache \
    bun install --frozen-lockfile --production
# Stage 2: Build the application
FROM base AS build
WORKDIR /app
# Copy package definitions to maintain consistent build context
COPY package.json bun.lock ./
# Install all dependencies (dev + prod) for building the app
RUN --mount=type=cache,id=bun,target=/root/.bun/install/cache \
    bun install --frozen-lockfile
# Copy entire source code
COPY . .
# Run build script defined in your package.json
RUN bun run build
# Stage 3: Create the final lightweight production image
FROM base
# Set working directory
WORKDIR /app
# Copy only production dependencies (no dev dependencies)
COPY --from=deps /app/node_modules /app/node_modules
# Copy compiled application output (dist directory)
COPY --from=build /app/dist /app/dist
# Explicitly set environment to production
ENV NODE_ENV=production
# Default command to run your application with Bun (adjust path as needed)
CMD ["bun", "run", "./dist/index.js"]
Why these are best practices:
Multi-stage builds
- Smaller final images: Dependencies and build tools are discarded after use, reducing container size.
- Security: Fewer files and tools mean a smaller attack surface.
Caching Bun modules
- Faster builds: Bun already installs dependencies much faster than npm (the Bun project reports up to 30x), and the cache mount makes repeat builds faster still (see the build commands after this list).
- Lower CI/CD overhead: Speeds up continuous integration and deployment workflows.
Separating dependencies and build stages
- Clear separation of concerns: Each stage serves a single purpose, making it easier to debug and optimize.
- Improved cache efficiency: Changes in code don't trigger unnecessary reinstallation of unchanged dependencies.
Minimal runtime image
- Performance and security: Only the essential runtime code is present, limiting potential vulnerabilities.
- Lower resource consumption: Optimized resource usage in production deployments.
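As a quick sketch of how this fits together (the my-bun-app image name and port 3000 are assumptions; adjust to your app):
# Build the image; the cache mounts above require BuildKit, which is the default builder in current Docker releases
docker build -t my-bun-app .
# Run the container, publishing the app's port
docker run --rm -p 3000:3000 my-bun-app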
Additional Dockerfile best practices you can adopt:
Use a non-root user
For enhanced security, run your app as a non-root user:
FROM base
# Create a non-root user
RUN adduser --disabled-password --gecos "" appuser
WORKDIR /app
COPY --from=deps /app/node_modules /app/node_modules
COPY --from=build /app/dist /app/dist
ENV NODE_ENV=production
# Switch to non-root user
USER appuser
CMD ["bun", "run", "./dist/index.js"]
Use the HEALTHCHECK directive
Allows Docker to monitor container health automatically.
HEALTHCHECK --interval=30s --timeout=3s \
CMD curl -f http://localhost:3000/health || exit 1
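This assumes that curl is available inside the final image and that your app serves a /health endpoint on port 3000; both are assumptions you may need to adapt. A minimal sketch of such an endpoint using Bun's built-in HTTP server:
// Hypothetical src/index.ts: a minimal Bun server exposing the /health route the healthcheck probes
Bun.serve({
  port: 3000,
  fetch(req) {
    // Respond 200 on /health so the container is reported healthy
    if (new URL(req.url).pathname === "/health") {
      return new Response("ok");
    }
    return new Response("Hello from Bun!");
  },
});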
Use an explicit .dockerignore
Prevent copying unnecessary files into your image.
Example .dockerignore
node_modules
dist
coverage
.git
Dockerfile
docker-compose.yml
README.md
*.log
Set resource limits explicitly
When deploying containers, always set CPU and memory limits to avoid resource starvation or instability.
Example in Kubernetes (outside the Dockerfile); a Docker Compose equivalent is shown below
resources:
  limits:
    cpu: 1000m
    memory: 1Gi
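A roughly equivalent fragment for Docker Compose, using the v3 deploy syntax (the app service name and my-bun-app image tag are assumptions):
# docker-compose.yml fragment
services:
  app:
    image: my-bun-app
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 1G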
By following these annotations and best practices, your Docker images become faster to build, more secure, smaller, and easier to maintain, making them ideal for modern production workflows.