Optimizing Next.js Docker Images for Faster CI Builds

Dockerizing a Next.js app without any optimization can quickly lead to very large images, sometimes several gigabytes in size. This not only slows down CI pipelines but also increases deployment time and resource usage.

In this post, I’ll walk through the optimizations we applied to JINDO_APP_NEXT to significantly reduce Docker image size, speed up builds, and make deployments much faster and more reliable.

What We Improved

  • Used turbo prune to include only the required dependencies instead of the entire monorepo
  • Switched to Next.js standalone output combined with a multi-stage Docker build
  • Added GitHub Actions caching so Docker layers don’t rebuild on every CI run
  • Enabled Turborepo remote caching to speed up repeated builds

1. Reduce Build Context with turbo prune

In a monorepo, a single Docker build typically treats the entire repository as its build context. This means Docker has to process thousands of files, including code from unrelated applications and libraries, which bloats the image and slows down the build process.
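
A .dockerignore already helps at the margins by keeping host node_modules, build artifacts, and .git out of the context. A minimal sketch (illustrative, adjust to your repository):

node_modules
**/node_modules
**/.next
**/.turbo
.git

It does not, however, remove the unrelated apps and packages that still end up in the build context.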

To solve this, we use turbo prune. This command analyzes the dependency graph of a specific workspace (e.g., web) and extracts only the files necessary to build that target.

The --docker flag (disabled by default) prepares a minimal, Docker-friendly workspace that includes only what’s required to build the target app:

RUN turbo prune web --docker

This command generates a pruned output with:

  • json/ – contains only the required package.json files
  • full/ – includes the full source code of internal packages needed for the build
  • Pruned lockfile – a reduced lockfile with just the dependencies required for the target app

out/
├── json/
│   ├── apps/
│   │   └── web/
│   │       └── package.json
│   └── package.json
├── full/
│   ├── apps/
│   │   └── web/
│   │       ├── next.config.js
│   │       └── package.json
│   ├── package.json
│   └── turbo.json
└── pnpm-lock.yaml

As a result, Docker builds are faster, more efficient, and much smaller in size.

Further reading: "Docker | Turborepo", the official guide on using Docker in a monorepo.

2. Next.js Standalone Output + Multi-Stage Docker Build

By default, a Next.js production build still depends on a large node_modules folder at runtime, which makes Docker images bigger than needed. To solve this, we switched to Next.js standalone output by setting output: "standalone" in next.config.js.

// next.config.js
module.exports = {
  output: "standalone",
};

Standalone output creates a minimal production server that includes only the files and dependencies actually needed to run the app. During the build, Next.js analyzes all imports, requires, and file system usage to determine which files each page might load. This traced output ensures that only necessary files are included, reducing image size, improving startup speed, and keeping the production container clean.

In short, standalone output lets you ship just what the server needs, making your Docker image smaller, faster, and more efficient for production.
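
As a quick sanity check outside Docker, the standalone output can be run directly with Node. This is a minimal sketch assuming a plain single-app layout; in a monorepo the same files end up under apps/web inside the standalone folder, which is exactly what the runner stage later in this post copies:

next build
cp -r public .next/standalone/public                # public/ is not copied automatically
cp -r .next/static .next/standalone/.next/static    # neither are the static assets
node .next/standalone/server.js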

https://nextjs.org/docs/pages/api-reference/config/next-config-js/output

Multi-stage Docker build

A single-stage build puts everything—your source code, compilers, and build tools—into one final image, which often makes the file size unnecessarily large and includes extra files that aren't needed to actually run the app. In contrast, a multi-stage build uses several "stages" in one Dockerfile to keep things lean; you use one stage to compile your code and a second, much smaller stage to copy only the finished "ready-to-run" files into the final image. Think of it like a kitchen: a single-stage build is like serving a meal with all the messy pots and pans still on the table, while a multi-stage build allows you to cook in the kitchen but only bring the clean, finished plate to the dining room.

We split the Dockerfile into five stages, where each stage has a specific responsibility. Only the final stage is used to run the application in production, which helps keep the image small and focused.

The base stage defines the shared setup. It selects the Node.js version, enables pnpm, and sets the working directory. All other stages build on top of this to avoid repeating the same configuration.

The prune stage uses turbo prune <app> --docker to copy only the files and dependencies required by the target app. By removing unrelated packages from the monorepo, the dependency graph becomes smaller, resulting in faster installs and a reduced build context.

The installer stage copies the pruned package.json files and lockfile and installs the required dependencies. Since only the necessary files are present, installation is faster and benefits more from Docker layer caching. This is the stage that produces node_modules.

The builder stage uses the installer stage as its base. It defines build-time environment variables, copies the full pruned source code, and runs the build using turbo build. During this step, Next.js generates a standalone output, which bundles the application code together with only the runtime dependencies needed to run the server.

The runner stage is the final production image. It copies only the files produced by the standalone build, including the Next.js server, static assets, and public files. Because the standalone output already includes the required runtime dependencies, the runner image does not need the full node_modules or source code.

Using Next.js standalone output allows the production image to stay minimal, improves container startup time, and reduces the overall image size, making it better suited for production environments.

# ───────────────────────────────
# 🧱 BASE STAGE
# ───────────────────────────────
FROM node:20-alpine AS base
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
WORKDIR /app

# ───────────────────────────────
# 🧹 PRUNE STAGE
# ───────────────────────────────
FROM base AS pruner
# Pin this to the turbo version your repo uses
RUN pnpm add -g turbo
COPY . .
RUN turbo prune web --docker

# ───────────────────────────────
# 📦 INSTALLER STAGE
# ───────────────────────────────
FROM base AS installer
RUN apk add --no-cache libc6-compat
ENV NODE_ENV=production 

COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/pnpm-lock.yaml ./pnpm-lock.yaml

# --ignore-scripts skips lifecycle scripts here: Panda CSS codegen needs its
# config file, which is only copied in the builder stage below.
RUN --mount=type=cache,id=pnpm,target=/pnpm/store \
    pnpm install --frozen-lockfile --ignore-scripts

# ───────────────────────────────
# 🏗️ BUILDER STAGE
# ───────────────────────────────
FROM installer AS builder
WORKDIR /app

# ── NEXT_PUBLIC (build-time, baked into Next.js) ──
ARG NEXT_PUBLIC_BASE_URL
ARG NEXT_PUBLIC_URL
ARG NEXT_PUBLIC_ENV
ARG NEXT_PUBLIC_HOCUSPOCUS_URL
ARG NEXT_PUBLIC_SONGS_URL
ARG NEXT_PUBLIC_SONGS_SAFARI_URL
ARG NEXT_PUBLIC_SOUND_EFFECTS_URL

ENV NEXT_PUBLIC_BASE_URL=$NEXT_PUBLIC_BASE_URL
ENV NEXT_PUBLIC_URL=$NEXT_PUBLIC_URL
ENV NEXT_PUBLIC_ENV=$NEXT_PUBLIC_ENV
ENV NEXT_PUBLIC_HOCUSPOCUS_URL=$NEXT_PUBLIC_HOCUSPOCUS_URL
ENV NEXT_PUBLIC_SONGS_URL=$NEXT_PUBLIC_SONGS_URL
ENV NEXT_PUBLIC_SONGS_SAFARI_URL=$NEXT_PUBLIC_SONGS_SAFARI_URL
ENV NEXT_PUBLIC_SOUND_EFFECTS_URL=$NEXT_PUBLIC_SOUND_EFFECTS_URL

# ── Node / Turbo / Sentry (build-time) ──
ARG NODE_ENV
ARG TURBO_TOKEN
ARG TURBO_TEAM
ARG SENTRY_AUTH_TOKEN

ENV NODE_ENV=$NODE_ENV
ENV TURBO_TOKEN=$TURBO_TOKEN
ENV TURBO_TEAM=$TURBO_TEAM
ENV SENTRY_AUTH_TOKEN=$SENTRY_AUTH_TOKEN

# 1. Now we copy the full source (including panda.config.ts)
COPY --from=pruner /app/out/full/ .

# 2. Run the scripts that we ignored earlier (Panda codegen, etc.)
# This ensures the Panda styles are generated before the build starts.
RUN pnpm install --frozen-lockfile

# 3. Copy .env last to prevent busting the install cache
COPY .env .env

RUN npx turbo build --filter=web

# ───────────────────────────────
# 🚀 RUNNER (PRODUCTION) STAGE
# ───────────────────────────────
FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup --system --gid 1001 nodejs && adduser --system --uid 1001 nextjs

# Copy standalone build files
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/.next/static ./apps/web/.next/static
COPY --from=builder --chown=nextjs:nodejs /app/apps/web/public ./apps/web/public

USER nextjs
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"

CMD ["node", "apps/web/server.js"]

3. GitHub Actions Docker Layer Caching

When running Docker builds in GitHub Actions, every workflow run can start from scratch by default. This means installing dependencies, setting up the environment, and building the app all over again.

To save time, GitHub Actions provides a cache service that allows you to store and reuse parts of your build between runs.

With Docker, this cache is applied to image layers. Using cache-from, Docker can reuse layers from previous builds instead of rebuilding them. Layers that haven’t changed, like the base Node.js setup or installed dependencies, don’t need to be rebuilt. At the same time, cache-to stores newly built layers so that future runs can reuse them.

Example: Build and Push Docker Image

    
      # 2️⃣ Set up the Buildx builder (required for the GitHub Actions cache backend)
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          driver: docker-container
          install: true

      # ... rest of your workflow logic

      # 7️⃣ Build and push the Docker image
      - name: Build and Push Docker Image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          # IMAGE_TAG is defined in an earlier step of the workflow
          tags: ${{ env.GAR_LOCATION }}:${{ env.IMAGE_TAG }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
          build-args: |
            NEXT_PUBLIC_BASE_URL=${{ env.NEXT_PUBLIC_BASE_URL }}
            ... OTHER BUILD TIME ARGS

How Caching Works in Multi-Stage Builds

Previous Build Layers (from cache)
----------------------------------
[ Base Node Image ]
[ Install Dependencies ]
[ Build App ]
[ Final Image ]

New Build
---------
Step 1: FROM Base Node Image         <-- reused from cache
Step 2: Install Dependencies         <-- reused from cache
Step 3: Build App                    <-- only rebuilds changed parts
Step 4: Copy to Final Image          <-- final image created

Explanation:

  • cache-from → pulls layers from previous builds to reuse unchanged steps.
  • cache-to → stores new or updated layers for future workflow runs.
  • Only changed steps are rebuilt, saving time and resources.
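
For reference, the same two options map directly onto the buildx CLI. The sketch below uses a registry-backed cache with placeholder image names, since the gha cache type only works inside a GitHub Actions runner:

docker buildx build \
  --cache-from type=registry,ref=<registry>/<image>:buildcache \
  --cache-to type=registry,ref=<registry>/<image>:buildcache,mode=max \
  --tag <registry>/<image>:latest \
  --push .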

https://www.blacksmith.sh/blog/cache-is-king-a-guide-for-docker-layer-caching-in-github-actions


4. Turborepo Remote Caching

Turborepo is a high-performance build system for JavaScript and TypeScript monorepos. It optimizes development and CI/CD by caching build artifacts and only rebuilding what’s necessary.

What is Remote Caching?

Remote caching allows Turborepo to store build artifacts—such as compiled code, test results, and bundled assets—on a shared remote cache. This cache can be accessed by multiple developers and CI/CD pipelines.

If a task (like build or test) has already run with the same inputs, Turborepo can reuse the cached output instead of running the task again, saving time and resources.
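
Remote caching reuses whatever the local cache would store, so each task needs its outputs declared. A minimal turbo.json sketch for the web build might look like this (the top-level key is tasks in Turborepo 2.x; older 1.x versions call it pipeline):

{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "!.next/cache/**"]
    }
  }
}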

Setting Up Turborepo Remote Cache with Vercel

Turborepo’s remote caching is backed by Vercel by default. To enable it, you need to authenticate Turborepo with your Vercel account using a token.

Step 1: Create a Vercel Access Token

  1. Go to Vercel Dashboard → Account Settings
  2. Open Tokens
  3. Click Create Token
  4. Select Full Access
  5. Name it (e.g. turborepo-cache)
  6. Copy the generated token

Step 2: Authenticate Turborepo (One-Time Setup)

From the root of your monorepo, run:

npx turbo login

This opens the browser and authenticates Turborepo with your Vercel account.

Next, link the repository to a Vercel project:

npx turbo link

Step 3: Get Your Turborepo Team ID

After linking, Turborepo creates a config file:

cat .turbo/config.json

Example output:

{
  "teamId": "YOUR_TEAM_ID"
}

You will need this teamId during builds.


Step 4: Enable Remote Caching During Docker Builds

Pass the Turborepo token and team ID as build-time arguments.

ARG TURBO_TOKEN
ARG TURBO_TEAM

ENV TURBO_TOKEN=$TURBO_TOKEN
ENV TURBO_TEAM=$TURBO_TEAM

When these variables are available, Turborepo automatically enables remote caching.
You should see “Remote caching enabled” in the build logs.
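
In the GitHub Actions workflow from section 3, these values can be supplied from repository secrets and forwarded as build arguments; the secret names below are just examples:

build-args: |
  TURBO_TOKEN=${{ secrets.TURBO_TOKEN }}
  TURBO_TEAM=${{ secrets.TURBO_TEAM }}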


Turborepo can sign artifacts before uploading them to the Remote Cache, using an HMAC-SHA256 signature derived from a secret key you provide. When artifacts are downloaded, Turborepo verifies their integrity and authenticity; any artifact that fails verification is ignored and treated as a cache miss.

Enable Signatures in turbo.json

{
  "remoteCache": {
    "signature": true
  }
}

Provide the Signature Key

export TURBO_REMOTE_CACHE_SIGNATURE_KEY="your-secret-key"
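
To make the key available inside the Docker build as well, it can be wired through like the other Turborepo variables (a sketch; keep the actual value in a CI secret):

ARG TURBO_REMOTE_CACHE_SIGNATURE_KEY
ENV TURBO_REMOTE_CACHE_SIGNATURE_KEY=$TURBO_REMOTE_CACHE_SIGNATURE_KEY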

Turborepo Remote Caching Docs

