How do I make the best use of `turbo prune --docker`?

My current Dockerfile:

```dockerfile
# Set Bun version
ARG BUN_VERSION=1.3.6

# ============================================
# Stage 1: Dependencies Installation
# ============================================
FROM oven/bun:${BUN_VERSION}-slim AS deps
WORKDIR /usr/src/app

# Copy package files and install dependencies
# Include all workspace packages and utils subpackages
COPY package.json bun.lock turbo.json ./
COPY packages/utils/date/package.json ./packages/utils/date/
COPY packages/utils/number/package.json ./packages/utils/number/
COPY packages/utils/object/package.json ./packages/utils/object/
COPY packages/utils/string/package.json ./packages/utils/string/
COPY packages/db/package.json ./packages/db/
COPY packages/auth/package.json ./packages/auth/
COPY packages/client/package.json ./packages/client/
COPY packages/config/package.json ./packages/config/
COPY packages/core/package.json ./packages/core/
COPY apps/backend/package.json ./apps/backend/

RUN bun install
[...]
```

To take advantage of the `turbo prune @diet-it/backend --docker` command, I need to change it to something like this:

```dockerfile
# Set Bun version
ARG BUN_VERSION=1.3.6

# ============================================
# Stage 1: Dependencies Installation
# ============================================
FROM oven/bun:${BUN_VERSION}-slim AS deps
WORKDIR /usr/src/app

# Copy the entire monorepo
COPY . .

# Prune to only what's needed for @diet-it/backend
RUN bunx turbo prune @diet-it/backend --docker
```

It certainly runs faster (from ~60 seconds on average to ~20 seconds), and the image size is close (from 500 MB to 570 MB), but doesn't the `COPY . .` just nuke the Docker cache?
Is this correct? Should I just ignore the Docker cache and trust the Turborepo `.turbo` cache?
Is there a way to keep `COPY` layer caching while using prune at the same time?
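For what it's worth, the pattern the Turborepo docs describe is to run the prune in its own stage and then `COPY --from` only its output into later stages, so the `COPY . .` can only invalidate the cheap prune stage. A sketch, assuming the same base image and package name as above (`out/json/` and `out/full/` are the directories `turbo prune --docker` generates; exact lockfile location can vary by turbo version, so check your `out/` layout):

```dockerfile
ARG BUN_VERSION=1.3.6

# Stage 1: prune the monorepo down to @diet-it/backend and its deps
FROM oven/bun:${BUN_VERSION}-slim AS pruner
WORKDIR /usr/src/app
COPY . .
RUN bunx turbo prune @diet-it/backend --docker

# Stage 2: install dependencies from the pruned package.json files only.
# out/json/ contains just the manifests, so this expensive layer's cache
# survives source-code edits and only busts when dependencies change.
FROM oven/bun:${BUN_VERSION}-slim AS builder
WORKDIR /usr/src/app
COPY --from=pruner /usr/src/app/out/json/ .
RUN bun install

# Copy the pruned source on top and build
COPY --from=pruner /usr/src/app/out/full/ .
RUN bunx turbo run build --filter=@diet-it/backend
```

This way you get both: prune trims the workspace, and the `bun install` layer still caches like your original manifest-by-manifest `COPY` setup did.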

This issue suggests the Docker cache might still be working even with the `COPY . .` in the first stage, so I might not understand this entirely.

I think I understand it now. The `COPY . .` takes 0.2 seconds, and if its output (the content checksum) doesn't change, the following stages can reuse the cache. This is awesome.
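One thing that helps keep that `COPY . .` checksum stable (and the copy fast) is a `.dockerignore` that excludes everything that changes constantly but isn't needed in the build context; a minimal example, assuming a typical Turborepo layout:

```
node_modules
**/node_modules
.git
.turbo
**/.turbo
dist
**/dist
Dockerfile
```

Without this, a local `bun install` or a `.turbo` cache write on the host would change the copied content and bust every layer after `COPY . .`.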


The docs could be clearer about how awesome this feature is.
