
Conversation

jgao54 (Contributor) commented Jun 12, 2025

Avoid re-downloading Go modules from scratch on every build. Also mount the Go build cache to speed up compilation.
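Note: --mount=type=cache is a BuildKit feature, so the caching only takes effect when the image is built with BuildKit enabled (the default in recent Docker and Compose releases; older setups need to opt in explicitly). A minimal usage sketch, where the image tag is illustrative rather than anything the repo actually defines:

    # assumes BuildKit; the tag name below is hypothetical
    DOCKER_BUILDKIT=1 docker build -t peer-flow-dev .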

jgao54 (Contributor, Author) commented Jun 12, 2025

oops, just realized this Dockerfile is shared in prod as well. Probably want to separate out the base image for dev.
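For that dev/prod split, one possible shape (a rough sketch only -- the stage names, Go version, and COPY layout here are assumptions, not what the repo actually uses) is a shared base stage plus separate prod and dev targets, where only the dev target uses cache mounts:

    # sketch: shared base with just the module manifests
    FROM golang:1.22-bookworm AS base
    WORKDIR /root/flow
    COPY flow/go.mod flow/go.sum ./

    # prod target: clean download and build, nothing cached across builds
    FROM base AS prod
    RUN go mod download
    COPY flow/ ./
    RUN CGO_ENABLED=1 go build -o /root/peer-flow

    # dev target: identical steps, backed by persistent BuildKit caches
    FROM base AS dev
    RUN --mount=type=cache,target=/go/pkg/mod go mod download
    COPY flow/ ./
    RUN --mount=type=cache,target=/go/pkg/mod \
        --mount=type=cache,target=/root/.cache/go-build \
        CGO_ENABLED=1 go build -o /root/peer-flow

Local builds would then pass --target dev, while prod builds pass --target prod.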

@@ -17,7 +17,9 @@ RUN rm -f go.work*
 # build the binary from flow folder
 WORKDIR /root/flow
 ENV CGO_ENABLED=1
-RUN go build -o /root/peer-flow
+RUN --mount=type=cache,target=/go/pkg/mod \
+    --mount=type=cache,target=/root/.cache/go-build \
+    go build -o /root/peer-flow
Contributor commented:

wouldn't this be dependent on many things like compiler version and flags? wondering if it's worth it

jgao54 (Contributor, Author) commented Jun 12, 2025

this made a pretty big difference for me when testing ./dev-peerdb.sh locally:

  • without it you get a cold build, which takes 200s+
  • with this change, switching between branches took 15-20s to build

I'll punt on merging this for now though -- I want to scope the mount logic to local development only. In prod we probably benefit from a clean build in general, but this change does speed up Docker builds for me when testing locally.
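If the split lands as a separate dev build target (see the sketch above), scoping the mount logic to local development could be as simple as having ./dev-peerdb.sh build that target while prod keeps the plain stage, e.g. (hypothetical, assuming a dev stage like the one sketched earlier):

    # hypothetical: build only the dev stage, with BuildKit caches enabled
    DOCKER_BUILDKIT=1 docker build --target dev -t peer-flow-dev .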

@@ -8,7 +8,7 @@ WORKDIR /root/flow
 COPY flow/go.mod flow/go.sum ./

 # download all the dependencies
-RUN go mod download
+RUN --mount=type=cache,target=/go/pkg/mod go mod download
Contributor commented:


iirc this wouldn't work for some reason, maybe things are better now
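A hedged guess at why this may have failed in the past, not a diagnosis: a cache mount only exists for the duration of its own RUN instruction, so modules downloaded into a mounted /go/pkg/mod are not baked into the image layer, and any later go build step without the same mount has to re-download everything. Since the other hunk in this PR mounts the module cache on the build step too, i.e.:

    RUN --mount=type=cache,target=/go/pkg/mod go mod download
    RUN --mount=type=cache,target=/go/pkg/mod \
        --mount=type=cache,target=/root/.cache/go-build \
        go build -o /root/peer-flow

both steps now share the same cache, which could be why it behaves as expected here.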
