So, another day, another ‘registry out of space’. I wrote earlier about the crappy experience of increasing that size (and GKE is still on Kubernetes 1.10, so I can’t use the 1.11+ volume-resize mechanism!!!)
Vowing not to repeat the ‘that can’t be the right method’ approach I pioneered in that post, I decided to dig a bit into why it was so large. One of the culprits was a surprise, so I’ll share it.
So we have a (few) Dockerfiles that do something like:
FROM node:10.8-alpine
LABEL maintainer="email@example.com"
WORKDIR /usr/src/app
COPY . /usr/src/app/package.json
RUN npm install
Seems simple enough, right? And the current directory really has only about 4kB of data. How could this be the largest image in the registry?
Digging into it… our CI system (gitlab-ci) has a ‘caching’ mechanism between stages, which creates a ‘cache.zip’ of whatever paths you choose.
In turn, I’m doing a ‘docker save’ in the build stage so that the various security-scan stages (which run in parallel) are more efficient… they just use the cached copy instead of re-pulling the image.
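A minimal sketch of what that pipeline setup might look like (the job names, paths, and image tag here are illustrative, not our actual config):

```yaml
# .gitlab-ci.yml (sketch) — share a saved image between stages via the cache
cache:
  paths:
    - .cache/

build:
  stage: build
  script:
    - docker build -t myapp:latest .
    - mkdir -p .cache
    - docker save myapp:latest -o .cache/myapp.tar

scan:
  stage: test
  script:
    - docker load -i .cache/myapp.tar
    # …run the security scanners against myapp:latest…
```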
And in turn, gitlab-ci puts that cache in a ‘.cache’ directory inside the repo directory (because that’s how it’s mounted in Kubernetes; the gitlab-runner has no other writable space).
So what happens is: the first pipeline runs, does a bunch of work, and saves some stuff in the cache. Later the pipeline runs again, and the cache grows. But… each ‘docker build’ copies the cache into the image (via that ‘COPY .’), and the image is then saved back into the cache.
So… first run: a 10M image.
Second run: incorporates the first 10M image (from the cache) + itself = 20M.
Third run… well… 40M.
Each run roughly doubles the last, so later this gets quite big. Huh.
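The arithmetic above can be sketched as follows (the 10M starting size is from our first run; the doubling assumes the whole cache lands back in the image each time):

```shell
# Each build folds the previous cache back into the image,
# so the saved size roughly doubles per pipeline run.
size=10   # first run: ~10M image
for run in 1 2 3 4 5; do
  echo "run $run: ~${size}M"
  size=$((size * 2))
done
```

Five runs in, that harmless-looking 10M image is already 160M, and it only gets worse from there.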
This is where the ‘.dockerignore’ file should be used!
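A minimal ‘.dockerignore’ for this case might look like the following (the ‘.cache’ entry is the gitlab-ci cache directory from above; the other entries are common additions, not something this particular build strictly needs):

```
# .dockerignore — keep the CI cache (and other non-app files) out of the build context
.cache
.git
node_modules
```

With ‘.cache’ excluded from the build context, ‘COPY .’ no longer sweeps the previous image back into the new one, and the doubling stops.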
And now I know, and now you do too.