Cloud simplicity: NOT!

(cue the Wayne’s World music on the ‘NOT!’).

So: GitLab, gitlab-runner, Kubernetes, Google Cloud Platform, Google Kubernetes Engine, Google Cloud Storage. Helm. Minio.


OK, our pipelines use ‘Docker in Docker’ (dind) as a means of building a Docker image while inside a ‘stage’ that is itself running in a Docker container. Why?

  1. I don’t want to expose the Node’s Docker socket to the pipelines, since if you can access the Docker socket, you are root. Security!
  2. Docker has a design flaw that means you must ‘build’ using a running Docker daemon (and thus root). Yes, I’m aware a few folks have started to work around it, but for everyday use ‘docker build .’ requires a running Docker daemon, and thus root. Yes, I know it’s just a magic tar file.

So, imagine a pipeline that looks like:

image: docker

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375

services:
  - name: docker:dind

cache:
  key: "${CI_BUILD_REF_SLUG}"
  paths:
    - .cache/

before_script:
  - mkdir -p .cache .cache/images
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
  - for i in .cache/images/*; do docker load -i $i || true; done

stages:
  - build
  - test

build:
  stage: build
  script: |
    docker build -t $CONTAINER_IMAGE:$CI_COMMIT_SHA .
    for i in $(docker image ls --format '{{ .Repository }}:{{ .Tag }}' | grep -v "<none>"); do echo save $i; docker save $i > .cache/images/$(echo "$i" | sed -e 's?/?_?g'); done

test:
  stage: test
  artifacts:
    paths:
      - reports/
  script: |
    docker run --rm $CONTAINER_IMAGE:$CI_COMMIT_SHA mytest

What is all this gibberish, and why so complex, and how did you fix it?

What this says is ‘services: … dind’: run a ‘docker in docker’ container as a ‘sidecar’, i.e. bolted into the same namespace (and thus the same localhost) as our build container (‘docker’ in this case).

Create a cache that will live between stages, called .cache/. After the build, save the image there; before each stage, load it back in.

Why do you need to pull it back in? Because each stage is a new set of containers, and that ‘dind’ is gone, erased.
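A side note on the ‘docker save’ loop above: image references contain ‘/’, which can’t appear in a filename, so the sed flattens them before writing the cache file. A quick sketch of that mangling (the image name here is made up for illustration):

```shell
# Flatten an image reference into a cache filename, exactly as the
# build stage's save loop does: replace every '/' with '_'.
img="registry.example.com/group/app:v1"
fname=$(echo "$img" | sed -e 's?/?_?g')
echo "$fname"   # registry.example.com_group_app:v1
```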

OK, sounds good, why this post? What about the GCS and minio?

Turns out the caching is kept *locally* on the node that runs the ‘runner’ instance. Since we are in Kubernetes (GKE), each stage will, in general, be on a different node, and thus the cache would be empty, and the 2nd stage would fail.

So there is a feature of gitlab-runner called ‘distributed caching’, to the rescue! But it only supports S3. OK, no problem, Google Cloud Storage supports S3, right? Well, maybe read the gitlab-runner Merge Request about adding GCS support. So, struck out.

But there is a cool tool called Minio. It’s S3 for average folk like me. So, let’s crank one of those up:

helm install --name minio --namespace minio --set accessKey=MYKEY,secretKey=MYSECRET,defaultBucket.enabled=true,defaultBucket.purge=true,persistence.enabled=false stable/minio

OK, step 1 is done; now let’s address the gitlab-runner. Add this bit to config-runner.yaml:

runners:
  cache:
    cacheType: "s3"
    s3ServerAddress: "minio.minio:9000"
    s3BucketName: "my-gitlab-runner-cache"
    s3CacheInsecure: "true"
    s3CachePath: "cache"
    cacheShared: "true"
    secretName: "s3access"

Now create your secret; the values are the base64-encoded access and secret keys.
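Producing those values is one echo each (note that echo appends a trailing newline, which gets encoded too, hence the ‘K’ at the end):

```shell
# Base64-encode the Minio credentials for the Kubernetes Secret.
echo MYKEY | base64       # TVlLRVkK
echo MYSECRET | base64    # TVlTRUNSRVQK
```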

$ cat s3Secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: s3access
type: Opaque
data:
  accesskey: "TVlLRVkK"
  secretkey: "TVlTRUNSRVQK"
$ kubectl create --namespace gitlab-runner -f s3Secret.yaml
$ helm install --namespace gitlab-runner --name gitlab-runner -f config-runner.yaml charts/gitlab-runner

Poof, we are running. And now you have a decent idea, faster, of my afternoon.

The s3ServerAddress is host.namespace (so minio.minio for me). I chose not to make it Internet-accessible (otherwise you can set the chart’s ingress fields for it). Since it’s not Internet-accessible, I cannot sign a certificate for it, so Insecure = true. I’m torn: do I expose it via the ingress and thus have TLS for the first hop, or leave it non-TLS and unexposed?

And that, my friends, is how I learned to stop worrying and love the cloud bomb.




