Cloud simplicity: NOT!

(cue Wayne's World music on the 'NOT!').

So. Gitlab, Gitlab-runner, Kubernetes, Google Cloud Platform, Google Kubernetes Engine, Google Cloud Storage. Helm. Minio.


OK, our pipelines use 'Docker in Docker' to construct a docker image while inside a 'stage' that is itself a docker image. Why?

  1. I don't want to expose the Node's docker socket to the pipelines, since if you can access the docker socket, you are root. Security!
  2. Docker has a design flaw that means you must 'build' using a running docker daemon (and thus root). Yes, I'm aware a few folks have started to work around it, but for everyday use 'docker build .' requires a running docker daemon, and thus root. Yes, I know it's just a magic tar file.

So, imagine a pipeline that looks like:

image: docker

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375

services:
  - name: docker:dind

cache:
  key: "${CI_BUILD_REF_SLUG}"
  paths:
    - .cache/

before_script:
  - mkdir -p .cache .cache/images
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
  - for i in .cache/images/*; do docker load -i $i || true; done

stages:
  - build
  - test

build:
  stage: build
  script: |
    docker build -t $CONTAINER_IMAGE:$CI_COMMIT_SHA .
    for i in $(docker image ls --format '{{ .Repository }}:{{ .Tag }}' | grep -v '<none>'); do echo save $i; docker save $i > .cache/images/$(echo "$i" | sed -e 's?/?_?g'); done

test:
  stage: test
  artifacts:
    paths:
      - reports/
  script: |
    docker run --rm $CONTAINER_IMAGE:$CI_COMMIT_SHA mytest

What is all this gibberish, and why so complex, and how did you fix it?

What this says is 'services: ... dind': run a 'docker in docker' container as a 'sidecar', i.e. bolted into the same namespace (and thus the same localhost) as our build container ('docker' in this case).

Create a cache that lives between stages, called .cache/. After the build, push the image into it; before each stage, pull it back in.

Why do you need to pull it back in? Because each stage is a new set of containers and that 'dind' is gone, erased.
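A side note on the sed in the build stage: image refs contain '/', which can't appear in a flat cache filename, so every '/' gets swapped for '_'. A minimal sketch of that mangling (the image name here is hypothetical):

```shell
#!/bin/sh
# Map a docker image ref to a flat cache filename by replacing
# every '/' with '_' (the same sed expression the pipeline uses).
image_to_cachefile() {
  echo "$1" | sed -e 's?/?_?g'
}

image_to_cachefile "registry.example.com/mygroup/myapp:abc123"
# -> registry.example.com_mygroup_myapp:abc123
```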

OK, sounds good, so why this post? Where do GCS and Minio come in?

Turns out the caching is kept *locally* on the node that runs the 'runner' instance. Since we are in Kubernetes (GKE), each stage will, in general, be on a different node, and thus the cache would be empty, and the 2nd stage would fail.

So gitlab-runner has a feature called 'distributed caching'; this to the rescue! But it only supports S3. OK, no problem, Google Cloud Storage supports the S3 API, right? Well. Maybe go read the gitlab-runner Merge Request about adding GCS support. So, struck out.

But there is a cool tool called Minio. It's S3 for average folk like me. So, let's crank one of those up:

helm install --name minio --namespace minio --set accessKey=MYKEY,secretKey=MYSECRET,defaultBucket.enabled=true,defaultBucket.purge=true,persistence.enabled=false stable/minio

OK, step 1 is done, now let's address the gitlab-runner. Add this bit to config-runner.yaml:

   cacheType: "s3"
   s3ServerAddress: "minio.minio:9000"
   s3BucketName: "my-gitlab-runner-cache"
   s3CacheInsecure: "false"
   s3CachePath: "cache"
   cacheShared: "true"
   secretName: "s3access"
   Insecure: "true"
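For reference, those helm values get rendered into the [runners.cache] section of the runner's config.toml. A sketch of roughly what comes out the other side (field names from the gitlab-runner docs of that era; treat this as illustrative, not authoritative):

```toml
[[runners]]
  [runners.cache]
    Type = "s3"
    ServerAddress = "minio.minio:9000"
    AccessKey = "MYKEY"        # injected from the s3access secret
    SecretKey = "MYSECRET"
    BucketName = "my-gitlab-runner-cache"
    Path = "cache"
    Shared = true
    Insecure = true            # no TLS on the in-cluster hop
```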

Now create your secret. Base64 encode the values.

$ cat s3Secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: s3access
type: Opaque
data:
  accesskey: "TVlLRVkK"
  secretkey: "TVlTRUNSRVQK"
$ kubectl create --namespace gitlab-runner -f s3Secret.yaml
$ helm install --namespace gitlab-runner --name gitlab-runner -f config-runner.yaml charts/gitlab-runner
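Those base64 blobs aren't magic; they're just the keys from the helm install piped through base64 (note that echo's trailing newline gets encoded too, which is why they end in 'K' rather than '='):

```shell
#!/bin/sh
# Encode the minio credentials for the Kubernetes Secret.
# MYKEY / MYSECRET are the example values from the helm install above.
echo "MYKEY" | base64      # -> TVlLRVkK
echo "MYSECRET" | base64   # -> TVlTRUNSRVQK
```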

Poof, we are running. And now you have a decent idea, faster, of my afternoon.

The s3ServerAddress is host.namespace (so minio.minio for me). I chose not to make this Internet accessible (otherwise you could set the ingress fields for it). Since it's not Internet accessible, I cannot sign a certificate for it, so Insecure = true. I'm torn: do I expose it via the ingress and thus get TLS for the first hop? Or leave it non-TLS and unexposed?

And that, my friends, is how I learned to stop worrying and love the cloud bomb.

Upstream risk: the vanishing

Go is one of those languages that has a tight coupling to upstream code stored in remote repos. Let's call that "Other People's Code" (OPC). You write some function, the toolchain automatically pulls in the OPC (usually from github), and away you go.

Now, after a while of this, the good people of the Go community got tired of OPC breaking. It was fun when you could just use OPC and it worked, but what if 'they' changed the API or something? Then you would have to go fix Your Code (YC). So, to prevent breaking YC all the time, the greybeards of Go invented the 'Vendor'. The 'Vendor' technique is effectively a manifest file like this:

{
 "version": 0,
 "dependencies": [
   {
     "importpath": "",
     "repository": "",
     "vcs": "git",
     "revision": "0fb560e5f7fbcaee2f75e3c34174320709f69944",
     "branch": "master",
     "notests": true
   },
   {
     "importpath": "",
     "repository": "",
     "vcs": "git",
     "revision": "10f801ebc38b33738c9d17d50860f484a0988ff5",
     "branch": "master",
     "notests": true
   }
 ]
}
OK, life is good again. I use the 'OPC of the day' for a while, then, when I think I'm done with YC, I lock down the dependencies. Mission Accomplished! But is it really? Consider this case that I ran into today:

{
  "importpath": "",
  "repository": "",
  "vcs": "git",
  "revision": "b8f878dd8851dd7b724c813f04d469fa2dae881a",
  "branch": "master",
  "path": "/circuitbreaker",
  "notests": true
}

The astute amongst you will recognise that b8f878dd8851dd7b724c813f04d469fa2dae881a has been rebased away. It's gone. It's an ex-commit. It's not just pining for the fjords. So I can't build.

And that's probably the best-case scenario. You see, the app I'm building has 790 upstream live dependencies, spread across various services (github, ...). Trust all of them? Trust all of the people that have private repos on each of them? Hope that no one figures out how to do a SHA1 hash attack? I mean, it's not like that hasn't been demonstrated. So if someone can rewrite history in git (the rebase), why not rewrite it so the hash collides and I get 'bad' code?

In the meantime, well, I guess I binary-search nearby commits in this repo and find one that is 'good enough'.
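At least checking whether a pinned revision still exists upstream is easy to script. A sketch (the repo path and revision arguments are placeholders), leaning on git cat-file's -e existence test:

```shell
#!/bin/sh
# rev_exists REPO_DIR SHA
# Succeeds if SHA is a commit that still exists in the clone at REPO_DIR,
# fails if it has been rebased away (or never existed).
rev_exists() {
  git -C "$1" cat-file -e "$2^{commit}" 2>/dev/null
}

# usage (hypothetical paths):
#   rev_exists ./src/some-dep b8f878dd8851dd7b724c813f04d469fa2dae881a || echo "ex-commit"
```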

I'm glad that YC is perfect; if only OPC were. Wait, My Code (MC) is OPC from your standpoint? And OPC from my standpoint might be YC? Confusing!

The mysterious life of outdoor cats, solved, sadly

~4 years of outdoor kitty. His (heated) house to keep him warm, the live temperature feed, etc. So many mysteries. Where did he go when he would be off for days? Each time, would it be the last time we saw him?

We last saw him a couple of weeks ago. No big deal, it's warm, we left water out.

Tonight we talked with the other family that has also been caring for him for the last couple of years. He'd been cheating on us! Neither of us knew about the other. They had also bought a house for him outside to keep him warm. Sadly, he came back to them last week w/ an infection, sick. After some time at the vet, well, he didn't make it.

RIP you mysterious two-timing cat.

Suffering sisyphean security solutions: make your chrome part of the solution

OK, it's no real secret by now that the WWW is a cesspool of stuff. It's not all /r/aww. As an end user, you don't see the mountain of (typically javascript) code that is executed. Or worse, where it comes from and how it is maintained. So you don't act as a 'push back' mechanism on the web site owners, voting w/ your feet or wallet to avoid sites that put you at risk. And thus the invisible hand is stayed.

But you, yes you, can be part of the solution. And it's not hard, it just involves a coloured emoji. Sign me up, you say!

Well, for Chrome (I didn't test, but there is a method for Firefox), you can install this extension. Want to do it from source and see what you are getting? Github is your friend!

So what happens is you surf around. Suddenly a site with some vulnerabilities crops up. O noes, people could steal your deets. The icon changes, you snoop the list (see the screenshot). You then pen a magnificent letter to the 'admin@' of the site, they see the error of their ways and update their gruntfile or whatever, and boom, that site has been inoculated. The herd immunity starts to kick in. Soon the web is a delightful place (again) full of 'under construction' animated gifs and dancing babies.

The example above is a real one, my wiki. Now I know I need to update my bootstrap and jquery.

Brief, but delightful -- such as had not staid long with her destiny -- the javascript crook sleeps well

Can't beat 'em? Join 'em! Brick and mortar and online

One of my favourite 'brick and mortar' stores is Canada Computers. But you knew that 🙂 It's great: they have decent prices, decent selection, and don't hassle you with a lot of questions. It's for someone who knows what they are buying. They do allow ordering online, but why? They are right down the street.

One of my favourite online stores is Amazon (and Aliexpress and Ebay, the troika). Imagine my surprise as I'm trying to sort out a small form-factor desktop to replace (RIP) the mini pc that couldn't AVX (the one that has just been picked up to return to its maker in Shenzhen). As I'm trying to sort out the ram config, I see this image to the right. Colour me shocked. I mean, there's no reason why not, I suppose, but... really?

Oh yeah, happy 4th to all you US'rs. Hope the hotdogs and fireworks follow tradition, w/ only one exploding in the sky.