Phishing sites have hit the halfway point on encryption: more than half of them now use TLS. This means that being TLS-encrypted is no indication of whether a site is real. The certificate proves a site is exactly what its domain name says it is, but not what it might appear to be.

Ironically, phishing sites might be better encrypted than the average legitimate web site. If we look at the list, we find some big-ticket names that are not encrypted. I’m looking at you (interestingly, they do support encryption, but don’t turn it on unless you force it). There’s a workaround: install HTTPS Everywhere as a Chrome add-on.

Now, the percentage of pages fetched over HTTPS, and of browsing time spent on HTTPS pages, is high. See the Google Transparency Report. But this is an 80/20 type thing: a small number of sites capture the majority of time, but it’s the other sites that you get phished and leaked from.

Let’s take a look by country. For Canada, there’s a set of non-HTTPS sites. Some are owned by our federal government. Who’s up for taking their favourite site and checking whether it:

  1. Is available over HTTPS
  2. Is *only* available over HTTPS (or redirects all non-HTTPS requests to the HTTPS version)
  3. Has HSTS enabled
  4. Has a strong certificate

It’s easy: head on over and run a quick check. If it’s not an A, maybe write to their IT admin and ask why not.
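If you’d rather script part of that checklist than click around, here’s a rough sh sketch of the HSTS check. `check_hsts` is a hypothetical helper, and the canned response below stands in for real headers you’d fetch with `curl -sI https://your-favourite-site.example`:

```shell
#!/bin/sh
# Rough sketch: inspect a site's response headers for HSTS.

check_hsts() {
  # $1: raw HTTP response headers, one header per line
  if printf '%s\n' "$1" | grep -qi '^strict-transport-security:'; then
    echo "HSTS: enabled"
  else
    echo "HSTS: missing"
  fi
}

# Canned response for illustration (a real fetch may differ):
headers='HTTP/2 200
strict-transport-security: max-age=31536000; includeSubDomains
content-type: text/html'

check_hsts "$headers"   # prints "HSTS: enabled"
```

The same grep pattern works for checking redirect headers (`location:`) in step 2.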

Courtesy of our friends @ Google and their Transparency Report, we see that Canada’s traffic to Google is 89% encrypted. Good, but not great when you realise the UK’s is 97%. What could drive this difference? I would think device types and ages would be similar in both countries. This traffic is a bellwether of other encrypted traffic, and we want it to be 100%.

Anyone got any comments?

Tomorrow (Tues 27, 2018) we’re going to have the next meetup to talk Continuous Integration. Got a burning desire to rant about the flakiness of an infinite number of shell scripts bundled into a container and shipped to a remote agent that is more or less busy at different hours?

Wondering if it’s better to use Travis, Circle, Gitlab, Jenkins, with a backend of OpenStack, Kubernetes, AWS, …?

We’ve got 3 short prepared bits of material and a chance to snoop around the shiny new offices of SSIMWAVE.

So: in the Greater Waterloo area with some time tomorrow night to talk tech? Check the link.

So I have this architecture where we have 2 separate Kubernetes clusters. The first cluster runs in GKE, the second on ‘the beast of the basement’ (and then there’s a bunch in AKS, but they are for different purposes). I run gitlab-runner on these 2 Kubernetes clusters. But… trouble is brewing.

You see, runners are not sticky, and you have multiple stages that might need to share info. There are two mechanisms built in to gitlab-runner for this: the ‘cache’ and ‘artifacts’. The general workflow is ‘build… test… save… deploy…’. But ‘test’ has multiple parallel phases (SAST, Unit, DAST, System, Sanitisers, …).

So right now I’m using this pattern:
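Roughly, the pattern looks like this as a gitlab-ci fragment (the job names and make commands are illustrative; ‘linux_asan’ is one of the parallel test phases):

```yaml
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script: make            # placeholder build command
  cache:
    key: "$CI_PIPELINE_ID"
    paths:
      - build/            # build output shared with the test phases

linux_asan:
  stage: test
  script: make test-asan  # placeholder test command
  cache:
    key: "$CI_PIPELINE_ID"
    paths:
      - build/
    policy: pull          # test phases only read the cache
```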

But a problem creeps in if ‘build’ runs on a different cluster than ‘linux_asan’. Currently I use the ‘cache’ (courtesy of minio) to save the build output and bootstrap the other phases. For this particular pipeline, if the cache is empty, each of the test phases still works, just at reduced efficiency (it rebuilds what it needs from scratch).

However, I have other pipelines where the ‘build’ stage creates a docker image. In this case, the cache holds the output of ‘docker save’, and the pre-script runs ‘docker load’ (using DinD). When the subsequent stages run on a different cluster, they actually fail instead of merely slowing down.
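As a sketch, that save/load wiring looks roughly like this (image name, cache key, and test script are illustrative). The failure mode is visible: an empty cache means there is no `image.tar` to load, and the job dies in `before_script`:

```yaml
build:
  stage: build
  services:
    - docker:dind
  script:
    - docker build -t myapp:ci .
    - docker save myapp:ci -o image.tar   # tarball goes into the shared cache
  cache:
    key: "$CI_PIPELINE_ID"
    paths:
      - image.tar

system_test:
  stage: test
  services:
    - docker:dind
  before_script:
    - docker load -i image.tar            # hard failure if the cache was cold
  script:
    - ./run-system-tests.sh               # placeholder test script
  cache:
    key: "$CI_PIPELINE_ID"
    paths:
      - image.tar
    policy: pull
```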

So… Solutions.

  1. Run a single cache (in GKE). Both clusters use it. All works, but the performance of cache-upload from the 2nd cluster is poor
  2. Delete the GKE-cluster gitlab-runner entirely
  3. Figure out a hierarchical cache using ATS or Squid
  4. Use the ‘registry’: on build, push with some random tag; after test, fix the tag
  5. Use ‘artifacts’ for this. It suffers from the same cache-upload speed issue
  6. Go back to the original issue that caused the deployment of the self-run Kubernetes cluster and find a cost-economical way to have large build machines dynamically (I tried Azure virtual kubelet but they are nowhere close to big enough; I tried Google Builder but it’s very complex to insert into this pipeline).
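Option 4 would look roughly like this (the registry path and tag names are illustrative). Pushing with the pipeline ID as the “random tag” means both clusters pull from the registry instead of the cache, and the ‘promote’ job re-points a stable tag at the image that passed tests:

```yaml
build:
  stage: build
  services:
    - docker:dind
  script:
    - docker build -t registry.example.com/app:$CI_PIPELINE_ID .
    - docker push registry.example.com/app:$CI_PIPELINE_ID

promote:
  stage: deploy
  services:
    - docker:dind
  script:
    # after tests pass, "fix the tag": re-point a stable tag at the tested image
    - docker pull registry.example.com/app:$CI_PIPELINE_ID
    - docker tag registry.example.com/app:$CI_PIPELINE_ID registry.example.com/app:latest
    - docker push registry.example.com/app:latest
```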


The problem is the basement-machine has 1Gbps downstream, but only 50Mbps upstream. And some of these cache items are multi-GiB.