So the other day I wrote about my experience with the first 'critical' Kubernetes bug, and how the mitigation took down my Google Kubernetes Engine (GKE) cluster. In that case, Google pushed an upgrade and missed something with the Calico migration (Calico had been installed by them as well; nothing had been changed by me). Oops.
Today, Azure AKS. Errors like:
"heapster" is forbidden: User "system:serviceaccount:kube-system:heapster" cannot update deployments.extensions in the namespace "kube-system
start appearing. Along with a mysterious 'server is misbehaving' message associated with 'exec' to a single namespace (other namespaces are ok, and non-exec calls within this namespace are ok). Hmm.
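One way to confirm the RBAC gap directly is to ask the API server the same question the error is answering. A diagnostic sketch (the service account, verb, and resource are taken from the error message above):

```shell
# Ask the API server whether heapster's service account is allowed to
# update deployments.extensions in kube-system; prints "yes" or "no"
kubectl auth can-i update deployments.extensions \
  --as=system:serviceaccount:kube-system:heapster \
  --namespace=kube-system
```

If this prints "no", the missing permission is on the cluster side, not something a workload change could have caused.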
Some online 'research' and we are led to Issue #664.
Looking deeper at the 'server misbehaving' leads to some discussion about kube-dns being broken. Kube-system shows errors like:
Node aks-nodepool1-19254313-0 has no valid hostname and/or IP address: aks-nodepool1-19254313-0
Hmm. That is my node name; how could it lose track of its own hostname? I don't even have (easy) access to this, it's all managed.
OK, unpack the 'access one Azure node' procedure here. And we're into the assumed 'sick' node. Snoop around; nothing seems too wrong.
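For the record, the sort of poking one can do from outside the node (the node name comes from the error above; label selectors are the stock kube-dns ones, so treat this as a sketch):

```shell
# Is kube-dns itself healthy?
kubectl --namespace=kube-system get pods -l k8s-app=kube-dns

# Does the node object actually carry a hostname and internal IP?
kubectl get node aks-nodepool1-19254313-0 -o wide
kubectl describe node aks-nodepool1-19254313-0 | grep -A4 Addresses
```

If the Addresses section of the node object is missing its Hostname or InternalIP entries, that matches the 'no valid hostname and/or IP address' complaint.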
So... peanut gallery, what does one do? Delete the cluster and move on with life? Open a support ticket?
Another day, another piece of infrastructure cowardly craps out. Today it was Google GKE. It updated itself to 1.11.3-gke.18, and then had this to say (while nothing was working: all pods were stuck in Creating, and the nodes would not come online since the CNI failed).
2018-12-03 21:10:57.996 [ERROR] migrate.go 884: Unable to store the v3 resources
2018-12-03 21:10:57.996 [ERROR] daemon.go 288: Failed to migrate Kubernetes v1 configuration to v3 error=error storing converted data: resource does not exist: BGPConfiguration(default) with error: the server could not find the requested resource (post BGPConfigurations.crd.projectcalico.org)
Um, OK. Yes, I am running 'Project Calico' in my GKE. The intent was to have network policy available. My cluster dates from the 'ancient times' of April 2018.
I did some searching online, and found nothing. So in desperation I did what most of you would have done, just created the CRD, as below. And, miracle of miracles, I'm back and running. If this fixes you, you're welcome. If you came here to tell me what dire thing is ahead of me for manually messing with a managed product, I'd love to know that too.
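The CRD in question looks like this (reconstructed from Calico's upstream v3 install manifests, not my exact file; apply at your own risk on a managed cluster):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration
```

With that applied (`kubectl apply -f bgpconfiguration-crd.yaml`), the migration had a resource to write to and stopped failing.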
Tomorrow (Tues 27, 2018) we're going to have the next meetup to talk Continuous Integration. Got a burning desire to rant about the flakiness of an infinite number of shell scripts bundled into a container and shipped to a remote agent that is more or less busy at different hours?
Wondering if it's better to use Travis, Circle, Gitlab, Jenkins, with a backend of OpenStack, Kubernetes, AWS, ...?
We've got 3 short prepared bits of material and a chance to snoop around the shiny new offices of SSIMWAVE.
So, in the Greater Waterloo area and got some time tomorrow night to talk tech? Check the link.
So I have this architecture where we have 2 separate Kubernetes clusters. The first cluster runs in GKE, the second on 'the beast of the basement' (and then there's a bunch in AKS but they are for different purposes). I run Gitlab-runner on these 2 Kubernetes clusters. But... trouble is brewing.
You see, runners are not sticky, and you have multiple stages that might need to share information. There are two methods built into gitlab-runner for this: the 'cache' and 'artifacts'. The general workflow is 'build... test... save... deploy...'. But 'test' has multiple parallel phases (SAST, Unit, DAST, System, Sanitisers, ...).
So right now I'm using this pattern:
But a problem creeps in if 'build' runs on a different cluster than 'linux_asan'. Currently I use the 'cache' (courtesy of minio) to save the build data to bootstrap the other phases. For this particular pipeline, if the cache is empty, each of the test phases works, at reduced efficiency.
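As a sketch (job names, paths, and scripts are invented), the cache-based pattern looks roughly like this in .gitlab-ci.yml:

```yaml
stages: [build, test]

build:
  stage: build
  script:
    - make all
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - build/

linux_asan:
  stage: test
  script:
    - make test-asan   # falls back to a fresh build if build/ is absent
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - build/
    policy: pull       # consume the cache, don't re-upload it
```

Because the cache is best-effort, a miss here only costs time: the test job rebuilds from scratch instead of failing.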
However, I have other pipelines where the 'build' stage creates a Docker image. In this case, the cache holds the output of 'docker save', and the pre-script runs 'docker load' (using DinD). When the subsequent stages run on a different cluster, they actually fail instead.
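A sketch of that Docker-image variant (image and job names invented); the 'docker load' in before_script is the part that breaks when the cache misses across clusters:

```yaml
build:
  stage: build
  services:
    - docker:dind
  script:
    - docker build -t myapp:ci .
    - docker save myapp:ci -o image.tar
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths:
      - image.tar

system_test:
  stage: test
  services:
    - docker:dind
  before_script:
    - docker load -i image.tar   # hard failure if the cache came back empty
  script:
    - docker run --rm myapp:ci ./run-tests.sh
```

Unlike a missing build/ directory, there is no graceful fallback here: with no image.tar, 'docker load' exits non-zero and the job dies.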
So what are the options?
- Run a single cache (in GKE) that both clusters use. All works, but cache-upload performance from the 2nd cluster is poor.
- Delete the GKE cluster gitlab-runner entirely.
- Figure out a hierarchical cache using ATS or Squid.
- Use the 'registry': on build, push with some random tag; after test, fix the tag.
- Use 'artifacts' for this. It suffers from the same cache-upload speed issue.
- Go back to the original issue that caused the deployment of the self-run Kubernetes cluster, and find a cost-effective way to get large build machines dynamically (I tried Azure virtual kubelet, but those instances are nowhere close to big enough; I tried Google Builder, but it is very complex to insert into this pipeline).
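The registry-based option would look roughly like this, using GitLab's predefined CI variables ($CI_REGISTRY_IMAGE, $CI_COMMIT_SHA); the stage and job names are invented:

```yaml
build:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

promote:
  stage: deploy
  script:
    # re-tag the tested image without rebuilding or re-uploading its layers
    - docker pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:latest
```

The appeal is that the registry deduplicates layers, so the final push from any cluster moves almost no data.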
The problem is that the basement machine has 1 Gbps downstream but only 50 Mbps upstream, and some of these cache items are multi-GiB; at 50 Mbps, uploading a 4 GiB cache entry takes roughly 11 minutes.