Tag: kubernetes

So the other day I wrote of my experience with the first 'critical' Kubernetes bug, and how the mitigation took down my Google Kubernetes Engine (GKE) cluster. In that case, Google pushed an upgrade and missed something with the Calico migration (Calico had been installed by them as well; nothing had been changed by me). Ooops.

Today, Azure AKS. Errors like:

"heapster" is forbidden: User "system:serviceaccount:kube-system:heapster" cannot update deployments.extensions in the namespace "kube-system

start appearing. Along with a mysterious 'server is misbehaving' message associated with 'exec' to a single namespace (other namespaces are ok, and non-exec calls within this namespace are ok). Hmm.

Some online 'research' and we are led to Issue #664.

Looking deeper at the 'server misbehaving' leads to some discussion about kube-dns being broken. Kube-system shows errors like:

Node aks-nodepool1-19254313-0 has no valid hostname and/or IP address: aks-nodepool1-19254313-0 

Hmm. That is my node name; how could it lose track of its own hostname? I don't even have (easy) access to this, it's all managed.
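From the outside, about all the digging you can do is with kubectl; these are the usual suspects (the kube-dns label and container names are assumed from a stock deployment, so adjust to taste):

# External poking only; label/container names assume a stock kube-dns deployment
kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -n 20
kubectl -n kube-system get pods -o wide
kubectl -n kube-system logs -l k8s-app=kube-dns -c kubedns --tail=50
kubectl -n kube-system logs -l k8s-app=kube-dns -c dnsmasq --tail=50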

OK, unpack the 'access one azure node' instructions here. And we're into the assumed 'sick' node. Snoop around; nothing seems too wrong.
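For the record, the generic way onto a managed node (if you don't want to follow the full linked procedure) is a privileged pod in the host PID namespace plus nsenter. This is a sketch of that trick, not the Azure doc's exact steps:

# Sketch only: a privileged pod pinned to the suspect node, then nsenter into the host.
# kubectl apply -f node-shell.yaml
# kubectl -n kube-system exec -it node-shell -- nsenter -t 1 -m -u -i -n sh
apiVersion: v1
kind: Pod
metadata:
  name: node-shell
  namespace: kube-system
spec:
  nodeName: aks-nodepool1-19254313-0   # the node under suspicion
  hostPID: true                        # so PID 1 inside really is the host's init
  containers:
  - name: shell
    image: ubuntu                      # any image that ships nsenter (util-linux)
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true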

So... peanut gallery, what does one do? Delete the cluster and move on with life? Open a support ticket?


Another day, another piece of infrastructure cowardly craps out. Today it was Google GKE. It updated itself to 1.11.3-gke.18, and then had this to say (while nothing was working, all pods were stuck in Creating, and the nodes would not come online since the CNI failed):

2018-12-03 21:10:57.996 [ERROR][12] migrate.go 884: Unable to store the v3 resources
2018-12-03 21:10:57.996 [ERROR][12] daemon.go 288: Failed to migrate Kubernetes v1 configuration to v3 error=error storing converted data: resource does not exist: BGPConfiguration(default) with error: the server could not find the requested resource (post BGPConfigurations.crd.projectcalico.org)

Um, ok. Yes, I am running 'Project Calico' in my GKE cluster; the intent was to have network policy available. My cluster dates from the 'ancient times' of April 2018.

I did some searching online, and found nothing. So in desperation I did what most of you would have done, just created the CRD, as below. And, miracle of miracles, I'm back and running. If this fixes you, you're welcome. If you came here to tell me what dire thing is ahead of me for manually messing with a managed product, I'd love to know that too.

kubectl -n kube-system create -f - << EOF
---
# Recreate the cluster-scoped BGPConfiguration CRD that the Calico v1-to-v3
# migration was looking for (crd.projectcalico.org/v1).
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration
EOF
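
If you want to double-check it took, something like this works (the calico-node label below is assumed from the standard Calico manifests):

kubectl get crd bgpconfigurations.crd.projectcalico.org
kubectl -n kube-system get pods -l k8s-app=calico-node -w   # watch the CNI pods come back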


Tomorrow (Tues 27, 2018) we're going to have the next meetup to talk Continuous Integration. Got a burning desire to rant about the flakiness of an infinite number of shell scripts bundled into a container and shipped to a remote agent that is more or less busy at different hours?

Wondering if it's better to use Travis, Circle, Gitlab, Jenkins, with a backend of OpenStack, Kubernetes, AWS, ...?

We've got 3 short prepared bits of material and a chance to snoop around the shiny new offices of SSIMWAVE.

So, in the Greater Waterloo area and got some time tomorrow night to talk tech? Check the link.


So I have this architecture where we have 2 separate Kubernetes clusters. The first cluster runs in GKE, the second on 'the beast of the basement' (and then there's a bunch in AKS but they are for different purposes). I run Gitlab-runner on these 2 Kubernetes clusters. But... trouble is brewing.

You see, runners are not sticky, and you have multiple stages that might need to share info. There are two methods built into gitlab-runner for this: the 'cache' and 'artifacts'. The general workflow is 'build... test... save... deploy...'. But 'test' has multiple parallel phases (SAST, Unit, DAST, System, Sanitisers, ...).

So right now I'm using this pattern:
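In .gitlab-ci.yml terms it's roughly this shape; the job names, commands, and cache key here are illustrative, not my actual file:

stages: [build, test, deploy]

build:
  stage: build
  script: [make all]
  cache: &build_cache            # shared via the minio-backed distributed cache
    key: "$CI_PIPELINE_ID"
    paths: [build/]

linux_asan:
  stage: test
  script: [make test-asan]
  cache: *build_cache            # only helps if this runner reaches the same cache backend

linux_tsan:
  stage: test
  script: [make test-tsan]
  cache: *build_cache

deploy:
  stage: deploy
  script: [make deploy]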

But a problem creeps in if 'build' runs on a different cluster than 'linux_asan'. Currently I use the 'cache' (courtesy of minio) to save the build data to bootstrap the other phases. For this particular pipeline, if the cache is empty, each of the test phases works, at reduced efficiency.

However, I have other pipelines where the 'build' stage creates a docker image. In this case, the cache holds the output of 'docker save' and the pre-script runs 'docker load' (using DinD). When the subsequent stages run on a different cluster, they actually fail instead.
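Concretely, those pipelines do something like this (names are placeholders; whether the load lives in before_script or the runner's pre-build script doesn't change the failure mode):

build:
  stage: build
  services: [docker:dind]
  script:
    - docker build -t myapp:$CI_PIPELINE_ID .
    - docker save myapp:$CI_PIPELINE_ID -o image.tar
  cache:
    key: "$CI_PIPELINE_ID"
    paths: [image.tar]

system_test:
  stage: test
  services: [docker:dind]
  before_script:
    - docker load -i image.tar        # hard failure if the cache came up empty on this cluster
  script:
    - ./run-system-tests.sh myapp:$CI_PIPELINE_ID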

So... Solutions.

  1. Run a single cache (in GKE). Both clusters use it. All works, but the performance of cache-upload from the 2nd cluster is poor
  2. Delete the GKE cluster gitlab-runner entirely
  3. Figure out a hierarchical-cache using ATS or Squid
  4. Use the 'registry': on build, push w/ some random tag, and after test fix the tag (see the sketch after this list)
  5. Use the 'artifacts' for this. It suffers from the same cache-upload speed issue
  6. Go back to the original issue that caused the deployment of the self-run Kubernetes cluster and find a cost-economical way to have large build machines dynamically (I tried Azure virtual kubelet but they are nowhere close to big enough; I tried Google Builder but it's very complex to insert into this pipeline).
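
For what option 4 would look like (the tag scheme is invented for illustration; the $CI_REGISTRY_* variables are GitLab's stock ones):

build:
  stage: build
  services: [docker:dind]
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:ci-$CI_PIPELINE_ID" .
    - docker push "$CI_REGISTRY_IMAGE:ci-$CI_PIPELINE_ID"

promote:
  stage: deploy
  services: [docker:dind]
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:ci-$CI_PIPELINE_ID"
    - docker tag "$CI_REGISTRY_IMAGE:ci-$CI_PIPELINE_ID" "$CI_REGISTRY_IMAGE:latest"
    - docker push "$CI_REGISTRY_IMAGE:latest"

Both clusters would then only ever pull from the registry, so the asymmetric upstream stops mattering for the test stages (the basement still has to push the initial image once).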

Others?

The problem is the basement-machine has 1Gbps downstream, but only 50Mbps upstream. And some of these cache items are multi-GiB.


So one of the upstream projects I am working on has added some new tests. Should be a good thing, right?

Suddenly, out of nowhere, we start getting 'terminated 137' on CI stages. The obscure unix math is... subtract 128 to get the signal: 137 - 128 = 9, so kill -9 (see here for why; tl;dr: the exit status is 8-bit, 0-128 == normal return, 129-255 == killed by a signal).
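A quick way to decode these without doing the arithmetic by hand: bash's kill builtin understands exit statuses above 128.

kill -l 137     # prints KILL  (137 - 128 = 9)
kill -l 139     # prints SEGV, for comparison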

OK, let's talk about how we run this. We are using Gitlab-CI with gitlab-runner and the Kubernetes executor. This means that our jobs scale with our Kubernetes clusters. For the 'big' things, we have a big node (2 x 18C36T w/ 256GiB). That's right, 72 hardware threads and 256GiB of non-oversubscribed system. You would think this would be enough for the average codebase.

But then enters Bazel, the big fat java-based bully of the build playground. It consumes... 46G of VIRT and 3G of PHYS just to manage things, and about 2 full-time processors. But still, we've got space.

And then, of course, we run some of the stages in parallel. See the image for what we allow to run in parallel. But linux_asan, linux_tsan, and test are the big 3 (all running the same suite with different sanitizer flags).

OK. We are not getting OOM messages, so we are not out of memory, and a ton of graphing with vmstat and netdata proves that hypothesis. But that is the expected reason for a kill -9. Hmm.

If we look at top during one of the runs, we see the not-yet-too-common 't' (terabyte) unit in the VIRT column. That's right, two of the things have mapped 20TiB of virtual memory (with sanitizer builds, most of that is likely reserved shadow address space rather than anything resident). Hmm.

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                                                                    
49444 root      20   0 46.130g 2.634g  21152 S 124.6  1.0  12:22.89 java                                                                                                                                                       
  742 root      20   0  759984 676364  61752 R 100.0  0.3   0:12.71 clang-7                                                                                                                                                    
  760 root      20   0  745848 663384  62004 R 100.0  0.3   0:12.68 clang-7                                                                                                                                                    
 2344 root      20   0 20.000t 234276 147720 R  64.9  0.1   0:01.98 server_test                                                                                                                                                
 2362 root      20   0 20.000t 226140 136812 S  61.6  0.1   0:01.88 websocket_integ                                                                                                                                            
 2371 root      20   0  200628 125832  54136 R  60.7  0.0   0:01.85 clang-7         

We dig a bit further and find that 'KSM' (Kernel Samepage Merging) is doing really well; see below. This means we are getting 60% more RAM for free!
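The KSM numbers come straight out of sysfs; a quick way to see what it is merging (assuming 4KiB pages):

# pages_sharing is roughly "pages we did not have to keep a private copy of"
grep . /sys/kernel/mm/ksm/run /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing
# approximate savings in GiB
echo $(( $(cat /sys/kernel/mm/ksm/pages_sharing) * 4 / 1024 / 1024 ))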

Digging some more, we find the (likely) smoking gun. Looking @ kubectl top pods, we find that the 3 tests are each reported (by kube-metrics-server) as using more memory than they actually are, and that when the sum of them exceeds the physical memory, one of them gets terminated by kubernetes.
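One way to see the gap directly (namespace and pod name below are placeholders) is to put what the metrics pipeline reports next to what the pod's own cgroup says:

kubectl -n gitlab top pod runner-xxxx          # what the metrics pipeline believes
kubectl -n gitlab exec runner-xxxx -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes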

So... kubernetes wants you to disable swap. It's very opinionated on this subject (and we are not swapping here). But it seems to have miscalculated the 'vm.overcommit' and 'ksm' effects, thus being too pessimistic and terminating what were otherwise happy pods.

We had another issue. Initially each pod (which is a container, which is to say not virtualised: it sees the host kernel, etc.) thought it had 72 VCPU to play with, and went nuts in parallel. So all 5 pods running w/ 72 VCPU caused some thrashing. We tamed them by capping them @ 24 VCPU, ironically making things faster.
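For reference, the cap itself is nothing exotic: it's just an ordinary CPU limit on the job pod (with the Kubernetes executor, typically the cpu_limit setting under [runners.kubernetes] in the runner's config.toml), which lands in the pod spec as something like:

# The 24-VCPU cap, as it ends up in the job pod's container spec
spec:
  containers:
  - name: build
    image: our-build-image        # placeholder
    resources:
      limits:
        cpu: "24"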

So... what is the solution? I end up with each of the parallel phases at some point thinking it's using ~180GiB of RAM (on a 256GiB machine). I can un-parallel the stages, but that is unnecessarily pessimistic. It also means that if I grow the cluster, the speed won't increase.

Likewise I can instruct gitlab runner to cap the number of jobs, but that is very wasteful and slow.

I can continue to dig into kubelet and try and figure out why it is confused.

Any suggestions from the peanut gallery?
