So the other day I wrote about my experience with the first 'critical' Kubernetes bug, and how the mitigation took down my Google Kubernetes Engine (GKE) cluster. In that case, Google pushed an upgrade and missed something in the Calico migration (Calico had been installed by them as well; nothing had been changed by me). Oops.

Today, Azure AKS. Errors like:

"heapster" is forbidden: User "system:serviceaccount:kube-system:heapster" cannot update deployments.extensions in the namespace "kube-system

start appearing, along with a mysterious 'server is misbehaving' message tied to 'exec' into a single namespace (other namespaces are OK, and non-exec calls within this namespace are OK). Hmm.
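Before doing anything drastic, you can confirm the RBAC half of this from the outside; the service account name comes straight from the error above:

# Ask the API server whether heapster's service account really lacks the
# permission named in the error (expect 'no' if the binding is broken).
kubectl auth can-i update deployments.extensions \
  --as=system:serviceaccount:kube-system:heapster -n kube-system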

Some online 'research' and we are led to Issue #664.

Digging deeper into the 'server misbehaving' message leads to some discussion about kube-dns being broken. The kube-system namespace shows errors like:

Node aks-nodepool1-19254313-0 has no valid hostname and/or IP address: aks-nodepool1-19254313-0 

Hmm. That is my node name; how could it lose track of its own hostname? I don't even have (easy) access to this, it's all managed.
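A couple of read-only checks worth running at this point (the k8s-app=kube-dns label is the usual one, but verify it on your own cluster):

# What does the API server believe this node's hostname and addresses are?
kubectl get node aks-nodepool1-19254313-0 -o wide
kubectl get node aks-nodepool1-19254313-0 -o jsonpath='{.status.addresses}'

# Is kube-dns itself healthy, and what is it complaining about?
kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system logs -l k8s-app=kube-dns --all-containers --tail=20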

OK, unpack the 'access an Azure node' instructions here. And we're into the assumed 'sick' node. Snoop around; nothing seems too wrong.
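(If you are trying to reproduce this today, a debug pod is the quickest route onto an AKS node; this is a sketch of that approach, not necessarily what the linked doc walks through, and the node name is just mine.)

# kubectl 1.20+: drop a debug pod onto the node, chroot into its
# filesystem, and look at what the kubelet has been saying.
kubectl debug node/aks-nodepool1-19254313-0 -it --image=ubuntu
chroot /host                      # run this inside the debug pod
journalctl -u kubelet | tail -50  # look for hostname/IP complaints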

So... peanut gallery, what does one do? Delete the cluster and move on with life? Open a support ticket?


Or perhaps you were too busy buying iTunes cards to pay off that CRA debt you didn't know you had? (Hint: there is never a reason to do this!)

I'm a bit focused these days on p19 (supply chain process), but you might be more interested in p22 (critical infrastructure). I'm also intrigued by the MSP (managed service provider) threat channel and the assessment that they will be attractive targets; there's a specific alert for this.

For many of you, following the tips on cyber hygiene will be the most concrete thing you can do.

So, read, learn, enjoy, share.


Another day, another piece of infrastructure cowardly craps out. Today it was Google GKE. It updated itself to 1.11.3-gke.18, and then had this to say (while nothing was working: all pods were stuck in Creating, and the nodes would not come online since the CNI failed).

2018-12-03 21:10:57.996 [ERROR][12] migrate.go 884: Unable to store the v3 resources
2018-12-03 21:10:57.996 [ERROR][12] daemon.go 288: Failed to migrate Kubernetes v1 configuration to v3 error=error storing converted data: resource does not exist: BGPConfiguration(default) with error: the server could not find the requested resource (post BGPConfigurations.crd.projectcalico.org)

Um, OK. Yes, I am running 'Project Calico' in my GKE cluster; the intent was to have network policy available. My cluster dates from the 'ancient times' of April 2018.

I did some searching online and found nothing. So in desperation I did what most of you would have done: just created the CRD, as below. And, miracle of miracles, I'm back up and running. If this fixes you, you're welcome. If you came here to tell me what dire thing is ahead of me for manually messing with a managed product, I'd love to know that too.

kubectl -n kube-system create -f - << EOF
---
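# Recreate the BGPConfiguration CRD that the Calico v1-to-v3 migration expects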
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration
EOF
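For completeness, here's what I'd check afterwards; the k8s-app=calico-node label is the usual one on the managed daemonset, so treat this as a sketch:

# Confirm the CRD is there, then bounce the calico-node pods so the
# v1-to-v3 migration gets another run at it, and watch the result.
kubectl get crd bgpconfigurations.crd.projectcalico.org
kubectl -n kube-system delete pod -l k8s-app=calico-node
kubectl -n kube-system logs -l k8s-app=calico-node --tail=20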

Lately I've been talking a lot about supply chain risk. You import some software, and you are suddenly importing its business model and practices. Well, we've just had another 'shenanigan' unveiled, and it's got some good drama: https://github.com/dominictarr/event-stream/issues/116

In a nutshell, there is a package which is relatively stable. The original developer doesn't use it anymore, it's stagnant, but people are still using it. Someone comes along, offers to take over maintenance, and it's handed off.

The new dev makes a quick minor fix and updates the package. And then... makes some evil changes, pushes them on top of that version, and then hides them.

And now all your bitcoin are belong to unknown dev #2.

And lots of things are broken. One person in the thread (perhaps a troll) is complaining because their software has pinned the dependency to the bad version, and now won't build in CI.

Millions of people have installed this software; somewhere, everywhere, there's no real way to say.

Since there is no signing, we can't say what was in GitHub when it was published, or whether it even came from GitHub at all, so the source is murky. There's some encryption, some patching of dependencies. You could have this and not know. You can't trust 'npm ls' to show the truth.
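If you want at least a first-pass check (not proof of absence, for exactly the reasons above), grep your lockfile for the published indicators rather than trusting the tooling; flatmap-stream@0.1.1 was the malicious dependency pulled in via event-stream@3.3.6:

# The lockfile records what was actually installed; npm ls gives the
# nominal view. Neither is conclusive, but both are cheap to check.
grep -n "flatmap-stream" package-lock.json yarn.lock 2>/dev/null
npm ls event-stream flatmap-stream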

So this brings up the question of open source governance. A strong project that is hard to breach has multiple maintainers who are unrelated to each other, and it takes more than one of them reviewing a commit to get it approved. Is this something you look for when you choose your immediate upstreams? Do your upstreams look for it? Do their upstreams? And so on. The tree is both deep and wide.

Red team only has to get it right once, blue team needs to be right 100% of the time.

Commence worrying in 3...2...1.


Hint: you want your email to be encrypted in transit. Now, let's take a look at some stats. From my earlier post, 'Why is Canada less encrypted than the US?', and from Google's Transparency Report, we dig into Sympatico, Bell Canada's consumer Internet brand. We see that no encrypted email is exchanged from Google to Bell (so when your friend with a Gmail account mails your Sympatico address, it goes in the clear).

Gobsmacked, I double-checked this. First we find the mail exchanger (as below), and then we head to https://www.checktls.com/. Story checks out: Bell does not allow encryption in transit of your email, from anywhere in the world.

$ nslookup
> set q=mx
> sympatico.ca.
Server:		127.0.0.53
Address:	127.0.0.53#53

Non-authoritative answer:
sympatico.ca	mail exchanger = 0 mxmta.owm.bell.net.

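You can also ask the mail exchanger directly whether it offers STARTTLS (note that many residential ISPs block outbound port 25, so run this from a machine that can reach it):

# Resolve the MX, then probe it for STARTTLS support on port 25.
dig +short MX sympatico.ca
openssl s_client -connect mxmta.owm.bell.net:25 -starttls smtp < /dev/null
# If the server doesn't advertise STARTTLS, openssl reports that it
# couldn't find it in the server response, matching the Transparency
# Report numbers above.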