Tag: container

This might be a little detailed for most, feel free to don your peril-sensitive sunglasses.

So, no offence to Debian 9.6 Stretch, but the rest of my fleet runs Ubuntu. The two are very similar, but some packages differ.

So let's see how we can make the Chromebook run an Ubuntu image and still have Wayland support and file sharing. There are a bunch of packages (cros-*) that add these pieces: 'sommelier', 'garcon', 'wayland', etc. When your image is loaded in lxc, a /dev/.ssh mount appears which contains well-known keys for the username that matches the name of your *first* login account. For me, that's 'db'. I think you are kind of stuck with this name. The ssh keys are used for 'sshfs', which is how the Files app gets at your home dir within the container.

Now, let's try building a container that matches those expectations.

lxc image copy ubuntu:18.04 local: --alias bionic
lxc launch bionic cros

lxc exec cros -- bash

Now, inside the container being prepped, run:

echo "deb https://storage.googleapis.com/cros-packages stretch main" > /etc/apt/sources.list.d/cros.list
if [ -f /dev/.cros_milestone ]; then sudo sed -i "s?packages?packages/$(cat /dev/.cros_milestone)?" /etc/apt/sources.list.d/cros.list; fi
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 1397BC53640DB551
apt update
apt install -y binutils adwaita-icon-theme-full 
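That sed line pins the repo to your ChromeOS milestone. To see what the rewrite actually does, here's a sketch against a scratch file (the milestone value 72 is made up for illustration):

```shell
# Simulate the milestone substitution on a scratch copy of the sources list
tmp=$(mktemp)
echo "deb https://storage.googleapis.com/cros-packages stretch main" > "$tmp"
milestone=72   # stand-in for the contents of /dev/.cros_milestone
sed -i "s?packages?packages/$milestone?" "$tmp"
cat "$tmp"
```

The '?' delimiter just avoids escaping the slashes in the URL; the first 'packages' match sits inside 'cros-packages', so the milestone lands at the end of the repo path.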

apt download cros-ui-config
ar x cros-ui-config_0.12_all.deb data.tar.gz
gunzip data.tar.gz
tar f data.tar --delete ./etc/gtk-3.0/settings.ini
gzip data.tar
ar r cros-ui-config_0.12_all.deb data.tar.gz
rm -rf data.tar.gz
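The ar/tar shuffle above exists only to strip the packaged settings.ini (we edit our own copy later) before reassembling the deb. Here is the delete-a-member trick in isolation, on a scratch archive with invented file names (assumes GNU tar, as in the container):

```shell
# Build a scratch tar with two members, then delete one in place (GNU tar)
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p etc/gtk-3.0
echo keep > etc/keepme
echo drop > etc/gtk-3.0/settings.ini
tar cf data.tar ./etc
tar f data.tar --delete ./etc/gtk-3.0/settings.ini
tar tf data.tar    # settings.ini is gone, keepme remains
```

Note that --delete only works on uncompressed archives, which is why the recipe gunzips data.tar.gz first and re-gzips it after.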

mkdir -p /opt/google/cros-containers/bin/sommelier
mkdir -p /opt/google/cros-containers/lib/
apt install -y libgl1-mesa-dri
cp /usr/lib/x86_64-linux-gnu/dri/swrast_dri.so /opt/google/cros-containers/lib/

apt install -y cros-adapta cros-apt-config cros-garcon cros-guest-tools cros-sftp cros-sommelier cros-sommelier-config cros-sudo-config cros-systemd-overrides ./cros-ui-config_0.12_all.deb cros-unattended-upgrades cros-wayland
rm -rf cros-ui-config_0.12_all.deb
sed -i 's/Ambiance/CrosAdapta/' /etc/gtk-3.0/settings.ini
sed -i 's/ubuntu-mono-dark/CrosAdapta/' /etc/gtk-3.0/settings.ini
sed -i 's/gtk-sound-theme-name = ubuntu/gtk-font-name = Roboto 11/' /etc/gtk-3.0/settings.ini
sed -i '5d' /etc/gtk-3.0/settings.ini
sed -i -n '2{h;n;G};p' /etc/gtk-3.0/settings.ini
echo chronos-access:x:1001:db >> /etc/group
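Those last two sed calls on settings.ini are cryptic: '5d' deletes line 5, and -n '2{h;n;G};p' swaps lines 2 and 3 (hold line 2, read line 3, append the held line after it). A demonstration on a scratch file:

```shell
# Demonstrate the line-2/line-3 swap on a four-line scratch file
printf 'one\ntwo\nthree\nfour\n' > demo.txt
sed -n '2{h;n;G};p' demo.txt
# one
# three
# two
# four
```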

echo penguin > /etc/hostname

killall -u ubuntu
groupmod -n db ubuntu
usermod -md /home/db -l db ubuntu
usermod -aG users db
loginctl enable-linger db
sed -i 's/ubuntu/db/' /etc/sudoers.d/90-cloud-init-users
shutdown -h now

Now, back on the host, run:

lxc publish cros --alias cros
lxc image export cros cros

Now, manually: put the exported image on a USB stick, move it to the Chromebook, and copy it into the default 'penguin' container using the Files app.

Now, from the Termina shell:

lxc file pull penguin/home/db/cros.tar.gz $LXD_CONF

lxc stop --force penguin
lxc rename penguin google

lxc image import $LXD_CONF/cros.tar.gz --alias cros
lxc init cros penguin

OK, we are done. And it worked: I now have an Ubuntu 18.04 image running, with file sharing, and Wayland for X-like stuff. I installed 'rxvt-unicode' and added a .Xdefaults file with a suitably large font (34) to overcome the DPI.



Another day, another piece of infrastructure cravenly craps out. Today it was Google GKE. It updated itself to 1.11.3-gke.18, and then had this to say (while nothing was working: all pods were stuck in Creating, and the Nodes would not come online since the CNI failed).

2018-12-03 21:10:57.996 [ERROR][12] migrate.go 884: Unable to store the v3 resources
2018-12-03 21:10:57.996 [ERROR][12] daemon.go 288: Failed to migrate Kubernetes v1 configuration to v3 error=error storing converted data: resource does not exist: BGPConfiguration(default) with error: the server could not find the requested resource (post BGPConfigurations.crd.projectcalico.org)

Um, ok. Yes, I am running 'Project Calico' in my GKE cluster. The intent was to have network policy available. My cluster dates from the 'ancient times' of April 2018.

I did some searching online and found nothing. So, in desperation, I did what most of you would have done: just created the CRD, as below. And, miracle of miracles, I'm back up and running. If this fixes you, you're welcome. If you came here to tell me what dire thing is ahead of me for manually messing with a managed product, I'd love to know that too.

kubectl -n kube-system create -f - << EOF
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration
EOF


Tomorrow (Tues 27, 2018) we're going to have the next meetup to talk Continuous Integration. Got a burning desire to rant about the flakiness of an infinite number of shell scripts bundled into a container and shipped to a remote agent that is more or less busy at different hours?

Wondering if it's better to use Travis, Circle, Gitlab, Jenkins, with a backend of OpenStack, Kubernetes, AWS, ...?

We've got 3 short prepared bits of material and a chance to snoop around the shiny new offices of SSIMWAVE.

So, in the Greater Waterloo area and got some time tomorrow night to talk tech? Check the link.


So I have this architecture where we have 2 separate Kubernetes clusters. The first cluster runs in GKE, the second on 'the beast of the basement' (and then there's a bunch in AKS but they are for different purposes). I run Gitlab-runner on these 2 Kubernetes clusters. But... trouble is brewing.

You see, runners are not sticky, and you have multiple stages that might need to share info. There are two built-in methods in gitlab-runner for this: the 'cache' and 'artifacts'. The general workflow is 'build... test... save... deploy...'. But 'test' has multiple parallel phases (SAST, Unit, DAST, System, Sanitisers, ...).

So right now I'm using this pattern:
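A minimal sketch of what that pattern looks like (other than 'linux_asan', the job names, cache key, and paths here are my illustration, not the real config):

```yaml
stages: [build, test, deploy]

# Shared cache (backed by minio) bootstraps later stages with the build output
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - build/

build:
  stage: build
  script:
    - make -j"$(nproc)"

linux_asan:
  stage: test
  script:
    - make check-asan   # if the cache was empty, this rebuilds first: slower, but works
```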

But a problem creeps in if 'build' runs on a different cluster than 'linux_asan'. Currently I use the 'cache' (courtesy of minio) to save the build data and bootstrap the other phases. For this particular pipeline, if the cache is empty, each of the test phases still works, just at reduced efficiency.

However, I have other pipelines where the 'build' stage creates a docker image. In this case, the cache holds a 'docker save' and the pre-script runs 'docker load' (using DinD). When the subsequent stages run on a different cluster, they fail outright.
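For those docker-image pipelines, the shape is roughly this (job names and the image tag are invented; it assumes a DinD service):

```yaml
build:
  stage: build
  services: [docker:dind]
  script:
    - docker build -t myapp:ci .
    - mkdir -p cache && docker save myapp:ci -o cache/myapp.tar
  cache:
    paths: [cache/]

system_test:
  stage: test
  services: [docker:dind]
  before_script:
    # If this job lands on the other cluster, the cache comes up empty
    # and the load fails, taking the job down with it
    - docker load -i cache/myapp.tar
```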

So... Solutions.

  1. Run a single cache (in GKE). Both clusters use it. All works, but the performance of cache-upload from the 2nd cluster is poor
  2. Delete the GKE cluster gitlab-runner entirely
  3. Figure out a hierarchical-cache using ATS or Squid
  4. Use the 'registry': on build, push with some random tag, and after test fix up the tag
  5. Use 'artifacts' for this. It suffers from the same cache-upload speed issue
  6. Go back to the original issue that caused the deployment of the self-run Kubernetes cluster and find a cost-economical way to have large build machines dynamically (I tried Azure virtual kubelet but they are nowhere close to big enough; I tried Google Builder but it's very complex to insert into this pipeline).


The problem is the basement machine has 1Gbps downstream, but only 50Mbps upstream. And some of these cache items are multi-GiB.
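To put numbers on that: a single 2 GiB cache item over a 50 Mbit/s uplink costs well over five minutes, per stage, per pipeline (back-of-envelope, ignoring protocol overhead):

```shell
# Seconds to push 2 GiB at 50 Mbit/s: bits divided by bits-per-second
echo $(( 2 * 1024 * 1024 * 1024 * 8 / (50 * 1000 * 1000) ))   # 343 seconds
```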


So you might have cut and pasted some code from somewhere, maybe a 'FROM launcher.gcr.io/debian9' kind of thing. That's a good upstream, right? They are maintaining it with a strong CI? When suddenly you read:

Hmm. Double whammy. You have been relying since 2018-07-18 on something which is not being updated (and daily rebuilding your tool, running SAST, etc., and never noticed. Shame on you!). But also, the recommended replacement requires GCP credentials, which you don't have in your OSS build environment?

Well, at least now you know, and you can probably replace this with debian/stretch and be happy (the Docker Hub one is maintained by the Debian team).
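In the Dockerfile, that's a one-line swap, something like this (assuming the Debian-team stretch image on Docker Hub meets your needs):

```dockerfile
# Replace the stale launcher.gcr.io base with the Debian-maintained image
FROM debian:stretch
```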

I, of course, would never have made this mistake, and for sure if I had I would have noticed the upstream was never changing in that time 🙂
