This might be a little detailed for most; feel free to don your peril-sensitive sunglasses.

So, no offence to Debian 9.6 Stretch, but the rest of the fleet runs Ubuntu. The two are very similar, but, well, some packages are different.

So let's see how we can make the Chromebook run an Ubuntu image and still have Wayland support and file sharing. There are a bunch of packages (cros-*) that add these things: ‘sommelier’, ‘garcon’, ‘wayland’, etc. When your image is loaded in lxc, a /dev/.ssh mount appears which contains well-known keys for the username that matches the name of your *first* login account. For me, ‘db’. I think you are kind of stuck with this name. The ssh keys are used for ‘sshfs’, which is how the Files app gets to your home dir within the container.
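If you're curious, you can peek at this plumbing from inside the stock container first (just a look; nothing here is required for the recipe):

# the injected well-known keys
ls -l /dev/.ssh
# the sshfs mount that backs the Files app sharing
mount | grep sshfs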

Now, let's try building a container that matches expectations.


lxc image copy ubuntu:18.04 local: --alias bionic
lxc launch bionic cros

lxc exec cros -- bash

Now that we are in the container being prepped, run:

echo "deb https://storage.googleapis.com/cros-packages stretch main" > /etc/apt/sources.list.d/cros.list
if [ -f /dev/.cros_milestone ]; then sudo sed -i "s?packages?packages/$(cat /dev/.cros_milestone)?" /etc/apt/sources.list.d/cros.list; fi
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 1397BC53640DB551
apt update
apt install -y binutils adwaita-icon-theme-full 

apt download cros-ui-config
ar x cros-ui-config_0.12_all.deb data.tar.gz
gunzip data.tar.gz
tar f data.tar --delete ./etc/gtk-3.0/settings.ini
gzip data.tar
ar r cros-ui-config_0.12_all.deb data.tar.gz
rm -rf data.tar.gz
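The ar/tar dance above strips /etc/gtk-3.0/settings.ini out of the cros-ui-config package so it won't fight with the settings.ini Ubuntu already ships (which we edit by hand below). A quick sanity check on the repacked deb, using plain ar/tar (nothing Crostini-specific):

# should print nothing but 'removed OK'
ar p cros-ui-config_0.12_all.deb data.tar.gz | tar tzf - | grep settings.ini || echo "removed OK"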

mkdir -p /opt/google/cros-containers/bin/sommelier
mkdir -p /opt/google/cros-containers/lib/
apt install -y libgl1-mesa-dri
cp /usr/lib/x86_64-linux-gnu/dri/swrast_dri.so /opt/google/cros-containers/lib/

apt install -y cros-adapta cros-apt-config cros-garcon cros-guest-tools cros-sftp cros-sommelier cros-sommelier-config cros-sudo-config cros-systemd-overrides ./cros-ui-config_0.12_all.deb cros-unattended-upgrades cros-wayland
rm -rf cros-ui-config_0.12_all.deb
sed -i 's/Ambiance/CrosAdapta/' /etc/gtk-3.0/settings.ini
sed -i 's/ubuntu-mono-dark/CrosAdapta/' /etc/gtk-3.0/settings.ini
sed -i 's/gtk-sound-theme-name = ubuntu/gtk-font-name = Roboto 11/' /etc/gtk-3.0/settings.ini
sed -i '5d' /etc/gtk-3.0/settings.ini
sed -i -n '2{h;n;G};p' /etc/gtk-3.0/settings.ini
echo chronos-access:x:1001:db >> /etc/group
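The sed gymnastics above rewrite Ubuntu's GTK settings to point at the Chrome OS theme and font, and the echo puts ‘db’ in the ‘chronos-access’ group the Chrome OS side expects. Worth a quick look before moving on:

# expect CrosAdapta for theme/icons and Roboto 11 for the font
cat /etc/gtk-3.0/settings.ini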

echo penguin > /etc/hostname

killall -u ubuntu
groupmod -n db ubuntu
usermod -md /home/db -l db ubuntu
usermod -aG users db
loginctl enable-linger db
sed -i 's/ubuntu/db/' /etc/sudoers.d/90-cloud-init-users
shutdown -h now

Now that we are back on the host, run:

lxc publish cros --alias cros
lxc image export cros cros

Now, manually: put the exported image on a USB stick, move it to the Chromebook, and copy it into the default ‘penguin’ container using the Files app.

Now, from the Termina shell:

lxc file pull penguin/home/db/cros.tar.gz $LXD_CONF

lxc stop --force penguin
lxc rename penguin google

lxc image import $LXD_CONF/cros.tar.gz --alias cros
lxc init cros penguin

OK, we are done. And it worked. I now have an Ubuntu 18.04 image running, with file sharing, and Wayland for X-like stuff. I installed ‘rxvt-unicode’, and added a .Xdefaults file with a suitably large font (34) to overcome the DPI.
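For reference, a line like this in ~/.Xdefaults does it (the exact font is just my taste; size 34 is the point):

! scale urxvt up for the high-DPI panel
URxvt.font: xft:DejaVu Sans Mono:size=34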


All of this is done in the ‘penguin’ container of ‘termina’ (i.e. enable ‘Linux’ in the Chrome OS settings). By default it's Debian 9.6, and runs Python 3.5. But you might want to run e.g. Quart, which wants a newer rev for some asyncio features. So, here goes.

Step 1: Install dev essentials, as root (e.g. sudo)

apt-get update
apt-get install -y build-essential libncurses5-dev libgdbm-dev libnss3-dev libssl-dev libreadline-dev libffi-dev zlib1g-dev

Step 2: Install clang/llvm, as root (e.g. sudo)

echo deb http://apt.llvm.org/stretch/ llvm-toolchain-stretch-7 main > /etc/apt/sources.list.d/llvm.list
wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -

apt-get update
apt-get install -y libllvm-7-ocaml-dev libllvm7 llvm-7 llvm-7-dev llvm-7-doc llvm-7-examples llvm-7-runtime clang-7 clang-tools-7 clang-7-doc libclang-common-7-dev libclang-7-dev libclang1-7 clang-format-7 python-clang-7 libfuzzer-7-dev lldb-7 lld-7 libc++-7-dev libc++abi-7-dev libomp-7-dev

update-alternatives --install /usr/bin/llvm-profdata llvm-profdata /usr/bin/llvm-profdata-7 90
update-alternatives --install /usr/bin/clang clang /usr/bin/clang-7 90
update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-7 90
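A quick check that the alternatives took:

# both should report version 7.x
clang --version
clang++ --version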

Step 3: Get & Install Python

wget https://www.python.org/ftp/python/3.7.1/Python-3.7.1.tgz
tar -xzvf Python-3.7.1.tgz
cd Python-3.7.1
./configure --enable-optimizations
make -j4
make install
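One caveat: ‘make install’ puts this Python into /usr/local, where /usr/local/bin/python3 ends up shadowing the distro python3 on most PATHs. If you'd rather keep the system python3 untouched, the usual hedge is:

# installs python3.7 without creating the python3 symlink
make altinstall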

And you have yourself some Python 3.7! Pip on!


If you have a mild allergy to ASCII or YAML you might want to avert your eyes. You've been warned.

Now, let's imagine you have a largish server hanging around, not earning its keep. And on the other hand, you have a desire to run some CI pipelines on it, and think Kubernetes is the answer.

You've tried ‘kube-spawn’, ‘minikube’, etc., but they stubbornly allocate just an IPv4 /32 to your container, and, well, your CI job does something ridiculous like bind to ::1, failing miserably. Don't despair; let's use Calico with a host-local IPAM.

For the most part the recipe speaks for itself. The ‘awk’ in the Calico install switches from calico-ipam (single-stack) to host-local with two sets of ranges. Technically Kubernetes doesn't support dual-stack (cloud networking is terrible. Just terrible. It's all v4 and proxy servers, despite sometimes using advanced things like BGP). But we'll fool it!
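Concretely, the awk edit below leaves the CNI config inside calico.yaml with an ipam stanza shaped like this (one v4 pool, one v6 pool):

"ipam": {
    "type": "host-local",
    "ranges": [
        [ { "subnet": "192.168.0.0/16", "rangeStart": "192.168.0.10", "rangeEnd": "192.168.255.254" } ],
        [ { "subnet": "fc00::/64", "rangeStart": "fc00:0:0:0:0:0:0:10", "rangeEnd": "fc00:0:0:0:ffff:ffff:ffff:fffe" } ]
    ]
}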

Well, here's the recipe. Take one server running Ubuntu 18.04 (it probably works with anything), run as follows, sit back and enjoy, then install your gitlab-runner.

rm -rf ~/.kube
sudo kubeadm reset -f
sudo kubeadm init --apiserver-advertise-address 172.16.0.3 --pod-network-cidr 192.168.0.0/16 
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

until kubectl get nodes; do echo -n .; sleep 1; done; echo              

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/etcd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/rbac.yaml

curl -s https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/calico.yaml | awk '/calico-ipam/ { print "              \"type\": \"host-local\",\n"
                     print "              \"ranges\": [ [ { \"subnet\": \"192.168.0.0/16\", \"rangeStart\": \"192.168.0.10\", \"rangeEnd\": \"192.168.255.254\" } ], [ { \"subnet\": \"fc00::/64\", \"rangeStart\": \"fc00:0:0:0:0:0:0:10\", \"rangeEnd\": \"fc00:0:0:0:ffff:ffff:ffff:fffe\" } ] ]"
                     printed=1
}
{
    if (!printed) {
        print $0
    }
    printed = 0;
}' > /tmp/calico.yaml

kubectl apply -f /tmp/calico.yaml

kubectl apply -f - << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . 8.8.8.8
        cache 30
        reload
        loadbalance
    }
EOF

kubectl taint nodes --all node-role.kubernetes.io/master-

kubectl create serviceaccount -n kube-system tiller
kubectl create clusterrolebinding tiller-binding --clusterrole=cluster-admin --serviceaccount kube-system:tiller
helm init --service-account tiller                
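To convince yourself the dual-stack fooling worked, a throwaway pod should come up with both address families (busybox is just a handy image to poke with):

# expect an inet 192.168.x.x and an inet6 fc00:: address on eth0
kubectl run -it --rm dualstack-test --image=busybox --restart=Never -- ip addr show eth0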

So, another day and another ‘registry out of space’. I wrote earlier about the crappy experience of increasing this size (and GKE is still on 1.10, so I can't use the 1.11+ resize mechanism!!!)

Vowing not to repeat the ‘that can't be the right method’ approach I pioneered in that post, I decided to dig a bit into why it was so large. One of the culprits was a surprise; I'll share.

So we have a (few) Dockerfiles that do something like:

FROM node:10.8-alpine
LABEL maintainer="don@agilicus.com"

WORKDIR /usr/src/app
COPY . /usr/src/app/
RUN npm install

Seems simple enough, right? And the current directory really just has about 4kB of data. How could this container be the largest one?

Digging into it… our CI system (gitlab-ci) has a ‘caching’ mechanism between stages, which creates a ‘cache.zip’ of whatever you choose.

In turn, I'm doing a ‘docker save’ in the build step so that the various security-scan stages (which run in parallel) are more efficient… they just use the cached copy.

And in turn, gitlab-ci puts this cache in ‘.cache’ in the repo directory (because that's how it's mounted in Kubernetes; the gitlab-runner has no other storable space).

So what happens is, the first pipeline runs, does a bunch of work, saves some stuff in the cache. Later, this runs again, and the cache increases. But… each docker build incorporates the cache into the image, which is then saved back into the cache.
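In .gitlab-ci.yml terms, the feedback loop looks roughly like this (the names are made up; the shape is what matters):

cache:
  paths:
    - .cache/
build:
  stage: build
  script:
    - docker build -t myapp .                         # the build context includes .cache/
    - docker save myapp | gzip > .cache/myapp.tar.gz  # …and this grows it for next time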

So… first run: a 10M container.
Second run: incorporates the first 10M container (from the cache) + itself = 20M.
Third run… well… 40M.

Later this gets quite big. Huh.

This is where the ‘.dockerignore’ file should be used!
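Something as small as this in a .dockerignore would have avoided the whole mess (.cache is the runner cache dir from above; .git is just another usual suspect worth excluding):

# keep the CI cache and repo noise out of the build context
.cache
.git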

And now I know, and now you do too.
