We've all been there. Working to a deadline, trying to get our e-commerce site going to make sure cats don't get cold feet for the winter.

And because it's a microservices cloud jwt polyglot kubernetes istio [insert jargon here] world, well, it's not as easy to debug. So many moving pieces. Remember when I said the cloud is built for width, not downward-scalability?

You are polishing the demo in Azure AKS, and it's looking good. When suddenly a wild set of flows appears. You whip out your trusty tools that show you North-South, East-West, and South-North traffic, which conveniently have IP transparency enabled so there are no NAT effects from the 3 levels of address translation in Kubernetes.

You get the chart below. Hmm, I'm seeing about 25k flows at ~1k new/s coming in, and Whois tells me this is Microsoft. Oh no! The attack is coming from inside the house!

We look into Kibana, and we see all the flows helpfully logged; they each look like this. We then look at the MS Developer page here. Huh. The correlation is, all of our ContainerPorts are being **hammered** by this. But we are not responding (because our network policy is to avoid this!). So it tries again. Some of the ports it's contacting are not HTTP, so I don't know how they would know what service is there. The article suggests "Bring-your-own IP Virtual Network", which is not us.

We conclude that, well, yes, Azure has a very high interest in pinging all services with ContainerPort enabled. And that yes, an automatically-responding firewall might consider this an attack. And no, nothing bad happens when counter-measures are deployed. And, nothing better occurs if you whitelist it (as the article suggests). This is likely the 'health check' of the LoadBalancer, in my case:

istio-ingressgateway       LoadBalancer   80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:30380/TCP,8060:30376/TCP,853:32481/TCP,15030:32411/TCP,15031:31158/TCP   1h

Perhaps we do need to respond after all? Nothing bad happens if we don't, but might it declare this ingressgateway down?

Note: it's also the 'upstream DNS' of your kube-dns, so be careful about a flat-out block. YMMV. Void where prohibited. This is not legal advice.
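If you want to separate this probe traffic from real (or hostile) flows, one crude approach is to key on the source address: Azure performs load-balancer health probes (and serves platform DNS) from the well-known virtual IP 168.63.129.16. A minimal sketch, assuming a hypothetical flow-record format (your flow tool's export will differ):

```python
# Minimal sketch: separate Azure health-probe flows from everything else.
# Azure load-balancer health probes (and platform DNS) come from the
# well-known virtual IP 168.63.129.16. The flow-record dicts below are
# hypothetical; adapt the field names to your flow tool's export.
AZURE_VIP = "168.63.129.16"

def classify(flows):
    """Split flow records into Azure probe traffic and everything else."""
    probes = [f for f in flows if f["src"] == AZURE_VIP]
    other = [f for f in flows if f["src"] != AZURE_VIP]
    return probes, other

flows = [
    {"src": "168.63.129.16", "dst_port": 31380},  # LB health probe
    {"src": "203.0.113.9", "dst_port": 443},      # a real client
]
probes, other = classify(flows)
print(len(probes), len(other))  # 1 1
```

With the probe traffic filtered out, whatever remains is what your automatic counter-measures should actually be looking at.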




There was a time you just ran 'crontab -e' to make this happen. But, progress, are you still on my lawn? Let's discuss how to solve the specific issue of 'my database fills up my disk' in a Cloud Native way.

So, the situation. I'm using Elasticsearch and fluent-bit for some logging in a Kubernetes cluster. This is for test and demo purposes, so I don't have a huge Elasticsearch cluster. And, if you know something about Elasticsearch and logging, you know that the typical way of pruning is to delete the indices for older days. You also know that it cowardly drops to read-only if the disk gets to 80% full, and that it's not all that simple to fix.
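For reference, the "fix" for that read-only state boils down to one settings call: PUT index.blocks.read_only_allow_delete back to null. A minimal sketch that just builds the request (names here are illustrative; the script later in this post actually sends it with requests):

```python
import json

# Minimal sketch: build the settings call that clears the read-only block.
# Elasticsearch sets index.blocks.read_only_allow_delete when the disk
# watermark trips; PUT-ing it back to null (None in Python) clears it.
def clear_read_only_request(elastic_url):
    """Return (method, url, body) for clearing the block on all indices."""
    return ("PUT",
            "%s/_all/_settings" % elastic_url,
            json.dumps({"index.blocks.read_only_allow_delete": None}))

method, url, body = clear_read_only_request("http://elasticsearch:9200")
print(method, url, body)
```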

Well, a quick hack later and we have code that fixes this problem (below and on GitHub). But why would I want to run this every day manually like a neanderthal? Let's examine the Kubernetes CronJob as a means of going to the next step.

First, we need to convert from code (1 file, ~1kB) to a container (a pretend operating system with extra cruft, size ~90MB). To do that, we write a Dockerfile. Great, now we want to build it in a CI platform, right? Enter the CI descriptor. Now we have the issue of cleaning up the container/artefact repository, but let's punt on that! Now we get to the heart of the matter, the cron descriptor. What this says is: every day at 18:10 UTC, create a new pod with the container we just built, and run it with a given argument (my Elasticsearch cluster). Since the pod runs inside my Kubernetes cluster it uses an internal name (.local).
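The Dockerfile itself isn't shown in this post; a minimal sketch, assuming the script is a single Python file named elastic-prune.py whose only dependency is requests:

```dockerfile
# Hypothetical Dockerfile for the elastic-prune container
FROM python:3-alpine
RUN pip install requests
COPY elastic-prune.py /elastic-prune.py
# The CronJob supplies the -e <elastic-url> argument at run time
ENTRYPOINT ["python", "/elastic-prune.py"]
```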

Progress. It involves more typing!

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: elastic-prune
spec:
  schedule: "10 18 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          imagePullSecrets:
            - name: regcred
          containers:
            - name: elastic-prune
              image: cr.agilicus.com/utilities/elastic-prune
              args:
                - -e
                - http://elasticsearch.logging.svc.cluster.local:9200
          restartPolicy: OnFailure

Below is the code. It's meant to be quick and dirty, so ... In a nutshell: fetch the list of indices, assume they are named logstash-YYYY.mm.dd, parse the date, subtract from now, and if the age is greater than ndays, delete it. Then make all remaining indices non-read-only (in case we went read-only). Boom.

Now no more elastic overflows for me. Demo on!

#!/usr/bin/env python

import requests
import datetime
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-e', '--elastic', help='Elastic URL (e.g. https://elastic.mysite.org)', required=True)
parser.add_argument('-d', '--days', help='Age in days to delete from (default 3)', type=int, default=3)
args = parser.parse_args()


to_be_deleted = []
today = datetime.datetime.now()

# Walk the indices, parse the date out of the name, note the stale ones
r = requests.get('%s/_stats/store' % args.elastic)
for index in r.json()['indices']:
    try:
        index_date = datetime.datetime.strptime(index, "logstash-%Y.%m.%d")
        age = (today - index_date).days
        print("%20s %s [age=%u]" % (index, r.json()['indices'][index]['primaries']['store']['size_in_bytes'], age))
        if age > args.days:
            to_be_deleted.append(index)
    except ValueError:
        # e.g. .kibana index has no date
        pass

for index in to_be_deleted:
    print("Delete index: <<%s>>" % index)
    r = requests.delete('%s/%s' % (args.elastic, index))

# Clear the read-only flag (set when the disk hit the watermark)
if len(to_be_deleted):
    r = requests.put('%s/_all/_settings' % args.elastic, json={"index.blocks.read_only_allow_delete": None})

r = requests.get('%s/_stats/store' % args.elastic)
for index in r.json()['indices']:
    r = requests.put('%s/%s/_settings' % (args.elastic, index), json={"index.blocks.read_only_allow_delete": None})


Like most cloud folks you are probably using Kibana + Elasticsearch as part of your log management solution. But did you know that with a little regex-fu you can make that logging more interesting? See the kibana expansion in the image: the URI, host, service, etc. are all expanded for your reporting pleasure.

First, let's install our ingress with some annotations. The interesting bit is the fluentbit.io/parser annotation.

helm install stable/nginx-ingress --name ingress \
  --set controller.service.externalTrafficPolicy=Local \
  --set rbac.create=true \
  --set controller.podAnnotations.fluentbit\\.io/parser=k8s-nginx-ingress

If your ingress is already running you can use this instead:

kubectl annotate pods --overwrite ingress-nginx-####   fluentbit.io/parser=k8s-nginx-ingress

Now, let's install fluent-bit (to feed Elasticsearch). We will add a custom regex for the nginx-ingress log format; it's not the same as the nginx default, so we can't use the built-in parser.

image:
  fluent_bit:
    repository: fluent/fluent-bit
    tag: 0.14.1
  pullPolicy: IfNotPresent

metrics:
  enabled: true
  service:
    port: 2020
    type: ClusterIP

trackOffsets: false

backend:
  type: es
  forward:
    host: fluentd
    port: 24284
  es:
    host: elasticsearch
    port: 9200
    index: kubernetes_cluster
    type: flb_type
    logstash_prefix: logstash
    time_key: "@timestamp"
    tls: "off"
    tls_verify: "on"
    tls_ca: ""
    tls_debug: 1

parsers:
  enabled: true
  regex:
    - name: k8s-nginx-ingress
      regex:  '^(?<host>[^ ]*) - \[(?<real_ip>)[^ ]*\] - (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*) "(?<referer>[^\"]*)" "(?<agent>[^\"]*)" (?<request_length>[^ ]*) (?<request_time>[^ ]*) \[(?<proxy_upstream_name>[^ ]*)\] (?<upstream_addr>[^ ]*) (?<upstream_response_length>[^ ]*) (?<upstream_response_time>[^ ]*) (?<upstream_status>[^ ]*) (?<last>[^$]*)'
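You can sanity-check the parser offline before shipping any logs. The snippet below ports the Onigmo-style (?&lt;name&gt;) groups to Python's (?P&lt;name&gt;) syntax and runs the same expression against a made-up (hypothetical) ingress log line:

```python
import re

# The fluent-bit parser uses Onigmo (?<name>) groups; Python wants (?P<name>)
pattern = re.compile(
    r'^(?P<host>[^ ]*) - \[(?P<real_ip>)[^ ]*\] - (?P<user>[^ ]*) '
    r'\[(?P<time>[^\]]*)\] "(?P<method>\S+)(?: +(?P<path>[^"]*?)(?: +\S*)?)?" '
    r'(?P<code>[^ ]*) (?P<size>[^ ]*) "(?P<referer>[^"]*)" "(?P<agent>[^"]*)" '
    r'(?P<request_length>[^ ]*) (?P<request_time>[^ ]*) '
    r'\[(?P<proxy_upstream_name>[^ ]*)\] (?P<upstream_addr>[^ ]*) '
    r'(?P<upstream_response_length>[^ ]*) (?P<upstream_response_time>[^ ]*) '
    r'(?P<upstream_status>[^ ]*) (?P<last>[^$]*)')

# A hypothetical, well-formed ingress log line
line = ('example.com - [] - alice [13/Sep/2018:18:29:14 +0000] '
        '"GET /cart HTTP/1.1" 200 9056 "-" "curl/7.58.0" 75 0.003 '
        '[default-front-end-80] 10.244.1.7:80 9056 0.003 200 a134ebded3504000')

m = pattern.match(line)
print(m.group('method'), m.group('path'), m.group('code'))  # GET /cart 200
```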

Once this is done you'll have something like the below in your logs. See how all the fields are expanded into their own rather than being stuck in log:?

  "_index": "logstash-2018.09.13",
  "_type": "flb_type",
  "_id": "s_0x1GUB6XzNVUp1wNV6",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2018-09-13T18:29:14.897Z",
    "log": " - [] - - [13/Sep/2018:18:29:14 +0000] \"GET / HTTP/1.1\" 200 9056 \"-\" \"curl/7.58.0\" 75 0.000 [default-front-end-80] 9056 0.000 200 a134ebded3504000d63646b647e54585\n",
    "stream": "stdout",
    "time": "2018-09-13T18:29:14.897196588Z",
    "host": "",
    "real_ip": "",
    "user": "-",
    "method": "GET",
    "path": "/",
    "code": "200",
    "size": "9056",
    "referer": "-",
    "agent": "curl/7.58.0",
    "request_length": "75",
    "request_time": "0.000",
    "proxy_upstream_name": "default-front-end-80",
    "upstream_addr": "",
    "upstream_response_length": "9056",
    "upstream_response_time": "0.000",
    "upstream_status": "200",
    "last": "a134ebded3504000d63646b647e54585",
    "kubernetes": {
      "pod_name": "ingress-nginx-ingress-controller-6577665f8c-wqg76",
      "namespace_name": "default",
      "pod_id": "0ea2b2c8-b5e8-11e8-bc8c-d237edbf1eb2",
      "labels": {
        "app": "nginx-ingress",
        "component": "controller",
        "pod-template-hash": "2133221947",
        "release": "ingress"
      "annotations": {
        "fluentbit.io/parser": "k8s-nginx-ingress"
      "host": "kube-spawn-flannel-worker-913bw7",
      "container_name": "nginx-ingress-controller",
      "docker_id": "40daa91b8c89a52e44ac1458c90967dab6d8a0e43c46605b0acbf8432f2d9f13"
  "fields": {
    "@timestamp": [
    "time": [
  "highlight": {
    "kubernetes.labels.release.keyword": [
    "kubernetes.labels.app": [
    "kubernetes.annotations.fluentbit.io/parser": [
    "kubernetes.container_name": [
    "kubernetes.pod_name": [
    "kubernetes.labels.release": [
  "sort": [

So you have a K8S cluster. It's got a lovely Ingress controller courtesy of helm install stable/nginx-ingress. You've spent the last hours getting fluent-bit + elastic + kibana going (the EFK stack). Now you are confident; you slide the user-story to completed and tell all and sundry "well, at least when your crappy code gets hacked, my logging will let us audit who did it".

Shortly afterwards l33t hackerz come in and steal all your infos. And your logs are empty. What happened? As you sit in the unemployment line pondering this, it hits you. Your regex. You parsed the nginx ingress controller logs with this beauty:

^(?<host>[^ ]*) - \[(?<real_ip>)[^ ]*\] - (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*) "(?<referer>[^\"]*)" "(?<agent>[^\"]*)" (?<request_length>[^ ]*) (?<request_time>[^ ]*) \[(?<proxy_upstream_name>[^ ]*)\] (?<upstream_addr>[^ ]*) (?<upstream_response_length>[^ ]*) (?<upstream_response_time>[^ ]*) (?<upstream_status>[^ ]*) (?<last>[^$]*)

And why not? The format is documented. But you and Little Bobby Tables both forgot the same thing. Your hackers were smart: they put a " in the user-agent name.

So nginx dutifully logged "hacker"agent-name", your regex didn't match it, of course, and no message was logged.
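You can reproduce the failure offline. Using the same expression (ported to Python's (?P&lt;name&gt;) group syntax) against a hypothetical log line whose user-agent smuggles in a quote, the match simply returns None, and the event vanishes:

```python
import re

# Same field layout as the k8s-nginx-ingress parser, in Python group syntax
pattern = re.compile(
    r'^(?P<host>[^ ]*) - \[(?P<real_ip>)[^ ]*\] - (?P<user>[^ ]*) '
    r'\[(?P<time>[^\]]*)\] "(?P<method>\S+)(?: +(?P<path>[^"]*?)(?: +\S*)?)?" '
    r'(?P<code>[^ ]*) (?P<size>[^ ]*) "(?P<referer>[^"]*)" "(?P<agent>[^"]*)" '
    r'(?P<request_length>[^ ]*) (?P<request_time>[^ ]*) '
    r'\[(?P<proxy_upstream_name>[^ ]*)\] (?P<upstream_addr>[^ ]*) '
    r'(?P<upstream_response_length>[^ ]*) (?P<upstream_response_time>[^ ]*) '
    r'(?P<upstream_status>[^ ]*) (?P<last>[^$]*)')

# Hypothetical log line: the user-agent contains an unescaped "
evil = ('example.com - [] - - [13/Sep/2018:18:29:14 +0000] "GET / HTTP/1.1" '
        '200 9056 "-" "hacker"agent" 75 0.003 [default-front-end-80] '
        '10.244.1.7:80 9056 0.003 200 abc')

print(pattern.match(evil))  # None: the request never reaches Elasticsearch
```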

Red team only needs to get it right once. Blue team needs to be ever vigilant.


IPv4. It's rare when it's public, and annoying when it's private. So we try to conserve this precious resource. One of the things that makes it complex is Kubernetes namespaces. A Kubernetes Ingress controller is not namespace aware (you can't have a shared Ingress that serves services in multiple namespaces). Or can you?

What if I told you you could install a single Ingress (and cert-manager etc) and then have a service in each namespace served by it? Would you rejoice over saving a few $100/mo on public IP rental in 'the cloud'?

Let's dig in. Imagine we have 3 namespaces with interesting services: 'foo', 'bar', and 'kube-system' (which has our dashboard).

Let's assume we have 'kibana' running in kube-system. We want to expose this to the 'public internet'. Likely we would also use oauth2-proxy here to sign in, but I'll ignore auth for now. We are going to use a new (synthetic) service that lives in the default namespace alongside our Ingress controller as 'glue'. It's kind of like a DNS CNAME.

First we install a single global ingress. Let's use helm:

helm install stable/nginx-ingress --name ingress --set controller.service.externalTrafficPolicy=Local --set rbac.create=true

Wait for the LoadBalancer to get a public IP, register it in DNS. You can either use a wildcard (*.something.MYDOMAIN.CA) or register each service, your call. All will use the same IP.

(To avoid complicating this, I'll show cert-manager at the end; it's optional, we just need the next step with the Ingress + Service.)

Once we have installed the below yaml we can browse https://kibana.MYDOMAIN.CA/ and we are there. Repeat for the other services. Done! We have a single public IP.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: 'true'
    nginx.ingress.kubernetes.io/tls-acme: 'true'
spec:
  tls:
  - hosts:
    - kibana.MYDOMAIN.CA
    secretName: tls-secret-kibana
  rules:
  - host: kibana.MYDOMAIN.CA
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5601
---
kind: Service
apiVersion: v1
metadata:
  name: kibana
  namespace: default
spec:
  type: ExternalName
  externalName: kibana.kube-system.svc.cluster.local
  ports:
  - port: 5601
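The "repeat for the other services" step is mechanical, so you could stamp the glue Services out of a template rather than hand-writing one per namespace. A small sketch (the service names and ports here are illustrative):

```python
# Minimal sketch: generate the 'glue' ExternalName Service for each
# exposed service, instead of hand-writing one per namespace.
# The (name, namespace, port) tuples below are illustrative.
TEMPLATE = """\
kind: Service
apiVersion: v1
metadata:
  name: {name}
  namespace: default
spec:
  type: ExternalName
  externalName: {name}.{namespace}.svc.cluster.local
  ports:
  - port: {port}
"""

services = [("kibana", "kube-system", 5601),
            ("foo", "foo", 80),
            ("bar", "bar", 80)]

for name, namespace, port in services:
    print(TEMPLATE.format(name=name, namespace=namespace, port=port))
    print("---")
```

Pipe the output to kubectl apply -f - and the single Ingress controller can reach all of them.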

Now, to complete this, let's show it with TLS certificates. You don't need this (the above is all you need to expose the service), but why not do it over TLS at the same time? It's free!

helm install stable/cert-manager --namespace kube-system --set ingressShim.defaultIssuerName=letsencrypt --set ingressShim.defaultIssuerKind=ClusterIssuer --name cert

cat << EOF | kubectl -n kube-system apply -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: don@agilicus.com
    privateKeySecretRef:
      name: letsencrypt
    http01: {}
EOF