Helm, Kubernetes, and the immutable configMap… a design pattern

Let's say you've got some program that doesn't reload when its config changes. You introduce it to Kubernetes via Helm. You use a ConfigMap. All is good. Later you run a helm upgrade and… nothing happens. You are sad. You roll up your sleeves, write some code using inotify(), and now the program restarts as soon as the config changes. You are happy. Until the day you make a small typo in the config, run helm upgrade, and watch the continuous suicide of your Pods. Now you are sad again. If only there were a better way.

I present to you the better way. It's simple, and it solves both problems at once.

Conceptually it's pretty simple: you embed a hash of the ConfigMap's contents in its name. Now, when the contents change, the name changes, so the Deployment spec is different and Kubernetes starts replacing the Pods. The change ripples through: as the new Pods start, they must come online before the old ones are killed. So if there's an error in the new config, the old Pods keep running and it's not a problem. Boom!
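To see why the name changes at all: Helm's `sha256sum | trunc 8` pipeline just hashes the rendered string and keeps the first 8 hex characters. A rough shell equivalent (the `myapp` prefix and config contents here are made-up stand-ins):

```shell
# Hypothetical rendered config contents (stand-in for `tpl .Values.config .`):
config='{"server": {"port": 8080}}'

# Approximate Helm's `sha256sum | trunc 8`: hash the string, keep 8 chars.
hash=$(printf '%s' "$config" | sha256sum | cut -c1-8)

# The ConfigMap name embeds the hash; any change to the contents yields
# a different name, and therefore a different Deployment spec:
echo "myapp-${hash}"
```

Any edit to `$config`, even a one-character typo, produces a different 8-character suffix, which is exactly what forces the Deployment to roll.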

So here's a subset of an example. You're welcome.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "tranquility.fullname" . }}-{{ tpl .Values.config . | sha256sum | trunc 8 }}-{{ tpl .Values.application . | sha256sum | trunc 8 }}-{{ tpl .Values.logging . | sha256sum | trunc 8 }}
  labels:
    app.kubernetes.io/name: {{ include "tranquility.name" . }}
    helm.sh/chart: {{ include "tranquility.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
  server.json: {{ tpl .Values.config . | quote }}
  application.ini: {{ tpl .Values.application . | quote }}
  logback.xml: {{ tpl .Values.logging . | quote }}

 ...
apiVersion: apps/v1
kind: Deployment
 ...
          - mountPath: /tranquility/conf
            name: {{ include "tranquility.fullname" . }}-config
      volumes:
        - name: {{ include "tranquility.fullname" . }}-config
          configMap:
            name: {{ include "tranquility.fullname" . }}-{{ tpl .Values.config . | sha256sum | trunc 8 }}-{{ tpl .Values.application . | sha256sum | trunc 8 }}-{{ tpl .Values.logging . | sha256sum | trunc 8 }}
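For completeness: the templates above assume that `.Values.config`, `.Values.application`, and `.Values.logging` are each a plain string in values.yaml, since that's what `tpl` renders and `sha256sum` hashes. A hypothetical layout (the file contents are placeholders):

```yaml
# Hypothetical values.yaml for the chart above. Each value is a single
# string, optionally using template syntax, which `tpl` renders before
# it is hashed into the name and embedded in the data.
config: |
  {"server": {"port": 8080}}
application: |
  app.name={{ .Release.Name }}
logging: |
  <configuration><root level="INFO"/></configuration>
```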

Comments

  1. naseem

    name: {{ include "tranquility.fullname" . }}-{{ tpl .Values.config . | sha256sum | trunc 8 }}-{{ tpl .Values.application . | sha256sum | trunc 8 }}-{{ tpl .Values.logging . | sha256sum | trunc 8 }}

    There are 3 hashed things here; would just one of them suffice?

  2. db

    No. I ran into a funny issue: if I compute the hash of the whole map, it recurses in Tiller. And I have 3 segments; I need to know if any of them changes.

    This is working quite well: now if I typo the config and call upgrade, nothing bad happens.
    If one of the previously running Pods restarts at that exact instant, nothing bad happens (it still has its old ConfigMap).
    The change ripples out through the Deployment automatically.

