Helm, Kubernetes, and the immutable configMap… a design pattern

Let's say you've got some program that doesn't reload when its config changes. You introduce it to Kubernetes via Helm. You use a ConfigMap. All is good. Later you do a helm upgrade and... nothing happens. You are sad. You roll up your sleeves, write some code using inotify(), and now the program restarts as soon as the config changes. You are happy. Until the day you make a small typo in the config, call helm upgrade, and watch your Pods crash-loop to death, one after another. Now you are sad again. If only there were a better way.

I present to you the better way. And it's simple. It solves both problems at once.

Conceptually it's pretty simple. You make the 'name' of the ConfigMap contain a hash of its contents. Now, when the contents change, the name changes, so the Deployment's Pod template changes, and Kubernetes starts replacing the Pods. The change ripples through as a rolling update: new Pods must come online before the old ones are killed, so if there is an error in the new config, the old Pods keep running and it is not a problem. Boom!
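
That 'new Pods come up before old Pods die' behavior is not automatic; it depends on the Deployment's rollout settings and on a readiness probe, so a Pod that chokes on a bad config never becomes Ready. A minimal sketch of the relevant pieces (the numbers and the health endpoint are illustrative, not from the original chart):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0  # never kill an old Pod before its replacement is Ready
      maxSurge: 1        # bring replacements up one at a time
  template:
    spec:
      containers:
        - name: tranquility
          readinessProbe:      # a Pod that fails this never becomes Ready,
            httpGet:           # so a bad config just stalls the rollout
              path: /status    # hypothetical health endpoint
              port: 8200
            initialDelaySeconds: 10

With maxUnavailable set to 0, a broken config leaves the old Pods serving while the new ones sit un-Ready, and a helm rollback (or a corrected helm upgrade) cleans up.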

So here's a subset of an example. You're welcome.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "tranquility.fullname" . }}-{{ tpl .Values.config . | sha256sum | trunc 8 }}-{{ tpl .Values.application . | sha256sum | trunc 8 }}-{{ tpl .Values.logging . | sha256sum | trunc 8 }}
  labels:
    app.kubernetes.io/name: {{ include "tranquility.name" . }}
    helm.sh/chart: {{ include "tranquility.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
data:
  server.json: {{ tpl .Values.config . | quote }}
  application.ini: {{ tpl .Values.application . | quote }}
  logback.xml: {{ tpl .Values.logging . | quote }}

---
apiVersion: apps/v1
kind: Deployment
 ...
          - mountPath: /tranquility/conf
            name: {{ include "tranquility.fullname" . }}-config
      volumes:
        - name: {{ include "tranquility.fullname" . }}-config
          configMap:
            name: {{ include "tranquility.fullname" . }}-{{ tpl .Values.config . | sha256sum | trunc 8 }}-{{ tpl .Values.application . | sha256sum | trunc 8 }}-{{ tpl .Values.logging . | sha256sum | trunc 8 }}
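
For completeness: this assumes .Values.config, .Values.application, and .Values.logging each hold an entire config file as a string, since each is run through tpl and then hashed. Something like this in values.yaml (the contents here are hypothetical):

config: |
  {
    "dataSources": {}
  }
application: |
  [main]
  threads = 4
logging: |
  <configuration>
    <root level="INFO"/>
  </configuration>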
2 comments on "Helm, Kubernetes, and the immutable configMap… a design pattern"
  1. db naseem says:

    name: {{ include "tranquility.fullname" . }}-{{ tpl .Values.config . | sha256sum | trunc 8 }}-{{ tpl .Values.application . | sha256sum | trunc 8 }}-{{ tpl .Values.logging . | sha256sum | trunc 8 }}

    There are 3 hashed things here; would just one of them suffice?

  2. db db says:

    No. I ran into a funny issue there: if I compute the hash of the whole map, it recurses in Tiller, since the name would include a hash of contents that themselves include the name. But I have 3 segments, and I need to know if any one of them changes.

    This is working quite well: now if I typo the config and call upgrade, nothing bad happens.
    If one of the previously running Pods restarts at that exact instant, nothing bad happens (it still has its old ConfigMap).
    The change ripples out through the Deployment automatically.
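
One footnote on that long name expression: it has to appear, character for character, in both the ConfigMap and the Deployment, so it is a good candidate for a named template. A sketch, assuming the chart's _helpers.tpl (the helper name here is made up):

{{/* The ConfigMap name, with a short content hash per file so any change renames it */}}
{{- define "tranquility.configMapName" -}}
{{ include "tranquility.fullname" . }}-{{ tpl .Values.config . | sha256sum | trunc 8 }}-{{ tpl .Values.application . | sha256sum | trunc 8 }}-{{ tpl .Values.logging . | sha256sum | trunc 8 }}
{{- end -}}

Both manifests can then use name: {{ include "tranquility.configMapName" . }} and can never drift apart.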
