Changing the size of a persistent volume in Kubernetes 1.10 on GKE
In Kubernetes v1.11 you can resize persistent volume claims. Great!
Sadly, Google has not rolled this out to us great unwashed yet (it’s available to early adopters, or to everyone on alpha clusters), and we are on v1.10.
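For reference, on a cluster where the feature is enabled (and the StorageClass has allowVolumeExpansion: true), the resize itself is a one-liner. The claim name and size below are placeholders:

```shell
# Patch the claim's storage request; the volume grows in place.
# "registry-pvc" and "200Gi" are made-up examples.
kubectl patch pvc registry-pvc \
  -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'
```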
Side note on the Docker registry: one of the most commonly asked questions is, how do I delete or clean up? tl;dr: you can’t. It’s the Hotel California. Get over it. All those bash/php/ruby/… scripts people have written to try and work around this? Don’t spend your life trying to make them work.
Double sadly, today was the day the container registry hit super-critical. So, once more into the breach: we can’t wait for v1.11.
So, what do I need to do? I want to:
1. tar the /registry somewhere
2. stop / delete the pod
3. delete the pvc
4. create a new, larger pvc
5. restart / reschedule the pod
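The steps above can be sketched roughly as follows; the pod, claim, and manifest names are all placeholders, and the backup path is whatever you have room for:

```shell
# 1. tar the /registry somewhere safe (stream it out of the pod)
kubectl exec registry-pod-0 -- tar czf - /registry > registry-backup.tar.gz

# 2. stop the pod (trickier than it sounds when a deployment manages it; see below)

# 3. delete the old, too-small claim
kubectl delete pvc registry-pvc

# 4. create a new, larger claim from a manifest with a bigger storage request
kubectl apply -f registry-pvc-larger.yaml

# 5. once the pod is rescheduled, restore the backup into the new volume
kubectl exec -i registry-pod-0 -- tar xzf - -C / < registry-backup.tar.gz
```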
But I ran into an issue on step 2: the pod is part of a larger helm deployment. I suppose I could take down the whole deployment and recreate it later in step 5. But why should I?
Instead, what I did was ‘kubectl edit deployment <name>’ and set the replicas to 0. This caused all the pods to exit, leaving the pvc unclaimed. Now I can delete the pvc, create a new one, and then ‘kubectl edit …’ again and set the replicas back. Easy peasy.
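If you’d rather not open an editor at all, kubectl can change the replica count directly; ‘registry’ here is a placeholder deployment name:

```shell
# Scale to zero so the pods exit and release the PVC...
kubectl scale deployment registry --replicas=0

# ...swap out the PVC, then scale back up.
kubectl scale deployment registry --replicas=1
```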
```yaml
# Please edit the object below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  ...
  uid: 22125323-77c8-11e8-9758-42010aa200b4
spec:
  progressDeadlineSeconds: 600
  replicas: 0
  revisionHistoryLimit: 10
  selector:
```
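For completeness, the replacement claim is just the old one with a bigger request. A minimal sketch, with a made-up name and size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pvc        # placeholder claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi        # the new, larger size
```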