So I’m using Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE). It’s not a big deployment (3 instances of 4 vCPU/7.5 GB RAM), but it’s now up to about $320/month.
And I’m looking at the log-ingestion feature. You pay for the bytes ingested, API calls, trace ingestion, and trace retrieval. See the pricing model here.
Feature | Price | Free allotment per month
---|---|---
Logging | $0.50/GB | First 50 GB/project
Monitoring data | $0.2580/MB: 150–100,000 MB; $0.1510/MB: 100,000–250,000 MB; $0.0610/MB: >250,000 MB | All GCP metrics; non-GCP metrics: <150 MB
Monitoring API calls | $0.01/1,000 API calls | First 1 million API calls
Trace ingestion | $0.20/million spans | First 2.5 million spans
Trace retrieval | $0.02/million spans | First 25 million spans
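The monitoring-data row is tiered, which is easy to misread at a glance. Here's a minimal sketch of how that charge works out (my own function, not an official calculator; the rates and breakpoints are copied from the table):

```python
# Sketch of the tiered monitoring-data pricing above (my own code, not an
# official calculator). GCP metrics are free; non-GCP metric data gets the
# first 150 MB free, then the per-MB rate steps down through three tiers.

def monitoring_data_cost(non_gcp_mb: float) -> float:
    tiers = [
        (150,          0.0),     # free allotment
        (100_000,      0.2580),  # 150 MB - 100,000 MB
        (250_000,      0.1510),  # 100,000 MB - 250,000 MB
        (float("inf"), 0.0610),  # beyond 250,000 MB
    ]
    cost, lower = 0.0, 0.0
    for upper, rate in tiers:
        if non_gcp_mb <= lower:
            break
        cost += (min(non_gcp_mb, upper) - lower) * rate
        lower = upper
    return round(cost, 2)

print(monitoring_data_cost(100))    # 0.0   -- inside the free 150 MB
print(monitoring_data_cost(1_000))  # 219.3 -- (1000 - 150) * 0.258
```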
OK, so I think it’s not too likely this will be a big deal for me. But then I notice a bug in Kubernetes: we have a lot of ‘Orphaned pod found – but volume paths are still present on disk’ messages appearing. The workaround is simple: SSH to the node, `rm -rf /var/lib/kubelet/pods/<UUID>/volumes`, and then it cleans up.
But the damage is done: 6 GB of ingestion in the last 5 days. Extrapolated, that’s about 36 GB/month (mostly due to this bug), or $18/month at the $0.50/GB rate. Now, this is under the ‘free’ allotment (50 GB/project/month), so as long as the rest of my logs stay under it (and this bug doesn’t hit a second pod), I’m OK.
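For reference, the back-of-the-envelope math behind those numbers (a sketch, assuming the 6 GB over 5 days rate keeps up for a 30-day month):

```python
# Back-of-the-envelope projection of the logging bill, assuming the observed
# 6 GB over 5 days continues for a 30-day month.
observed_gb, observed_days = 6.0, 5.0
monthly_gb = observed_gb / observed_days * 30             # 36 GB/month

rate_per_gb, free_gb = 0.50, 50.0                         # from the pricing table
raw_cost = monthly_gb * rate_per_gb                       # $18/month if there were no free tier
billed   = max(monthly_gb - free_gb, 0.0) * rate_per_gb   # $0 -- still under 50 GB/project

print(monthly_gb, raw_cost, billed)                       # 36.0 18.0 0.0
```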
But the interesting thing is, there is no real-time alert on ingestion. Something can go berserk and log a flood of messages, and you might not find out for a day or so.
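One way around that is to poll the ingestion metric yourself and complain when it spikes. Below is just a sketch against the Cloud Monitoring API, using the `logging.googleapis.com/byte_count` metric; the project ID and the 200 MB/hour threshold are my own placeholder values, and you could wire the same filter into an alerting policy instead of cron-ing a script.

```python
# Sketch: check how many log bytes were ingested in the last hour and
# complain if it looks like something has gone berserk. Cron this, or use
# the same metric/filter in an alerting policy.
import time
from google.cloud import monitoring_v3

PROJECT = "projects/my-project-id"           # assumption: your project ID
THRESHOLD_BYTES = 200 * 1024 * 1024          # assumption: 200 MB/hour is "berserk"

client = monitoring_v3.MetricServiceClient()
now = time.time()
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": int(now - 3600)}, "end_time": {"seconds": int(now)}}
)

series = client.list_time_series(
    request={
        "name": PROJECT,
        "filter": 'metric.type = "logging.googleapis.com/byte_count"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

# byte_count is a delta metric: sum every point across every log stream.
total = sum(point.value.int64_value for ts in series for point in ts.points)
if total > THRESHOLD_BYTES:
    print(f"Log ingestion in the last hour: {total / 1e6:.0f} MB -- investigate!")
```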