You know the joke about the crappy horror movie? They trace the IP, it's 127.0.0.1, the killer was in the house (localhost)?
True story, this just happened to me. So settle down and listen to a tale of NAT, proxies, Kubernetes, and fail2ban (AKA Rack Attack in Ruby land).
You see, we run a modest set of infrastructure in Kubernetes on GKE. It's about what you would expect: a LoadBalancer (which owns the external IP) feeds an Ingress controller, which in turn has a set of routing rules based on vhost. And one of those endpoints is Gitlab (now you see why I mentioned Ruby above). And one of the things you should know about the cloud is… NAT is common, and multiple layers of NAT are usually present.
So here's the chain:
[LoadBalancer]->[Ingress]->[Gitlab nginx]->[Gitlab unicorn]
has 3 NAT steps. Don't believe me? Let's count:
- The Load Balancer does a NAT.
- The Ingress is a proxy server, so inherently NAT.
- The Gitlab nginx (sidecar) is a proxy server, so inherently NAT.
So, what IP will Gitlab unicorn see? Well, that of the gitlab nginx. If the forwarded headers (X-Forwarded-For) are consulted, maybe that of the Ingress. Either way, an internal address.
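A Rack app decides the "client" IP roughly like this: walk the X-Forwarded-For chain from the right and skip every hop that looks like a trusted (private) proxy. Here's a minimal, self-contained sketch of that logic (Rack::Request#ip behaves along these lines; the addresses below are made up for illustration):

```ruby
require "ipaddr"

# "Trusted" proxy ranges -- the private networks our own hops live in.
TRUSTED = [IPAddr.new("10.0.0.0/8"),     IPAddr.new("127.0.0.0/8"),
           IPAddr.new("172.16.0.0/12"),  IPAddr.new("192.168.0.0/16")]

def client_ip(remote_addr, x_forwarded_for)
  hops = x_forwarded_for.to_s.split(",").map(&:strip)
  # Drop hops belonging to trusted proxies; what's left is the real client.
  untrusted = hops.reject { |ip| TRUSTED.any? { |net| net.include?(ip) } }
  # If every hop was one of ours, fall back to the socket peer address.
  untrusted.last || remote_addr
end

# What unicorn sees: REMOTE_ADDR is the gitlab nginx sidecar, and the
# earlier hops (client, ingress) were appended to X-Forwarded-For.
puts client_ip("10.16.10.17", "203.0.113.9, 10.16.0.5")
# => 203.0.113.9 (the real client, not an internal hop)
```

Note the fallback case: if the forwarded header is missing, or only contains internal hops, all you get back is your own proxy's address.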
So, when some $!# tries to hack my gitlab, what happens? Rack Attack bans the internal proxy IP, which is to say it bans everyone. I get blocked!
```
# redis-cli
127.0.0.1:6379> keys *rack*attack*
1) "cache:gitlab:rack::attack:allow2ban:ban:10.16.10.17"
```
‘403 Forbidden’. Courtesy of a feature called Rack Attack, which is a kind of fail2ban. Now, I'm not dissing fail2ban, it's a powerful technique. But, well, you gotta know who you are banning.
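In my case the cure was to tell Rack Attack which IPs are mine. GitLab's Omnibus packaging exposes this as a whitelist in gitlab.rb (a sketch only; the exact key names have moved around between GitLab versions, and the CIDR below is an assumption about your cluster's internal range, not a value from my setup):

```ruby
# /etc/gitlab/gitlab.rb -- hypothetical values for illustration.
# The whitelist must cover every internal hop that can show up as
# the source IP: the nginx sidecar, the Ingress, the LoadBalancer.
gitlab_rails['rack_attack_git_basic_auth'] = {
  'enabled'      => true,
  'ip_whitelist' => ["127.0.0.1", "10.16.0.0/16"],  # localhost + cluster range
  'maxretry'     => 10,    # failed attempts before a ban
  'findtime'     => 60,    # window (seconds) for counting attempts
  'bantime'      => 3600   # ban duration (seconds)
}
```

And for a ban that's already in place, deleting the offending `allow2ban:ban:` key from redis lifts it immediately instead of waiting out the bantime.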