Several years ago I presented at the OpenStack Vancouver summit. At that time I was running our OpenStack setup with a much smaller pool of public IPv4 addresses than instances. I had come up with a solution that allowed end users to ssh directly to their machines, as if each had a public IP, without ever seeing a jump box. They could run:
'ssh email@example.com' and it would connect to that instance in their tenant, magically, from anywhere on the public Internet. This meant tools like scp worked too. It used the proper SSH key system of OpenStack Nova, so it was all automatic.
I never polished or finished it for general use, but it was partially open-sourced here.
In a nutshell, the trick was a small bit of magic in their ~/.ssh/config file (so that the syntax 'ssh tenant.instance.vpn.domain' passed tenant.instance along as a parameter). There was a jump key (hidden from them), and an SSH proxy on the jump host took that tenant.instance, found the matching private network namespace, did an nsenter into it, and then proxied to port 22 on the right IP. It worked well and let us scale to many hundreds of users with thousands of instances from a /25 of public space.
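The flow above can be sketched in two pieces. All names here are illustrative assumptions (vpn.domain, the jump host, the helper script, and the lookup functions are not the original code); the client side is just a wildcard Host entry with a ProxyCommand:

```shell
# ~/.ssh/config on the user's machine (illustrative)
# Matches e.g. 'ssh ubuntu@mytenant.web01.vpn.domain'
Host *.vpn.domain
    # %n is the hostname the user typed; the jump box parses
    # tenant.instance back out of it. The jump key is pre-installed
    # for the user, so this is invisible to them.
    ProxyCommand ssh -i ~/.ssh/jump_key proxyuser@jump.vpn.domain /usr/local/bin/ssh-nsenter-proxy %n
```

On the jump host, a small script (again, a sketch under assumed names; lookup_netns and lookup_fixed_ip are hypothetical helpers standing in for the real tenant/instance lookup) enters the right namespace and pipes the connection to port 22:

```shell
#!/bin/sh
# /usr/local/bin/ssh-nsenter-proxy (sketch)
# $1 looks like tenant.instance.vpn.domain
target="$1"
tenant="${target%%.*}"            # first DNS label: tenant
rest="${target#*.}"
instance="${rest%%.*}"            # second DNS label: instance

# Resolve the tenant's network namespace and the instance's fixed IP
# (hypothetical helpers; the real lookup queried Nova/Neutron)
netns=$(lookup_netns "$tenant")
ip=$(lookup_fixed_ip "$tenant" "$instance")

# Enter the tenant's namespace and relay stdin/stdout to port 22;
# ssh on the client side treats this pipe as the transport
exec nsenter --net=/var/run/netns/"$netns" nc "$ip" 22
```

Because the relay is a plain ProxyCommand, everything layered on SSH (scp, sftp, agent forwarding, port forwarding) works unchanged.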
Today I discovered that Azure has the same issue. See the instructions they give. In a nutshell, you dance back and forth between two windows, copying your private key into a container you start with kubectl run, etc. Ugh. That container ends up holding your private key, with no passphrase and no agent. Double ugh. And things like scp don't work. There must be a better way! (see above).
As a history lesson, here's me on stage in 2015. The section where I talked about the auto-ssh magic got the most post-show feedback, much of it from admins without enough IPv4 space.