I'm trying to figure out server-side SSH jump-host logic. Current network schema:
[Client] <--> [Server A: hostname: a.com] <--> [Server B]
[Client] <--> [Server A: hostname: b.com] <--> [Server C]
Server A responds to both DNS records.
Possible flow:
The client opens an SSH connection with ssh
[email protected]. Server A accepts it and should automatically jump the user onto Server B with ssh user2@server_b.com.
The client opens an SSH connection with ssh
[email protected]. Server A accepts it and should automatically jump the user onto Server C with ssh user2@server_c.com.
In other words, the client should be able to reach the target without any local configuration, assuming a stock ssh setup. The problem with SSH jumps (ProxyJump) is that the user has to define hosts in the local ~/.ssh/config file, which is not acceptable in my case. It needs to be default sshd behavior on the server.
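For reference, this is roughly the per-client configuration I want to eliminate (host alias and key names here are just placeholders):

```
# ~/.ssh/config -- the client-side setup that should NOT be required
Host serverb
    HostName server_b.com
    User user2
    ProxyJump [email protected]
```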
I'm aware that you can define a custom command in ~/.ssh/authorized_keys on the server (the command= option), but I don't think there is a way to properly detect which hostname the user connected to.
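As far as I can tell, the SSH protocol has no SNI equivalent, so the server never sees the hostname the client typed. A forced command can only branch on what sshd does export, such as SSH_CONNECTION; if a.com and b.com were changed to resolve to different IPs on the same machine, the server-side address (third field) would reveal which name was targeted. A minimal sketch, where all IPs and target names are assumptions:

```shell
#!/bin/sh
# Sketch of a forced command, wired into authorized_keys as e.g.
#   command="/usr/local/bin/jump.sh" ssh-ed25519 AAAA... user@client
# sshd exports SSH_CONNECTION="client_ip client_port server_ip server_port".

pick_target() {
  # Intentional word splitting of SSH_CONNECTION into $1..$4.
  set -- $1
  case "$3" in
    192.0.2.10) echo "user2@server_b.com" ;;  # address assumed for a.com
    192.0.2.20) echo "user2@server_c.com" ;;  # address assumed for b.com
    *)          echo "unknown" ;;
  esac
}

# In the real forced command this would be: exec ssh "$(pick_target "$SSH_CONNECTION")"
pick_target "198.51.100.7 50123 192.0.2.10 22"
```

If both names resolve to the same IP, this trick cannot distinguish them, which is the crux of my question.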
Is it possible at all?