We've got a couple of clusters running on AWS (HAProxy/Solr, PGPool/PostgreSQL), and we've set up scripts that let new slave instances be auto-included in the clusters: each new instance adds its IP to config files held on S3, then SSHes to the master instance to kick it into downloading the revised config and restarting the service. It's all working nicely, but in testing we're using our master pem for SSH, which means it has to be stored on an instance. Not good.
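For reference, the flow is roughly this (the bucket, file, and script names below are placeholders, and I'm using the aws CLI purely for illustration):

    # Sketch only: bucket, file and script names are made up.
    NEW_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

    # 1. Append this slave's IP to the shared config held on S3.
    aws s3 cp s3://example-cluster-config/backends.conf /tmp/backends.conf
    echo "$NEW_IP" >> /tmp/backends.conf
    aws s3 cp /tmp/backends.conf s3://example-cluster-config/backends.conf

    # 2. Kick the master to pull the revised config and restart the service.
    ssh -i /path/to/master.pem root@master '/usr/local/bin/update-config-and-restart.sh'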
I want a non-root user, authenticated with an AWS keypair, who has sudo access to run the download-config-and-restart scripts but nothing else. rbash seems to be the way to go, but I understand it can be insecure unless it's set up correctly.
So what security holes are there in this approach:
- New AWS keypair created for user.pem (not really called 'user')
- New user on the instances: user
- Public key for user is in ~user/.ssh/authorized_keys (obtained by creating a new instance with user.pem and copying it from /root/.ssh/authorized_keys)
- Private key for user is in ~user/.ssh/user.pem
- 'user' has a login shell of /home/user/bin/rbash
- ~user/bin/ contains symbolic links to /bin/rbash and /usr/bin/sudo (setup commands sketched after this list)
- /etc/sudoers has the entry "user ALL=(root) NOPASSWD: <the restart scripts>" (see the sudoers sketch after this list)
- ~user/.bashrc sets PATH to /home/user/bin/ only
- ~user/.inputrc has 'set disable-completion on' to prevent double-tabbing from 'sudo /' to discover paths
- ~user/ is recursively owned by root with read-only access for user, except for ~user/.ssh, which is writable by user (for writing known_hosts), and ~user/bin/*, which are +x
- Inter-instance communication uses 'ssh -o StrictHostKeyChecking=no -i ~user/.ssh/user.pem user@<host> sudo <script>'
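To be concrete about the sudoers entry, what I have in mind is along these lines (the script paths are placeholders, not our real ones):

    # /etc/sudoers (edited via visudo); script paths are placeholders.
    # 'user' may run only the two restart scripts as root, nothing else.
    user ALL=(root) NOPASSWD: /usr/local/bin/update-haproxy.sh, /usr/local/bin/update-pgpool.sh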
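And the restricted-shell/permissions setup is roughly this (a sketch of the steps above, not the exact commands we run):

    # Sketch of the rbash and permissions setup; adjust paths as needed.
    useradd -m user

    # Restricted PATH: only symlinks to rbash and sudo are available.
    mkdir -p /home/user/bin
    ln -s /bin/rbash /home/user/bin/rbash
    ln -s /usr/bin/sudo /home/user/bin/sudo
    usermod -s /home/user/bin/rbash user
    echo 'PATH=/home/user/bin' > /home/user/.bashrc
    echo 'set disable-completion on' > /home/user/.inputrc

    # Home dir owned by root (so read-only for user), except .ssh,
    # which stays writable so known_hosts can be updated.
    chown -R root:root /home/user
    mkdir -p /home/user/.ssh
    chown -R user:user /home/user/.ssh
    chmod 700 /home/user/.ssh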
Any thoughts would be welcome.
Mark...