Search Results

Search found 4462 results on 179 pages for 'ssh'.

  • How can I set my computer up for remote SSH access?

    - by FarmBoy
    I have a Linux machine that I can access by SSH from my laptop when I am inside my house, but when I am using another Internet connection, I can't connect. What do I need to do? I have Verizon DSL Internet and an ActionTec modem, if that matters. If there are other relevant facts I'm omitting, please let me know and I'll improve my question.

    Read the article
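
    For anyone in the same spot, a rough checklist (a sketch, not specific to this poster's setup; the WAN address below is a placeholder): make sure sshd is listening, find the address the DSL modem presents to the internet, and forward a TCP port on the ActionTec modem to the machine's LAN address.

        # on the Linux machine: confirm sshd is listening (port 22 by default)
        sudo ss -tlnp | grep :22              # or: sudo netstat -tlnp | grep :22
        # find the modem's public (WAN) address as seen from outside
        curl ifconfig.me
        # after forwarding external TCP port 22 (or e.g. 2222) on the modem to this
        # machine's LAN IP, connect from any other network using the WAN address:
        ssh -p 22 user@203.0.113.45           # 203.0.113.45 is a placeholder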

  • One incorrect SSH login attempt locks me out for an hour...

    - by Legend
    I've never observed this problem before, and neither have any of my colleagues who SSH into the same system. If I try logging into my server with a wrong username and then press ^C to terminate, or if I exhaust my password attempts, I am locked out for at least an hour. Is there something I can do on my end to fix this problem?

    Read the article
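
    If the lockout is happening on the server, it is usually a failed-login counter (pam_tally/pam_tally2) or a banning daemon such as fail2ban or DenyHosts. A hedged sketch of things to check once you have access to the box; the user name and client IP are placeholders:

        # look for a per-user failure counter in the PAM stack
        grep -R pam_tally /etc/pam.d/
        sudo pam_tally2 --user=yourname            # show the failure count (pam_tally2 systems)
        sudo pam_tally2 --user=yourname --reset    # clear it
        # check for IP-based bans from fail2ban or DenyHosts
        sudo iptables -L -n | grep 198.51.100.7
        grep 198.51.100.7 /etc/hosts.deny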

  • On a Linux server, how do you use multiple terminals over a single SSH connection?

    - by epochwolf
    I often find myself opening several ssh connections in order to view several log files at a time with tail -f. This isn't a problem when I'm at home, because I use public key authentication for password-less login. However, I often use a computer at my university to do this, so I don't have the option of using my private key. It gets annoying to enter my password 4 or 5 times to get several terminal windows. How can I get multiple terminals over a single connection?

    Read the article
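
    Two common approaches, sketched below: run a terminal multiplexer such as GNU screen (or tmux) inside a single login, or let OpenSSH share one authenticated connection across several ssh invocations (the host alias is an assumption, and ControlPersist needs a reasonably recent OpenSSH client):

        # option 1: one login, many windows
        screen                  # Ctrl-a c = new window, Ctrl-a n/p = switch, Ctrl-a d = detach
        # option 2: connection sharing, configured in ~/.ssh/config on the client
        #   Host myserver
        #       HostName server.example.edu
        #       ControlMaster auto
        #       ControlPath ~/.ssh/cm-%r@%h:%p
        #       ControlPersist 10m
        ssh myserver            # first call authenticates; later calls reuse the socket, no password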

  • Basic iptables for a webserver: SSL Tomcat, postgres, ssh and that's it.

    - by Paperino
    This is probably as basic as it gets, but I'm a developer and really have no experience with iptables. The only connections I need opened are: on eth0 (outward facing), ssh, ping, and SSL to Tomcat (forward port 443 to 8443); on eth1 (local subnet), the connection to the postgres server. Everything else should be blocked. My current attempts seem to be leaving all other ports open, and I wonder what gives. Thanks, serverfault!

    Read the article
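
    A minimal sketch of such a ruleset (the interface names and the 443-to-8443 redirect come from the question; everything else is an assumption, so test it from a console you cannot lock yourself out of). Outbound connections to the postgres box over eth1 need no extra INPUT rule here, because OUTPUT stays at ACCEPT and the state rule lets the replies back in.

        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT       # replies, incl. postgres over eth1
        iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT                  # ssh
        iptables -A INPUT -i eth0 -p icmp --icmp-type echo-request -j ACCEPT   # ping
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-ports 8443
        iptables -A INPUT -i eth0 -p tcp --dport 8443 -j ACCEPT                # Tomcat SSL after the redirect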

  • Lost SSH Key, is it possible to log in via some other method now?

    - by Daniel Hai
    So I happened to commit the cardinal sin of administration: I changed the openssh settings and accidentally closed that terminal before I could test them, and of course, now I'm completely locked out. I was using password-less private key authentication. Am I completely locked out of my server for good? This is a VPS system -- is there a way I could have someone at the datacenter log in locally and redo the ssh settings?

    Read the article
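
    Most VPS providers expose an out-of-band console that does not go through sshd at all (on Linode it is the Lish console), so a recovery sketch looks roughly like this (paths assume a Debian/Ubuntu-style layout):

        # log in as root through the provider's console, then:
        vi /etc/ssh/sshd_config      # revert the bad change, or temporarily set:
                                     #   PasswordAuthentication yes
        /etc/init.d/ssh restart      # or "service sshd restart" on Red Hat-style systems
        # once back in over SSH with a fresh key in ~/.ssh/authorized_keys, re-tighten sshd_config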

  • Why is my SSH session timing out in less than a minute?

    - by John Smith
    Within a minute of connecting to my remote Linux server through SSH, my session times out and I cannot contact the server until a few seconds have passed. Meanwhile, my connections to other servers stay up without interruption. This only happens when I connect from a hotel wireless AP; when I connect through my phone's Internet connection, the problem does not occur. Does anyone know what might be causing these unusual timeouts?

    Read the article
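
    Hotel access points often drop idle NAT/firewall entries very aggressively, and client-side keepalives usually work around that. A sketch for the OpenSSH client:

        # one-off:
        ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=4 user@remote-server
        # or permanently, in ~/.ssh/config:
        #   Host *
        #       ServerAliveInterval 30
        #       ServerAliveCountMax 4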

  • cPanel and SSH using Tunnelier or PuTTY; how?

    - by takpar
    Hi, I have a cPanel reseller account and I am trying to connect using SSH. Using Tunnelier or PuTTY I get "Shell Access is not enabled on your account." I have enabled it for my account, generated public/private keys, and authorized the public key. I don't know what else I should do.

    Read the article

  • duplicity fails without prompting for password: "Running 'sftp user@host' failed"

    - by Thr4wn
    I have two Linode VPS accounts and I want to back up one onto the other (the reasons are mainly for fun and to practice server administration).

    The short version: Duplicity isn't even asking for my password, but immediately says "Invalid SSH password" (yet I can ssh into the other server). Why?

    The long version: when I run

        duplicity /home/me scp://[email protected]//root/backup

    I get

        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #1)
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #2)
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #3)

    and it says "Invalid SSH password" immediately, with no opportunity for me to actually type the password. When I type

        duplicity full -v9 --num-retries 4 /home/me scp://[email protected]//root/backup

    I get

        Main action: full
        Running 'sftp [email protected]' (attempt #1)
        State = sftp, Before = 'Connecting to 97.107.129.67... [email protected]'s'
        State = sftp, Before = ''
        Invalid SSH password
        Running 'sftp [email protected]' failed (attempt #1)

    I can ssh into [email protected] fine, and in fact had the IP in known_hosts before I tried any of this. Server 1 (from which I'm running the duplicity command) is Linode's default Ubuntu 8 setup with only a handful of programs installed via apt-get. Server 2 (represented by x.x.x.x) is literally only Linode's default Ubuntu 8 setup. I previously tried using SystemImager -- would that have changed settings in a destructive way? (I have removed it and rebooted since then.) Isn't Duplicity supposed to prompt for the password? Am I using it wrong? Are there common mistakes/dependencies I need to know about? Is there any way that x.x.x.x could be set up that could make this not work (I used Linode's default Ubuntu 8 setup and barely )?

    Read the article
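
    One way to sidestep the prompt entirely is to let duplicity's scp/sftp backend authenticate with a key instead of a password. A sketch, assuming the default key path and the same login user as in the URL:

        # on server 1, generate a key with an empty passphrase for unattended runs
        ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
        # install the public key on server 2
        ssh-copy-id [email protected]     # or append ~/.ssh/id_rsa.pub to its ~/.ssh/authorized_keys
        # duplicity's "sftp [email protected]" sub-process should now log in without asking
        duplicity /home/me scp://[email protected]//root/backup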

  • Rsync doesn't work when run in cron. Rsync between Mac OS X Server and Linux CentOS

    - by Brady
    I have a working rsync setup between Mac OS X Server and Linux CentOS when run manually in a terminal: I enter the rsync command, it asks for the password, I enter it, and off it goes, runs and completes. Now that I know it works, I set out to fully automate it via cron.

    First off, I create an SSH authorized key by running this command on the Mac server:

        ssh-keygen -t dsa -b 1024 -f /Users/admin/Documents/Backup/rsync-key

    entering the password and then confirming it. I then copy the rsync-key.pub file across to the Linux server and place it in the rsync user's .ssh folder, renamed to authorized_keys:

        /home/philosophy/.ssh/authorized_keys

    I then make sure that the authorized_keys file is chmod 600 and the folder chmod 700. I then set up a shell script for cron to run:

        #!/bin/bash
        RSYNC=/usr/bin/rsync
        SSH=/usr/bin/ssh
        KEY=/Users/admin/Documents/Backup/rsync-key
        RUSER=philosophy
        RHOST=example.com
        RPATH=data/
        LPATH="/Volumes/G Technology G Speed eS/Backup"
        $RSYNC -avz --delete --progress -e "$SSH -i $KEY" "$LPATH" $RUSER@$RHOST:$RPATH

    Then I give the shell file execute permissions and add the following to the crontab using crontab -e:

        29 12 * * * /Users/admin/Documents/Backup/backup.sh

    I check my crontab log file after the above command should have run, and I get this in the log and nothing else:

        Feb 21 12:29:00 fileserver /usr/sbin/cron[80598]: (admin) CMD (/Users/admin/Documents/Backup/backup.sh)

    So I assume everything has run as it should. But when I check the remote server, no files have been copied across. If I run the backup.sh file in a terminal as normal, it still prompts for a password, but this time through the Mac Keychain system rather than typing into the console window. With the Mac Keychain I can set it to save the password so that it doesn't ask again, but I'm sure this password isn't picked up when run from cron. This is where I assume rsync in cron is failing, because it needs a password to connect -- but I thought the whole idea of making the SSH keys was to avoid the need for a password. Have I missed a step or done something wrong here? Thanks, Scott

    Read the article
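
    The step that usually bites here is the passphrase: a key generated with a passphrase still triggers a prompt (answered by the Keychain in an interactive session, but by nothing under cron). A sketch of the usual fix, reusing the paths from the question:

        # regenerate (or create a second) key with an empty passphrase for cron use
        ssh-keygen -t dsa -b 1024 -N "" -f /Users/admin/Documents/Backup/rsync-key
        # reinstall rsync-key.pub as ~philosophy/.ssh/authorized_keys on the Linux box, then test:
        ssh -i /Users/admin/Documents/Backup/rsync-key [email protected] true   # must not prompt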

  • Scripting an 'empty' password in /etc/shadow

    - by paddy
    I've written a script to add CVS and SVN users on a Linux server (Slackware 14.0). This script creates the user if necessary, and either copies the user's SSH key from an existing shell account or generates a new SSH key. Just to be clear, the accounts are specifically for SVN or CVS. So the entry in /home/${username}/.ssh/authorized_keys begins with (using CVS as an example):

        command="/usr/bin/cvs server",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa ....etc...etc...etc...

    Actual shell access will never be allowed for these users - they are purely there to provide access to our source repositories via SSH.

    My problem is that when I add a new user, they get an empty password in /etc/shadow by default. It looks like:

        paddycvs:!:15679:0:99999:7:::

    If I leave the shadow file as is (with the !), SSH authentication fails. To enable SSH, I must first run passwd for the new user and enter something. I have two issues with doing that. First, it requires user input which I can't allow in this script. Second, it potentially allows the user to login at the physical terminal (if they have physical access, which they might, and know the secret password -- okay, so that's unlikely).

    The way I normally prevent users from logging in is to set their shell to /bin/false, but if I do that then SSH doesn't work either! Does anyone have a suggestion for scripting this? Should I simply use sed or something and replace the relevant line in the shadow file with a preset encrypted secret password string? Or is there a better way? Cheers =)

    Read the article
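
    One common, scriptable approach (an assumption about the target system, not the only option) is to put an "impossible" hash such as * in the password field instead of the locked-account marker !, so password logins stay impossible while key-based SSH no longer sees a locked account:

        # set the password field to "*" non-interactively; no password will ever match it
        usermod -p '*' "$username"
        # alternatively, assign a random throwaway password without prompting:
        echo "${username}:$(openssl rand -base64 18)" | chpasswd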

  • Open an X application going through many hoops (SSH, VPN, etc.)

    - by ??O?????
    The players:

        - my home computer, running Linux with an X server running. (Call it HOME.)
        - a remote site, to which I can connect over the internet using a VPN. (SITE)
        - a Linux computer at the remote site, to which I can connect with ssh -X and nicely have X clients displaying on my local server. (MIDDLE)
        - a very old Irix machine (an Onyx) at the remote site, which has no SSH server (therefore I can't ssh -X to it), only an ssh client. (ONYX)

    Purpose: I need to run an X11 application on the ONYX machine and see the GUI on HOME. I think I am stumbling upon xauth issues.

    So far, the current situation is:

        - HOME connects to SITE
        - a vncserver starts on MIDDLE:7
        - vncviewer on HOME connects to the vncserver on MIDDLE
        - ONYX starts a forwarding ssh session to MIDDLE: ssh -TfN -L 6007:127.0.0.1:6007 MIDDLE
        - DISPLAY=localhost:7 xclient on ONYX fails with: Xlib: connection to "127.0.0.1:7.0" refused by server

    I do know that the forwarding (6007:127.0.0.1:6007) succeeds. A previous attempt was:

        - HOME connects to SITE
        - HOME connects to MIDDLE: ssh -X MIDDLE (xclock displays on HOME, DISPLAY is 127.0.0.1:10)
        - ONYX starts an SSH tunnel to MIDDLE: ssh -TfN -L 6010:127.0.0.1:6010 MIDDLE
        - DISPLAY=127.0.0.1:10 xclient fails with: X connection to 127.0.0.1:10.0 broken (explicit kill or server shutdown), while an error pops up in the MIDDLE session: X11 connection rejected because of wrong authentication.

    Despair: how can I achieve my purpose?

    Read the article
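
    The "wrong authentication" rejection in the second attempt usually means ONYX lacks the MIT-MAGIC-COOKIE that MIDDLE's forwarded display expects, and xauth can copy it by hand. A sketch (display numbers taken from the question; the cookie value is a placeholder):

        # on MIDDLE, inside the "ssh -X MIDDLE" session where DISPLAY is 127.0.0.1:10
        xauth list $DISPLAY          # prints e.g.: middle/unix:10  MIT-MAGIC-COOKIE-1  <hexcookie>
        # on ONYX, with the "-L 6010:127.0.0.1:6010 MIDDLE" tunnel already up
        xauth add 127.0.0.1:10 MIT-MAGIC-COOKIE-1 <hexcookie>
        DISPLAY=127.0.0.1:10 xclock  # should now appear on HOME via MIDDLE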

  • Secure Copy File from remote server via scp and os module in Python

    - by user1063572
    I'm pretty new to Python and programming. I'm trying to copy a file between two computers via a Python script. However, the code

        os.system("ssh " + hostname + " scp " + filepath + " " + user + "@" + localhost + ":" + cwd)

    won't work. I think it needs a password, as described in "How do I copy a file to a remote server in python using scp or ssh?". I don't get any error logs; the file just won't show up in my current working directory. However, every other command of the form

        os.system("ssh " + hostname + " command")

    or

        os.popen("ssh " + hostname + " command")

    does work (command = e.g. ls). When I try

        ssh hostname scp file user@local:directory

    in the command line, it works without entering a password. I tried to combine os.popen commands with the getpass and pxssh modules to establish an ssh connection to the remote server and use it to send commands directly (I only tested it with an easy command):

        import os
        import pxssh
        import getpass
        ssh = pxssh.pxssh()
        ssh.force_password = True
        hostname = raw_input("Hostname: ")
        user = raw_input("Username: ")
        password = getpass.getpass("Password: ")
        ssh.login(hostname, user, password)
        test = os.popen("hostname")
        print test

    But I'm not able to put commands through to the remote server (print test shows that hostname is the local machine and not the remote server), although I'm sure the connection is established. I thought it would be easier to establish a connection than to always prepend "ssh " + hostname to the bash commands. I also tried some of the workarounds in "How do I copy a file to a remote server in python using scp or ssh?", but I must admit that, due to lack of experience, I didn't get them to work. Thanks a lot for helping me.

    Read the article

  • Android - save/restore state of custom class

    - by user1209216
    I have a class for ssh support - it uses JSch internally. I use this class in my main activity, this way:

        public class MainActivity extends Activity {
            SshSupport ssh = new SshSupport();
            .....
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                // Handle events for ssh
                ssh.eventHandler = new ISshEvents() {
                    @Override
                    public void SshCommandExecuted(SshCommandsEnum commandType, String result) {
                    }
                    // other overrides here
                }

                // Ssh operations on GUI item click
                @Override
                public void onItemClick(AdapterView<?> arg0, View v, int position, long arg3) {
                    if (ssh.IsConnected() == false) {
                        try {
                            ssh.ConnectAsync(/*parameters*/);
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                    try {
                        ssh.ExecuteCommandAsync(SshCommandsEnum.values()[position]);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }

    It works very well. My application connects over ssh, performs all needed operations in a background thread, and the results are reported to the GUI via events, as shown above. But nothing works after the user changes the device orientation. It's clear to me why: the activity is re-created and all of its state is lost. Unfortunately, my SshSupport class object is lost as well. It's pretty easy to store GUI state for dynamically changed/added objects (using the put/get serializable etc. methods), but I have no idea how to prevent my ssh object and its connected session from being lost. Since my class is not serializable, I can't save it to a Bundle. Also, even if I make my SshSupport class serializable, the jsch objects it uses still are not serializable. So what is the best way to solve this?

    Read the article

  • Reuse remote ssh connections and reduce command/session logging verbosity?

    - by ewwhite
    I have a number of systems that rely on application-level mirroring to a secondary server. The secondary server pulls data by means of a series of remote SSH commands executed on the primary. The application is a bit of a black box, and I may not be able to make modifications to the scripts that are used.

    My issue is that the logging in /var/log/secure is absolutely flooded with requests from the service user, admin. These commands occur many times per second and have a corresponding impact on the logs. They rely on passphrase-less key exchange. The OS involved is EL5 and EL6. Example below.

    Is there any way to reduce the amount of logging from these actions (by user? by source?)? Is there a cleaner way for the developers to perform these ssh executions without spawning so many sessions? It seems inefficient. Can I reuse the existing connections?

    Example log output:

        Jul 24 19:08:54 Cantaloupe sshd[46367]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46446]: Accepted publickey for admin from 172.30.27.32 port 33526 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46475]: Accepted publickey for admin from 172.30.27.32 port 33527 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46504]: Accepted publickey for admin from 172.30.27.32 port 33528 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46583]: Accepted publickey for admin from 172.30.27.32 port 33529 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46612]: Accepted publickey for admin from 172.30.27.32 port 33530 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46641]: Accepted publickey for admin from 172.30.27.32 port 33531 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46720]: Accepted publickey for admin from 172.30.27.32 port 33532 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46749]: Accepted publickey for admin from 172.30.27.32 port 33533 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46778]: Accepted publickey for admin from 172.30.27.32 port 33534 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46857]: Accepted publickey for admin from 172.30.27.32 port 33535 ssh2

    Read the article
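
    Connection reuse is possible without touching the black-box scripts as long as they call the stock OpenSSH client, because ~/.ssh/config applies to every invocation by that user. A sketch (the host alias and address are assumptions; ControlPersist needs a reasonably recent OpenSSH, and the rsyslog filter only applies if rsyslog is in use):

        # ~/.ssh/config for the admin user on the secondary server
        #   Host primary
        #       HostName primary.example.com
        #       ControlMaster auto
        #       ControlPath ~/.ssh/cm-%r@%h:%p
        #       ControlPersist 5m
        # repeated "ssh primary ..." calls then share one authenticated connection,
        # which also means one Accepted/opened/closed triple in /var/log/secure.
        # To quiet the logs directly: lower LogLevel in /etc/ssh/sshd_config, or drop
        # the per-session PAM messages with a legacy rsyslog property filter such as:
        #   :msg, contains, "session opened for user admin" ~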

  • Why do I get "Permission denied (publickey)" when trying to SSH from local Ubuntu to an Amazon EC2 server?

    - by Vorleak Chy
    I have an instance of an application running in the cloud on an Amazon EC2 instance, and I need to connect to it from my local Ubuntu. It works fine from one of my local Ubuntu machines and also from my laptop, but I get the message "Permission denied (publickey)" when trying to SSH to EC2 from another local Ubuntu machine. It's very strange to me. I'm thinking it may be some sort of problem with the security settings on the Amazon EC2 instance, which might limit which IPs can access it, or the certificate may need to be regenerated. Does anyone know a solution?

    Read the article
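
    EC2 key-pair authentication is purely client-side, so on the machine that fails the usual culprits are a missing .pem file, its permissions, or the login user name; security groups restrict ports and source IPs, not keys. A hedged checklist (the key path and user are assumptions):

        chmod 600 ~/.ssh/my-ec2-keypair.pem     # ssh ignores private keys with loose permissions
        ssh -v -i ~/.ssh/my-ec2-keypair.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
        # -v shows which identities the client offers; the right user depends on the AMI
        # (ubuntu, ec2-user, or root on older images)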

  • How to prevent ssh git push from setting file ownership?

    - by e-satis
    I have a remote bare git repository on an Ubuntu server, where the files are owned by the user my_project and the group my_project, with permissions set accordingly. All committers are themselves in the group my_project. When somebody commits and then pushes from my Ubuntu laptop as the user my_user to the server via SSH, some files in the remote repository are created (updated?) so that they now belong to the user and group my_user. Of course, when somebody else wants to commit, he is now unable to do so because he doesn't have write permissions. I could set permissions to 777, but that's not the best option. Is there any way I can solve this problem while keeping restricted write permissions?

    Read the article
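
    A common way to handle this (a sketch of one approach, not necessarily how the repository is currently set up) is to make the bare repository group-shared, so that whoever pushes, new files stay writable by the my_project group:

        # on the server, inside the bare repository (the path is a placeholder)
        cd /srv/git/my_project.git
        git config core.sharedRepository group     # git now creates objects group-writable
        chgrp -R my_project .
        chmod -R g+w .
        find . -type d -exec chmod g+s {} \;       # setgid dirs: new files keep the my_project group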

  • How does the Ubuntu cloud version enforce "no root login" over ssh?

    - by Maxim Veksler
    Hello, I'm looking to tweak the Ubuntu cloud version's default setup, which denies root login. Attempting to connect to such a machine yields:

        maxim@maxim-desktop:~/workspace/integration/deployengine$ ssh root@ec2-204-236-252-95.compute-1.amazonaws.com
        The authenticity of host 'ec2-204-236-252-95.compute-1.amazonaws.com (204.236.252.95)' can't be established.
        RSA key fingerprint is 3f:96:f4:b3:b9:4b:4f:21:5f:00:38:2a:bb:41:19:1a.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added 'ec2-204-236-252-95.compute-1.amazonaws.com' (RSA) to the list of known hosts.
        Please login as the ubuntu user rather than root user.
        Connection to ec2-204-236-252-95.compute-1.amazonaws.com closed.

    I would like to know where this is set up and how I can change the printed message. Thank you, Maxim.

    Read the article
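
    On the Ubuntu cloud images this message does not come from sshd_config; it comes from a forced command that cloud-init writes into root's authorized_keys, which is where both the text and the behaviour can be changed (the prefix shown below is a paraphrase, and stripping it re-enables direct root logins, which may not be what you want):

        # inspect root's key entries; the first one carries a prefix roughly like
        #   command="echo 'Please login as the ubuntu user rather than root user.';...",no-port-forwarding,... ssh-rsa AAAA...
        sudo cat /root/.ssh/authorized_keys
        # to change the text, edit that echo string; to allow root logins, remove everything
        # before "ssh-rsa" on the line (back the file up first):
        sudo cp /root/.ssh/authorized_keys /root/.ssh/authorized_keys.bak
        sudo sed -i 's/^.*ssh-rsa /ssh-rsa /' /root/.ssh/authorized_keys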
