Search Results

Search found 4462 results on 179 pages for 'ssh'.

Page 100/179 | < Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >

  • ec2_bundle_vol fails with error LoadError

    - by Koran
    Hi, I am new to Amazon EC2. I have now set up a machine to my taste and want to bundle it. I am running the following command from the launched instance:

        root@domU-21-34-67-26-ED-Z4:~# ec2-bundle-vol -r i386 -d /mnt \
          -p ACT-VOL -u 8940-1355-4155 -k /tmp/pk-key.pem \
          -c /tmp/cert.pem -s 10240 \
          -e /mnt,/root/.ssh,/home/ubuntu/.ssh
        ruby: No such file or directory -- /home/ubuntu/ec2tools/ec2-api-tools-1.3-46266/lib/ec2/amitools/bundlevol.rb (LoadError)

    The Ruby version is 1.8.7. I searched the internet and installed libruby1.8-extras and similar packages, but to no avail. I also tried running it from site_ruby (/usr/local/lib/site_ruby), with no luck, and I tried installing Ruby 1.8.6 but could not find a way to do so either. Any help would be much appreciated. Thanks, K
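
    A hedged sketch of one thing worth checking, based only on the error path above: bundlevol.rb ships with the EC2 AMI tools, but the error shows it being looked up under the API tools directory, which suggests the tool environment variables point at the wrong package. The install path below is an assumption, not taken from the question.

        # Point the AMI tools at their own install directory (path is hypothetical;
        # adjust to wherever ec2-ami-tools is actually unpacked).
        export EC2_AMITOOL_HOME=/home/ubuntu/ec2tools/ec2-ami-tools-1.3-xxxxx
        export PATH=$EC2_AMITOOL_HOME/bin:$PATH
        which ec2-bundle-vol          # should now resolve inside EC2_AMITOOL_HOME
        ec2-bundle-vol --help         # quick check that Ruby can load bundlevol.rb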

    Read the article

  • Git access on Heroku deployment and others: connection refused

    - by Toby Hede
    I have suddenly run into an issue using git. I created a new app, went to push to Heroku, and now see:

        ssh: connect to host heroku.com port 22: Connection refused

    My other, previously working Heroku apps no longer work either; they fail with the same error. Other Heroku commands work (create, info, db:push). I also see the error when accessing Git on my Unfuddle accounts. I can SSH to other services, so it doesn't look like it's my machine. Any ideas?
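
    A small diagnostic sketch, not from the question: these commands only check whether outbound TCP port 22 to heroku.com is reachable and how far the SSH handshake gets, which helps separate a local firewall or ISP block from a key or Heroku-side problem.

        # Is anything answering on port 22 at all?
        nc -vz heroku.com 22
        # Verbose SSH handshake; note where it stops (TCP connect vs. key exchange).
        ssh -vT git@heroku.com
        # Compare against a git host that still works from the same machine.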

    Read the article

  • Restrict folder sharing over cygwin sshd

    - by mtanish
    I recently installed an SSH server on my Windows 7 PC and created a separate user account for it. When I logged in using SSH, I could access all the Windows directories: /cygdrive/c, /cygdrive/d, /cygdrive/e. How do I prevent this user from accessing any Windows directory other than its Cygwin home directory, /home/chuck/? Preferably I do not want the user to even see /cygdrive when they type "mount". Is there an easy way to do this? I want to later allow remote users to log on to this machine without messing up other things. I know I can set up a separate machine, but that is a plan for later.
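
    A minimal sketch of one common approach, assuming the Cygwin sshd is a recent enough OpenSSH to support ChrootDirectory: confine the account with a Match block. The chroot target must be owned by the administrator and not writable by others, and an interactive shell inside a chroot needs its binaries copied in, so this is simplest when SFTP-only access is acceptable.

        # Append a per-user restriction to /etc/sshd_config (Cygwin keeps it under /etc):
        printf '%s\n' 'Match User chuck' \
            '    ChrootDirectory /home/chuck' \
            '    ForceCommand internal-sftp' \
            '    AllowTcpForwarding no' >> /etc/sshd_config
        # Restart the service; the Windows service name may be "sshd" or "cygsshd"
        # depending on how the Cygwin sshd was installed.
        net stop sshd && net start sshd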

    Read the article

  • How do you share your git repository with other developers?

    - by semi
    I have a central git repository that everyone pushes to for testing and integration, but it is only pushed to when features are 'ready'. In the middle of a big task, developers frequently have many commits that stay on their hard drives. Sometimes in the middle of these projects I'd like to either see what another developer is doing or show him how I've done something; I'd like to be able to tell another developer to just "pull my working copy". The only way I can think of is having everyone run ssh on their development machines and adding accounts or ssh keys for everyone, but this is a huge privacy and permissions nightmare and seems like a lot of work to maintain. Should we just be pushing to that central repository in these cases? Should we be pushing after every local commit?
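
    One low-friction option, sketched below with hypothetical names and paths: each developer serves their working repositories read-only over the git:// protocol with git-daemon, so nobody needs shell accounts or ssh keys on anyone else's machine. Note that git:// is unauthenticated, so this only suits a trusted LAN.

        # On Alice's machine: export everything under ~/code, read-only.
        git daemon --reuseaddr --base-path="$HOME/code" --export-all --verbose &

        # On a colleague's machine: pull Alice's in-progress branch without disturbing her.
        git remote add alice git://alice-laptop.local/myproject
        git fetch alice
        git checkout -b review-alice alice/feature-branch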

    Read the article

  • Update website with a single command (git push) instead of FTP drag and dropping

    - by Wolfr
    Situation:
    - I have a local copy of a website
    - I have a server that I have SSH access to

    What do I want to do?
    - Commit locally until I'm happy with my code
    - Make branches locally
    - Have one master branch that is the one that should be pushed to the server
    - Update the website using a single command (git push origin master)

    If I set up a git repo locally using git init, and then push to a folder on the server, it doesn't work. When I FTP to the server to check the files, they're actually there. When I SSH into the server and do git status, it's not clean, even though it should be since I just pushed to the server.

    Steps I'm doing:
    - Make a new folder on my computer (mkdir folder_x)
    - Go into that folder (cd folder_x)
    - Set up a new git repository there (git init) (git repository sets up successfully)
    - Push the repository to the server using git push origin master (where origin is set up as user:[email protected])
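
    A sketch of the usual pattern for this (paths and hostnames hypothetical): push to a bare repository on the server and let a post-receive hook check master out into the web root, rather than pushing into a non-bare checkout.

        # On the server: a bare repo plus a post-receive hook that deploys whatever was pushed.
        git init --bare ~/site.git
        printf '%s\n' '#!/bin/sh' \
            'GIT_WORK_TREE=/var/www/example.com git checkout -f master' \
            > ~/site.git/hooks/post-receive
        chmod +x ~/site.git/hooks/post-receive
        mkdir -p /var/www/example.com       # the work tree the hook checks out into

        # Locally: point origin at the bare repo and push master.
        git remote add origin ssh://user@server.example.com/~/site.git
        git push origin master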

    Read the article

  • git: having 2 push/pull repos in sync (or 1 push/pull and 1 pull in sync)

    - by xavjuan
    Hello, we work at multiple geographically separate sites. Today all of our git clones live at one site, A, and users at site B have to ssh over to clone or to push changes. These are bare repos that are updated through pushes. Ideally, for clone/push performance, I'd like to limit how often we have to go over ssh between sites. I'd like to have a copy of git repo X live at both site A and site B, with some syncing mechanism between them, OR have X live at both sites but only allow pushing to A (and have that set up correctly at clone time on B). I'm worried about the case where someone at site A pushes changes to the repo at site A at the same time that someone at site B pushes a truly conflicting change to the repo at site B. Is there some syncing solution built into git for distributed open repos like this? Or a way to have a clone of X set its origin/parent to the X at the other site? thanks, -John
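
    A sketch of the second arrangement described above (site A stays the only writable copy, site B holds an automatically refreshed read-only mirror); hostnames and paths are hypothetical. Because only A accepts pushes, the simultaneous-conflicting-push case cannot arise.

        # On site B, create the mirror once:
        git clone --mirror ssh://git@site-a.example.com/repos/X.git /srv/git/X.git

        # Refresh it from cron (or trigger it from a post-receive hook on site A):
        cd /srv/git/X.git && git remote update --prune

        # Clones made at site B read from the local mirror but push back to site A:
        git clone /srv/git/X.git X
        cd X && git remote set-url --push origin ssh://git@site-a.example.com/repos/X.git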

    Read the article

  • Safely executing shell scripts: escaping variables before execution

    - by Kirzilla
    Hello, let's imagine that we have a simple PHP script that gets ssh_host, ssh_username, and ssh_port from the $_GET array and tries to connect to SSH using these parameters:

        $port     = escapeshellcmd($_GET['ssh_port']);
        $host     = escapeshellcmd($_GET['ssh_host']);
        $username = escapeshellcmd($_GET['ssh_username']);
        $answer   = shell_exec("ssh -p " . $port . " " . $username . "@" . $host);

    Is escapeshellcmd() enough, or do I need something more tricky? Or maybe I should use escapeshellarg() in this example? Thank you.

    Read the article

  • Python: Converting a tuple to a string with 'err'

    - by skylarking
    Given this:

        import os
        import subprocess

        def check_server():
            cl = subprocess.Popen(["nmap", "10.7.1.71"], stdout=subprocess.PIPE)
            result = cl.communicate()
            print result

        check_server()

    check_server() prints this tuple:

        ('\nStarting Nmap 4.53 ( http://insecure.org ) at 2010-04-07 07:26 EDT\nInteresting ports on 10.7.1.71:\nNot shown: 1711 closed ports\nPORT STATE SERVICE\n21/tcp open ftp\n22/tcp open ssh\n80/tcp open http\n\nNmap done: 1 IP address (1 host up) scanned in 0.293 seconds\n', None)

    Changing the second line in the function to result, err = cl.communicate() results in check_server() printing:

        Starting Nmap 4.53 ( http://insecure.org ) at 2010-04-07 07:27 EDT
        Interesting ports on 10.7.1.71:
        Not shown: 1711 closed ports
        PORT STATE SERVICE
        21/tcp open ftp
        22/tcp open ssh
        80/tcp open http
        Nmap done: 1 IP address (1 host up) scanned in 0.319 seconds

    It looks like the tuple is converted to a string and the \n's are being stripped... but how? What is 'err' and what exactly is it doing?

    Read the article

  • PVM terminates after Adding Host

    - by Tyug
    On Ubuntu 9.10 using PVM 3.4.5-12 (the PVM package you get with apt-get), the program terminates after adding a host:

        laptop> pvm
        pvm> add bowtie-slave
        add bowtie-slave
        terminated
        laptop>

    The only non-default configuration is $PVM_RSH = bin/usr/ssh. I can ssh into the slave perfectly fine without a password and run commands on it. Any ideas? Thanks in advance! Here are the sample logs.

    Laptop log:

        [t80040000] 02/11 10:23:32 laptop (127.0.1.1:xxxxx) LINUX 3.4.5
        [t80040000] 02/11 10:23:32 ready Thu Feb 11 10:23:32 2010
        [t80040000] 02/11 10:23:32 netoutput() sendto: errno=22
        [t80040000] 02/11 10:23:32 em=0x2c24f0
        [t80040000] 02/11 10:23:32 [49/à][6e/à][76/à][61/à][6c/à][69/à][64/à][20/à][61/à][72/à]
        [t80040000] 02/11 10:23:32 netoutput() sendto: Invalid argument
        [t80040000] 02/11 10:23:32 pvmbailout(0)

    bowtie-slave log:

        [t80080000] 02/11 10:23:25 bowtie-slave (xxx.x.x.xxx:xxxxx) LINUX64 3.4.5
        [t80080000] 02/11 10:23:25 ready Thu Feb 11 10:23:25 2010
        [t80080000] 02/11 10:28:26 work() run = STARTUP, timed out waiting for master
        [t80080000] 02/11 10:28:26 pvmbailout(0)
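
    A hedged diagnostic sketch, based only on what is quoted above: PVM_RSH normally holds an absolute path to the remote shell, and "bin/usr/ssh" looks transposed, so it is worth confirming it and clearing any half-started daemons before retrying. The cleanup path below is the usual PVM default and may differ on this system.

        export PVM_RSH=/usr/bin/ssh      # assumption: `which ssh` reports this path
        echo halt | pvm                  # stop any partially started virtual machine
        rm -f /tmp/pvmd.$(id -u)         # remove a stale pvmd socket file, if present
        pvm                              # restart the console and retry: add bowtie-slave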

    Read the article

  • Get Process ID of Program Started with C# Process.Start

    - by ThaKidd
    Thanks in advance for all of your help! I am currently developing a program in C# 2010 that launches PLink (PuTTY) to create an SSH tunnel. I am trying to make the program keep track of each tunnel that is open so a user can terminate the instances that are no longer needed. I am currently using System.Diagnostics.Process.Start to run PLink (currently PuTTY is being used). I need to determine the PID of each plink process when it is launched so a user can terminate it at will. The question is how to do that, and am I using the right .NET namespace or is there something better? Code snippet:

        private void btnSSHTest_Click(object sender, EventArgs e)
        {
            String puttyConString;
            puttyConString = "-ssh -P " + cboSSHVersion.SelectedText + " -" + txtSSHPort.Text +
                             " -pw " + txtSSHPassword.Text + " " +
                             txtSSHUsername.Text + "@" + txtSSHHostname.Text;
            Process.Start("C:\\Program Files (x86)\\Putty\\putty.exe", puttyConString);
        }

    Read the article

  • How can I make an expect script prompt for a password?

    - by MiniQuark
    I have an expect script that connects to a few routers through ssh. All these routers have the same password (I know, it's wrong), and the script needs to know that password in order to be able to connect to the routers. Currently, the password is passed to my script as an argument on the command line, but this means that there's a trace of that password in my .bash_history file as well as in the running processes. So instead I would like the user to be prompted for a password, if possible silently. Do you know whether or not it's possible to prompt the user for a password with expect? Thank you. Edit: if I was connecting to servers instead of routers, I would probably use ssh keys instead of passwords. But the routers I'm using just support passwords.
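
    A small sketch of one way to do this from the calling shell rather than inside expect itself (the script name is hypothetical): prompt silently with read -s and hand the value over through the environment, which keeps it out of shell history and out of the argument list shown by ps.

        read -r -s -p "Router password: " ROUTER_PW; echo
        export ROUTER_PW
        ./push-config.exp router1 router2
        # Inside the expect script, read the value with:  set password $env(ROUTER_PW)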

    Read the article

  • OCIError (ruby on rails)

    - by swingfuture
    I am using a frozen Rails 1.2.3 to run a Rails app. Because the app is on a remote machine, I used an SSH tunnel (ssh -l -L) to show the app on my screen. When I ran it, it correctly brought up the login page, but after I put in my info I got this error:

        OCIError in ServiceController
        Error while trying to retrieve text for error ORA-12154

    I have tried the same app on a different machine without freezing Rails (because that machine has Rails version 1.2.3 while the current one has 2.0.2). Is that where the error comes from? Thanks.
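
    A hedged diagnostic sketch, not from the question: ORA-12154 is the Oracle client saying it cannot resolve the connect identifier, so before blaming the Rails version it is worth checking the Oracle client environment on the machine that actually runs the app. The TNS alias name below is hypothetical.

        echo "$ORACLE_HOME"       # should point at the Oracle client install
        echo "$TNS_ADMIN"         # directory holding tnsnames.ora, if one is used
        tnsping MYAPP_DB          # replace with the database name from config/database.yml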

    Read the article

  • how to mysqldump remote db from local machine

    - by Mauritz Hansen
    Hi folks, I need to do a mysqldump of a database on a remote server, but the server does not have mysqldump installed. I would like to use the mysqldump on my machine to connect to the remote database and do the dump locally. I have tried to create an SSH tunnel and then do the dump, but this does not seem to work. I tried:

        ssh -f -L3310:remote.server:3306 [email protected] -N

    The tunnel is created successfully; if I do telnet localhost 3310 I get some output that shows the correct server MySQL version. However, the following seems to try to connect locally:

        mysqldump -P 3310 -h localhost -u mysql_user -p database_name table_name

    Can someone assist me? Thanks, Mauritz
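
    A sketch of the usual fix for this symptom: the MySQL client treats the host name "localhost" specially and connects to the local Unix socket, ignoring the port, so pointing it at 127.0.0.1 makes it use TCP and therefore the tunnel.

        mysqldump -h 127.0.0.1 -P 3310 -u mysql_user -p database_name table_name > dump.sql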

    Read the article

  • PHP CLI Application on Debian: why can't I output a line break?

    - by Steffen Müller
    Hello! I have a really puzzling problem: I am writing a PHP CLI application running on a Debian server, connected to the server via SSH in the normal way. Everything runs as usual except the following:

        echo "My CLI fun\n\n";
        echo "Is this.";

    When executing the PHP script, this outputs on the SSH terminal:

        My CLI funIs this.

    I am really puzzled, as I have never had such a problem. Bash behaves normally in all other respects. I already tried outputting chr(10) and such, with the same problem. Does anybody have a clue?
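
    A small diagnostic sketch (script name hypothetical): checking the raw bytes and the binary in use helps tell whether the newlines are never emitted or are being eaten somewhere between PHP and the terminal.

        php myscript.php | od -c | head    # are \n characters actually in the output?
        php -v                             # confirm the CLI binary runs it, not php-cgi
        which php                          # and that it is the binary you expect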

    Read the article

  • Custom script in .screenrc

    - by benoror
    Hi. I made a script that opens a remote shell or runs a local shell depending on whether the given host is the current machine or not:

        #!/bin/bash
        # By: benoror <[email protected]>
        #
        # spawns a remote shell or runs a local shell whether it's on the current machine or not
        # $1 = hostname
        if [ "$(hostname)" == "$1" ]; then
            bash
        else
            ssh "$1.local"
        fi

    For example, if I'm on server1:

        ./spawnshell.sh server1   -> runs bash
        ./spawnshell.sh server2   -> ssh to server2.local

    I want that script to run automatically in separate tabs in GNU Screen, but I can't make it run. My .screenrc:

        ...
        screen -t "@server1" 1 exec /home/benoror/scripts/spawnshell.sh server1
        screen -t "@server2" 2 exec /home/benoror/scripts/spawnshell.sh server2
        ...

    But it doesn't work. I've tried without 'exec', with the -X option, and a lot more. Any ideas?
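
    A hedged sketch of a .screenrc variant that usually behaves better: a screen window line takes its command directly after the window number (screen's exec is a different command, for replacing the process of an already-open window), and a window closes as soon as its command exits, so a failing script simply looks like it never ran. Wrapping the script keeps the window open so any error stays visible. Paths are taken from the question.

        # ~/.screenrc
        screen -t "@server1" 1 bash -c "/home/benoror/scripts/spawnshell.sh server1; exec bash"
        screen -t "@server2" 2 bash -c "/home/benoror/scripts/spawnshell.sh server2; exec bash"

        # once, from a shell: make sure the script is executable
        # chmod +x /home/benoror/scripts/spawnshell.sh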

    Read the article

  • Hudson jobs won't call javac?

    - by Dissonant
    Hi, I have just set up Hudson on my server. For some reason, my build will not call javac to compile my code. I have set the path to the JDK in the Manage Hudson area, and it seems to recognise it (it doesn't give me a warning). Is there something else I'm supposed to do? Here's a sample console output from one of my jobs (note how javac isn't called at all):

        Started by user admin
        Checking out svn+ssh://myhost.com/Project1
        A    /src/Program.java
        A    build.xml
        U
        At revision 119
        no change for svn+ssh://myhost.com/Project1 since the previous build
        Finished: SUCCESS
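
    A hedged note in code form: the console output above only shows the SVN checkout, which is consistent with the job having no build step configured. Adding an "Invoke Ant" build step (or an "Execute shell" step equivalent to the line sketched below, target name hypothetical) is what actually makes the compiler run.

        # Equivalent of an "Invoke Ant" build step, run from the job's workspace:
        ant -buildfile build.xml compile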

    Read the article

  • How do I prevent TCP connection freezes over an OpenVPN network?

    - by Jason R
    New details added at the end of this question; it's possible that I'm zeroing in on the cause. I have a UDP OpenVPN-based VPN set up in tap mode (I need tap because I need the VPN to pass multicast packets, which doesn't seem to be possible with tun networks) with a handful of clients across the Internet. I've been experiencing frequent TCP connection freezes over the VPN. That is, I will establish a TCP connection (e.g. an SSH connection, but other protocols have similar issues), and at some point during the session, it seems that traffic will cease being transmitted over that TCP session. This seems to be related to points at which large data transfers occur, such as if I execute an ls command in an SSH session, or if I cat a long log file. Some Google searches turn up a number of answers like this previous one on Server Fault, indicating that the likely culprit is an MTU issue: that during periods of high traffic, the VPN is trying to send packets that get dropped somewhere in the pipes between the VPN endpoints. The above-linked answer suggests using the following OpenVPN configuration settings to mitigate the problem: fragment 1400 mssfix This should limit the MTU used on the VPN to 1400 bytes and fix the TCP maximum segment size to prevent the generation of any packets larger than that. This seems to mitigate the problem a bit, but I still frequently see the freezes. I've tried a number of sizes as arguments to the fragment directive: 1200, 1000, 576, all with similar results. I can't think of any strange network topology between the two ends that could trigger such a problem: the VPN server is running on a pfSense machine connected directly to the Internet, and my client is also connected directly to the Internet at another location. One other strange piece of the puzzle: if I run the tracepath utility, then that seems to band-aid the problem. A sample run looks like: [~]$ tracepath -n 192.168.100.91 1: 192.168.100.90 0.039ms pmtu 1500 1: 192.168.100.91 40.823ms reached 1: 192.168.100.91 19.846ms reached Resume: pmtu 1500 hops 1 back 64 The above run is between two clients on the VPN: I initiated the trace from 192.168.100.90 to the destination of 192.168.100.91. Both clients were configured with fragment 1200; mssfix; in an attempt to limit the MTU used on the link. The above results would seem to suggest that tracepath was able to detect a path MTU of 1500 bytes between the two clients. I would assume that it would be somewhat smaller due to the fragmentation settings specified in the OpenVPN configuration. I found that result somewhat strange. Even stranger, however: if I have a TCP connection in the stalled state (e.g. an SSH session with a directory listing that froze in the middle), then executing the tracepath command shown above causes the connection to start up again! I can't figure out any reasonable explanation for why this would be the case, but I feel like this might be pointing toward a solution to ultimately eradicate the problem. Does anyone have any recommendations for other things to try? Edit: I've come back and looked at this a bit further, and have found only more confounding information: I set the OpenVPN connection to fragment at 1400 bytes, as shown above. Then, I connected to the VPN from across the Internet and used Wireshark to look at the UDP packets that were sent to the VPN server while the stall occurred. None were greater than the specified 1400 byte count, so the fragmentation seems to be functioning properly. 
To verify that even a 1400-byte MTU would be sufficient, I pinged the VPN server using the following (Linux) command: ping <host> -s 1450 -M do This (I believe) sends a 1450-byte packet with fragmentation disabled (I at least verified that it didn't work if I set it to an obviously-too-large value like 1600 bytes). These seem to work just fine; I get replies back from the host with no issue. So, maybe this isn't an MTU issue at all. I'm just confused as to what else it might be! Edit 2: The rabbit hole just keeps getting deeper: I've now isolated the problem a bit more. It seems to be related to the exact OS that the VPN client uses. I have successfully duplicated the problem on at least three Ubuntu machines (versions 12.04 through 13.04). I can reliably duplicate an SSH connection freeze within a minute or so by just cat-ing a large log file. However, if I do the same test using a CentOS 6 machine as a client, then I don't see the problem! I've tested using the exact same OpenVPN client version as I was using on the Ubuntu machines. I can cat log files for hours without seeing the connection freeze. This seems to provide some insight as to the ultimate cause, but I'm just not sure what that insight is. I have examined the traffic over the VPN using Wireshark. I'm not a TCP expert, so I'm not sure what to make of the gory details, but the gist is that at some point, a UDP packet gets dropped due to the limited bandwidth of the Internet link, causing TCP retransmissions inside the VPN tunnel. On the CentOS client, these retransmissions occur properly and things move on happily. At some point with the Ubuntu clients, though, the remote end starts retransmitting the same TCP segment over and over (with the transmit delay increasing between each retransmission). The client sends what looks like a valid TCP ACK to each retransmission, but the remote end still continues to transmit the same TCP segment periodically. This extends ad infinitum and the connection stalls. My question here would be: Does anyone have any recommendations for how to troubleshoot and/or determine the root cause of the TCP issue? It's as if the remote end isn't accepting the ACK messages sent by the VPN client. One common difference between the CentOS node and the various Ubuntu releases is that Ubuntu has a much more recent Linux kernel version (from 3.2 in Ubuntu 12.04 to 3.8 in 13.04). A pointer to some new kernel bug maybe? I'm assuming that if that were so, then I wouldn't be the only one experiencing the problem; I don't think this seems like a particularly exotic setup.
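
    A sketch of the kinds of checks that go with the investigation above (interface name and addresses copied from the question; the sysctl list is only a starting point): capture the retransmissions on the tunnel interface while reproducing a stall, and compare TCP-related kernel settings between the failing Ubuntu and working CentOS clients.

        # OpenVPN config fragments already discussed in the question:
        #   fragment 1400
        #   mssfix
        # Watch the stalled TCP session inside the tunnel while reproducing the freeze:
        sudo tcpdump -ni tap0 'tcp and host 192.168.100.91'
        # Compare TCP behaviour knobs between a failing (Ubuntu) and a working (CentOS) client:
        sysctl net.ipv4.tcp_sack net.ipv4.tcp_timestamps net.ipv4.tcp_window_scaling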

    Read the article

  • Working with git from 2 laptops with no bare repo

    - by matiit
    I've started a project on my first laptop (git init, and start working). Tomorrow I'm going on vacation and I want to take my smaller laptop with me and work on the project from time to time. I cloned the repository over ssh from the bigger laptop (git clone ssh://address). When I get back, what is the best way to push the changes from the smaller laptop to the bigger one? There is no bare repo on the bigger laptop, and I want to keep working with that repo on the bigger laptop later, so I need to do this cleanly.
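
    A minimal sketch of the usually safest option when there is no bare repo: when you get back, fetch from the small laptop into the big one instead of pushing into the big laptop's checked-out branch. The hostname and path are hypothetical.

        # On the bigger laptop, after returning:
        git remote add small ssh://me@small-laptop.local/~/project
        git fetch small
        git merge small/master        # or: git rebase small/master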

    Read the article

  • Socket left in TIME_WAIT after file transfer via netcat

    - by com
    Following "Copying by NetCat", I am trying to copy files over the network with netcat. From the console it works pretty well: first I run a listening netcat on the destination machine, and then I run the sending side on the source machine. The problem is that it doesn't work from a script on the source machine:

        ssh -f user@$desthost 'nc -l 1234 | tar xvf - /dev/null &'   # listening on the destination host
        tar cv /tmp/file | nc $desthost 1234                         # sending to the destination host

    I saw that after running this, port 1234 was still open and the status of the socket was TIME_WAIT. If you know what the problem is, please help me out. And by the way, after copying, how can I validate that the content is identical? Thanks!

    Addendum: I found one very strange thing: the same implementation with screen on the destination does work, but it is not stable; sometimes it doesn't copy the file.

        ssh user@$desthost screen -dm -S test 'nc -l 1234 | tar xvf - '   # listening on the destination host

    Maybe there is an issue with a timeout?
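
    Two small hedged notes in code form: a socket sitting in TIME_WAIT after the transfer is normal TCP behaviour and clears by itself, and the "is the content identical" part can be answered by comparing checksums on both ends (a single file extracted to the same path on both machines is assumed here).

        sha256sum /tmp/file                          # on the source machine
        ssh "user@$desthost" 'sha256sum /tmp/file'   # on the destination; the hashes should match
        # Some netcat variants can also close the listener once the transfer ends,
        # via a -q or -w timeout option -- check `nc -h`, since flags differ between builds.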

    Read the article

  • Capistrano update causes C: to be placed in the current directory (cygwin)

    - by user321775
    When I run cap deploy:update in a directory on my local machine (via cygwin), "C:" magically appears in the directory. Sure enough, I can cd to it and it's my windows C: drive. Now I'm afraid to delete it, but I definitely don't want it in this directory (a rails project under /home/username/blah/blah). Here's my config/deploy.rb file. custom options set :application, "xyz.com" set :repository, "ssh://[email protected]:yyyy/home/git/xxx" set :user, "myname" set :runner, user set :use_sudo, false server "xxx.xxx.xxx.xxx:yyyy", :app, :web, :db, :primary = true deploy to set :deploy_to, "/home/myname/public_html/xyz" repository set :scm, :git set :deploy_via, :copy ssh options default_run_options[:pty] = true ssh_options[:paranoid] = false ssh_options[:port] = yyyy start passenger namespace :deploy do task :start do ; end task :stop do ; end task :restart, :roles = :app, :except = { :no_release = true } do run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}" end end Anyone see the problem? And does anyone know a safe way of getting rid of the C: drives that have already shown up (this has happened in a few directories)?

    Read the article

  • m and s keys do not work over vnc connection to ubuntu server

    - by Don
    I'm new at setting a lot of this up, so bear with me. I installed Ubuntu 10.4 server on a 64 bit machine. Then I added vnc so I could manage it while it's racked. I start the server, SSH to it, and run vncserver :1 At this point, all keys work fine. Next I exit out of the SSH session and fire up my client vnc app. I connect via the IP :1, enter my password, and everything seems to be fine. Now when I enter a terminal (through the vnc connection) I cannot type lowercase "s" or "m" (upper case works). I've tried on two different pc's running the vnc client, but it's the same. I also installed the latest updates from Ubuntu as of today. Thanks for any help.
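
    A small diagnostic sketch (assumes a US layout and that the VNC session is display :1, as in the question): checking whether the keysyms for s and m actually arrive inside the session narrows the problem down to the VNC keymap versus something in the desktop environment grabbing those keys.

        DISPLAY=:1 xev           # press s and m; look at the keysym reported in each KeyPress event
        DISPLAY=:1 setxkbmap us  # reset the keyboard layout inside the VNC session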

    Read the article

  • Splitting string into array upon token

    - by Gnutt
    I'm writing a script to perform an offsite rsync backup, and whatever output the rsync line produces goes into a single variable. I then want to split that variable into an array on the ^M token, so that I can send the pieces to two different logger sessions (so I get them on separate lines in the log). My current line that performs the rsync:

        result=$(rsync --del -az -e "ssh -i $cert" $source $destination 2>&1)

    The result in the log, when the server is unavailable:

        ssh: connect to host offsite port 22: Connection timed out^M
        rsync: connection unexpectedly closed (0 bytes received so far) [sender]
        rsync error: unexplained error (code 255) at io.c(601) [sender=3.0.7]
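
    A minimal sketch of one way to do the split and log each piece separately, assuming the ^M characters are literal carriage returns in the captured output (the logger tag is hypothetical).

        result=$(rsync --del -az -e "ssh -i $cert" "$source" "$destination" 2>&1)
        # Turn carriage returns into newlines, then log each non-empty line on its own.
        printf '%s\n' "$result" | tr '\r' '\n' | while IFS= read -r line; do
            [ -n "$line" ] && logger -t offsite-backup "$line"
        done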

    Read the article

  • Capistrano deploy:migrate Could not find rake-0.9.2.2 in any of the sources

    - by Kyle
    My Capistrano deploy:migrate task is set to run a simple rake db:migrate command, as follows:

        env PATH=/home/user/.gems/bin sh -c 'cd /home/user/app/releases/20121003140503 && rake RAILS_ENV=production db:migrate'

    When I run this command manually during an ssh session, it completes successfully. However, when I run the task from my local development box, I receive the following error:

        ** [out :: app] Could not find rake-0.9.2.2 in any of the sources

    I am able to locate my rake gem by typing which rake over ssh (/home/user/.gems/bin/rake), and rake --version gives me "rake, version 0.9.2.2", so I don't understand why this command fails via Capistrano.
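
    A hedged sketch of two things to try: "Could not find rake-0.9.2.2 in any of the sources" is Bundler's wording, which suggests the command runs in a Bundler context that cannot see the installed gems, and Capistrano's non-interactive SSH session will not have loaded the environment from ~/.bashrc. Reproducing that non-interactive environment by hand, and running the migration through Bundler, both help narrow it down (paths copied from the question; hostname hypothetical).

        # Reproduce a non-login, non-interactive environment like Capistrano's:
        ssh user@app-server 'env PATH=/home/user/.gems/bin sh -c "cd /home/user/app/releases/20121003140503 && rake RAILS_ENV=production db:migrate"'
        # Or run the migration through Bundler so the Gemfile's rake is the one found:
        ssh user@app-server 'cd /home/user/app/releases/20121003140503 && bundle exec rake RAILS_ENV=production db:migrate'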

    Read the article

  • Correct Path for Git Remote Add from Amazon EC2 Instance to OSX Client Machine

    - by filmnut
    I'm trying to do a git remote add from a repository that sits on a remote Amazon EC2 instance back to a cloned copy of the SAME repository sitting on my local OS X machine. I'm confused about what path to use. I assume it's something like:

        git remote add my_clone <OSX_User_Name>@<OSX_HOST_NAME>:<PATH_TO_CLONED_REPO>

    I obviously know what my <OSX_User_Name> is, and I can figure out my <PATH_TO_CLONED_REPO>, but I have no idea how to determine an <OSX_HOST_NAME> that would actually work. Can I just put in my external IP address, followed by my machine's internal IP address? (Note that I'm working behind a router.) Is ssh:// the correct protocol? Do I need to set up SSH access from the Amazon EC2 machine to the local OS X machine?
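
    A hedged sketch of one workable arrangement when the Mac sits behind NAT: open a reverse SSH tunnel from the Mac to the EC2 instance (which requires Remote Login to be enabled on the Mac), then add the remote on the EC2 side pointing at that tunnel. Hostnames, port, and paths are hypothetical.

        # On the Mac: forward the instance's localhost:2222 back to the Mac's sshd (port 22).
        ssh -N -R 2222:localhost:22 ec2-user@ec2-host.example.com &

        # On the EC2 instance: the "host" is now localhost, reached through the tunnel.
        git remote add my_clone ssh://osxuser@localhost:2222/Users/osxuser/code/myrepo
        git fetch my_clone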

    Read the article
