Search Results

Search found 4906 results on 197 pages for 'ssh tunnel'.

Page 86/197

  • specifying classpath for built-in ant tasks

    - by carneades
    I use the classpath attribute in custom Ant tasks to tell Ant where to find the external task jar, but how do I do the same for built-in tasks? In my case I'd like to make sure Ant uses my copy of jsch.jar for the scp task, and not one that may already be installed on the system. Is there any way I can call <scp> while guaranteeing it's using my jsch.jar?
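
    Built-in tasks don't take a classpath attribute, but Ant's -lib option puts jars ahead of whatever is in ANT_HOME/lib or ~/.ant/lib when optional tasks such as <scp> resolve their dependencies. A minimal sketch of the idea; the jar path and the "deploy" target are placeholders:

        # Prefer your own jsch.jar over any system-installed copy
        # when Ant loads the <scp> task's dependency.
        ant -lib /path/to/your/jsch.jar deploy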

    Read the article

  • How mature is apache sshd (MINA)?

    - by Yaneeve
    Has anyone ever used Apache SSHD (based on Apache MINA)? I would like to get some user input: is it mature? Does it have (annoying) bugs? How is the API? Can useful documentation/tutorials be found? Etc. Thanks, all, for your feedback.
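
    For what it's worth, the API is compact enough that a toy server fits in a few lines, which is one way to judge it for yourself. A hedged sketch against the 2.x line; package names have moved between releases, so treat the imports (and the hard-coded credentials) as assumptions:

        import java.nio.file.Paths;
        import org.apache.sshd.server.SshServer;
        import org.apache.sshd.server.keyprovider.SimpleGeneratorHostKeyProvider;

        public class ToySshd {
            public static void main(String[] args) throws Exception {
                SshServer sshd = SshServer.setUpDefaultServer();
                sshd.setPort(2222);
                // Generates a host key on first start and caches it in the file.
                sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("hostkey.ser")));
                // Toy authenticator: accepts one hard-coded user (illustration only).
                sshd.setPasswordAuthenticator((user, pass, session) ->
                        "demo".equals(user) && "secret".equals(pass));
                sshd.start();
                Thread.sleep(Long.MAX_VALUE); // keep the JVM alive while serving
            }
        }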

    Read the article

  • Can't get SFTP to work in PHP

    - by Chris Kloberdanz
    I am writing a simple SFTP client in PHP because we have the need to programmatically retrieve files from n remote servers. I am using the PECL SSH2 extension, and I have run up against a roadblock. The documentation on php.net suggests that you can do this:

        $stream = fopen("ssh2.sftp://$sftp/path/to/file", 'r');

    However, I have an ls method that attempts something similar:

        public function ls($dir)
        {
            $rd = "ssh2.sftp://{$this->sftp}/$dir";
            $handle = opendir($rd);
            if (!is_resource($handle)) {
                throw new SFTPException("Could not open directory.");
            }
            while (false !== ($file = readdir($handle))) {
                if (substr($file, 0, 1) != '.') {
                    print $file . "\n";
                }
            }
            closedir($handle);
        }

    I get the following error:

        PHP Warning: opendir(): Unable to open ssh2.sftp://Resource id #5/outgoing on remote host

    This makes perfect sense, because that's what happens when you cast a resource to string. Is the documentation wrong? I tried replacing the resource with the host, username, and password, and that didn't work either. I know the path is correct because I can run SFTP from the command line and it works fine. Has anyone else tried to use the SSH2 extension with SFTP? Am I missing something obvious here? UPDATE: I set up SFTP on another machine in-house and it works just fine, so there must be something about the server I am trying to connect to that isn't working.
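
    A workaround many PECL ssh2 users report is to embed the numeric resource id explicitly with intval(), so the stream wrapper sees ssh2.sftp://<id>/... instead of the stringified "Resource id #5". A minimal sketch, with the host, credentials, and directory as placeholders:

        <?php
        // Placeholder host and credentials for illustration.
        $conn = ssh2_connect('example.com', 22);
        ssh2_auth_password($conn, 'username', 'password');
        $sftp = ssh2_sftp($conn);

        // Cast the SFTP resource to its integer id instead of letting
        // PHP stringify it as "Resource id #N" inside the URL.
        $handle = opendir('ssh2.sftp://' . intval($sftp) . '/outgoing');
        while (false !== ($file = readdir($handle))) {
            if (substr($file, 0, 1) != '.') {
                print $file . "\n";
            }
        }
        closedir($handle);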

    Read the article

  • Java SFTP Transfer Library

    - by Claus Jørgensen
    Hi, I'm looking for a dead-simple Java library to use for SFTP file transfers. I don't need any other features beyond that. I've tried Zehon's, but it's incredibly naggy, and I think 8 jar files is a bit crazy for the little functionality I require. The library has to be free (as in free beer), and preferably open source (not a requirement). Thanks.
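
    JSch is the usual single-jar answer here (it's the same library Ant's scp task uses). A sketch of a password-authenticated download; the host, credentials, and paths are placeholders:

        import com.jcraft.jsch.ChannelSftp;
        import com.jcraft.jsch.JSch;
        import com.jcraft.jsch.Session;

        public class SftpGet {
            public static void main(String[] args) throws Exception {
                JSch jsch = new JSch();
                // Placeholder host and credentials for illustration.
                Session session = jsch.getSession("user", "example.com", 22);
                session.setPassword("secret");
                // Quick-test shortcut only; verify host keys properly in production.
                session.setConfig("StrictHostKeyChecking", "no");
                session.connect();

                ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
                sftp.connect();
                sftp.get("/remote/path/file.txt", "local-file.txt");
                sftp.disconnect();
                session.disconnect();
            }
        }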

    Read the article

  • Why do techs recommend YUM installs yet repositories and providers are ages behind?

    - by JM4
    I have been reading page after page after page about the benefits of using the YUM package installer and how NOBODY should build installs from source files (which again makes no sense to me), yet the repositories and source builders always package files in tarball format, leaving a TON of work (which usually ends up going wrong) to the individual instead of providing SRPMs for the end user. Has the world gone mad? I feel like I am taking crazy pills!

    Read the article

  • Setting up httpd.conf / mod_rewrite to redirect to codeigniter?

    - by Walker
    I'm sorry to ask this here, as I'm sure the solution is fairly easy, but for the life of me I can't set up httpd.conf on my Apache server to automatically load the code_igniter files. Instead I'm having to go into the folder itself, localhost/trunk/etc/etc, until I get index.php, which messes with some of the relative paths. (Our backend coder is gone for the week so I can't ask him, but he has already set up the rewrite rules on our development server.)
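
    For reference in the meantime, the stock CodeIgniter rule set is short: send every request that isn't a real file or directory to index.php. A sketch for an .htaccess at the CodeIgniter root (assuming AllowOverride permits it; in httpd.conf proper the same lines go in the matching <Directory> block without RewriteBase, and the /trunk/ base is a guess from the paths in the question):

        RewriteEngine On
        RewriteBase /trunk/
        # Anything that isn't an existing file or directory
        # goes to CodeIgniter's front controller.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php/$1 [L]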

    Read the article

  • Send file FTP over SSL with custom port number

    - by JM4
    I have asked the question before, but in a different manner. I am taking form data, compiling it into a temporary CSV file, and trying to send it to a client via FTP over SSL (this is the only route I am interested in hearing solutions for; unless there is a workaround, I cannot make changes). I have tried the following:

        ftp_connect     - nothing happens, the page just times out
        ftp_ssl_connect - nothing happens, the page just times out
        curl library    - same thing; given the URL, it also gives an error

    I am given the following information: FTPS server IP address, TCP port (1234), username, password, the data directory to dump the file into, and FTP mode: passive. Very, very basic code (which I believe should initiate a connection at minimum):

        Code:
        <?php
        $ftp_server = "00.000.00.000"; // masked for security
        $ftp_port = "1234";            // masked, but not 990
        $ftp_user_name = "username";
        $ftp_user_pass = "password";

        // set up basic ssl connection
        $conn_id = ftp_ssl_connect($ftp_server, $ftp_port, 20);

        // login with username and password
        $login_result = ftp_login($conn_id, $ftp_user_name, $ftp_user_pass);

        echo ftp_pwd($conn_id); // /
        echo "hello";

        // close the ssl connection
        ftp_close($conn_id);
        ?>

    When I run this over a SmartFTP client, everything works just fine. I just can't get it to work using PHP (which is a necessity). Has anybody had success doing this in the past? I would be very interested to hear your approach.
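
    One thing the snippet never does is switch the connection into passive mode, even though the server spec says "FTP mode: passive"; without ftp_pasv() the data channel often hangs exactly as described. A hedged sketch of the same flow with passive mode and return-value checks (host, credentials, and the /data path remain placeholders):

        <?php
        $conn_id = ftp_ssl_connect('00.000.00.000', 1234, 20);
        if ($conn_id === false) {
            die("Could not connect - the control port may be firewalled.");
        }
        if (!ftp_login($conn_id, 'username', 'password')) {
            die("Login failed.");
        }
        // The server requires passive mode: we open the data connection.
        ftp_pasv($conn_id, true);
        if (!ftp_put($conn_id, '/data/outgoing.csv', '/tmp/outgoing.csv', FTP_ASCII)) {
            die("Upload failed - the passive data ports may be blocked.");
        }
        ftp_close($conn_id);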

    Read the article

  • Linux server, locating files containing nothing but 4 specific lines.

    - by Denis
    Hi, I'm dealing with a compromised website, in which hackers injected an .htaccess instruction to redirect traffic. I can easily locate .htaccess files that contain the forwarding hack, BUT in cases where the directory already contained an .htaccess file, they appended the dangerous instructions, so I cannot just delete every .htaccess file: that could harm the site by leaving formerly password-protected directories wide open, deleting url-rewrite instructions (WordPress), etc. I could not find a way to locate files that contain only those 4 lines of the redirect hack; could you shed some light? So far I am using:

        find . -type f -exec grep -q targetpiratedomain {} \; -exec echo rm {} \;

    Thanks!
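
    A sketch that adds a line-count test to the existing grep, so only files that are at most the 4 injected lines get reported; targetpiratedomain stands in for the real hostname, and the echo keeps it a dry run until you are confident:

        #!/bin/sh
        # List .htaccess files that match the pirate domain AND contain no
        # more than 4 lines - i.e. files that are nothing but the hack.
        find . -type f -name .htaccess -exec sh -c '
            grep -q targetpiratedomain "$1" &&
            [ "$(wc -l < "$1")" -le 4 ] &&
            echo rm "$1"
        ' sh {} \;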

    Read the article

  • How to do a SOAP wsdl web services call from the command line

    - by Marina
    I need to make a SOAP web service call to https://sandbox.mediamind.com/Eyeblaster.MediaMind.API/V2/AuthenticationService.svc?wsdl and use the ClientLogin operation, passing in the parameters ApplicationKey, Password, and UserName. The response is a UserSecurityToken. They are all strings. Here is the link fully explaining what I am trying to do: https://sandbox.mediamind.com/Eyeblaster.MediaMind.API.Doc/?v=3 How can I do this on the command line? (Windows and/or Linux would be helpful.) Thanks!
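
    With curl on either platform, a SOAP 1.1 call is just an HTTP POST of an XML envelope plus a SOAPAction header. A sketch; the envelope element names, namespace, and SOAPAction value below are assumptions that must be checked against the WSDL. With a file login.xml holding the request envelope:

        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <ClientLogin xmlns="http://api.eyeblaster.com/">
              <ApplicationKey>APP_KEY</ApplicationKey>
              <UserName>USER</UserName>
              <Password>PASS</Password>
            </ClientLogin>
          </soap:Body>
        </soap:Envelope>

    post it to the service endpoint (the ?wsdl suffix is dropped):

        curl -H 'Content-Type: text/xml; charset=utf-8' \
             -H 'SOAPAction: "http://api.eyeblaster.com/ClientLogin"' \
             --data @login.xml \
             'https://sandbox.mediamind.com/Eyeblaster.MediaMind.API/V2/AuthenticationService.svc'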

    Read the article

  • I have a slight confusion with setting up Mercurial on my webserver...

    - by littlejim84
    I'm starting to use Mercurial on my web server (in this case MediaTemple's Grid). I've used SVN previously, though I'm not an expert in version control systems, and I just need a little help clearing up some confusion with getting it set up optimally. I have a 'data' folder which is outside the web server root and which the browser cannot access. It was recommended to me before to set up my Mercurial repositories here and then clone from here onto my local computer. I also have a 'domains' folder that is basically the web server root; inside it are my actual domains, from which my websites are actually served to the browser, and these need to be updated from the 'data' repositories too. With this in mind, after setting it up, it seems inefficient: I'm cloning to my local machine (that makes sense), adding, committing, pushing. That's fine. But then I'm updating in my 'data' repository folder and then updating again in my 'domains' folder to actually update my websites. Surely I don't actually need this 'data' folder for repositories? Wouldn't my actual live 'domains' folders be the main repositories themselves, so I'm cloning locally and updating from these? Please help me clear up some of this confusion (if you can).
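
    One common answer is to keep the out-of-webroot 'data' repositories as the central push targets (so repository internals never sit inside the served tree) and let a hook refresh the live working copy, which removes the manual double update. A sketch, with all paths invented for illustration:

        # /home/user/data/repos/example.com/.hg/hgrc  (the central repo)
        [hooks]
        # When new changesets arrive (e.g. pushed from the laptop),
        # pull them into the live checkout and update its working copy.
        changegroup = hg -R /home/user/domains/example.com pull -u

        # Workflow from the laptop then stays a single step:
        #   hg push ssh://user@server//home/user/data/repos/example.com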

    Read the article

  • Can connect to EC2 as ubuntu user but not as the user I created

    - by Sid
    I created a new EBS-backed EC2 instance and the necessary key pair. I am able to connect to the instance as the ubuntu user. Once I did that, I created another user and added it to the sudoers list, but I am unable to connect to the instance as the new user I created; I am using the same key to connect as the new user, and I get the following error: "Permission denied (publickey)". Can somebody help me? Am I missing something here?
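
    Key-based SSH logins are per-user: the public key has to be present in the new user's own ~/.ssh/authorized_keys, with ownership and permissions that sshd will accept. A sketch, run as the ubuntu user, with "sid" standing in for the new username:

        #!/bin/sh
        NEWUSER=sid   # hypothetical username
        sudo mkdir -p /home/$NEWUSER/.ssh
        # Reuse the key that already works for the ubuntu user.
        sudo cp ~/.ssh/authorized_keys /home/$NEWUSER/.ssh/
        sudo chown -R $NEWUSER:$NEWUSER /home/$NEWUSER/.ssh
        # sshd refuses keys whose files have loose permissions.
        sudo chmod 700 /home/$NEWUSER/.ssh
        sudo chmod 600 /home/$NEWUSER/.ssh/authorized_keys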

    Read the article

  • How to unload all the plugins from vim and change VIMRUNTIME?

    - by phocke
    Hello, my problem is this: I have an account on my hosting provider's server and I can't install my own copy of Vim, so the only personalization I can make is editing the .vimrc in my account, but that won't suffice. What I'd like to do is this: on startup, unload all the plugins and loaded scripts, and tell Vim to use another folder as its runtime. Any idea how to approach it?
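
    You can get most of the way there without touching the system install: start Vim with plugin loading disabled and your own startup file, then put a directory you control first on the runtime path. A sketch; ~/myvim is an assumed personal runtime tree laid out like $VIMRUNTIME:

        # Skip all plugins and use a personal startup file instead of ~/.vimrc:
        vim --noplugin -u ~/.myvimrc

        " Inside ~/.myvimrc: search your own tree before the system runtime.
        set runtimepath=~/myvim,$VIMRUNTIME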

    Read the article

  • What is common between environments within a shell terminal session?

    - by Matt1776
    I have a custom shell script that runs each time a user logs in or an identity is assumed; it's been placed in /etc/profile.d and performs some basic env variable operations. Recently I added some code so that if screen is running it will reattach without my needing to type anything. There are some problems, however. If I log in as root and su - to another user, the code runs a second time. Is there a variable I can set when the code runs the first time that will prevent a second run? I thought of writing something to disk, but then I don't want to prevent the code from running when I begin a new terminal session. Here is the code in question. It first attempts to reattach; if unsuccessful because the session is already attached (as it might be after an interrupted session), it will 'take' the session back.

        screen -r
        if [ -z "$STY" ]; then
            exec screen -dR
        fi

    Ultimately this bug prevents me from substituting user to another user, because as soon as I do so, it grabs the screen session and puts me right back where I started. Pretty frustrating.
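
    One way to keep the auto-attach from firing after su is to gate it on markers that only a fresh login has. A sketch of that guard; $SSH_TTY and $STY are the standard variables set by sshd and screen respectively, and the assumption is that logins of interest arrive over SSH:

        # Attach only on a genuine SSH login: sshd sets $SSH_TTY but
        # 'su -' clears it, and $STY is set when already inside screen.
        if [ -n "$SSH_TTY" ] && [ -z "$STY" ]; then
            exec screen -dR
        fi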

    Read the article

  • Execute a command using php under ssh2 in php

    - by Mervyn
    Using Mint terminal, my script connects using ssh2_connect and ssh2_auth_password. When I am logged in successfully, I want to run a command which will give me the hardware CPU. Is there a way to exec the command in my script and then show the results? I have used system and exec for pinging. If I were in the terminal, I would log in and then type "get hardware cpu"; in the terminal it would look like this: Test~ $ get hardware cpu
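
    The PECL extension has ssh2_exec() for exactly this: it runs a command over the established connection and hands back a stream to read the output from. A sketch continuing from a successful login; host and credentials are placeholders:

        <?php
        $conn = ssh2_connect('example.com', 22);
        ssh2_auth_password($conn, 'username', 'password');

        // Run the remote command and capture its output.
        $stream = ssh2_exec($conn, 'get hardware cpu');
        // Block until the remote command finishes writing.
        stream_set_blocking($stream, true);
        $output = stream_get_contents($stream);
        fclose($stream);

        echo $output;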

    Read the article

  • VIM Flashing Issue

    - by user1302110
    I'm SSHing in from my Mac (OS X 10.6.8) to a school server running CentOS 5, and when I attempt to use Vim it won't stop flashing inside the Mac terminal. Any ideas on how to fix this? Keep in mind I do not have the authority to modify any /etc or /bin files on the server, although I believe I can locally for my user. Also, I would love to see anyone's really cool .vimrc config file they want to share.
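
    The flashing is most likely the visual bell: Vim flashing the screen instead of beeping. Silencing it needs nothing more than user-level .vimrc access, which you have. A sketch:

        " Route the bell through the 'visual bell', then blank the terminal
        " code it uses, so Vim neither beeps nor flashes.
        set visualbell
        set t_vb=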

    Read the article

  • xcodebuild + iPhone fail under ssh with Couldn't load plug-in 'com.apple.Xcode.iPhoneSupport'

    - by mamcx
    I'm trying to compile my iPhone app from ssh, for my build tool that runs on another machine. The base SDK is iPhone Device 3.0. The error is "Couldn't load plug-in 'com.apple.Xcode.iPhoneSupport'". However, executing from the regular terminal runs OK, as does building directly from Xcode. This is the log:

        [trtrrtrtr@mac-pro-de-trtrr-trtr ~/mamcx/projects/JhonSell/iPhone]$ xcodebuild -target BestSeller -configuration Debug
        === BUILDING NATIVE TARGET Three20 OF PROJECT Three20 WITH CONFIGURATION Debug ===
        Checking Dependencies...
        No architectures to compile for (ONLY_ACTIVE_ARCH=YES, active arch=armv6, VALID_ARCHS=i386).
        2010-04-27 16:16:50.369 xcodebuild[1168:4b1b] Error loading /Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Plug-ins/iPhoneRemoteDevice.xcodeplugin/Contents/MacOS/iPhoneRemoteDevice: dlopen(/Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Plug-ins/iPhoneRemoteDevice.xcodeplugin/Contents/MacOS/iPhoneRemoteDevice, 265): no suitable image found. Did find: /Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Plug-ins/iPhoneRemoteDevice.xcodeplugin/Contents/MacOS/iPhoneRemoteDevice: GC capability mismatch
        2010-04-27 16:16:50.371 xcodebuild[1168:4b1b] Exception caught: Couldn't load plug-in 'com.apple.Xcode.iPhoneSupport'
        2010-04-27 16:16:50.373 xcodebuild[1168:4b1b] (the same dlopen "GC capability mismatch" error is repeated)
        2010-04-27 16:16:50.373 xcodebuild[1168:4b1b] Exception caught: Couldn't load plug-in 'com.apple.Xcode.iPhoneSupport'
        ** BUILD FAILED **

    Read the article

  • How to send packets via a pptp vpn tunnel?

    - by Phill
    I'm trying to send certain port traffic through my ppp0 interface, a PPTP VPN tunnel. I'm using a wireless USB interface: I connect to my access point, then I initiate the VPN. The connection comes up, but I do not route all traffic through it, nor do I want to. Say I want to route all port 80 packets through the VPN (device ppp0). I first run:

        iptables -t mangle -A OUTPUT -p tcp --dport 80 -j MARK --set-mark 0xa

    to mark the correct packets. I then add a routing table named vpn_table and run:

        ip route add default dev ppp0 table vpn_table

    When I do that, traffic begins to dribble through ppp0, but no pages load. I suppose I must have caused some sort of conflict, or the route I'm adding to vpn_table isn't quite right. I'm not sure; I think I'm marking the packets correctly, but I can't be sure of that either. UPDATE: I think I've got part of the issue solved: running tcpdump -i ppp0 showed me that there are indeed outgoing requests via ppp0, but there is never a response, and pages do not load using that interface. I'm still missing something.
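
    For the marked packets to consult vpn_table at all, there also has to be a policy-routing rule mapping the fwmark to the table; and once traffic does leave, source addressing and reverse-path filtering are the usual culprits for "packets go out, nothing comes back". A sketch of the likely missing pieces (0xa and vpn_table are taken from the question; the rest is a hedged guess at the setup):

        #!/bin/sh
        # Packets carrying fwmark 0xa consult vpn_table, whose
        # default route already points at ppp0.
        ip rule add fwmark 0xa table vpn_table
        ip route flush cache

        # Rewrite the source address to ppp0's, so replies come back
        # over the tunnel instead of the wireless interface.
        iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE

        # Reverse-path filtering can silently drop the returning packets.
        echo 0 > /proc/sys/net/ipv4/conf/ppp0/rp_filter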

    Read the article

  • github like workflow on private server over ssh

    - by Jesse
    I have a server (available via ssh) on the Internet that my friend and I use for working on projects together. We have started using git for source control. Our setup is currently as follows:

        1. Friend created a repository on the server with git init, named project.friend.git.
        2. I cloned project.friend.git on the server to project.jesse.git.
        3. I then cloned project.jesse.git to my local machine using git clone jesse@server:/git_repos/project.jesse.git.
        4. I work on my local machine and commit there. When I want to push my changes to project.jesse.git on the server, I use git push origin master.
        5. My friend works on project.friend.git. When I want to get his changes, I do git pull jesse@server:/git_repos/project.friend.git.

    Everything seems to be working fine; however, I am now getting the following warning when I do git push origin master:

        localpc:project.jesse jesse$ git push origin master
        Counting objects: 100, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (76/76), done.
        Writing objects: 100% (76/76), 15.98 KiB, done.
        Total 76 (delta 50), reused 0 (delta 0)
        warning: updating the current branch
        warning: Updating the currently checked out branch may cause confusion,
        warning: as the index and work tree do not reflect changes that are in HEAD.
        warning: As a result, you may see the changes you just pushed into it
        warning: reverted when you run 'git diff' over there, and you may want
        warning: to run 'git reset --hard' before starting to work to recover.
        warning:
        warning: You can set 'receive.denyCurrentBranch' configuration variable to
        warning: 'refuse' in the remote repository to forbid pushing into its
        warning: current branch.
        warning: To allow pushing into the current branch, you can set it to 'ignore';
        warning: but this is not recommended unless you arranged to update its work
        warning: tree to match what you pushed in some other way.
        warning:
        warning: To squelch this message, you can set it to 'warn'.
        warning:
        warning: Note that the default will change in a future version of git
        warning: to refuse updating the current branch unless you have the
        warning: configuration variable set to either 'ignore' or 'warn'.
        To jesse@server:/git_repos/project.jesse.git
           c455cb7..e9ec677  master -> master

    Is this warning anything I need to be worried about? Like I said, everything seems to be working; my friend is able to pull my changes in from my branch. I have the clone on the server so he can access it, since he does not have access to my local machine. Is there something that could be done better? Thanks!
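
    The warning does matter: pushing into a non-bare repository moves its branch pointer but not its files, so the server-side working tree silently falls out of sync with HEAD. The GitHub-like shape is to make the shared push targets bare repositories (no working tree at all). A sketch of converting one, with paths taken from the question and the .old backup step an added precaution:

        #!/bin/sh
        # On the server: replace the push target with a bare repository.
        cd /git_repos
        git clone --bare project.jesse.git project.jesse.git.bare
        mv project.jesse.git project.jesse.git.old   # keep until verified
        mv project.jesse.git.bare project.jesse.git

        # The laptop's remote URL is unchanged, and the warning goes away:
        #   git push origin master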

    Read the article

  • IPvsadm not equally balancing on wlc scheduler

    - by davidsmalley
    For some reason, ipvsadm does not seem to be equally balancing the connections between my real servers when using the wlc or lc schedulers. One real server gets absolutely hammered with requests while the others receive relatively few connections. My ldirectord.cf file looks like this:

        quiescent     = yes
        autoreload    = yes
        checktimeout  = 10
        checkinterval = 10

        # *.site.com http
        virtual = 111.111.111.111:http
            real = 10.10.10.1:http ipip 10
            real = 10.10.10.2:http ipip 10
            real = 10.10.10.3:http ipip 10
            real = 10.10.10.4:http ipip 10
            real = 10.10.10.5:http ipip 10
            scheduler = lc
            protocol = tcp
            service = http
            checktype = negotiate
            request = "/lb"
            receive = "Up and running"
            virtualhost = "site.com"
            fallback = 127.0.0.1:http

    The weird thing that I think may be causing the problem (but I'm really not sure) is that ipvsadm doesn't seem to be tracking active connections properly; they all appear as inactive connections:

        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port    Forward Weight ActiveConn InActConn
        TCP  111.111.111.111:http lc
          -> 10.10.10.1:http       Tunnel  10     0          10
          -> 10.10.10.2:http       Tunnel  10     0          18
          -> 10.10.10.3:http       Tunnel  10     0          3
          -> 10.10.10.4:http       Tunnel  10     0          10
          -> 10.10.10.5:http       Tunnel  10     0          5

    If I do ipvsadm -Lnc then I see lots of connections, but only ever in the ESTABLISHED and FIN_WAIT states. I was using ldirectord previously on a Gentoo-based load balancer and activeconn used to be accurate; since moving to Ubuntu 10.04 LTS something seems to be different.

        # ipvsadm -v
        ipvsadm v1.25 2008/5/15 (compiled with popt and IPVS v1.2.1)

    So, is ipvsadm not tracking active connections properly, thus making load balancing work incorrectly, and if so, how do I get it to work properly again? Edit: It gets weirder. If I cat /proc/net/ip_vs, it looks like the correct activeconns are there:

        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port    Forward Weight ActiveConn InActConn
        TCP  B86A9732:0050 rr
          -> 0AB42453:0050         Tunnel  10     1          24
          -> 0AB4321D:0050         Tunnel  10     0          23
          -> 0AB426B2:0050         Tunnel  10     2          25
          -> 0AB4244C:0050         Tunnel  10     2          22
          -> 0AB42024:0050         Tunnel  10     2          23

    Read the article
