Search Results

Search found 14544 results on 582 pages for 'ssh config'.

  • Open a file with su/sudo inside Emacs

    - by Chris Conway
    Suppose I want to open a file in an existing Emacs session using su or sudo, without dropping down to a shell and doing sudoedit or sudo emacs. One way to do this is (require 'tramp), then C-x C-f /sudo::/path/to/file, but this requires an expensive round trip through SSH. Is there a more direct way? [EDIT] @JBB is right. I want to be able to invoke su/sudo to save as well as open. It would be OK (but not ideal) to re-authorize when saving. What I'm looking for are variations of find-file and save-buffer that can be "piped" through su/sudo.
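
    For reference, a sketch of the relevant TRAMP paths (the multi-hop form needs a reasonably recent TRAMP, and /etc/hosts is just an illustrative target). The plain sudo method spawns a local sudo shell rather than an SSH connection:

        C-x C-f /sudo::/etc/hosts                  ; sudo on the local machine
        C-x C-f /ssh:user@host|sudo::/etc/hosts    ; sudo on a remote host via one SSH session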

  • Restart nginx without sudo?

    - by tesmar
    So I want to be able to cap:deploy without having to type any passwords. I have set up all the private keys so I can get to the remote servers fine, and am now using svn over ssh, so no passwords there. I have one last problem: I need to be able to restart nginx. Right now I have sudo /etc/init.d/nginx reload. That is a problem because it uses the Capistrano password, the one I just removed because I am using keys. Any ideas on how to restart nginx without a password?
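
    One common approach is a narrowly scoped sudoers rule, so that just this one command is passwordless; a sketch, assuming the deploy user is named deployer (edit with visudo):

        # /etc/sudoers.d/deploy
        deployer ALL=(root) NOPASSWD: /etc/init.d/nginx reload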

  • Apache security for a multi-user development web server

    - by mrmartinblue
    I've been searching and reading through documents all morning and understand that I need to use some combination of chown and probably 'jailing' to securely give programmers access to directories on my CentOS web server. Here's the situation: I have an Apache web server that has any number of virtual sites located in /var/www/site1, /var/www/site2, etc. I have different developers who need full access, both SSH and vsFTPd, to only the site they are working on. What is the best way to create and maintain security in this scenario? My thought would be to create a new user for each coder, jail that user to the website directory they are allowed to work in, add their user to a group, and set the webroot's owner to that group. Any thoughts? Good, bad, ugly? Thanks!
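
    A minimal sketch of the SSH side using OpenSSH's built-in chroot (the sitedevs group is an assumption; sshd insists the chroot directory itself be root-owned and not group-writable, and ForceCommand internal-sftp limits users to file transfer, so a full interactive shell would additionally need binaries copied into each jail):

        # /etc/ssh/sshd_config
        Match Group sitedevs
            ChrootDirectory /var/www/%u
            ForceCommand internal-sftp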

  • Composer does not find dependencies of vcs repository

    - by Michael Freund
    I've got a strange problem. project-a is my main project; project-b is my library, checked into Subversion. composer.json of project-b:

        {
            "name": "fragger/baseclasses",
            "version" : "0.0.1-dev",
            "description": "Baseclasses and Interfaces",
            "require": {
                "silex/silex": "1.0.x-dev",
                "3rd-party/smarty": "3.*",
                "swiftmailer/swiftmailer": "4.2-dev"
            },
            "autoload": {
                "psr-0": { "baseclasses": "src/" }
            }
        }

    and composer.json of project-a:

        {
            "repositories" : [
                { "type": "vcs", "url" : "svn+ssh://....." }
            ],
            "require": {
                "fragger/baseclasses": ">=0.0.1-dev"
            }
        }

    Output of the install command:

        php composer.phar install
        Loading composer repositories with package information
        Installing dependencies
        Your requirements could not be resolved to an installable set of packages.

          Problem 1
            - Installation request for fragger/baseclasses >=0.0.1-dev -> satisfiable by fragger/baseclasses dev-trunk.
            - fragger/baseclasses dev-trunk requires silex/silex 1.0.x-dev -> no matching package found.

    But a composer install in project-b alone works fine.
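
    Two Composer behaviors are worth knowing here (a sketch of the usual fix, applied to project-a's composer.json; whether they explain this exact failure is an assumption): repositories declared inside a dependency's composer.json are ignored, so any custom repository project-b relies on (e.g. for 3rd-party/smarty) must be repeated in the root file, and the root file's minimum-stability (default "stable") filters out dev packages such as silex/silex 1.0.x-dev:

        {
            "repositories" : [
                { "type": "vcs", "url" : "svn+ssh://....." }
            ],
            "minimum-stability": "dev",
            "require": {
                "fragger/baseclasses": ">=0.0.1-dev"
            }
        }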

  • Executing commands on a Unix box from ASP.NET

    - by StefanE
    I'm in the process of creating a few utilities for my team to make life a bit easier when working with our Unix boxes (most of them Solaris-based). For example, I'm creating an ASP.NET page to display the output of top. I also plan to be able to restart processes with the kill -15 command. Now I wonder: are there any nice modules out there that do the work for me, or am I better off just going ahead with my own SSH communication? It would of course make sense to build the app on the Unix box directly, but I'm not able to do this.

  • Running Ruby app on Apache

    - by TandemAdam
    I have been learning Ruby lately, and I want to upload a test web application to my server, but I can't figure out how to get it to run on my shared hosting.

    My hosting details:

        Shared hosting with JustHost (see here for a list of features)
        OS: Linux
        Apache: 2.2.11
        cPanel: 11.25.0-STABLE
        No SSH access
        Can install Ruby gems
        Can't install Apache modules
        Can "Manage Ruby on Rails Applications" through cPanel
        Mongrel gem is installed

    I built the following simple HelloWorld Ruby Rack app using Sinatra:

        #!/usr/bin/ruby
        require 'rubygems'
        require 'sinatra'

        get '/hi' do
          "Hello World!"
        end

    I just can't figure out how to "start" the application. Do I need to tell Mongrel (or maybe Apache) that the application exists somehow? How do I start this app running? I am happy to provide more info if needed.
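
    For context, a rackup file is how Rack servers are usually told what to run; a minimal sketch, assuming the Sinatra code above is saved as hello.rb in the app directory:

        # config.ru -- read by the Rack server (Mongrel, Passenger, ...)
        require './hello'
        run Sinatra::Application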

  • How to read log4j output into a web page?

    - by Ran
    I have a web page, used for admin purposes, which runs a task (image fetching from a remote site). In order to be able to debug the task using the browser only (no SSH, etc.), I'd like to be able to read all log output from the executing thread and spit it out to the web page. The task boils down to: changing the log level for the current thread at the beginning of the call and restoring it when the call is done, then reading all log output by the current thread and storing it in a string. So in pseudocode my execute() method would look like this (I'm using Struts2):

        public String execute() throws Exception {
            turnLoggingLevelToDebugOnlyForThisThread();
            // ... do stuff ...
            restoreLoggingLevelForThisThread();
            String logs = readAllLogsByThisThread();
        }

    Can this be done with log4j? I'm using Tomcat, Struts2, log4j and slf4j.

  • Django: Gracefully restart nginx + fastcgi sites to reflect code changes?

    - by Bartek
    Hi, common situation: I have a client on my server who may update some of the code in his Python project. He can ssh into his shell and pull from his repository, and all is fine, but the code is held in memory (as far as I know), so I need to actually kill the FastCGI process and restart it for the code change to take effect. I know I can gracefully restart fcgi, but I don't want to have to do this manually. I want my client to update the code and, within 5 minutes or whatever, have the new code running under the fcgi process. Thanks
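
    One hands-off route, as a sketch under stated assumptions (the client pulls with git, manage.py runfcgi is the launcher, and all paths are illustrative): a post-merge hook, so the restart rides along with every pull:

        #!/bin/sh
        # .git/hooks/post-merge (must be executable): bounce the FastCGI
        # process whenever a `git pull` brings in new code
        PIDFILE=/home/client/app/fcgi.pid
        kill "$(cat "$PIDFILE")" 2>/dev/null
        cd /home/client/app && python manage.py runfcgi method=prefork \
            socket=/tmp/app.sock pidfile="$PIDFILE"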

  • Installing XAMPP in Amazon EC2

    - by Woho87
    Hi! Can someone explain to me (not a rocket scientist) all the steps for installing XAMPP on an EC2 instance running Linux? And yes, I have looked all over the web and found nothing, except "Deploying a LAMP stack" and "Starting Amazon EC2 with Mac OS X". I found the latter one more useful but, as I said, I'm not a rocket scientist. I got stuck at the point in the latter link where I should edit the file .bash_profile with a text editor. I tried vi over SSH, but why can't I open it in the TextEdit editor included on all Macs? Is there an easier way of setting up a XAMPP server on a Linux server in EC2?
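
    For reference, a sketch of the classic XAMPP-for-Linux install, run over SSH on the instance itself (the version number and download URL are assumptions; .bash_profile lives on the remote machine, which is why a local Mac editor can't open it directly):

        wget http://downloads.sourceforge.net/xampp/xampp-linux-1.7.7.tar.gz
        sudo tar xvfz xampp-linux-1.7.7.tar.gz -C /opt
        sudo /opt/lampp/lampp start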

  • In Vim, what is the best way to select, delete, or comment out large portions of multi-screen text?

    - by Edward Tanguay
    Selecting a large amount of text that extends over many screens in an IDE like Eclipse is fairly easy, since you can use the mouse. But what is the best way to, e.g., select and delete multi-screen blocks of text, or write, e.g., three large methods out to another file and then delete them for testing purposes, in Vim when using it via PuTTY/SSH, where you cannot use the mouse? I can easily yank to the end of the line or yank to the end of the code block, but if the text extends over many screens, or has lots of blank lines in it, I feel like my hands are tied in Vim. Any solutions? And a related question: is there a way to somehow select 40 lines, and then comment them all out (with "#" or "//"), as is common in most IDEs?
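
    A sketch of the usual mark-based idiom, which sidesteps screens entirely (ma and mb set marks a and b at the two ends of the region; the file name is illustrative):

        :'a,'bd                  delete everything from mark a to mark b
        :'a,'bw >> methods.txt   append that range to another file
        :.,+39d                  delete the current line plus the next 39
        :'a,'bs/^/# /            comment the range out with a leading '#'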

  • Verizon Fivespot firewall exceptions

    - by Patrick
    I have a Verizon Fivespot Wi-Fi router and am having issues connecting to the computer that uses it to get on the internet. I am able to connect to the Fivespot admin pages remotely, and I am able to connect to the internet from the computer behind the Fivespot. I've tried asking this on Super User but have gotten nothing; I figure this is pertinent to programmers working on remote computers as well. There are two sections pertinent to this issue: Port Filtering and Port Forwarding. I've tried each individually, and both together, but cannot access anything through the router except for the admin page. I am trying to connect via SSH on port 22 to an Ubuntu 10.04 box over Wi-Fi. I have called Verizon tech support, but they were unhelpful; the person essentially read what it says on each screen without any elaboration. Any help is greatly appreciated!

  • Using Eclipse to develop for embedded Linux on a Windows host

    - by Travis
    I have a question about using Eclipse to develop for embedded Linux on a Windows host. Here is what I have and where I am:

        1. A Windows host with the latest Eclipse + CDT (C/C++ development tools) installed.
        2. An Ubuntu host (ssh + samba installed) that contains the sources and toolchain to build the project. (The Windows and Ubuntu hosts sit within one network segment, in the same LAN.)
        3. I can use the following commands to build this project under Ubuntu:

               # chroot dummyroot
               # cd /home/project/Build
               # sh Build `date +%Y%m%d%H%M%S`

        4. I am now trying to create an Eclipse C++ project that achieves the goal of step 3, but I have been stuck here for a while.

    Any ideas of how it can be done?
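
    One way to wire this up (a sketch, not Eclipse's own remote-build feature: it assumes an ssh client on the Windows side, passwordless keys, sudo rights for chroot, and an illustrative host name) is to point an Eclipse external builder at a script that runs step 3 over SSH:

        #!/bin/sh
        # build-remote.sh: run the chrooted build on the Ubuntu host
        ssh user@ubuntu-host "sudo chroot dummyroot /bin/sh -c \
            'cd /home/project/Build && sh Build \$(date +%Y%m%d%H%M%S)'"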

  • Is there a way to let NetBeans work with Amazon EC2 disk space?

    - by khelll
    Hello, I'm sick of using vim to develop on a remote Amazon EC2 machine. I'm wondering if there is any way to use NetBeans on my laptop to develop with, while running the code on that machine. Basically, I want a way to let NetBeans operate on external disk space that I connect to using SSH. In my case I'm using Mac OS X 10.6.3 locally, and the external disk space is located on an Amazon EC2 machine. Any ideas? Or solutions for the general case, where a developer needs to code on some external machine and use a good IDE? Cheers,
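
    One low-tech sketch is to mount the EC2 filesystem locally with sshfs (on 10.6 this means installing MacFUSE plus sshfs first; the host, key and paths below are assumptions), after which NetBeans can open the mount point like any local project:

        mkdir -p ~/ec2-src
        sshfs -o IdentityFile=~/.ssh/my-ec2-key.pem \
            ubuntu@ec2-host.amazonaws.com:/home/ubuntu/project ~/ec2-src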

  • Mono 2.10.5 Runtime error on Ubuntu 11.10

    - by johnluetke
    I've installed mono-runtime via apt in order to run my Mono console application on Ubuntu via SSH. However, when I run the command mono myapp.exe, it exits with no message, and my program does nothing. If I throw the -v switch to Mono, as in mono -v myapp.exe, I get about 10k lines of output (as expected, -v is verbose), with the first few lines being:

        converting method System.OutOfMemoryException:.ctor (string)
        Method System.OutOfMemoryException:.ctor (string) emitted at 0xb7052c28 to 0xb7052c4b (code length 35) [myapp.exe]
        converting method (wrapper runtime-invoke) <Module>:runtime_invoke_void__this___object (object,intptr,intptr,intptr)
        Method (wrapper runtime-invoke) <Module>:runtime_invoke_void__this___object (object,intptr,intptr,intptr) emitted at 0xb7052c68 to 0xb7052cf6 (code length 142) [myapp.exe]
        converting method System.SystemException:.ctor (string)

    I read this as the runtime throwing an OutOfMemory exception, but the machine is under no intense load, has plenty of available RAM, and is running nothing other than system processes. I've removed and reinstalled Mono countless times, and have even run the executable on other machines perfectly fine. Am I missing something completely obvious here?

  • APC not recommended for production?

    - by solomongaby
    I have started having problems with my VPS, in that it would fail to serve the pages on all the websites. It just showed a blank page, or offered to download the PHP file (luckily the code was not in the downloaded file :) ). The server was still running, but this seemed to be a problem with PHP, since I could log in to WHM. If I did an Apache restart, the sites would work again. After some talks with the server support, they told me this is a problem with the APC extension, which they considered to be old and not recommended for production servers. So they removed it for now, to see if the same kind of failures would continue to appear. I haven't read anywhere that APC could have such problems or that it's not always recommended; quite the contrary, everywhere people say to always use it. The APC extension was installed via SSH and is the latest version. Edit: They also don't recommend Memcache and say that a more reliable extension would be eAccelerator.

  • Branching and remote heads in Mercurial

    - by hekevintran
    I created a new branch using this command:

        hg branch new_branch

    After the first commit to the new branch, the default branch becomes inactive. If this is pushed, the central repository will have only one head, which belongs to the new branch. When my colleague pushes his commits on the default branch, he will get this error:

        pushing to ssh://...
        searching for changes
        abort: push creates new remote heads!
        (did you forget to merge? use push -f to force)

    Is there anything bad about forcing the push? Why are remote heads bad? How do you work remotely on separate branches and push to one repository?
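
    For reference, a sketch of the non-forcing route (the --new-branch flag needs Mercurial 1.6 or later): it tells hg that the extra remote head is an intentional named branch, whereas push -f waves through any new head, including the accidental unmerged kind this warning exists to catch:

        hg push --new-branch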

  • Error pushing app to Heroku

    - by Ryan Max
    Hello, I am getting the following error when I try to push my app to Heroku. I saw a similar thread on here, but the issues seemed related to OS X, and I am running Windows 7:

        $ git push heroku master
        Counting objects: 1652, done.
        Delta compression using up to 4 threads.
        fatal: object 91f5d3ee9e2edcd42e961ed2eb254d5181cbc734 inconsistent object length (476 vs 8985)
        error: pack-objects died with strange error
        error: failed to push some refs to '[email protected]:floating-stone-94.git'

    I'm not sure what this means. I can't find any consistent answers on the internet. I tried re-creating my SSH public key, but still the same.
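
    The "inconsistent object length" line points at a corrupt object in the local repository rather than at SSH or Heroku (a hedged reading, not a certain diagnosis); a sketch of the usual first check:

        # verify every object in the local store
        git fsck --full

    If fsck flags that same object, recovering it from another clone, or re-cloning and re-applying recent commits, is the common way out.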

  • What is the best way to secure a shared git repo for a small distributed team?

    - by ashy_32bit
    We have a Scala project, and we decided to use git. The problem is that we are a very small distributed team, and we want nobody outside the team to have even read-only access to our git server (which has a valid IP and is world-accessible at the IP level). I have heard that git-daemon has no authentication mechanism by itself and that you should somehow integrate it with ssh or something. What is the best (and easiest) way to make the git server respond only to authorized users? Or perhaps git-daemon is not for this task? I may add that I am looking for a simple and straightforward approach. I don't want to compete with github ;-)
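
    A sketch of the plain-SSH route, which skips git-daemon entirely (the user name and paths are illustrative; git-shell blocks interactive logins while still allowing fetch and push):

        # one dedicated user whose shell only accepts git commands
        sudo adduser --shell /usr/bin/git-shell git
        sudo -u git git init --bare /home/git/project.git
        # add each team member's public key to /home/git/.ssh/authorized_keys
        # then everyone clones with:
        #   git clone git@server:/home/git/project.git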

  • "Project description file" error in git?

    - by Paul Wicks
    I've a small project that I want to share with a few others on a machine that we all have access to. I created a bare copy of the local repo with:

        git clone --bare --no-hardlinks path/to/.git/ repoToShare.git

    I then moved repoToShare.git to the server. I can check it out with the following:

        git clone ssh://user@address/opt/gitroot/repoToShare.git/ test

    I can then see everything in the local repo and make commits against that. When I try to push changes back to the remote server, I get the following error:

        *** Project description file hasn't been set
        error: hooks/update exited with error code 1
        error: hook declined to update refs/heads/master

    Any ideas?
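
    That message comes from git's sample update hook, which refuses pushes until the repository's description file has been changed from its default; a sketch of the two usual fixes, with paths taken from the clone URL above:

        # give the bare repo a real description...
        echo "Shared project repo" > /opt/gitroot/repoToShare.git/description
        # ...or disable the sample hook entirely
        chmod -x /opt/gitroot/repoToShare.git/hooks/update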

  • Python subprocess.Popen hangs in 'for l in p.stdout' until p terminates, why?

    - by Albert
    I have this code:

        #!/usr/bin/python -u
        localport = 9876

        import sys, re, os
        from subprocess import *

        tun = Popen(["./newtunnel", "22", str(localport)], stdout=PIPE, stderr=STDOUT)
        print "** Started tunnel, waiting to be ready ..."
        for l in tun.stdout:
            sys.stdout.write(l)
            if re.search("Waiting for connection", l):
                print "** Ready for SSH !"
                break

    The ./newtunnel will not exit; it will constantly write more and more data to stdout. However, this code gives no output and just keeps waiting on tun.stdout. When I kill the newtunnel process externally, it flushes all the data to tun.stdout. So it seems that I can't get any data from tun.stdout while it is still running. Why is that? How can I get the information? Note that the default bufsize for Popen is 0 (unbuffered). I can also specify bufsize=0, but that doesn't change anything.
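
    A sketch of the commonly cited fix: in Python 2, iterating over a file object uses an internal read-ahead buffer that is independent of bufsize, so lines are only delivered once that buffer fills or the process exits. Building the loop on readline() avoids the read-ahead:

        # same loop, but each line is delivered as soon as it arrives
        for l in iter(tun.stdout.readline, ''):
            sys.stdout.write(l)
            if re.search("Waiting for connection", l):
                print "** Ready for SSH !"
                break

    (If newtunnel itself block-buffers its stdout when writing to a pipe, that is a separate, child-side issue that no reading strategy can undo.)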

  • SVN update returns nothing, while it should

    - by user325483
    Hi everyone, first some background information: I've set up my SVN repository on my local server at home using VisualSVN Server. Using SSH (or via a PHP/shell script), I am able to check out a folder from this repository to the webserver; all goes well. Updates and other svn commands also execute normally and return their messages. Now comes the problem, which I've been struggling with for a few days now. Before I execute the checkout command svn co http://server_home/folder, I want to make sure no conflicts are going to happen, so I execute svn status [folder_on_webserver]. But this doesn't return the result as expected; it returns nothing. When I execute svn status --show-updates [folder_on_webserver], it returns the following:

            *        newfolder
            *   13   anotherfolder
            *   13   yetanotherfolder
            *   13   .
        Status against revision:  16

    As you can see, it misses the svn codes (A, U, D). Does somebody know why the svn update command and the svn codes don't work?

  • Remote desktop connection to Raspberry Pi without specifying a port

    - by Max Methot
    I have a Raspberry Pi running Raspbian Wheezy, connected at "Site A", where the network is managed by a third-party company and where all ports are closed to the Internet (for security reasons). So there is no way for me to do any port forwarding to VNC, nor SSH, nor anything else. That means I just can't access it in any way other than locally, on-site. However, I need to connect to that device's X desktop session (graphical interface) to do some maintenance, and I am located in, let's say, "Site B", which is nearly 300 miles away from Site A. I know you can do such tasks on Windows or x86 Linux computers with TeamViewer (we use it for our other hardware in the same location and it works like a charm), but since the Raspberry Pi is based on an ARM architecture, it isn't supported by TeamViewer yet. If anyone has ever achieved this, I would be glad to hear how to do it! Thanks!
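
    A sketch of the usual workaround, assuming outbound connections from Site A are allowed and you control some reachable middleman server (names and ports are illustrative): the Pi dials out and carries its VNC port with it:

        # on the Pi (Site A): keep a reverse tunnel up to the middleman
        ssh -N -R 5901:localhost:5901 tunneluser@middleman.example.com
        # from Site B: SSH into the middleman (the forwarded port binds to its
        # loopback unless sshd's GatewayPorts is on), then:
        vncviewer localhost:1   # display 1 = TCP port 5901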

  • Delayed Jobs is not finding records and is failing

    - by Trip
    In my app, Delayed Jobs isn't running automatically on my server anymore. It used to. When I manually ssh in and run rake jobs:work, it returns this:

        * Starting job worker host:ip-(censored) pid:21458
        * [Worker(host:ip-(censored) pid:21458)] acquired lock on PhotoJob
        * [JOB] host:ip-(censored) pid:21458 failed with ActiveRecord::RecordNotFound: Couldn't find Photo with ID=9237 - 4 failed attempts

    This repeats roughly 20 times over, for what I think is several jobs. Then I get a few of these:

        [Worker(host:ip-(censored) pid:21458)] failed to acquire exclusive lock for PhotoJob

    And then finally one of these:

        12 jobs processed at 73.6807 j/s, 12 failed ...

    Any ideas what I should be mulling over? Thanks so much!

  • Sharing a fabfile across multiple projects

    - by Matthew Rankin
    Fabric has become my deployment tool of choice, both for deploying Django projects and for initially configuring Ubuntu slices. However, my current workflow with Fabric isn't very DRY, as I find myself:

        - copying the fabfile.py from one Django project to another, and
        - modifying the fabfile.py as needed for each project (e.g., changing the webserver_restart task from Apache to Nginx, configuring the host and SSH port, etc.).

    One advantage of this workflow is that the fabfile.py becomes part of my Git repository, so between the fabfile.py and the pip requirements.txt, I have a recreatable virtualenv and deployment process. I want to keep this advantage while becoming more DRY. It seems that I could improve my workflow by:

        - being able to pip install the common tasks defined in the fabfile.py (see the sketch below), and
        - having a fab_config file containing the host configuration information for each project, overriding any tasks as needed.

    Any recommendations on how to increase the DRYness of my Fabric workflow?
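
    A sketch of the pip-installable direction; fabcommon is a hypothetical shared package holding the common tasks, and each project's fabfile.py stays in its repo carrying only the per-project configuration and overrides:

        # fabfile.py (per project); "fabcommon" is a hypothetical
        # pip-installed package exporting the shared tasks
        from fabric.api import env
        from fabcommon import deploy, webserver_restart  # hypothetical

        env.hosts = ['deploy@myproject.example.com:2222']  # host + SSH port
        env.webserver = 'nginx'  # read by webserver_restart (assumed convention)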

  • How can I sqldump a huge database?

    - by meder
    SELECT count(*) from table gives me 3296869 rows. The table only contains 4 columns, storing dropped domains. I tried to dump the SQL through:

        $backupFile = $dbname . date("Y-m-d-H-i-s") . '.gz';
        $command = "mysqldump --opt -h $dbhost -u $dbuser -p $dbpass $dbname | gzip > $backupFile";

    However, this just dumps an empty 20 KB gzipped file. My client is using shared hosting, so the server specs and resource usage aren't top of the line. I'm not even given SSH access, or access directly to the database, so I have to make queries through PHP scripts I upload via FTP (SFTP isn't an option, again). Is there some way I can perhaps sequentially download portions of it, or pass an argument to mysqldump that will optimize it? I came across http://jeremy.zawodny.com/blog/archives/000690.html, which mentions the -q flag, and tried that, but it didn't seem to do anything differently.
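
    One thing worth checking before optimizing (a hedged guess, not a verified diagnosis): with a space between -p and the password, MySQL tools prompt for the password interactively and parse the next token as a database name, which from a non-interactive PHP call produces exactly this kind of silent, near-empty dump. A sketch with the ambiguity removed and errors captured:

        $command = "mysqldump --opt --quick -h $dbhost -u $dbuser --password=$dbpass "
                 . "$dbname 2>/tmp/dump.err | gzip > $backupFile";

    (--quick, the long form of -q, makes mysqldump stream rows instead of buffering each table in memory; --opt actually implies it already.)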
