Search Results

Search found 14544 results on 582 pages for 'ssh config'.


  • Database Continuous Integration 101

    We talk a lot about continuous integration here on the Atlassian Dev Tools blog, and many readers are bona fide CI gurus. Now that you are integrating your application code, test code, config files and deploy scripts, are you ready to take it to the next level? An increasing number of engineering shops are starting to bring the continuous integration discipline into their database development.

    Read the article

  • I can't autostart xfce4 power manager in Lubuntu 13.10

    - by user203766
    I just upgraded my 64-bit Lubuntu to 13.10 on my netbook today. After the upgrade, I simply can't autostart the xfce4 power manager. I tried adding it from the desktop session settings, and I tried copying the power manager's .desktop file into the ~/.config/autostart folder. Everything looks fine; then I log out, log back in, and the darn power manager just won't start automatically. It only starts when I double-click the icon or start it from the terminal. Help me please.
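
    For reference, a minimal autostart entry would look something like the sketch below (the file name and Exec line are assumptions; point Exec at wherever xfce4-power-manager actually lives on your system):

      # ~/.config/autostart/xfce4-power-manager.desktop -- minimal sketch
      [Desktop Entry]
      Type=Application
      Name=Xfce Power Manager
      Exec=xfce4-power-manager
      X-GNOME-Autostart-enabled=true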

    Read the article

  • Ad-hoc reporting similar to Microstrategy/Pentaho - is OLAP really the only choice (is OLAP even sufficient)?

    - by TheBeefMightBeTough
    So I'm getting ready to develop an API in Java that will provide all dimensions, metrics, hierarchies, etc. to a user such that they can pick and choose what they want (say, e.g., the dimensions Location (a store) and Weekly, and the metric Product Sales $), provide their choices to the API, and have it spit out an object that contains the answer to their question (the object would probably be a set of cells). I don't even believe there will be much drill up/down. The data warehouse the API will interface with is in a standard form (FACT tables, dimensions, star schema format).

    My question is: is an OLAP framework such as Mondrian the only way to achieve something akin to ad-hoc reporting? I can envisage a really large Cube (or VirtualCube) that contains most of the dimensions and metrics the user could ever want, which would give the illusion of ad-hoc reporting. The problem is that there is a ton of setup to do (so much XML) to get the framework to work with the data. Further, it requires specific knowledge, such as MDX, and even more so learning the framework's peculiarities (the Mondrian API). Finally, I am not positive it will scale much better than simply making queries against a SQL database. OLAP feels like very old technology to me. Is performance really an issue anymore?

    The alternative I can think of would be dynamic SQL. If the existing tables in the data warehouse conform to a naming scheme (FACT_, DIM_, etc.), or if a very simple config file or database table existed that stored which tables are fact tables, which are dimensions, and what metrics are available, then couldn't the API read from that and assemble the appropriate SQL query? Would this necessarily be harder than learning MDX, Mondrian (or another OLAP framework), and creating all the cubes?

    In general, I feel that OLAP is at the same time too powerful (supports drill up/down, complex functions) and outdated, and I am reluctant to base my architecture on it. However, I am unsure whether the alternatives, such as rolling my own ad-hoc reporting framework using dynamic SQL, would remove any complexity while still fulfilling requirements, both functional and non-functional (e.g., scalability; some FACT tables have many millions of rows). I also wonder about other techniques (e.g., Hive).

    Has anyone here tried to do ad-hoc reporting? Any advice? I expect this project to take a pretty long time (3 months minimum, but probably longer), so I just do not want to commit to an architecture without being absolutely sure of its pros and cons. Thanks so much.

    Read the article

  • WebService DIME Bridge

    The DIME Bridge transfers a web service response (any serializable object) in binary format across the Internet. It's a fully transparent, loosely coupled solution between the web service and its consumer - just inject the bridge in their config files.

    Read the article

  • Running CopySourceAsHtml Add-in under Visual Studio 2010

    - by Marko Apfel
    Until now CopySourceAsHtml only supports Visual Studio 2008 out of the box. But it is no problem to pimp up the config file to support Visual Studio 2010:

      1. Copy all three files to "%userprofile%\Documents\Visual Studio 2010\Addins".
      2. Open CopySourceAsHtml.AddIn in a text editor and change both lines with <Version>9.0</Version> to <Version>10.0</Version>.
      3. Run Visual Studio 2010 - CopySourceAsHtml works fine.

    Read the article

  • Set Idle Time in Ubuntu 12.04 Server

    - by ssanj
    I recently installed Ubuntu 12.04 Server and am looking for a way to get the server to suspend after an idle period. When using the desktop version I could use the GNOME power-saving tool to specify the idle time. As I have no GUI on the server, is there a way to set the server idle time via the command line or a config file? I will send the server a wake-on-LAN packet to wake it up if it is suspended and I need to use it.
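
    For illustration, one command-line approach is a cron job that suspends the box when nothing looks busy - a minimal sketch, assuming pm-utils is installed and that "no login sessions and near-zero load" is an acceptable definition of idle:

      #!/bin/sh
      # idle-suspend.sh -- run from root's crontab, e.g.:
      #   */15 * * * * /usr/local/bin/idle-suspend.sh
      if [ -z "$(who)" ]; then
          # first load-average field, truncated to an integer
          load=$(cut -d. -f1 /proc/loadavg)
          [ "$load" -eq 0 ] && pm-suspend
      fi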

    Read the article

  • xbindkeys slow on Ubuntu 13.10

    - by 3l4ng
    I am using Ubuntu 13.10 64-bit on an Intel i5 with 4GB of RAM. I used xbindkeys for custom keyboard shortcuts in Ubuntu 13.04 because it was easy to configure with the GUI xbindkeys-config. Now I have set up the same on Ubuntu 13.10, and even a simple operation like opening a file with gedit seems to run slow. Reinstalling xbindkeys does not seem to solve the problem. Does anyone have any ideas about what could be done, or any alternatives that are easy to configure?

    Read the article

  • Running Framework 4.0 with PowerShell

    - by Mike Koerner
    I had problems running scripts with Framework 4.0 assemblies I created. The error I was getting was:

      Add-Type : Could not load file or assembly 'file:///C:\myDLL.dll' or one of
      its dependencies. This assembly is built by a runtime newer than the
      currently loaded runtime and cannot be loaded.

    I had to add the supported framework to the powershell.exe.config file:

      <supportedRuntime version="v4.0.30319"/>

    I still had a problem running the assembly, so I had to recompile and set "Generate serialization assembly" to off.
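
    For context, the relevant piece of powershell.exe.config (for PowerShell 2.0, typically under C:\Windows\System32\WindowsPowerShell\v1.0\) would look roughly like this - a sketch, so keep whatever else your file already contains:

      <?xml version="1.0"?>
      <configuration>
        <startup useLegacyV2RuntimeActivationPolicy="true">
          <!-- list v4 first so Add-Type can load Framework 4.0 assemblies -->
          <supportedRuntime version="v4.0.30319"/>
          <supportedRuntime version="v2.0.50727"/>
        </startup>
      </configuration>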

    Read the article

  • OS X, Mercurial and MediaTemple problem

    - by bschaeffer
    I've installed Mercurial per MT's knowledge base file here. Working with it server-side using ssh from my Mac works fine. I can initialize repositories and the like, but pulling from the server or pushing from my Mac produces an error I don't understand. Here's what I get when I call hg push from my local installation (hash marks represent my server number):

      remote: Traceback (most recent call last):
      remote:   File "/home/#####/users/.home/data/mercurial-1.5/hg", line 27, in ?
      remote:     mercurial.dispatch.run()
      remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/dispatch.py", line 16, in run
      remote:     sys.exit(dispatch(sys.argv[1:]))
      remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/dispatch.py", line 21, in dispatch
      remote:     u = _ui.ui()
      remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/ui.py", line 38, in __init__
      remote:     for f in util.rcpath():
      remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/util.py", line 1200, in rcpath
      remote:     _rcpath = os_rcpath()
      remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/util.py", line 1174, in os_rcpath
      remote:     path = system_rcpath()
      remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/posix.py", line 41, in system_rcpath
      remote:     path.extend(rcfiles(os.path.dirname(sys.argv[0]) +
      remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/posix.py", line 30, in rcfiles
      remote:     rcs.extend([os.path.join(rcdir, f)
      remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/demandimport.py", line 75, in __getattribute__
      remote:     self._load()
      remote:   File "/nfs/c05/h01/mnt/#####/data/mercurial-1.5/mercurial/demandimport.py", line 47, in _load
      remote:     mod = _origimport(head, globals, locals)
      remote: ImportError: No module named osutil
      abort: no suitable response from remote hg!

    Mercurial on my Mac is configured as follows:

      [ui]
      username = John Smith
      editor = te -w
      remotecmd = ~/data/mercurial-1.5/hg

    My local single repo is configured as follows (hash marks represent my server number):

      [paths]
      default = ssh://mysite.com@s#####.gridserver.com/domains/mysite.com/html

    Mercurial on the server is configured with just a username:

      [ui]
      username = John Smith

    The server .bash_profile is configured as follows (per the installation guide):

      # Aliases
      alias ls-a='ls -a -l'

      # Added this as suggested by the MediaTemple guide
      export PYTHONPATH=${HOME}/lib/python:$PYTHONPATH
      export PATH=${HOME}/bin:$PATH

    I understand this probably isn't a MediaTemple problem, but more likely an installation problem. I would really appreciate any assistance with this problem. Thanks in advance!

    Read the article

  • git push merge error, but git pull is already up-to-date. Tried reclone, same problem.

    - by Jasie
    I do:

      git commit .
      git push
      error: Entry 'file.php' not uptodate. Cannot merge.

    Then I do:

      git pull
      Already up-to-date.

    What do I do? I just want to get the latest version from the remote copy and overwrite anything on my local copy.

    Edit: I tried everything. I deleted my local repo and ran:

      git clone ssh://[email protected]/directory
      ...
      Checking out files: 100%, done.

      git status
      On branch master
      nothing to commit (working directory clean)

    All looks good, right? Pull just in case:

      git pull
      Already up-to-date.

    I make a one-line change in a file to see if I can push it:

      git commit .
      [master 1e18af1] Rando change
      1 files changed, 2 insertions(+), 0 deletions(-)

      git push
      Counting objects: 13, done.
      Delta compression using up to 2 threads.
      Compressing objects: 100% (6/6), done.
      Writing objects: 100% (7/7), 646 bytes, done.
      Total 7 (delta 3), reused 0 (delta 0)
      From /directory
      d6d61aa..1e18af1 master -> origin/master
      error: Entry 'someotherfile.php' not uptodate. Cannot merge.
      Updating b8f9a54..1e18af1
      To ssh://[email protected]/directory
      d6d61aa..1e18af1 master - master

    I have no idea what's going on! How can I commit/pull again normally? Thanks very much!
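
    For the stated goal - take the remote version and overwrite anything local - the usual sequence is the sketch below (note it discards local edits; the branch name master is taken from the transcript):

      # make the local checkout exactly match the remote
      git fetch origin
      git reset --hard origin/master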

    Read the article

  • git push error '[remote rejected] master -> master (branch is currently checked out)'

    - by hap497
    Hi, yesterday I posted a question about how to clone a git repository from one of my machines to another: http://stackoverflow.com/questions/2808177/how-can-i-git-clone-from-another-machine/2809612#2809612

    I am able to successfully clone a git repository from my src (192.168.1.2) to my dest (192.168.1.1). But when I edit a file, do a 'git commit -a -m "test"', and then do a git push, I get this error on my dest (192.168.1.1):

      git push
      [email protected]'s password:
      Counting objects: 21, done.
      Compressing objects: 100% (11/11), done.
      Writing objects: 100% (11/11), 1010 bytes, done.
      Total 11 (delta 9), reused 0 (delta 0)
      error: refusing to update checked out branch: refs/heads/master
      error: By default, updating the current branch in a non-bare repository
      error: is denied, because it will make the index and work tree inconsistent
      error: with what you pushed, and will require 'git reset --hard' to match
      error: the work tree to HEAD.
      error:
      error: You can set 'receive.denyCurrentBranch' configuration variable to
      error: 'ignore' or 'warn' in the remote repository to allow pushing into
      error: its current branch; however, this is not recommended unless you
      error: arranged to update its work tree to match what you pushed in some
      error: other way.
      error:
      error: To squelch this message and still keep the default behaviour, set
      error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
      To git+ssh://[email protected]/media/LINUXDATA/working
       ! [remote rejected] master -> master (branch is currently checked out)
      error: failed to push some refs to 'git+ssh://[email protected]/media/LINUXDATA/working'

    I have two versions of git; will that cause this problem? I have git 1.7 on 192.168.1.2 (src) but git 1.5 on 192.168.1.1 (dest). I'd appreciate it if someone could help me with this. Thank you.
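
    The error text itself points at the standard remedy: push into a bare repository rather than one with a checked-out work tree. A minimal sketch, with paths borrowed from the transcript:

      # on the destination (192.168.1.1): create a bare repo beside the working one
      git clone --bare /media/LINUXDATA/working /media/LINUXDATA/working.git

      # on the source (192.168.1.2): point origin at the bare repo and push
      git remote set-url origin git+ssh://[email protected]/media/LINUXDATA/working.git
      git push origin master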

    Read the article

  • How would the conversion of a custom CMS using a text-file-based database to Drupal be tackled?

    - by James Morris
    Just today I've started using Drupal for a site I'm designing/developing. For my own site http://jwm-art.net I wrote a user-unfriendly CMS in PHP. My brief experience with Drupal is making me want to convert from the CMS I wrote - a CMS whose sole method (other than comments) of publishing content is logging in via SSH and using nano to create a plain text file in a format like so*:

      head<<END_HEAD
      title = Audio
      keywords= open,source,audio,sequencing,sampling,synthesis
      descr = Music, noise, and audio, created by James W. Morris.
      parent = home
      END_HEAD
      main<<END_MAIN
      text<<END_TEXT
      Digital music, noise, and audio made exclusively with
      @=xlink=http://www.linux-sound.org@:Linux Audio Software@_=@.
      END_TEXT
      image=gfb@--@;Accompanying image for penonpaper-c@right
      ilink=audio_2008
      br=
      ilink=audio_2007
      br=
      ilink=audio_2006
      END_MAIN
      info=text<<END_TEXT
      I've been making PC based music since the early nineties -
      fortunately most of it only exists as tape recordings.
      END_TEXT

    ( http://jwm-art.net/dark.php?p=audio - there's just over 400 pages on there. )

    *The journal-entry form, which takes some of the work out of it, has mysteriously broken. And it still required SSH access to copy the file to the main data dir and to check I had actually remembered the format correctly and the code hadn't mis-formatted anything (which it always does).

    I don't want to drop all the old content (just some), but how much work would be involved in converting it, factoring in that I've been using Drupal for a day, have not written any PHP for a couple of years, and have zero knowledge of SQL? How might a team of developers tackle this? How doable is it for one guy in his spare time?

    Read the article

  • git pull not working

    - by dorelal
    I am not using GitHub. We have git set up on our own machine. I created a branch from master called experiment. However, when I try to do git pull I get the following message:

      > git pull
      You asked me to pull without telling me which branch you
      want to merge with, and 'branch.experiment.merge' in
      your configuration file does not tell me either. Please
      specify which branch you want to merge on the command line and
      try again (e.g. 'git pull <repository> <refspec>').
      See git-pull(1) for details.

    Here is the result of git remote show origin:

      > git remote show origin
      * remote origin
        Fetch URL: ssh://git.domain.com/var/git/app.git
        Push URL: ssh://git.domain.com/var/git/app.git
        HEAD branch: master
        Remote branches:
          experiment tracked
          master tracked
        Local branches configured for 'git pull':
          master merges with remote master
        Local refs configured for 'git push':
          experiment pushes to experiment (local out of date)
          master pushes to master (up to date)

    As I read the message above, experiment is mapped to origin/experiment, and my local repository knows that it is out of date. Then why am I not able to do git pull?
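
    The message boils down to the experiment branch having no merge configuration of its own (note the output only shows one for master). A minimal sketch of adding it by hand, with branch and remote names taken from the question:

      # tell git what 'git pull' should merge while on experiment
      git config branch.experiment.remote origin
      git config branch.experiment.merge refs/heads/experiment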

    Read the article

  • How many files in a directory is too many?

    - by Kip
    Does it matter how many files I keep in a single directory? If so, how many files in a directory is too many, and what are the impacts of having too many files? (This is on a Linux server.) Background: I have a photo album website, and every image uploaded is renamed to an 8-hex-digit id (say, a58f375c.jpg). This is to avoid filename conflicts (if lots of "IMG0001.JPG" files are uploaded, for example). The original filename and any useful metadata is stored in a database. Right now, I have somewhere around 1500 files in the images directory. This makes listing the files in the directory (through an FTP or SSH client) take a few seconds. But I can't see that it has any effect other than that. In particular, there doesn't seem to be any impact on how quickly an image file is served to the user. I've thought about reducing the number of images by making 16 subdirectories: 0-9 and a-f. Then I'd move the images into the subdirectories based on what the first hex digit of the filename was. But I'm not sure that there's any reason to do so except for the occasional listing of the directory through FTP/SSH.
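
    For what it's worth, the 16-subdirectory split described above is a few lines of shell - a sketch, assuming every file really does follow the 8-hex-digit naming and the commands are run inside the images directory:

      # bucket each image by the first hex digit of its name
      for d in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
          mkdir -p "$d"
          mv "$d"*.jpg "$d"/ 2>/dev/null
      done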

    Read the article

  • Running bash scripts in PHP

    - by HDawg
    I have two computers. On the first computer I have Apache running with all my web code. On the second computer I have large amounts of data stored with a retrieval script (the script usually takes hours to run). I am essentially creating a web UI to access this data without any time delay.

    So I call:

      exec("bash initial.bash");

    This is a driver script that is in my Apache folder; it calls the script on the other computer by running:

      ssh otherMachine temp.bash &

    which invokes the data retrieval script on the second computer. If I call initial.bash in the terminal, everything works smoothly and successfully, but if I call it from my PHP file, all my commands in initial.bash run with the exception of "ssh otherMachine temp.bash &". I put the & at the end so that temp.bash will run in the background, since it does take a few hours to complete. I am not sure why the nested script is not running when invoked by Apache.

    Is there a better alternative than using exec or shell_exec to call a script which ultimately calls another script? The reason I don't call a script on the second machine directly is the time the program takes to run: shell_exec does not render the PHP page until the script is complete.
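
    A common gotcha in this setup: even when backgrounded with &, the ssh process still holds Apache's stdout/stderr open, so PHP waits for it anyway. A minimal sketch of fully detaching the remote job inside initial.bash (script and host names as in the question):

      #!/bin/bash
      # initial.bash -- detach the remote job so PHP's exec() returns immediately
      nohup ssh otherMachine temp.bash > /dev/null 2>&1 &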

    Read the article

  • Make a local-only daemon listen on a different interface (using iptables port forwarding)?

    - by UniIsland
    I have a daemon program which listens on 127.0.0.1:8000. I need to access it when I connect to my box via VPN, so I want it to listen on the ppp0 interface too.

    I've tried the "ssh -L" method. It works, but I don't think it's the right way to do it, with an extra ssh process running in the background. I tried the "netcat" method, but it exits when the connection is closed, so it's not a valid way of "listening". I also tried several iptables rules, and none of them worked. I'm not listing all the rules I've used here, but for example this ruleset doesn't work:

      iptables -A FORWARD -j ACCEPT
      iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 8000 -j DNAT --to-destination 127.0.0.1:8000

    I have net.ipv4.ip_forward set to 1. Does anyone know how to redirect traffic from a ppp interface to lo? Say, listen on "192.168.45.1:8000 (ppp0)" as well as "127.0.0.1:8000 (lo)". There's no need to alter the port. Thanks
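
    One kernel detail worth knowing here: packets DNATed to 127.0.0.1 from another interface are dropped as martians unless route_localnet is switched on (available from kernel 3.6 onward) - a sketch, assuming ppp0 is the VPN-facing interface:

      # allow routing to 127.0.0.1 for packets arriving on ppp0
      sysctl -w net.ipv4.conf.ppp0.route_localnet=1
      # then DNAT VPN traffic to the loopback-bound daemon
      iptables -t nat -A PREROUTING -i ppp0 -p tcp --dport 8000 -j DNAT --to-destination 127.0.0.1:8000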

    Read the article

  • forkpty - socket

    - by Alexxx
    Hi, I'm trying to develop a simple telnet/server daemon which has to run a program on a new socket connection. That part is working fine. But I have to associate my new process with a pty, because the process has some terminal capabilities (like a readline). The code I've developed is (where socketfd is the new socket file descriptor for the incoming connection):

      int masterfd, pid;
      const char *prgName = "...";
      char *arguments[10] = ....;

      if ((pid = forkpty(&masterfd, NULL, NULL, NULL)) < 0)
          perror("FORK");
      else if (pid)
          return pid;
      else {
          /* child: replace the pty stdio set up by forkpty with the socket */
          close(STDOUT_FILENO);
          dup2(socketfd, STDOUT_FILENO);
          close(STDIN_FILENO);
          dup2(socketfd, STDIN_FILENO);
          close(STDERR_FILENO);
          dup2(socketfd, STDERR_FILENO);
          if (execvp(prgName, arguments) < 0) {
              perror("execvp");
              exit(2);
          }
      }

    With that code, the stdin/stdout/stderr file descriptors of my prgName are associated with the socket (when looking with ls -la /proc/PID/fd), and so the terminal capabilities of this process don't work. A test with a connection via ssh/sshd on the remote device, executing prgName "locally" (under the ssh connection), shows that the stdin/stdout/stderr fds of the process are associated with a pty (and so its terminal capabilities work fine).

    What am I doing wrong? How do I associate my socketfd with the pty (created by forkpty)?

    Thanks, Alex

    Read the article

  • NIS password mapping question

    - by papoyan
    I have a NIS server with user "techsupport", which has uid/gid = 517. I've configured NIS and NFS on that server, as well as the NFS/NIS client on the remote web server. Now I need the techsupport user to be able to log in to the web server using the techsupport username, but HAVE root privileges. I need this so I can easily track which support agent is doing what on the web server.

    Everything works fine when, from the NIS server, I ssh to the web server with the techsupport user:

      nisserver# ssh [email protected]

    I can authenticate against the NIS server just fine, and my home directory that is on the NIS server gets mounted on the web server just fine. The only two problems I have are:

    1. My GID on the web server is wrong:

      webserver# id
      uid=517(techsupport) gid=517(client_jonny) groups=517(client_jonny)

    (As you can see, it picked up the GID of a client that exists on the web server, since it's the same number.)

    2. I need to make sure that my "techsupport" user has ROOT privileges.

    How can I achieve this? I remember seeing identical results elsewhere, but LDAP was used. Is there a way to achieve this with a NIS/NFS setup? Thank you in advance.
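
    On the root-privileges half of this, the usual pattern is sudo rather than a shared uid-0 login, since sudo logs each agent's actions - which matches the tracking goal. A sketch of the sudoers entry (edit via visudo; username from the question):

      # /etc/sudoers -- let techsupport run any command as root, logged by sudo
      techsupport ALL=(ALL) ALL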

    Read the article

  • Backing up my locally hosted rails apps in preparation for OS upgrade

    - by stephen murdoch
    I have some apps running on Heroku. I will be upgrading my OS in two weeks. The last time I upgraded (6 months ago) I ran into some problems. Here's what I did:

      1. copied all my rails apps onto DVD
      2. upgraded the OS
      3. transferred the rails apps from DVD to the new OS

    Then, after setting up new SSH keys, I tried to push to some of my Heroku apps and, whilst I can't remember the exact error message off-hand, it more or less amounted to "fatal exception: the remote end hung up", so I know that I'm doing something wrong here.

    First of all, is there any need for me to be putting my Heroku-hosted rails apps onto DVD? Would I be better just pulling all my apps from their Heroku repos once I've done the upgrade? What do others do here? The reason I stuck them on DVD is that I tend to push a specific production branch to Heroku and sometimes omit large development files from it...

    Secondly, was this problem caused by SSH keys? Should I have backed up the old keys and transferred them from my old OS to the new one too, or is Heroku perfectly happy to let you change OSs like that?

    My solution in the end was to just create new Heroku apps and reassign the custom domain names in the Heroku add-ons menu... I never actually thought of pulling from the Heroku repos, as I tend to push a specific branch to Heroku and that branch doesn't always have all the development files in it...

    I realise that the error message I mentioned doesn't particularly help anyone, but I didn't think to remember it 6 months ago. Any advice would be appreciated.

    PS - when I say upgrade, I mean a full install of the new version with a full format of the HDD.
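
    For the key question, the usual post-reinstall flow is to generate a fresh key, register it with Heroku, and clone straight from the Heroku repos - a sketch, where the email address and app name are placeholders:

      # on the new OS: create and register a new SSH key
      ssh-keygen -t rsa -C "you@example.com"
      heroku keys:add ~/.ssh/id_rsa.pub

      # then pull each app straight from its Heroku repo
      git clone git@heroku.com:yourapp.git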

    Read the article

  • PHP (rar): I want to rar a folder using rar on Ubuntu (Linux) via PHP (on a dedicated server) - noob

    - by Steve
    Hey guys, I want to rar (not tar) a folder on my server using PHP. The setup:

      RAR 3.93 Copyright (c) 1993-2010 Alexander Roshal 15 Mar 2010
      Registered to my real name
      OS: Ubuntu Release (Karmic), kernel Linux 2.6.32.2-xxxx-grs-ipv4-32, Gnome 2.28.1
      Latest PHP and lighttpd

    I have tried these things:

      http://php.net/manual/en/function.escapeshellarg.php (may be wrong code)
      http://php.net/manual/en/function.exec.php
      http://php.net/manual/en/function.shell-exec.php

    My command (working in ssh and in a Nautilus script):

      rar a -m0 /where/file/will/saved/file_name.rar /location/ti/data/dir/datafolder

    PHP code:

      $log = shell_exec("rar a -m0 /where/file/will/saved/file_name.rar /location/ti/data/dir/datafolder");
      echo $log;

    One method is left which I don't know how to use: executing the command from a .sh file, which I would run from PHP while passing it some variables (the command). There is a script on the server that rars files this way - RapidLeech, whose rar class is here: http://paste2.org/p/791668 - but it only rars files from its own directory, and I want to work in different directories.

    I can run shell commands from PHP (cp, mv, ls, rm all work like they do over ssh), but rar fails and gives no output. I also tried giving the full path to rar. I've been at this for 3 days and have tried almost every method I found on the net. Please help me if you have any code or experience with this kind of problem - and please don't talk in overly technical language, as I'm a beginner just reading my first PHP book. Thanks!
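
    Two things typically bite here: PHP under a web server runs with a minimal PATH, and rar writes its errors to stderr, which shell_exec() does not capture. A sketch of the command string to pass to shell_exec() - the /usr/bin/rar path is an assumption, so check it first with "which rar":

      # full path plus stderr redirection, so the PHP side can see rar's errors
      /usr/bin/rar a -m0 /where/file/will/saved/file_name.rar /location/ti/data/dir/datafolder 2>&1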

    Read the article

  • .NET: How would I build a DAL to meet my requirements?

    - by Jonno
    Assume that I must deploy an ASP.NET app over the following 3 servers:

      1. DB - not public
      2. 'Middle' - not public
      3. Web server - public

    I am not allowed to connect from the web server to the DB directly; I must pass through 'middle'. This is purely to slow down an attacker if they breach the web server. All DB access is via stored procedures - no table access. I simply want to provide the web server with an ADO DataSet (I know many will dislike this, but this is the requirement).

    Using ASMX web services works, but XML serialisation is slow and it's an extra set of code to maintain and deploy. Using an ssh/VPN tunnel, so that the web server connects to the DB 'via' the middle server, seems to remove any possible benefit of maintaining 'middle'. Using WCF binary/TCP removes the XML problem, but there is still extra code.

    Is there an approach that provides the ease of ssh/VPN but the potential benefit of having the DAL on the middle server? Many thanks.

    Read the article

  • Eclipse CDT setup for remote build

    - by Posco Grubb
    Is there a better way to set up Eclipse CDT for local editing and remote building? I am working on a C++ project that uses GNU make on Linux. The code is under CVS on a Linux server. When I'm in the lab, I use Eclipse CDT on a Linux-x64 PC. The project is built on a Linux-x86 PC. All the computers in the lab (including the CVS server) have NFS mounts. When I'm at home, I use Eclipse CDT on a Windows 7 PC. The Windows PC connects to the Linux CVS server via SSH tunnel.

    To edit source, I rsync the C++ project under the Linux Eclipse workspace back to my Windows Eclipse workspace. (I can also do a remote CVS checkout on the Windows PC.) To build from home, I use a custom build command that SSHes to the Linux-x86 PC, rsyncs the C++ project from my Windows Eclipse workspace to my Linux Eclipse workspace, and then runs make on the Linux-x86 PC, specifying the correct path for the Makefile.

    In order to go back and forth between lab and home without committing my changes to CVS every time, I use rsync. When I transition from lab to home, I rsync sources to my Windows Eclipse workspace. When I build from home, the sources get rsynced back to the Linux Eclipse workspace.

    Is there a better, less wonky way to do this? (I'm NOT interested in remote debugging.)
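
    As a concrete sketch of the sync-then-build step described above (the host name and workspace paths are assumptions):

      # mirror the project from the local workspace to the build machine,
      # then build remotely; -a preserves timestamps so make stays incremental
      rsync -avz --delete ~/workspace/myproj/ build-pc:~/workspace/myproj/
      ssh build-pc 'make -C ~/workspace/myproj'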

    Read the article

  • Capistrano + Git + DreamHost

    - by Michael Sync
    Hello, I'm trying to deploy my Rails application using Passenger and Capistrano on DreamHost. I'm using Git as version control and we bought an account on GitHub. I have installed all required gems, Passenger, and Capistrano on my local machine, and I have cloned the repository of my project from GitHub onto my local machine as well. According to DreamHost support, they have Passenger, Ruby, Rails, etc. on their server. I'm currently following this article for my deployment: http://github.com/guides/deploying-with-capistrano

    The following is my deploy.rb:

      default_run_options[:pty] = true
      ssh_options[:forward_agent] = true

      # be sure to change these
      set :user, 'gituser'
      set :domain, 'github.com'
      set :application, 'MyProjectOnGit'
      #[email protected]:MyProjectOnGit.git

      # the rest should be good
      set :repository, "[email protected]:MyProjectOnGit.git"
      set :deploy_to, "/ruby.michaelsync.net/"
      set :deploy_via, :remote_cache
      set :scm, 'git'
      set :branch, 'master'
      set :git_shallow_clone, 1
      set :scm_verbose, true
      set :use_sudo, false
      set :git_enable_submodules, 1

      server domain, :app, :web
      role :db, domain, :primary => true

      set :ssh_options, { :forward_agent => true }

      namespace :deploy do
        task :restart do
          run "touch #{current_path}/tmp/restart.txt"
        end
      end

    When I run "cap deploy", I get the error below:

      [deploy:update_code] exception while rolling back: Capistrano::ConnectionError,
      connection failed for: github.com (Net::SSH::AuthenticationFailed: gituser)
      connection failed for: github.com (Net::SSH::AuthenticationFailed: gituser)

    Thanks in advance.
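
    One thing that stands out in the error: Capistrano is trying to SSH to github.com as 'gituser', whereas :user and :domain would normally be the DreamHost login and host, with GitHub access riding on the forwarded agent. A quick agent check from the local machine (a sketch; 'git' is the only SSH user github.com accepts):

      # load the key into the agent, then verify GitHub sees it
      ssh-add ~/.ssh/id_rsa
      ssh -T git@github.com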

    Read the article

  • Publish Git repository to SVN

    - by Ken Williams
    I and my small team work in Git, and the larger group uses Subversion. I'd like to schedule a cron job to publish our repositories' current HEADs every hour into a certain directory in the SVN repo. I thought I had this figured out, but the recipe I wrote down previously doesn't seem to be working now:

      git clone ssh://me@gitserver/git-repo/Projects/ProjX px2
      cd px2
      svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX
      git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX
      git svn fetch
      git rebase trunk master
      git svn dcommit

    Here's what happens when I attempt it (hash marks represent my server number):

      % git clone ssh://me@gitserver/git-repo/Projects/ProjX px2
      Cloning into 'ProjX'...
      ...
      % cd px2
      % svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX
      Committed revision 123.
      % git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX
      Using higher level of URL: http://me@svnserver/svn/repo/play/me/fromgit/ProjX => http://me@svnserver/svn/repo
      % git svn fetch
      W: Ignoring error from SVN, path probably does not exist: (160013): Filesystem has no item: File not found: revision 100, path '/play/me/fromgit/ProjX'
      W: Do not be alarmed at the above message git-svn is just searching aggressively for old history.
      This may take a while on large repositories
      % git rebase trunk master
      fatal: Needed a single revision
      invalid upstream trunk

    I could have sworn this worked previously. Anyone have any suggestions? Thanks.

    Read the article
