Search Results

Search found 1395 results on 56 pages for 'repo'.

Page 13 of 56

  • Installing Plugins from Cloud p2 repository in Eclipse IDE

    - by user1495036
    I have recently been reading a lot about p2 for a requirement of mine. Most of the p2 documentation online covers p2 for RCP; my requirement is for a plugin repo. I have a plugin that is used within the Eclipse IDE. I don't want to change the repo location, but based on the Eclipse version, when the user runs Install New Software or Check for Updates it needs to download the matching plugins. My repo currently contains the plugins for all the versions, but I have to give a different URL to each user depending on their version. For example, I am using Eclipse 3.7 (Indigo) and install the plugin through Install New Software by adding the p2 repo URL. If the user then moves to Eclipse 3.6 for some reason, I want them to connect to the same p2 repo URL and download the plugins built for Eclipse 3.6. This is definitely possible using p2 Discovery, and I could also categorize the downloads using a composite repository, but I don't want to do either of those. I just want to know whether there is any API I can hook into, so that before the URL is processed and updates are found, I can check the Eclipse version and redirect to an internal URL based on it. This is possible in RCP; I want to know if I can do it in the Eclipse p2 UI. All the p2 UI classes appear to be internal. Any pointers would be appreciated. Malai

    Read the article

  • Git svn - no changes, no branches (except master), rebase/info is not working

    - by ex3v
    I know that similar questions were asked before, but I think mine is a little different, so please don't point me to existing threads. I'm migrating our old svn repo to git. I did git svn clone path --authors-file abc.txt and everything seemed legit to me. Then I did git remote add origin xyz and git push --all origin, and that also worked. I created this repo as a test, with only me having access to both the local repo and origin. No changes were made to the project held in this repo: nothing to commit, no pushing, and so on. There is also only one branch, because someone initialized svn years ago without creating the proper folder structure (branches, trunk, tags). Meanwhile someone pushed their work to svn, so I tried git svn fetch (which worked), and git svn rebase, which didn't, giving me the error: Unable to determine upstream SVN information from working tree history Is there any reason why git svn decided to stop working?

    Read the article

  • Changing git origin to point to an existing repository

    - by int3
    I'd like to make my local repo point to a different fork of the same project. Will this work? 1. Do a merge with the 'target origin'. 2. Change the origin repo in my config file to the 'target origin'. Also, if my local repo is not entirely identical to the new origin (say, I've resolved some merge conflicts in my favor), will these changes be pushed to the new origin when I do a git push, or will only commits made after the change of origin get pushed?
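    A minimal sketch of repointing an existing clone at a different fork (the remote name and URL below are placeholders, not taken from the question):
      # fetch the other fork so its history is available locally
      git remote add otherfork git@github.com:someuser/project.git
      git fetch otherfork
      # merge its branch into yours, resolving conflicts as needed
      git merge otherfork/master
      # repoint origin at the fork and push
      git remote set-url origin git@github.com:someuser/project.git
      git push origin master
    Since git push only sends commits the new origin does not already have, locally resolved merges travel as ordinary commits; nothing made before the URL change is silently dropped.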

    Read the article

  • Is this plain stupid: GIT Sharing Via DropBox?

    - by yar
    I realize that there are similar questions, but my question is slightly different. I'm wondering whether sharing a bare repository via a synchronized DropBox folder on multiple computers would work for sharing code via GIT. Really what I want to know is: is sharing a GIT repo via DropBox (the repo gets updated on each person's local drive) the same as sharing it from one centralized location, e.g., via SSH, git or HTTP? Is this the same or different from sharing a GIT repo via a shared network drive? Note: This is not an empirical question: it seems to work fine. I'm asking whether the way a GIT repo is structured is compatible with this way of sharing. EDIT To clarify/repeat, I'm talking about keeping the GIT repository on DropBox as a bare repository. I'm not talking about keeping the actual files that are under source control in DropBox.
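    For reference, a minimal sketch of the setup being asked about, with placeholder paths (a bare repo living inside the synced DropBox folder):
      # create a bare repository inside DropBox
      git init --bare ~/Dropbox/project.git
      # from an existing working copy, add it as a remote and push
      git remote add dropbox ~/Dropbox/project.git
      git push dropbox master
      # on another machine, clone from the locally synced copy
      git clone ~/Dropbox/project.git ~/code/project
    Structurally this is the same as any other bare remote; the difference is that DropBox, not git, moves the object files between machines, so two people pushing before a sync finishes can leave conflicted copies inside the bare repo, which an SSH, git or HTTP remote would never produce.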

    Read the article

  • Mercurial commit only tip

    - by kiw
    In my setup I have a central Hg repo to which I push my local changes. Say in my local clone I have a series of local commits and then I want to push the changes to the central repo. How can I push only the final state without including all of the "small" local commits that I made? I want this because sometimes I don't want to pollute the central repo's history with all of the small local commits that I made.
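    One common approach is to fold the small changesets into one before pushing. A sketch, assuming the bundled histedit extension is enabled and the commits are still local (REV is a placeholder for the first unpushed revision):
      # one-time: enable the extension in ~/.hgrc under [extensions]:  histedit =
      # rewrite local history starting from the first revision not yet pushed
      hg histedit REV
      # in the editor, keep "pick" on the first changeset and change the later
      # lines to "fold" so they collapse into a single changeset, then:
      hg push
    The rebase extension's --collapse option, or mq, can achieve the same squash; all of these only work safely on changesets that have never left the local clone.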

    Read the article

  • SUSE EC2 Problem - zypper - Permission denied

    - by phuu
    I'm trying to use zypper to install gcc on my Amazon EC2 instance running SUSE. When I try zypper in gcc I get: Retrieving repository 'SLE11-SDK-SP1' metadata [] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/install/SLE11-SDK-SP1/sle-11-i586/media.1/media' denied. Abort, retry, ignore? [a/r/i/?] (a): i Retrieving repository 'SLE11-SDK-SP1' metadata [error] Repository 'SLE11-SDK-SP1' is invalid. Can't provide /media.1/media : User-requested skipping of a file Please check if the URIs defined for this repository are pointing to a valid repository. Warning: Disabling repository 'SLE11-SDK-SP1' because of the above error. Retrieving repository 'SLE11-SDK-SP1-Updates' metadata [|] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLE11-SDK-SP1-Updates/sle-11-i586/repodata/repomd.xml' denied. Abort, retry, ignore? [a/r/i/?] (a): i Retrieving repository 'SLE11-SDK-SP1-Updates' metadata [error] Repository 'SLE11-SDK-SP1-Updates' is invalid. Can't provide /repodata/repomd.xml : User-requested skipping of a file Please check if the URIs defined for this repository are pointing to a valid repository. Warning: Disabling repository 'SLE11-SDK-SP1-Updates' because of the above error. Retrieving repository 'SLES11-Extras' metadata [/] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-Extras/sle-11-i586/repodata/repomd.xml' denied. Abort, retry, ignore? [a/r/i/?] (a): r Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-Extras/sle-11-i586/repodata/repomd.xml' denied. Abort, retry, ignore? [a/r/i/?] (a): zypper in gcc Invalid answer 'zypper in gcc'. [a/r/i/?] (a): a Retrieving repository 'SLES11-Extras' metadata [error] Repository 'SLES11-Extras' is invalid. Can't provide /repodata/repomd.xml : Please check if the URIs defined for this repository are pointing to a valid repository. Warning: Disabling repository 'SLES11-Extras' because of the above error. Retrieving repository 'SLES11-SP1' metadata [-] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/install/SLES11-SP1/sle-11-i586/media.1/media' denied. Abort, retry, ignore? [a/r/i/?] (a): a Retrieving repository 'SLES11-SP1' metadata [error] Repository 'SLES11-SP1' is invalid. Can't provide /media.1/media : Please check if the URIs defined for this repository are pointing to a valid repository. Warning: Disabling repository 'SLES11-SP1' because of the above error. Retrieving repository 'SLES11-SP1-Updates' metadata [] Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-SP1-Updates/sle-11-i586/repodata/repomd.xml' denied. I've searched for the problem and this thread came up, but it offered no solutions. I've tried sces-activate. Am I doing something wrong? I should say I'm very new to this, and I admit I don't really know what I'm doing, but I'm trying to learn about setting up and running a server and so I thought I'd throw myself in at the deep(ish) end. Thanks for reading.

    Read the article

  • How to subscribe to the free Oracle Linux errata yum repositories

    - by Lenz Grimmer
    Now that updates and errata for Oracle Linux are available for free (both as in beer and freedom), here's a quick HOWTO on how to subscribe your Oracle Linux system to the newly added yum repositories on our public yum server, assuming that you just installed Oracle Linux from scratch, e.g. by using the installation media (ISO images) available from the Oracle Software Delivery Cloud You need to download the appropriate yum repository configuration file from the public yum server and install it in the yum repository directory. For Oracle Linux 6, the process would look as follows: as the root user, run the following command: [root@oraclelinux62 ~]# wget http://public-yum.oracle.com/public-yum-ol6.repo \ -P /etc/yum.repos.d/ --2012-03-23 00:18:25-- http://public-yum.oracle.com/public-yum-ol6.repo Resolving public-yum.oracle.com... 141.146.44.34 Connecting to public-yum.oracle.com|141.146.44.34|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 1461 (1.4K) [text/plain] Saving to: “/etc/yum.repos.d/public-yum-ol6.repo” 100%[=================================================>] 1,461 --.-K/s in 0s 2012-03-23 00:18:26 (37.1 MB/s) - “/etc/yum.repos.d/public-yum-ol6.repo” saved [1461/1461] For Oracle Linux 5, the file name would be public-yum-ol5.repo in the URL above instead. The "_latest" repositories that contain the errata packages are already enabled by default — you can simply pull in all available updates by running "yum update" next: [root@oraclelinux62 ~]# yum update Loaded plugins: refresh-packagekit, security ol6_latest | 1.1 kB 00:00 ol6_latest/primary | 15 MB 00:42 ol6_latest 14643/14643 Setting up Update Process Resolving Dependencies --> Running transaction check ---> Package at.x86_64 0:3.1.10-43.el6 will be updated ---> Package at.x86_64 0:3.1.10-43.el6_2.1 will be an update ---> Package autofs.x86_64 1:5.0.5-39.el6 will be updated ---> Package autofs.x86_64 1:5.0.5-39.el6_2.1 will be an update ---> Package bind-libs.x86_64 32:9.7.3-8.P3.el6 will be updated ---> Package bind-libs.x86_64 32:9.7.3-8.P3.el6_2.2 will be an update ---> Package bind-utils.x86_64 32:9.7.3-8.P3.el6 will be updated ---> Package bind-utils.x86_64 32:9.7.3-8.P3.el6_2.2 will be an update ---> Package cvs.x86_64 0:1.11.23-11.el6_0.1 will be updated ---> Package cvs.x86_64 0:1.11.23-11.el6_2.1 will be an update [...] ---> Package yum.noarch 0:3.2.29-22.0.1.el6 will be updated ---> Package yum.noarch 0:3.2.29-22.0.2.el6_2.2 will be an update ---> Package yum-plugin-security.noarch 0:1.1.30-10.el6 will be updated ---> Package yum-plugin-security.noarch 0:1.1.30-10.0.1.el6 will be an update ---> Package yum-utils.noarch 0:1.1.30-10.el6 will be updated ---> Package yum-utils.noarch 0:1.1.30-10.0.1.el6 will be an update --> Finished Dependency Resolution Dependencies Resolved ===================================================================================== Package Arch Version Repository Size ===================================================================================== Installing: kernel x86_64 2.6.32-220.7.1.el6 ol6_latest 24 M kernel-uek x86_64 2.6.32-300.11.1.el6uek ol6_latest 21 M kernel-uek-devel x86_64 2.6.32-300.11.1.el6uek ol6_latest 6.3 M Updating: at x86_64 3.1.10-43.el6_2.1 ol6_latest 60 k autofs x86_64 1:5.0.5-39.el6_2.1 ol6_latest 470 k bind-libs x86_64 32:9.7.3-8.P3.el6_2.2 ol6_latest 839 k bind-utils x86_64 32:9.7.3-8.P3.el6_2.2 ol6_latest 178 k cvs x86_64 1.11.23-11.el6_2.1 ol6_latest 711 k [...] 
xulrunner x86_64 10.0.3-1.0.1.el6_2 ol6_latest 12 M yelp x86_64 2.28.1-13.el6_2 ol6_latest 778 k yum noarch 3.2.29-22.0.2.el6_2.2 ol6_latest 987 k yum-plugin-security noarch 1.1.30-10.0.1.el6 ol6_latest 36 k yum-utils noarch 1.1.30-10.0.1.el6 ol6_latest 94 k Transaction Summary ===================================================================================== Install 3 Package(s) Upgrade 96 Package(s) Total download size: 173 M Is this ok [y/N]: y Downloading Packages: (1/99): at-3.1.10-43.el6_2.1.x86_64.rpm | 60 kB 00:00 (2/99): autofs-5.0.5-39.el6_2.1.x86_64.rpm | 470 kB 00:01 (3/99): bind-libs-9.7.3-8.P3.el6_2.2.x86_64.rpm | 839 kB 00:02 (4/99): bind-utils-9.7.3-8.P3.el6_2.2.x86_64.rpm | 178 kB 00:00 [...] (96/99): yelp-2.28.1-13.el6_2.x86_64.rpm | 778 kB 00:02 (97/99): yum-3.2.29-22.0.2.el6_2.2.noarch.rpm | 987 kB 00:03 (98/99): yum-plugin-security-1.1.30-10.0.1.el6.noarch.rpm | 36 kB 00:00 (99/99): yum-utils-1.1.30-10.0.1.el6.noarch.rpm | 94 kB 00:00 ------------------------------------------------------------------------------------- Total 306 kB/s | 173 MB 09:38 warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY Retrieving key from http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 Importing GPG key 0xEC551F03: Userid: "Oracle OSS group (Open Source Software group) " From : http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 Is this ok [y/N]: y Running rpm_check_debug Running Transaction Test Transaction Test Succeeded Running Transaction Updating : yum-3.2.29-22.0.2.el6_2.2.noarch 1/195 Updating : xorg-x11-server-common-1.10.4-6.el6_2.3.x86_64 2/195 Updating : kernel-uek-headers-2.6.32-300.11.1.el6uek.x86_64 3/195 Updating : 12:dhcp-common-4.1.1-25.P1.el6_2.1.x86_64 4/195 Updating : tzdata-java-2011n-2.el6.noarch 5/195 Updating : tzdata-2011n-2.el6.noarch 6/195 Updating : glibc-common-2.12-1.47.el6_2.9.x86_64 7/195 Updating : glibc-2.12-1.47.el6_2.9.x86_64 8/195 [...] Cleanup : kernel-firmware-2.6.32-220.el6.noarch 191/195 Cleanup : kernel-uek-firmware-2.6.32-300.3.1.el6uek.noarch 192/195 Cleanup : glibc-common-2.12-1.47.el6.x86_64 193/195 Cleanup : glibc-2.12-1.47.el6.x86_64 194/195 Cleanup : tzdata-2011l-4.el6.noarch 195/195 Installed: kernel.x86_64 0:2.6.32-220.7.1.el6 kernel-uek.x86_64 0:2.6.32-300.11.1.el6uek kernel-uek-devel.x86_64 0:2.6.32-300.11.1.el6uek Updated: at.x86_64 0:3.1.10-43.el6_2.1 autofs.x86_64 1:5.0.5-39.el6_2.1 bind-libs.x86_64 32:9.7.3-8.P3.el6_2.2 bind-utils.x86_64 32:9.7.3-8.P3.el6_2.2 cvs.x86_64 0:1.11.23-11.el6_2.1 dhclient.x86_64 12:4.1.1-25.P1.el6_2.1 [...] xorg-x11-server-common.x86_64 0:1.10.4-6.el6_2.3 xulrunner.x86_64 0:10.0.3-1.0.1.el6_2 yelp.x86_64 0:2.28.1-13.el6_2 yum.noarch 0:3.2.29-22.0.2.el6_2.2 yum-plugin-security.noarch 0:1.1.30-10.0.1.el6 yum-utils.noarch 0:1.1.30-10.0.1.el6 Complete! At this point, your system is fully up to date. As the kernel was updated as well, a reboot is the recommended next action. If you want to install the latest release of the Unbreakable Enterprise Kernel Release 2 as well, you need to edit the .repo file and enable the respective yum repository (e.g. "ol6_UEK_latest" for Oracle Linux 6 and "ol5_UEK_latest" for Oracle Linux 5) manually, by setting enabled to "1". The next yum update run will download and install the second release of the Unbreakable Enterprise Kernel, which will be enabled after the next reboot. -Lenz
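    Condensed into a sketch (Oracle Linux 6 shown; the Oracle Linux 5 file is public-yum-ol5.repo), the whole procedure boils down to:
      # drop the repo definition into yum's config directory
      wget http://public-yum.oracle.com/public-yum-ol6.repo -P /etc/yum.repos.d/
      # the "_latest" errata repository is already enabled, so just update
      yum update
      # optional: edit the .repo file and set enabled=1 in the [ol6_UEK_latest]
      # section, then run "yum update" again to pull in UEK Release 2
      # reboot afterwards so the newly installed kernel is actually running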

    Read the article

  • Thinktecture.IdentityModel.Http and the ASP.NET Web API CodePlex bits

    - by Your DisplayName here!
    I will keep the github repo in sync with the major releases of Web API (like Beta, RC, RTM). Because of the changes made to Web API after beta, my current bits don’t build against the CodePlex version anymore. Today I installed a build environment for the CodePlex bits, and migrated my code. It turns out the changes are pretty easy: Simply replace Request.GetUserPrincipal() with Thread.CurrentPrincipal ;) I will update the repo when RC comes out.

    Read the article

  • Structure of a Git repository

    - by Luke Puplett
    Sorry if this is a duplicate, I looked. We're moving to Git. In Subversion, I'm used to having \trunk, \branches and \tags folders. With Git, switching between branches will replace the contents of the working directory, so am I right to assume that the way we used to work just doesn't apply with Git? My guess is that I'd have a repo folder with maybe a gitignore and readme.txt, then the folders for the projects that make up the repo, and that's it.

    Read the article

  • Using Cpp Unit with visual studio 2010 [closed]

    - by Deepak
    I have downloaded "cppunit-cvs-repo-archive.tar.bz2" from http://sourceforge.net/projects/cppunit/ Now after unzipping the above .tar.bz2 what to do next? On searching on internet, it is mentioned that open the CppUnitLibraries.dsw project under cppunit-cvs-repo-archive\cppunit\src folder but the same file is existing with name "CppUnitLibraries.dsw,v" and on changing its extension to .dsw and on opening again it displays the message invalid project file.

    Read the article

  • apt-get update specific list?

    - by ohshazbot
    I'm working on a scripted install of something, and one of the parts is adding an Apache Bigtop repo and installing things from it. But having to do an apt-get update before installing is quite time consuming because of all of the other mirrors touched (and I do an update beforehand to make sure the system has the tools previously used in the script). Is there either a way to do an update that doesn't update everything, or a way to do an install and specify a repo/list that hasn't been updated?
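    One commonly suggested trick is to point apt-get update at just the one list file. A sketch, assuming the Bigtop repo lives in its own file under /etc/apt/sources.list.d/ (the file and package names are placeholders):
      # refresh only the Bigtop list, leaving every other mirror untouched
      sudo apt-get update \
        -o Dir::Etc::sourcelist="sources.list.d/bigtop.list" \
        -o Dir::Etc::sourceparts="-" \
        -o APT::Get::List-Cleanup="0"
      sudo apt-get install some-bigtop-package
    The List-Cleanup option keeps apt from discarding the index files of the repositories that were skipped.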

    Read the article

  • Samba server NETBIOS name not resolving, WINS support not working

    - by Eric
    When I try to connect to my CentOS 6.2 x86_64 server's samba shares using address \\REPO (NETBIOS name of REPO), it times out and shows an error; if I do so directly via IP, it works fine. Furthermore, my server does not work correctly as a WINS server despite my samba settings being correct for it (see below for details). If I stop the iptables service, things work properly. I'm using this page as a reference for which ports to use: http://www.samba.org/samba/docs/server_security.html Specifically: UDP/137 - used by nmbd; UDP/138 - used by nmbd; TCP/139 - used by smbd; TCP/445 - used by smbd. I really really really want to keep the secure iptables design I have below but just fix this particular problem.
    SMB.CONF
      [global]
      netbios name = REPO
      workgroup = AWESOME
      security = user
      encrypt passwords = yes
      # Use the native linux password database
      #passdb backend = tdbsam
      # Be a WINS server
      wins support = yes
      # Make this server a master browser
      local master = yes
      preferred master = yes
      os level = 65
      # Disable print support
      load printers = no
      printing = bsd
      printcap name = /dev/null
      disable spoolss = yes
      # Restrict who can access the shares
      hosts allow = 127.0.0. 10.1.1.
      [public]
      path = /mnt/repo/public
      create mode = 0640
      directory mode = 0750
      writable = yes
      valid users = mangs repoman
    IPTABLES CONFIGURE SCRIPT
      # Remove all existing rules
      iptables -F
      # Set default chain policies
      iptables -P INPUT DROP
      iptables -P FORWARD DROP
      iptables -P OUTPUT DROP
      # Allow incoming SSH
      iptables -A INPUT -i eth0 -p tcp --dport 22222 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -p tcp --sport 22222 -m state --state ESTABLISHED -j ACCEPT
      # Allow incoming HTTP
      #iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
      #iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
      # Allow incoming Samba
      iptables -A INPUT -i eth0 -p udp --dport 137 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -p udp --sport 137 -m state --state ESTABLISHED -j ACCEPT
      iptables -A INPUT -i eth0 -p udp --dport 138 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -p udp --sport 138 -m state --state ESTABLISHED -j ACCEPT
      iptables -A INPUT -i eth0 -p tcp --dport 139 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -p tcp --sport 139 -m state --state ESTABLISHED -j ACCEPT
      iptables -A INPUT -i eth0 -p tcp --dport 445 -m state --state NEW,ESTABLISHED -j ACCEPT
      iptables -A OUTPUT -o eth0 -p tcp --sport 445 -m state --state ESTABLISHED -j ACCEPT
      # Make these rules permanent
      service iptables save
      service iptables restart

    Read the article

  • Why won't this script accept any arguments?

    - by Nate Wagar
    I'm trying to write an SVN post-commit hook and, strangely, am getting hung up on what should be the easiest part. The Script:
      set REPO="$1"
      set REV="$2"
      set SVNBIN="/opt/CollabNet_Subversion/bin/"
      set SSHBIN="/usr/bin/ssh"
      set HOST="staging.domain.net"
      set timeout=30
      set USERNAME="svn-usr"
      set E_NO_CONNECT=2
      set E_WRONG_PASS=3
      set E_UNKOWN=25
      set CHANGED=`"$SVNBIN"svnlook changed --revision $REV $REPOS`
      echo "Here are changes: $CHANGED" >> /var/svn/repos/www/logs/testing
      echo "Command: $0; Repo: $REPO; Rev: $REV; Total: $#" >> /var/svn/repos/www/logs/testing
      set PROJECT ""
    Yet when I call it, it doesn't seem to be seeing the arguments I pass to it:
      /var/svn/repos/www/logs> sudo ../hooks/post-commit /var/svn/repos/www 33
      svnlook: missing argument: --revision
      Type 'svnlook help' for usage.
      /var/svn/repos/www/logs> cat testing
      Here are changes:
      Command: ../hooks/post-commit; Repo: ; Rev: ; Total: 1
    This is on a Solaris 10 SPARC box. I'm a bit of a script newbie, but shouldn't this be really easy?
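    For comparison, a minimal Bourne-shell version of such a hook (a sketch; Subversion invokes post-commit with the repository path and the revision as its two arguments, and the paths below mirror the question):
      #!/bin/sh
      REPOS="$1"
      REV="$2"
      SVNLOOK=/opt/CollabNet_Subversion/bin/svnlook
      CHANGED=`"$SVNLOOK" changed --revision "$REV" "$REPOS"`
      echo "Here are changes: $CHANGED" >> /var/svn/repos/www/logs/testing
      echo "Repo: $REPOS; Rev: $REV" >> /var/svn/repos/www/logs/testing
    Note that the script in the question assigns REPO but the svnlook line reads $REPOS, and that its csh-style "set" assignments only behave as intended if the hook actually runs under csh rather than /bin/sh.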

    Read the article

  • OSX: sync Documents folder to Dropbox with version control

    - by James Porter
    I have ample storage in Dropbox to sync my entire OSX Documents folder, and I'd like to do this just so that I have it anywhere I go. I found this question, which describes a method for doing this with symlinks. Seems good, the only problem is that it would be nice also to have everything under version control. I thought perhaps a better solution would be to set up my Documents folder as a git repo with a remote that I would push to in my Dropbox folder. Alternatively, just set up Documents as a git repo with no remote and then symlink it to Dropbox. Which of these two alternatives is preferable? What are some pitfalls I might not be thinking of with each? It also has occurred to me that some of the subdirectories of Documents are themselves git repos with github remotes. Would it cause problems for these subdirectories if I made Documents a git repo? If so, how do I get around this? Would making Documents an svn repo instead help? Is there a way to set up git so that this is not an issue?

    Read the article

  • Mercurial internal Setup on Windows 7 - Exception happened during processing of request from ...

    - by Sad0w1nL1ght
    Hi, I have one central repository and many local ones. On my machine I have a local and the central repository too. I can clone/commit/update/push/pull easily between the local and central repository on my local machine, but when I want to make a clone from another machine it gets an error.
      listening at http://MyLocalMachine:8000/ (bound to *:8000)
      ----------------------------------------
      Exception happened during processing of request from ('192.168.0.194', 49319)
      Traceback (most recent call last):
        File "SocketServer.pyc", line 558, in process_request_thread
        File "SocketServer.pyc", line 320, in finish_request
        File "mercurial\hgweb\server.pyc", line 47, in __init__
        File "SocketServer.pyc", line 615, in __init__
        File "BaseHTTPServer.pyc", line 329, in handle
        File "BaseHTTPServer.pyc", line 323, in handle_one_request
        File "mercurial\hgweb\server.pyc", line 79, in do_GET
        File "mercurial\hgweb\server.pyc", line 70, in do_POST
        File "mercurial\hgweb\server.pyc", line 63, in do_write
        File "mercurial\hgweb\server.pyc", line 127, in do_hgweb
        File "mercurial\hgweb\hgweb_mod.pyc", line 86, in __call__
        File "mercurial\hgweb\hgweb_mod.pyc", line 118, in run_wsgi
      ErrorResponse
      ----------------------------------------
    The command line which started the central repo:
      hg serve -R TT -n TTZoli
    The command from the remote machine for cloning:
      hg clone --pull http://MyLocalMachine:8000/TT
    Config for the central repo (the second username is the user I'm trying to use to access it):
      [ui]
      username = MyLocalUserName
      username = test <[email protected]>
      [web]
      push_ssl = false
    Config for the remote repo:
      [ui]
      username = test <[email protected]>
      [web]
      push_ssl = false
    I'm not sure if it's relevant, but my firewall is turned off on both machines, and the files in the /hg folder are not versioned on the server, except hgignore. Could you please suggest some ideas? What could be the problem? Thanks in advance!

    Read the article

  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 at their defaults) the installation continues and completes just fine. Below is my kickstart file:
      # Andy's super awesome VM kickstart file
      install
      url --url=http://mirrors.kernel.org/centos/6/os/x86_64
      lang en_US.UTF-8
      keyboard us
      text
      %include /tmp/network.ks
      rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp.
      firewall --service=ssh
      authconfig --enableshadow --passalgo=sha512
      selinux --disabled
      timezone America/Los_Angeles
      bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"
      # The following is the partition information you requested
      # Note that any partitions you deleted are not expressed
      # here so unless you clear all partitions first, this is
      # not guaranteed to work
      zerombr
      clearpart --all --drives=vda --initlabel
      part /boot --fstype=ext4 --size=500
      part pv.253002 --grow --size=1
      volgroup vg1 --pesize=4096 pv.253002
      logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200
      logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032
      repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100
      repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64
      repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64
      repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api
      %packages
      @core
      @server-policy
      puppet
      facter
      %end
      %pre --erroronfail
      #!/bin/bash
      for x in `cat /proc/cmdline`; do
        case $x in
          SERVERNAME*)
            eval $x
            echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" /tmp/network.ks
            ;;
        esac;
      done
      %end
      %post
      puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi
      %end
      reboot
    My kernel arguments are in the following virt-install command that I use to start the install:
      virt-install -n zabbix -r 2048 --vcpus=2 -l http://mirrors.kernel.org/centos/6/os/x86_64 --disk /dev/vg_inf1/zabbix --network bridge=br85 --initrd-inject=/home/ashinn/vm_kickstart --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" --autostart
    During the install, I can pull up a console on the second terminal and verify that the contents of /tmp/network.ks are:
      network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com
    Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?

    Read the article

  • Gitolite SSH URL Format

    - by KPthunder
    So I got gitolite set up. Simple. But there is one issue I am having. The SSH urls follow the format of git@host:repo. I'm used to Bitbucket / Github where the urls follow the format of git@host:user/repo. Is there a way to get the latter format using gitolite? Another question. I have my ~/.ssh/config file set up with the following entry: Host <host> User <user> IdentityFile <path/to/public/key> I don't have any configuration specifying git as a user, and yet I am able to clone git@host:repo without problem. Obviously, my ssh client is using my public key to access the server which is why gitolite is letting me clone the repo, but how does my ssh client know to use my public key which is only configured for the <user> user and not the git user?

    Read the article

  • Mercurial confusion - commit / push, backouts

    - by Madmanguruman
    I'm trying to set up a repository on a shared filesystem. I'm using Mercurial 2.1.2 on a Windows-based architecture. I start with an empty folder on the shared filesystem and create a repository in it. After this, I dump in the baseline files, and add them to versioning, then commit the changes. I then clone the repository to my local hard drive. I then make a change in my local repository, commit it, then push back to the shared filesystem repository. The shared repo graph I get in TortoiseHG looks strange (to me). This is the shared repo: This is the local repo: On the shared repo, the working directory always shows up on the top, then the graph goes 'down' to rev. 0 then back 'up' again through various revisions. It looks to me like I have two different branches, even though everything is on the default branch. Also, that 'top' revision always says "* Working Directory * Not a head revision!" I noticed that in my local repository, I don't get that dangling working directory at the top of the list - everything is in one branch. I also noticed that on my local repository, I can back out the tip revision with no problem. On the shared filesystem repository, I cannot, since I get an error ("Cannot backout change on a different branch"). How can this be? Aren't they supposed to be identical to each other? Am I fundamentally doing something wrong?

    Read the article

  • GIT and Django Projects

    - by Garfonzo
    I have two servers, a Dev server and a Production server. The Production server runs a live Django site, while the Dev server has a copy of the Django project. I use the Dev server to work on the Django site, make improvements, fix bugs, etc. Once I am satisfied with how the Dev version is working, I move the whole Django directory from the Dev server and replace the same directory on the Production server. The two servers are not on the same LAN, so the process is not straightforward. There are a few issues with this so far: moving the whole directory is laborious and time consuming; if I only change a few files, it is even more tedious to replace just those files than the whole directory, since the project is getting fairly large and I worry that I'll miss something; I often run into permission issues after I've moved things; and it's super inefficient. Due to lack of time, I haven't bothered figuring out a new method, but now it's just getting out of hand and I need to address the situation. I am thinking I need to move to a GIT repository for this process. But my question is how would I set this all up? Do I host the repository on the Production server, pull from the Dev server, do work, then commit? Then I would pull from the Production server (same server the repo is hosted on) to run the current working version? Do I host the repo on the Dev Server, pulling from the same server to do work on the repo, then pull a working version onto the Production server? Should I be hosting the repo on a different server than the Production server and the Dev server (a third server)? Are there any special considerations with Django and repos that I need to worry about? Thanks for the help :)
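    A minimal sketch of one common layout: a bare repo on the Production server used purely as a deploy target (all host names and paths below are made up):
      # on the Production server: a bare repository next to the live tree
      git init --bare /srv/git/mysite.git
      # on the Dev server: commit work, then push over SSH
      git remote add production ssh://deploy@prod.example.com/srv/git/mysite.git
      git push production master
      # back on Production: refresh the live directory from the bare repo
      git --work-tree=/srv/www/mysite --git-dir=/srv/git/mysite.git checkout -f master
    A post-receive hook can run that last command automatically on every push, which also sidesteps the permission problems of copying directories around; hosting the repo on a third server (or a hosted service) works the same way, with Production pulling instead of being pushed to.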

    Read the article

  • luntbuild + maven + findbugs = OutOfMemoryException

    - by Johannes
    Hi, I've been trying to get Luntbuild to generate and publish a project site for our project including a Findbugs report. All other reports (Cobertura, Surefire, JavaDoc, Dashboard) work fine, but Findbugs bails out with an OutOfMemoryException. Excluding findbugs from report generation fixes the build --- although obviously without a Findbugs report. The funny thing is that I first encountered this problem locally and solved it by setting MAVEN_OPTS=-Xmx512m. This does not seem to be enough in Luntbuild, however: setting that exact same option as an environment variable of my builder doesn't make a difference. I've found a couple of posts on the 'net stating you should also add -XX:MaxPermSize=512m to MAVEN_OPTS and/or pass -Dmaven.findbugs.jvmargs=-Xmx512m to mvn.bat. None of these (or their combination) seem to help though so any hints would be greatly appreciated! Cheers, Johannes Relevant information: Luntbuild is 1.5.6, Maven is 2.1.0, findbugs-maven-plugin is 2.0.1. This is the Findbugs section of the relevant pom.xml: <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>findbugs-maven-plugin</artifactId> <version>2.0.1</version> </plugin> This is the head of my build log: User "luntbuild" started the build Perform checkout operation for VCS setting: Vcs name: Subversion Repository url base: http://some.repository.com/repo/ Repository layout: multiple Directory for trunk: trunk Directory for branches: branches Directory for tags: tags Username: xxxx Password:xxxx Web interface: ViewVC URL to web interface: http://some.repository.com/repo/ Quiet period: modules: Source path: somepath, Branch: , Label: , Destination path: somewhere Source path: somepath, Branch: somewhere1.0.x, Label: , Destination path: somewhere-1.0.x Source path: somepath, Branch: somewhere1.1.x, Label: , Destination path: somewhere-1.1.x Update url: http://some.repository.com/repo//trunk Duration of the checkout operation: 0 minutes Perform build with builder setting: Builder name: default Builder type: Maven2 builder Command to run Maven2: "C:\maven\apache-maven-2.1.0\bin\mvn.bat" -e -f somewhere\pom.xml -P site -Dmaven.test.skip=false -DbuildDate="Tue Nov 24 11:13:24 CET 2009" -DbuildVersion="site-core138" -Dsvn.username=xxxx -Dsvn.password=xxxx -DstagingSiteURL=file:///C:/luntbuild/core-reports -Dmaven.findbugs.jvmargs=-Xmx512m Directory to run Maven2 in: Goals to build: site:stage site:stage-deploy Build properties: buildVersion="site-core138" artifactsDir="C:\\Program Files\\Luntbuild\\publish\\somewhere\\site-core\\site-core138\\artifacts" buildDate="Tue Nov 24 11:13:24 CET 2009" junitHtmlReportDir="" Environment variables: MAVEN_OPTS="-Xmx512m -XX:MaxPermSize=512m" Build success condition: result==0 and builderLogContainsLine("INFO","BUILD SUCCESSFUL") Execute command: Executing 'C:\maven\apache-maven-2.1.0\bin\mvn.bat' with arguments: '-e' '-f' 'somewhere\pom.xml' '-P' 'site' '-Dmaven.test.skip=false' '-DbuildDate=Tue Nov 24 11:13:24 CET 2009' '-DbuildVersion=site-core138' '-Dsvn.username=xxxxxx' '-Dsvn.password=xxxxxx' '-DstagingSiteURL=file:///C:/luntbuild/reports' '-Dmaven.findbugs.jvmargs=-Xmx512m' '-DbuildVersion=site-core138' '-DartifactsDir=C:\\Program Files\\Luntbuild\\publish\\somewhere\\site-core\\site-core138\\artifacts' '-DbuildDate=Tue Nov 24 11:13:24 CET 2009' '-X' 'site:stage' 'site:stage-deploy' Tail of my build log: Analyzed: C:\luntbuild\somewhere-work\somewhere\...\SomeClass.class ... 
Analyzed: C:\luntbuild\somewhere-work\somewhere\...\target\classes Aux: C:\luntbuild\somewhere-work\somewhere\...\target\classes Aux: c:\maven\local-repo\...\somejar-1.1.1.1-SNAPSHOT.jar Aux: c:\maven\local-repo\commons-lang\commons-lang\2.3\commons-lang-2.3.jar .... Aux: c:\maven\local-repo\org\openoffice\ridl\3.1.0\ridl-3.1.0.jar Aux: c:\maven\local-repo\org\openoffice\unoil\3.1.0\unoil-3.1.0.jar [INFO] ------------------------------------------------------------------------ [ERROR] FATAL ERROR [INFO] ------------------------------------------------------------------------ [INFO] Java heap space [INFO] ------------------------------------------------------------------------ [DEBUG] Trace java.lang.OutOfMemoryError: Java heap space at java.util.HashMap.(HashMap.java:209) at edu.umd.cs.findbugs.ba.type.TypeAnalysis$CachedExceptionSet.(TypeAnalysis.java:114) at edu.umd.cs.findbugs.ba.type.TypeAnalysis.getCachedExceptionSet(TypeAnalysis.java:688) at edu.umd.cs.findbugs.ba.type.TypeAnalysis.computeThrownExceptionTypes(TypeAnalysis.java:439) at edu.umd.cs.findbugs.ba.type.TypeAnalysis.transfer(TypeAnalysis.java:411) at edu.umd.cs.findbugs.ba.type.TypeAnalysis.transfer(TypeAnalysis.java:89) at edu.umd.cs.findbugs.ba.Dataflow.execute(Dataflow.java:356) at edu.umd.cs.findbugs.classfile.engine.bcel.TypeDataflowFactory.analyze(TypeDataflowFactory.java:82) at edu.umd.cs.findbugs.classfile.engine.bcel.TypeDataflowFactory.analyze(TypeDataflowFactory.java:44) at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.analyzeMethod(AnalysisCache.java:331) at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.getMethodAnalysis(AnalysisCache.java:281) at edu.umd.cs.findbugs.classfile.engine.bcel.CFGFactory.analyze(CFGFactory.java:173) at edu.umd.cs.findbugs.classfile.engine.bcel.CFGFactory.analyze(CFGFactory.java:64) at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.analyzeMethod(AnalysisCache.java:331) at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.getMethodAnalysis(AnalysisCache.java:281) at edu.umd.cs.findbugs.ba.ClassContext.getMethodAnalysis(ClassContext.java:937) at edu.umd.cs.findbugs.ba.ClassContext.getMethodAnalysisNoDataflowAnalysisException(ClassContext.java:921) at edu.umd.cs.findbugs.ba.ClassContext.getCFG(ClassContext.java:326) at edu.umd.cs.findbugs.detect.BuildUnconditionalParamDerefDatabase.analyzeMethod(BuildUnconditionalParamDerefDatabase.java:103) at edu.umd.cs.findbugs.detect.BuildUnconditionalParamDerefDatabase.considerMethod(BuildUnconditionalParamDerefDatabase.java:93) at edu.umd.cs.findbugs.detect.BuildUnconditionalParamDerefDatabase.visitClassContext(BuildUnconditionalParamDerefDatabase.java:79) at edu.umd.cs.findbugs.DetectorToDetector2Adapter.visitClass(DetectorToDetector2Adapter.java:68) at edu.umd.cs.findbugs.FindBugs2.analyzeApplication(FindBugs2.java:971) at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:222) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:86) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:230) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:912) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:756) [INFO] ------------------------------------------------------------------------ [INFO] Total time: 17 minutes 16 
seconds [INFO] Finished at: Tue Nov 24 11:31:23 CET 2009 [INFO] Final Memory: 70M/127M [INFO] ------------------------------------------------------------------------ Maven2 builder failed: build success condition not met! Note that apparently maven only uses 70MB... but that probably doesn't mean anything since the Findbugs plugin forks its own process.

    Read the article

  • FFMpeg installation problem

    - by Thomas
    Hi, I have a problem installing ffmpeg. I followed this URL: https://www.crucialp.com/resources/tutorials/server-administration/how-to-install-ffmpeg-ffmpeg-php-mplayer-mencoder-flv2tool-LAME-MP3-Encoder-libog.php Setting up repositories core 100% |=========================| 1.1 kB 00:00 rpmforge 100% |=========================| 1.1 kB 00:00 Error: Cannot find a valid baseurl for repo: updates [root@02e7709 src]# yum install subversion ruby ncurses-devel Loading "installonlyn" plugin Setting up Install Process Setting up repositories core 100% |=========================| 1.1 kB 00:00 rpmforge 100% |=========================| 1.1 kB 00:00 Error: Cannot find a valid baseurl for repo: updates [root@02e7709 src]# svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg -bash: svn: command not found [root@02e7709 src]# So the svn command is not found, and yum throws the error "Error: Cannot find a valid baseurl for repo: updates". I am installing on Fedora Core 6, 64-bit.

    Read the article

  • Cannot get git working

    - by Devin Dixon
    I'm trying to install my own git server with these instructions: http://cisight.com/how-to-setup-git-server-using-gitolite-in-ubuntu-11-10-oneiric/ But I get stuck at this point: git clone --verbose [email protected]:testing.git Cloning into 'testing'... Permission denied (publickey). fatal: The remote end hung up unexpectedly And I think it has something to do with this: gitolite@ip-xxxx:~$ gl-setup tmp/john.pub key_read: uudecode Aklkdfgkldkgldkgldkgfdlkgldkgdlfkgldkgldkgdlkgkfdnknbkdnbkdnbkdnbkfnbkdfnbkdnfbkdfnbdknbkdnbkfnbkdbnkdbnkdfnbkd [email protected] failed fprint failed I always get this failure, and I think it's preventing me from cloning the repo. The repo is there, along with the gitolite-admin.git repo. The permissions are: drwxr-x--- 8 gitolite gitolite 4096 Jun 6 16:29 gitolite-admin.git drwxr-x--- 7 gitolite gitolite 4096 Jun 6 16:29 testing.git So my question is: what am I missing here?

    Read the article

  • git svn on multiple machines

    - by stgtscc
    My repo is SVN and I'm using git-svn to interface with it, which has been working out well. I'm working on the code base from a few different machines and would appreciate some insight into what the best setup might be for me going forward. I'd like to use git primarily, but I need to commit to svn (via git svn dcommit) and pull from svn (git svn rebase) periodically from potentially any of the machines. Is it possible to have git svn set up on all of them but somehow push and pull changes between the instances? Or should I set up a bare repo and use that as the central git repo? How would that tie in to git svn? Any insight is appreciated.
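    A sketch of the bare-central-repo option mentioned above, with placeholder hosts and paths, keeping the git-svn bridge on a single clone:
      # a shared bare repo that every machine can push to and pull from
      git init --bare /srv/git/project.git
      # on the one machine that talks to svn
      git svn clone http://svn.example.com/project project
      cd project
      git remote add origin ssh://user@githost/srv/git/project.git
      git push origin master
      # other machines work against the bare repo only
      git clone ssh://user@githost/srv/git/project.git
      # periodic sync, run on the bridge machine only
      git svn rebase
      git svn dcommit
      git push origin master
    Because git svn rebase and dcommit rewrite the local commits they touch, that last push can require --force after a sync, which is the usual argument for funneling all svn traffic through one clone rather than running git-svn everywhere.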

    Read the article
