Search Results

Search found 1395 results on 56 pages for 'repo'.

Page 16 of 56

  • How to abandon a hg merge?

    - by Grumdrig
    I'm new to collaborating with Mercurial. My situation: another programmer changed rev 1 of a file to replace 4-space indents with 2-space indents (i.e. changed every line). Call that rev 2, pushed to the remote repo. In my local workspace I've committed substantive code changes on top of rev 1. Call that rev 3. I've hg pulled and hg merged without a clear idea of what was going on. The conflicts are myriad and not really substantive. So I really wish I'd changed my local repo to 2-space indents before merging; then the merge would be trivial (I'm supposing). But I can't seem to back up. I think I need to hg update -r 3, but it says abort: outstanding uncommitted merges. How can I undo the merge, change the spacing in my local repo, and remerge?
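
    A minimal sketch of one way out, assuming rev 3 is your local head (verify the revision numbers against your own repo before running anything):

        hg update --clean -r 3    # discard the uncommitted merge, back at rev 3
        # reindent the working copy to 2-space indents here, then commit it
        hg commit -m "Reindent to 2-space to match upstream"
        hg merge                  # redo the merge; it should now be near-trivial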

    Read the article

  • Android Source code download error

    - by user351850
    Hi all, I have followed the instructions on the Android website on how to download the latest Android source code, but I get errors when I run this command: repo init -u git://android2.git.kernel.org/platform/manifest.git It gives the following error:

        Getting repo ... from git://android.git.kernel.org/tools/repo.git
        android.git.kernel.org[0: 199.6.1.176]: errno=Connection refused
        android.git.kernel.org[0: 130.239.17.12]: errno=Connection refused
        fatal: unable to connect a socket (Connection refused)

    On checking forums for a resolution, I was told that port 9418 was being blocked. I use Ubuntu 10.04 and made sure the firewall wasn't blocking the port, and also enabled the port and the above IP addresses. I also spoke to the networking people, who confirmed that no traffic from the internet is being blocked. I would be glad for directions on how to proceed next. Many thanks as you respond. Saheed.
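
    A quick connectivity check (a diagnostic sketch, not a fix) can confirm whether port 9418 is actually reachable from the box, independent of what the firewall settings claim:

        # probe the git protocol port directly; "Connection refused" here means the
        # block (or a dead server) sits between you and the host, not in the repo tool
        nc -zv android.git.kernel.org 9418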

    Read the article

  • TFS get command erroneously returns "All files are up to date."

    - by NathanE
    We are just in the process of migrating our TFS repo to Mercurial, as we've had enough of TFS. Unfortunately TFS has thrown us one last curve ball before it lets us go. We've written a script that we intend to have "get" each changeset (including timestamp, check-in comment etc.) and then add them to the Mercurial repo and check it in. Unfortunately TFS is acting very strangely when we execute the tf get * /version:C111 /overwrite command. It immediately returns "All files are up to date." But this is impossible: the workspace folder is empty! And viewing the details for changeset 111 quite clearly shows that the changeset contains "stuff", i.e. the repo is certainly not empty. What could be causing this?
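
    One thing worth trying, on the assumption that TFS's server-side workspace records are the culprit (the server remembers what it thinks the workspace already has, so a plain get reports "up to date" even when the folder was emptied outside TFS):

        rem /all ignores the server's record and re-downloads everything;
        rem /overwrite replaces writable files already on disk
        tf get * /version:C111 /all /overwrite /recursive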

    Read the article

  • Git in terminal

    - by goodcow
    I tried making my first repo on GitHub. I copy-pasted their setup commands while in my home directory (I think that was a mistake). As a result, the terminal prompt always says ~ git:(master) ? before every command. It does not go away even when I quit the terminal. I am using zsh. The commands I pasted were:

        touch README.md
        git init
        git add README.md
        git commit -m "first commit"
        git remote add origin https://github.com/***/***.git
        git push -u origin master

    On top of that, I can't even seem to figure out how to add my files to the repo. Any help on how to stop git:(master) appearing before every command, and on how to make a repo properly? Thanks!
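
    If the stray repository is the one in your home directory (which the ~ in the prompt suggests), a hedged sketch of the cleanup:

        # remove the repository metadata that `git init` created in $HOME;
        # double-check you are in ~ and not inside a project you care about
        cd ~
        rm -rf .git
        # then redo the GitHub steps inside a dedicated project directory
        mkdir myproject && cd myproject
        git init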

    Read the article

  • does a git repository have its own local value for core.autocrlf that overrides the global one?

    - by Warren P
    As per this question, I understand that core.autocrlf=true in git will cause CRLF-to-LF translations. However, when I type git config core.autocrlf I see false. Yet when I stage modified files that are already in the repo, I still get these warnings:

        Warning: CRLF will be replaced by LF in File1.X.
        The file will have its original line endings in your working directory.

    My guess is that the repo copy of the file is already set to "autocrlf=true". Questions: A. How do I query whether a file or git repo is already forcing autocrlf? B. How do I turn autocrlf off?
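
    Three places can be involved, and each can be queried; a sketch (--show-origin needs a reasonably recent git):

        git config --show-origin --get-all core.autocrlf   # which config file sets what
        git check-attr text eol -- File1.X                  # .gitattributes rules override config
        git config --local core.autocrlf false              # turn it off for this repo only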

    Read the article

  • How to permanently prevent specific part of a file from being committed in git?

    - by boutta
    I have cloned a remote SVN repository with git-svn. I have modified a pom.xml file in this cloned repo in a way that makes the code compile. This setup is exclusively for me, so I don't want to push the change back to the remote repo. Is there a way to prevent this (partial) change to a file from being committed into the repo? I'm aware that I could use a personal branch, but that would mean a certain merging overhead. Are there other ways? I've looked into this question and this one, but they are about rather temporary changes. Update: I'm also aware of the .gitignore possibilities, but that would exclude the file completely.
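
    There is no supported way to ignore only part of a tracked file, but for the whole-file case a hedged sketch (with the caveat that it silently hides all local edits to pom.xml, not just the private ones):

        git update-index --skip-worktree pom.xml      # stop noticing local changes
        git update-index --no-skip-worktree pom.xml   # undo it when you want upstream edits again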

    Read the article

  • Multiple svn projects into one git repository?

    - by trondgzi
    Hi, I have started to use git-svn for some of my work to be able to do local commits. This works great for projects that use the standard svn layout. Recently I started working on a Java project that is split into multiple connected modules (20-25), where each module has its own root folder in the same svn repo, with its own trunk/branches/tags:

        svnrepo/
          module-1/
            trunk
            branches
            tags
          ...
          module-N/
            trunk
            branches
            tags

    I have cloned each and every module with git svn clone -s /path/to/svnrepo/module[1-N]. The "problem" is that when I want to do git svn rebase on all modules, I have to do it N times. I have tried git svn clone /path/to/svnrepo/ to avoid doing the rebase operation N times, but that leaves me with a directory layout that is the same as in the svn repo. Is there a way that I can track all the trunks of all modules in one git repo, so that I get a directory layout like this within my git repository:

        module-1
        module-2
        ...
        module-N
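
    Until the modules share one repo, scripting the N rebases is the pragmatic stopgap; a sketch, assuming the clones sit side by side and are named after their modules:

        for m in module-*; do
            (cd "$m" && git svn rebase)
        done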

    Read the article

  • Why do I have to use "origin" for the pull to be successful?

    - by yan bellavance
    When I do git pull BranchName, it tells me everything is up to date, but I know that is not true. When I do git pull origin BranchName, I get the files I was expecting. Is there an easy answer to this, or do I need to provide more details? PS: one thing I did do, just to understand the mechanics of git, was give the branch in my cloned repo a different name than on the remote repo. I did, however, put the right name in the config file, like so:

        [branch "myUDPspinoff"]
        remote = origin
        merge = refs/heads/UDPspinoff

    This worked before on another repo, but not this one. And when I gave everything the same name, I no longer needed to use origin.
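
    A hedged sketch of making the bare pull work again by re-declaring the tracking relationship instead of hand-editing the config (--set-upstream-to needs git 1.8 or newer):

        git branch --set-upstream-to=origin/UDPspinoff myUDPspinoff
        git pull    # should now fetch and merge refs/heads/UDPspinoff from origin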

    Read the article

  • Unable to checkout svn repositories

    - by lucaghera
    I have an Ubuntu 12.04 machine where Apache2 is set up with SSL certificates. On the same machine there is an SVN server. It all worked great until the update to 12.04. Now I am able to access the svn via a web browser and also by using an Eclipse plugin (Subversive), but I am not able to access it via the command line. When I try to check out a repo from a Mac OS X client it returns:

        svn: E120171: Unable to connect to a repository at URL 'https://IP/svn/repo_name'
        svn: E120171: Error running context: An error occurred during SSL communication

    If I try to check out a repo from an Ubuntu client it returns:

        svn: OPTIONS of 'https://IP/svn/repo_name': SSL handshake failed: SSL error: A TLS warning alert has been received. (https://IP)
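
    "A TLS warning alert has been received" often turns out to be the server sending a warning during the handshake (commonly unrecognized_name, when SNI and the certificate host don't line up). A diagnostic sketch to see the same handshake the svn client sees; IP stands in for whatever host appears in your checkout URL:

        openssl s_client -connect IP:443 -servername IP </dev/null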

    Read the article

  • Best/Bad practices for code sharing?

    - by sunpech
    The more I explore GitHub, the more I like it. I really enjoy how coding is becoming more social. I'm curious whether there are any bad practices that programmers should avoid in sharing their code with each other. And alongside naming bad practices, what are the best practices for code sharing? For example: is it a bad practice for a single repo named 'MiscProjects' to hold multiple scripts/projects? Where this repo, as the name suggests, is a collection of miscellaneous small scripts and projects. This may resemble how a programmer organizes projects on his/her local storage, but it's possibly not optimal for code sharing? Maybe if good README/documentation is provided, it would be better? Or as long as it's well documented, anything goes?

    Read the article

  • GuestPost: Unit Testing Entity Framework (v1) Dependent Code using TypeMock Isolator

    - by Eric Nelson
    Time for another guest post (check out others in the series), this time bringing together the world of mocking with the world of Entity Framework. A big thanks to Moses for agreeing to do this.

    Unit Testing Entity Framework Dependent Code using TypeMock Isolator, by Muhammad Mosa

    Introduction: Unit testing data access code is, in my opinion, a challenging thing. Let us consider unit tests and integration tests. In integration tests you are allowed to have environmental dependencies, such as a physical database connection to insert, update, delete or retrieve your data. However, when performing unit tests it is often much more efficient and productive to remove environmental dependencies. Instead you will need to fake these dependencies. Faking a database (also known as mocking) can be relatively straightforward, but the version of Entity Framework released with .NET 3.5 SP1 has a number of implementation specifics which actually make faking the existence of a database quite difficult.

    Faking Entity Framework: As mentioned earlier, to effectively unit test you will need to fake/simulate Entity Framework calls to the database. There are many free open source mocking frameworks that can help you achieve this, but it will require additional effort to overcome and work around a number of limitations in those frameworks. Examples of these limitations include:

    - Not able to fake calls to non-virtual methods
    - Not able to fake sealed classes
    - Not able to fake LINQ to Entities queries (replace database calls with in-memory collection calls)

    There is a mocking framework which is flexible enough to handle limitations such as those above. The commercially available TypeMock Isolator can do the job for you with less code and ultimately more readable unit tests. I'm going to demonstrate tackling one of those limitations using MoQ as my mocking framework. Then I will tackle the same issue using TypeMock Isolator.

    Mocking Entity Framework with MoQ: One basic need when faking Entity Framework is to fake the ObjectContext. This cannot be done by passing just any connection string. You have to pass a correct Entity Framework connection string that specifies the CSDL, SSDL and MSL locations along with a provider connection string. Assuming we are going to do that, we'll explore another limitation. The limitation we are going to face now is related to not being able to fake calls to non-virtual/overridable members with MoQ. I have the following repository method that adds an EntityObject (an instance of a Blog entity) to the Blogs entity set in an ObjectContext.

        public override void Add(Blog blog)
        {
            if (BlogContext.Blogs.Any(b => b.Name == blog.Name))
            {
                throw new InvalidOperationException("Blog with same name already exists!");
            }
            BlogContext.AddToBlogs(blog);
        }

    The method does a very simple check that the name of the new Blog entity instance doesn't already exist. This is done through the simple LINQ query above. If the blog doesn't already exist it simply adds it to the current context, to be saved when SaveChanges of the ObjectContext instance (e.g. BlogContext) is called. However, if a blog with the same name exists, an InvalidOperationException will be thrown. Let us now create a unit test for the Add method using MoQ.

        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void Add_Should_Throw_InvalidOperationException_When_Blog_With_Same_Name_Already_Exists()
        {
            //(1) We shouldn't depend on configuration when doing unit tests!
            //But it's a workaround to fake the ObjectContext
            string connectionString = ConfigurationManager
                .ConnectionStrings["MyBlogConnString"]
                .ConnectionString;

            //(2) Arrange: Fake ObjectContext
            var fakeContext = new Mock<MyBlogContext>(connectionString);

            //(3) Next line will pass, as ObjectContext can now be faked with a proper connection string
            var repo = new BlogRepository(fakeContext.Object);

            //(4) Create fake ObjectQuery<Blog>. Will be used to substitute MyBlogContext.Blogs property
            var fakeObjectQuery = new Mock<ObjectQuery<Blog>>("[Blogs]", fakeContext.Object);

            //(5) Arrange: Set Expectations
            //Next line will throw an exception from MoQ:
            //System.ArgumentException: Invalid setup on a non-overridable member
            fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object);
            fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true);

            //Act
            repo.Add(new Blog { Name = "NewBlog" });
        }

    This test method checks that the correct exception ([ExpectedException(typeof(InvalidOperationException))]) is thrown when a developer attempts to Add a blog with a name that already exists. On (1) a connection string is initialized from the configuration file, to retrieve the full connection string. On (2) a fake ObjectContext is created. The ObjectContext here is MyBlogContext, and the fake is created with MoQ via new Mock<MyBlogContext>(connectionString). On (3) a BlogRepository instance is created. BlogRepository has a dependency on the generated Entity Framework ObjectContext, MyBlogContext, and so the fake context is passed to the constructor. On (4) a fake instance of ObjectQuery<Blog> is created, to use as a substitute for the MyBlogContext.Blogs property, as we will see in (5). On (5) we set up an expectation for calling the Blogs property of MyBlogContext and substitute the return result with the fake ObjectQuery<Blog> instance created on (4). When you run this test it will fail, with MoQ throwing an exception because of this line: fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object); This happens because the generated property MyBlogContext.Blogs is not virtual/overridable. And even if it were virtual, or you managed to make it virtual, the test would then fail at the following line, throwing the same exception: fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true); This time the test fails because the Any extension method is not virtual/overridable. You won't be able to replace ObjectQuery<Blog> with a fake in-memory collection to test your LINQ to Entities queries. Now let's see how replacing MoQ with TypeMock Isolator can help.

    Mocking Entity Framework with TypeMock Isolator: The following is the same test method we had above for MoQ, but this time implemented using TypeMock Isolator:

        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void Add_New_Blog_That_Already_Exists_Should_Throw_InvalidOperationException()
        {
            //(1) Create fake in-memory collection of blogs
            var fakeInMemoryBlogs = new List<Blog> { new Blog { Name = "FakeBlog" } };

            //(2) Create fake context
            var fakeContext = Isolate.Fake.Instance<MyBlogContext>();

            //(3) Setup expected call to MyBlogContext.Blogs property through the fake context
            Isolate.WhenCalled(() => fakeContext.Blogs)
                   .WillReturnCollectionValuesOf(fakeInMemoryBlogs.AsQueryable());

            //(4) Create new blog with a name that already exists in the fake in-memory collection in (1)
            var blog = new Blog { Name = "FakeBlog" };

            //(5) Instantiate instance of BlogRepository (class under test)
            var repo = new BlogRepository(fakeContext);

            //(6) Act by adding the newly created blog
            repo.Add(blog);
        }

    When running the above test method it will pass, as the Add method of BlogRepository throws an InvalidOperationException, which is the expected behaviour. Nothing prevents us from faking out the database interaction! Even faking the ObjectContext at (2) didn't require a connection string. On (3) Isolator sets up a fake result for MyBlogContext.Blogs when it is called through the fake instance fakeContext created on (2). The fake result is just an in-memory collection declared and initialized on (1). Finally at (6) we act by calling the Add method of BlogRepository, passing a new Blog instance with a name that already exists in the fake in-memory collection set up at (1). As expected, the test passes because it throws the expected exception defined on top of the test method: InvalidOperationException. TypeMock Isolator succeeded in faking Entity Framework with ease.

    Conclusion: We explored how to write a simple unit test using TypeMock Isolator for code which is using Entity Framework. We also explored a few of the limitations of other mocking frameworks which TypeMock is successfully able to handle. There are workarounds that you can use to overcome limitations when using MoQ or Rhino Mocks; however, the workarounds will require you to write more code and your tests will likely be more complex. For a comparison between different mocking frameworks take a look at this document produced by TypeMock. You might also want to check out this open source project to compare mocking frameworks. I hope you enjoyed this post.

    Muhammad Mosa
    http://mosesofegypt.net/
    http://twitter.com/mosessaur

    Screencast of unit testing Entity Framework

    Related Links:
    GuestPost: Introduction to Mocking
    GuestPost: Typemock Isolator – Much more than an Isolation framework

    Read the article

  • Using branches for a mini project or module of a project: Good practice?

    - by TheLQ
    In my repo I have 3 closely related mini projects: 1 server and 2 clients. They are all quite small (<3 files each). Since they are so small and so closely related, I just dropped them in folders in one single repo. However, now that I know I can't clone a single directory in my VCS of choice (Mercurial), I'm considering splitting them up. But I'm confused about general best practice: is it okay to put different small projects in different branches, or should they all go in different repos? I'm currently leaning towards branching, since I can't easily splice out the file history of the different projects (but see the sketch below), although then you're using a feature in a way it wasn't meant to be used.
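
    If splitting wins out after all, the history can in fact be spliced per folder with Mercurial's convert extension; a sketch, assuming the extension is enabled in .hgrc and server/ stands in for one project's folder (filemap syntax per hg help convert):

        echo "include server"  > filemap.txt
        echo "rename server ." >> filemap.txt
        hg convert --filemap filemap.txt big-repo server-repo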

    Read the article

  • How to install Percona Xtrabackup on Ubuntu 12.04 LTS?

    - by coding crow
    I am trying to install Percona Xtrabackup on Ubuntu 12.04 LTS, installed on Amazon EC2, following the instructions on the Xtrabackup installation page. The instructions say to add this to /etc/apt/sources.list, replacing squeeze with the name of your distribution:

        deb http://repo.percona.com/apt squeeze main
        deb-src http://repo.percona.com/apt squeeze main

    In my case I would replace squeeze with precise, but when I open /etc/apt/sources.list for editing, the file suggests three alternatives to editing it directly, listed as a), b) and c). My question: what should I do to install Percona Xtrabackup on my box?
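
    A sketch of the usual route on 12.04, which sidesteps editing sources.list by dropping the entries into sources.list.d instead; the key ID matches Percona's published apt instructions of that era, but verify it against their current docs:

        sudo apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
        echo "deb http://repo.percona.com/apt precise main" | sudo tee /etc/apt/sources.list.d/percona.list
        echo "deb-src http://repo.percona.com/apt precise main" | sudo tee -a /etc/apt/sources.list.d/percona.list
        sudo apt-get update
        sudo apt-get install percona-xtrabackup   # older releases shipped it as plain "xtrabackup"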

    Read the article

  • Can I migrate a clone of a Google Code repository into GitHub?

    - by David Conde
    I want to create a clone of a Google Code repository, which I cannot download due to country restrictions, and I want to migrate that clone into GitHub, which I can use without any problem. The thing is, I have a GitHub account and I can browse through Google Code, but I cannot take my TortoiseHg and clone a repo just like that, because I'm from Cuba and I get a lovely Google page saying that I cannot go into Google Code. I'm guessing you know how I manage to browse :) I would like to import a Mercurial repository into my GitHub repo. My questions: Is it possible? How can I do it?
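
    A sketch using the hg-git extension, assuming it is installed and enabled in your .hgrc and that the (empty) GitHub repository already exists; youruser/yourrepo are placeholders:

        cd my-googlecode-clone
        hg bookmark -r default master    # hg-git exports bookmarks as git branches
        hg push git+ssh://git@github.com/youruser/yourrepo.git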

    Read the article

  • Using EPEL repos with Oracle Linux

    - by wcoekaer
    There's a Fedora project called EPEL which hosts a set of additional packages that can be installed on top of various distributions such as Red Hat Enterprise Linux, CentOS, Scientific Linux and of course also Oracle Linux. These packages are not distributed by the distribution vendor, and as such are also not supported by the vendors (including Oracle); however, for users that want to pick up some useful extras, it's very easy to do. All you need to do is download the EPEL RPM from the website, install it on Oracle Linux 5 or Oracle Linux 6, and run yum install or yum search to get the packages. Example:

        # wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm
        # rpm -ivh epel-release-6-5.noarch.rpm
        # yum repolist
        Loaded plugins: refresh-packagekit, rhnplugin
        repo id   repo name                                        status
        epel      Extra Packages for Enterprise Linux 6 - x86_64   7,124

    The folks that build these repositories are doing a great job of adding very useful packages. They are free, but also unsupported of course.

    Read the article

  • How to organise projects with dependencies on BitBucket?

    - by Timwi
    Both Mercurial and BitBucket make one fundamental assumption: 1 repo = 1 project. If I have a project that has a dependency (a library) which is shared by many projects, this assumption gets in the way. Now it is no longer possible to have a separate BitBucket page for each project while still being able to commit atomic revisions to multiple projects. If I put all the projects into one repo, they all become one “project” on BitBucket. If I put them in separate repos, it is no longer possible to know which version of the library project was in use at revision X of a dependent project. How is this situation normally solved on BitBucket, or is there explicitly no support for this common scenario?
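
    Mercurial's own answer to "which version of the library was in use at revision X" is subrepositories: every commit in the parent repo records the exact subrepo changeset in .hgsubstate. A minimal sketch (URLs are placeholders); subrepos have well-known sharp edges, but they do give each project its own BitBucket page while pinning the library revision:

        cd mainproject
        hg clone https://bitbucket.org/youruser/sharedlib lib
        echo "lib = https://bitbucket.org/youruser/sharedlib" > .hgsub
        hg add .hgsub
        hg commit -m "Track sharedlib as a subrepo"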

    Read the article

  • Why is this bypassing the sudo password?

    - by John Isaacks
    I have a bash script I am using to automate an SVN checkout. The contents of the file were:

        #!/bin/bash
        cd /var/www-cake
        sudo svn checkout file:///usr/local/svn/bash_repo/repo/

    Then when I double-clicked the file it would ask me what to do; I would click the button "Run In Terminal", and a terminal would pop up and ask me for the sudo password. I would enter it, the script would execute and the terminal would close. I wanted to give some sort of indication that the script ran successfully, so I edited my file to look like:

        #!/bin/bash
        cd /var/www-cake
        sudo svn checkout file:///usr/local/svn/bash_repo/repo/
        echo "Head revision has been pushed to live server"

    I expected the terminal to now stay open and tell me the message afterwards. To my surprise, it now opens and immediately closes. The script does execute, and I no longer have to put in the sudo password. Is this right? I do not understand why this is happening; it seems like a security issue.
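
    The behaviour described is consistent with sudo's credential caching: by default a successful sudo keeps a per-terminal timestamp valid for several minutes, so a second run inside that window never prompts. A sketch for testing that theory (the script name is a stand-in):

        sudo -k             # invalidate the cached timestamp
        ./deploy-script.sh  # the password prompt should now reappear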

    Read the article

  • Best practices for including open source code from other public projects?

    - by Bryan Kemp
    If I use an existing open source project (hosted, for example, on GitHub) within one of my projects, should I check the code from the other project into my public repo or not? I have mixed feelings about this. #1: I want to give proper credit and attribution to the original developer, and if appropriate I will contribute back any changes I need to make. However, given that I have developed and tested against a specific revision of the other project's code, that is the version I want to distribute to users of my project. Here is a specific use case to illustrate my point; I am looking for a more generalized answer than this specific case. I am developing a simple framework using RabbitMQ and Python for outbound messages that will allow sending SMS, Twitter and email, and is extensible to support additional messaging buses as well. There is a project on GitHub, developed by another person, that handles the creation and sending of SMS messages. When I create my own repo, how do I account for the code that I am including from the other project?
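
    One common pattern that satisfies both attribution and the pin-to-a-tested-revision requirement is a git submodule, which records the upstream URL plus the exact commit you tested against; a sketch, with the URL, path and SHA as placeholders:

        git submodule add https://github.com/someuser/sms-sender.git vendor/sms-sender
        git -C vendor/sms-sender checkout <tested-commit-sha>
        git add vendor/sms-sender
        git commit -m "Pin sms-sender at the revision we tested against"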

    Read the article

  • Headaches using distributed version control for traditional teams?

    - by J Cooper
    Though I use and like DVCS for my personal projects, and can totally see how it makes managing contributions to your project from others easier (e.g. your typical GitHub scenario), it seems like for a "traditional" team there could be some problems compared to the centralized approach employed by solutions like TFS, Perforce, etc. (By "traditional" I mean a team of developers in an office working on one project that no one person "owns", with potentially everyone touching the same code.) A couple of these problems I've foreseen on my own, but please chime in with other considerations. In a traditional system, when you try to check your change in to the server, if someone else has previously checked in a conflicting change then you are forced to merge before you can check yours in. In the DVCS model, each developer checks in their changes locally and at some point pushes to some other repo. That repo then has a branch of that file that two people changed. It seems that now someone must be put in charge of dealing with that situation. A designated person on the team might not have sufficient knowledge of the entire codebase to be able to handle merging all conflicts. So now an extra step has been added where someone has to approach one of those developers, tell him to pull and do the merge and then push again (or you have to build an infrastructure that automates that task). Furthermore, since DVCS tends to make working locally so convenient, it is probable that developers could accumulate a few changes in their local repos before pushing, making such conflicts more common and more complicated. Obviously if everyone on the team only works on different areas of the code, this isn't an issue. But I'm curious about the case where everyone is working on the same code. It seems like the centralized model forces conflicts to be dealt with quickly and frequently, minimizing the need to do large, painful merges or have anyone "police" the main repo. So for those of you who do use a DVCS with your team in your office, how do you handle such cases? Do you find your daily (or more likely, weekly) workflow affected negatively? Are there any other considerations I should be aware of before recommending a DVCS at my workplace?
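
    For what it's worth, many co-located teams recreate the centralized "merge before you may commit" pressure by convention rather than tooling; a sketch of the habit, not a prescription:

        git pull --rebase origin master   # integrate first, so conflicts land on their author
        # run the tests, resolve any conflicts locally, then
        git push origin master            # rejected if someone pushed meanwhile: rebase again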

    Read the article

  • Aggregate root & Repository dilemma

    - by mateoc
    I am in a big dilemma here. I have League, Team and Player entities. I have created a repo for the league only, as a Team cannot exist without a League. At first I had bound the players only to the team, but then I realised I would have a problem with free agents, so I also bound the players to the league. Then I wondered whether a player could exist without a League or a Team, and I am totally confused by that question. So would you make a player repository, or include players in the league repo? Thanks

    Read the article

  • git changing head not reflected on co-dev's branch

    - by stevekrzysiak
    Basically, we undid history. I know this is bad, and I am already committed to avoiding this at all costs in the future, but what is done is done. Anyway, I issued a git push origin <1_week_old_sha>:master to undo some bad commits. I then deleted a buggered branch called release (which had also received some bad commits) from the remote, and then branched a new release off master. I pushed this to the remote. So basically, remote master & release are clones and just how I want them. The issue is that if I clone the repo anew (or work in my current repo) everything looks great... but when my co-devs delete their release branch and create a new one based off the new remote release I created, they still see all the old junk I tried to remove. I feel this has to do with some local .git files mistaking the new release branch for the old release. Any thoughts? Thanks.
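
    A hedged sketch of what each co-dev likely needs to run, assuming their stale local release is what carries the old junk (this throws their local release away, so any unpushed work on it must be salvaged first):

        git fetch --prune origin                  # refresh and prune remote-tracking refs
        git branch -D release                     # drop the stale local branch
        git checkout -b release origin/release    # rebuild it from the new remote one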

    Read the article

  • Connecting to a Windows SVN server from Ubuntu

    - by skytreader
    I need to access an SVN repo hosted on a Windows machine from Ubuntu. However, even when I supply the proper credentials, it denies me access, apparently because the Windows server does not allow Linux connections (as they told me); sure enough, I got in when I tried to check out from my XP partition. While I have my box dual-booted, it is inconvenient to switch just for SVN. So, does anyone know how I can access that SVN repo from Ubuntu? I've tried installing TortoiseSVN and Windows Subversion under Wine, but I can't even get them to run; they were asking for some DLLs that I don't know how to supply. I've thought of installing a virtual XP just for SVN, but I consider that too extreme and I'd be glad if anyone can suggest a simpler workaround.
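
    A couple of low-effort probes before resorting to a VM (a sketch; whether they help depends on how the Windows server authenticates, and NTLM-only setups are a common reason Windows clients get in while Linux clients are refused):

        # state credentials explicitly instead of relying on implicit/cached auth
        svn checkout --username 'DOMAIN\youruser' http://server/svn/repo
        # see which authentication schemes the server actually offers
        curl -sI http://server/svn/repo | grep -i www-authenticate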

    Read the article

  • Hosting Mercurial on IIS7

    - by Lasse V. Karlsen
    Note, this might perhaps be best suited to serverfault.com, but since it is about hosting a programmer's source code repository, I am not entirely sure. I'm posting here first, trusting that it'll be migrated if necessary. I'm attempting to host clones of my Mercurial repositories on my own server (I have the main repo somewhere else), and I'm attempting to set up Mercurial under IIS. I followed the guide here, but I get an error message. Solved: see the bottom of this question for details. The error message is:

        mercurial.error.RepoError: repository /path/to/repo/or/config not found

    Here's what I did:

    I installed Mercurial 1.5.2
    I created c:\inetpub\hg
    I downloaded the hg source as per the instructions on the webpage, and copied the hgweb.cgi file into c:\inetpub\hg (note, the webpage says hgwebdir.cgi, but this particular file does not exist; hgweb.cgi does, however. Can this be the source of the problem?)
    I added a hgweb.config with the following contents:

        [paths]
        repo1 = C:/hg/**

        [web]
        style = monoblue

    I created c:\hg, created a sub-directory test, and created a repository inside it
    I installed Python 2.6.5, the latest 2.6 version from the website (the webpage mentions I need to install the correct version or I'll get a specific error message; since I don't get an error message that looks remotely like the one mentioned, I assume 2.6.5 is not the problem)
    I added a new virtual host hg.vkarlsen.no, pointing it to c:\inetpub\hg
    For this host, I added a script mapping under the Handler Mappings section, mapping *.cgi to c:\python26\python.exe -u %s %s as per the instructions on the website.

    I then tested it by navigating to http://hg.vkarlsen.no/hgweb.cgi, but I get an error message. To make it easier to test, I dropped to a command prompt, navigated to c:\inetpub\hg, and executed the following command (the error message is part of the text below):

        C:\inetpub\hg>c:\python26\python.exe -u hgweb.cgi
        Traceback (most recent call last):
          File "hgweb.cgi", line 16, in <module>
            application = hgweb(config)
          File "mercurial\hgweb\__init__.pyc", line 12, in hgweb
          File "mercurial\hgweb\hgweb_mod.pyc", line 30, in __init__
          File "mercurial\hg.pyc", line 82, in repository
          File "mercurial\localrepo.pyc", line 2221, in instance
          File "mercurial\localrepo.pyc", line 62, in __init__
        mercurial.error.RepoError: repository /path/to/repo/or/config not found

    Does anyone know what I need to look at in order to fix this?

    Edit: OK, I think I managed to get one step closer to the solution, but I'm still stumped. I realized the .cgi file is a Python script, not something compiled, so I opened it for editing, and these lines were sitting in it:

        # Path to repo or hgweb config to serve (see 'hg help hgweb')
        config = "/path/to/repo/or/config"

    So this was the source of the specific error message. If I change the line to this:

        config = "c:\\hg\\test"

    then I can navigate the empty repository through the Mercurial web interface. However, I want to host multiple repositories, and seeing as the line says that I can also link to a hgweb config file, I tried this:

        config = "c:\\inetpub\\hg\\hgweb.config"

    But then I get the following error message:

        mercurial.error.Abort: c:\inetpub\hg\hgweb.config: not a Mercurial bundle file
        Exception ImportError: 'No module named shutil' in <bound method bundlerepository.__del__ of <mercurial.bundlerepo.bundlerepository object at 0x0260A110>> ignored

    Nothing I've tried for the config variable seems to work:

        config = "hgweb.config"
        config = "c:\\hg\\hgweb.config"

    and various other variations I don't remember. So, still stumped; pointers, anyone?

    Solved: I ended up having to edit the hgweb.cgi file, from:

        from mercurial.hgweb import hgweb, wsgicgi
        application = hgweb(config)

    to:

        from mercurial.hgweb import hgweb, hgwebdir, wsgicgi
        application = hgwebdir(config)

    Note the added hgwebdir parts there. Here's my hgweb.config file, located in the same directory as the hgweb.cgi file:

        [collections]
        C:/hg/ = C:/hg/

        [web]
        style = gitweb

    This now serves my repositories successfully. Hopefully this question will give others some information if they're as stumped as I was.

    Read the article

  • yum not working on EC2 Red Hat instance: Cannot retrieve repository metadata

    - by adev3
    For some reason yum has stopped working on my Amazon EC2 instance, located in the EU West region. There seems to be something wrong with the path of the repo metadata; is this correct? I would be very grateful for any help, as my experience in this field is somewhat limited. Thank you very much.

        cat /etc/redhat-release:
        Red Hat Enterprise Linux Server release 6.2 (Santiago)

        yum repolist:
        Loaded plugins: amazon-id, rhui-lb, security
        https://rhui2-cds01.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        https://rhui2-cds02.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        repo id                                        repo name                                                          status
        rhui-eu-west-1-client-config-server-6          Red Hat Update Infrastructure 2.0 Client Configuration Server 6    0
        rhui-eu-west-1-rhel-server-releases            Red Hat Enterprise Linux Server 6 (RPMs)                           0
        rhui-eu-west-1-rhel-server-releases-optional   Red Hat Enterprise Linux Server 6 Optional (RPMs)                  0
        repolist: 0

        yum update:
        (I needed to remove the base URLs below because of ServerFault's restrictions for new users)
        Loaded plugins: amazon-id, rhui-lb, security
        [same as base url 1 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        [same as base url 2 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhui-eu-west-1-client-config-server-6. Please verify its path and try again
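
    A diagnostic sketch: RHUI authenticates clients with SSL certificates, so a skewed system clock or an expired client certificate is a common cause of 401s against the CDS hosts (the certificate paths below are an assumption; filenames vary by client version):

        date            # verify the instance's clock first
        yum clean all   # clear cached metadata before retrying
        # the RHUI client certificates live under /etc/pki/rhui/; check validity with e.g.
        # openssl x509 -in /etc/pki/rhui/<cert>.crt -noout -dates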

    Read the article
