Search Results

Search found 683 results on 28 pages for 'tortoise hg'.

Page 16/28 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • Mercurial central server file discrepancy (using 'diff to local')

    - by David Montgomery
    Newbie alert! OK, I have a working central Mercurial repository that I've been working with for several weeks. Everything has been great until I hit a really bizarre problem: my central server doesn't seem to be synced to itself? I only have one file that seems to be out-of-sync right now, but I really need to know how this happened to prevent it from happening in the future. Scenario: 1) created Mercurial repository on server using an existing project directory. The directory contained the file 'mypage.aspx'. 2) On my workstation, I cloned the central repository 3) I made an edit to mypage.aspx 4) hg commit, then hg push from my workstation to the central server 5) now if I look at mypage.aspx on the server's repository using TortoiseHg's repository explorer, I see the change history for mypage.aspx -- an initial check-in and one edit. However, when I select 'Diff to local', it shows the current version on the server's disk is the original version, not the edited version! I have not experimented with branching at all yet, so I'm sure I'm not getting a branch problem. 'hg status' on the server or client returns no pending changes. If I create a clone of the server's repository to a new location, I see the same change history as I would expect, but the file on disk doesn't contain my edit. So, to recap: Central repository = original file, but shows change in revision history (bad) Local repository 'A' = updated file, shows change in revision history (good) Local repository 'B' = original file, but shows change in revision history (bad) Help please! Thanks, David

    Read the article

  • Mercurial CLI is slow in C#?

    - by pATCheS
    I'm writing a utility in C# that will make managing multiple Mercurial repositories easier for the way my team is using it. However, it seems that there is always about a 300 to 400 millisecond delay before I get anything back from hg.exe. I'm using the code below to run hg.exe and hgtk.exe (TortoiseHg's GUI). The code currently includes a Stopwatch and some variables for timing purposes. The delay is roughly the same on multiple runs within the same session. I have also tried specifying the exact path of hg.exe, and got the same result. static string RunCommand(string executable, string path, string arguments) { var psi = new ProcessStartInfo() { FileName = executable, Arguments = arguments, WorkingDirectory = path, UseShellExecute = false, RedirectStandardError = true, RedirectStandardInput = true, RedirectStandardOutput = true, WindowStyle = ProcessWindowStyle.Maximized, CreateNoWindow = true }; var sbOut = new StringBuilder(); var sbErr = new StringBuilder(); var sw = new Stopwatch(); sw.Start(); var process = Process.Start(psi); TimeSpan firstRead = TimeSpan.Zero; process.OutputDataReceived += (s, e) => { if (firstRead == TimeSpan.Zero) { firstRead = sw.Elapsed; } sbOut.Append(e.Data); }; process.ErrorDataReceived += (s, e) => sbErr.Append(e.Data); process.BeginOutputReadLine(); process.BeginErrorReadLine(); var eventsStarted = sw.Elapsed; process.WaitForExit(); var processExited = sw.Elapsed; sw.Reset(); if (process.ExitCode != 0 || sbErr.Length > 0) { Error.Mercurial(process.ExitCode, sbOut.ToString(), sbErr.ToString()); } return sbOut.ToString(); } Any ideas on how I can speed things up? As it is, I'm going to have to do a lot of caching in addition to threading to keep the UI snappy.
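
    Much of a delay like that is usually hg.exe start-up cost (loading Python and extensions) rather than the command itself. Mercurial's command server ("hg serve --cmdserver") exists to avoid exactly this by keeping one hg process alive across commands, and the same wire protocol can be driven from C#. A rough Python sketch using the python-hglib bindings (assumes hglib is installed; the repository path is a placeholder):

        # Rough sketch: reuse one long-lived hg command server instead of
        # spawning hg.exe for every command (requires the python-hglib package).
        import hglib

        # Opening the client starts the command server once and keeps it running.
        client = hglib.open(r"C:\path\to\repo")  # placeholder path
        try:
            # Each call is a round-trip to the already-running server, so the
            # per-command start-up cost is paid only once.
            print(client.status())       # equivalent to "hg status"
            print(client.log(limit=5))   # equivalent to "hg log -l 5"
        finally:
            client.close()               # shuts the command server down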

    Read the article

  • Incremental deploy from a shell script

    - by WishCow
    I have a project, where I'm forced to use ftp as a means of deploying the files to the live server. I'm developing on linux, so I hacked together a bash script that makes a backup of the ftp server's contents, deletes all the files on the ftp, and uploads all the fresh files from the mercurial repository. (and taking care of user uploaded files and folders, and making post-deploy changes, etc) It's working well, but the project is starting to get big enough to make the deployment process too long. I'd like to modify the script to look up which files have changed, and only deploy the modified files. (the backup is fine atm as it is) I'm using mercurial as a VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over the changed files, and upload each modified file, and delete each removed file. I can use hg log -vr rev1:rev2, and from the output, I can carve out the changed files with grep/sed/etc. Two problems: I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is that the same applies to here, if I try to parse the output of hg log, the variables will undergo word-splitting, and all kinds of transformations. hg log doesn't tell me a file is modified/added/deleted. Differentiating between modified and deleted files would be the least. So, what would be the correct way to do this? I'm using yafc as an ftp client, in case it's needed, but willing to switch.
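
    For the "which files changed and how" part, "hg status --rev REV1 --rev REV2" reports exactly this, prefixing each path with M/A/R for modified/added/removed, so there is no need to scrape hg log output. A rough Python sketch (the revisions and the upload/delete steps are placeholders to be replaced by the actual ftp logic):

        # Rough sketch: list files changed between two revisions with their
        # status code, using "hg status --rev" instead of parsing "hg log".
        import subprocess

        def changed_files(repo, rev1, rev2):
            out = subprocess.check_output(
                ["hg", "status", "--rev", rev1, "--rev", rev2],
                cwd=repo, universal_newlines=True)
            for line in out.splitlines():
                yield line[0], line[2:]   # e.g. ("M", "htdocs/index.php")

        for code, path in changed_files(".", "100", "tip"):   # placeholder revisions
            if code in ("M", "A"):
                print("upload", path)     # replace with the ftp upload step
            elif code == "R":
                print("delete", path)     # replace with the ftp delete step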

    Read the article

  • How to setup and teardown temporary django db for unit testing?

    - by blokeley
    I would like to have a python module containing some unit tests that I can pass to hg bisect --command. The unit tests are testing some functionality of a django app, but I don't think I can use hg bisect --command manage.py test mytestapp because mytestapp would have to be enabled in settings.py, and the edits to settings.py would be clobbered when hg bisect updates the working directory. Therefore, I would like to know if something like the following is the best way to go: import functools, os, sys, unittest sys.path.append(path_to_myproject) os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings' def with_test_db(func): """Decorator to setup and teardown test db.""" @functools.wraps def wrapper(*args, **kwargs): try: # Set up temporary django db func(*args, **kwargs) finally: # Tear down temporary django db class TestCase(unittest.TestCase): @with_test_db def test(self): # Do some tests using the temporary django db self.fail('Mark this revision as bad.') if '__main__' == __name__: unittest.main() I should be most grateful if you could advise either: If there is a simpler way, perhaps subclassing django.test.TestCase but not editing settings.py or, if not; What the lines above that say "Set up temporary django db" and "Tear down temporary django db" should be?
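
    For the two placeholder comments, Django's own test utilities can create and destroy a throwaway test database without editing settings.py. A hedged sketch of the decorator (the exact creation API differs slightly between Django versions; very old releases use settings.DATABASE_NAME instead of settings.DATABASES):

        # Hedged sketch: create and destroy a temporary Django test database
        # around each decorated test using Django's test utilities.
        import functools
        from django.conf import settings
        from django.db import connection
        from django.test.utils import setup_test_environment, teardown_test_environment

        def with_test_db(func):
            """Run the wrapped test against a freshly created test database."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                setup_test_environment()
                # Remember the real database name so it can be restored afterwards.
                old_name = settings.DATABASES['default']['NAME']
                connection.creation.create_test_db(verbosity=0, autoclobber=True)
                try:
                    return func(*args, **kwargs)
                finally:
                    connection.creation.destroy_test_db(old_name, verbosity=0)
                    teardown_test_environment()
            return wrapper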

    Read the article

  • SQL Server devs–what source control system do you use, if any? (answer and maybe win free stuff)

    - by jamiet
    Recently I noticed a tweet from notable SQL Server author and community dude-at-large Steve Jones in which he asked how many SQL Server developers were putting their SQL Server source code (i.e. DDL) under source control (I’m paraphrasing because I can’t remember the exact tweet and Twitter’s search functionality is useless). The question surprised me slightly as I thought a more pertinent question would be “how many SQL Server developers are not using source control?” because I have been doing just that for many years now and I simply assumed that use of source control is a given in this day and age. Then I started thinking about it. “Perhaps I’m wrong” I pondered, “perhaps the SQL Server folks that do use source control in their day-to-day jobs are in the minority”. So, dear reader, I’m interested to know a little bit more about your use of source control. Are you putting your SQL Server code into a source control system? If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? Why did you make those particular software choices? Any interesting anecdotes to share in regard to your use of source control and SQL Server? To encourage you to contribute I have five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect to give away to what I consider to be the five best replies (“best” is totally subjective of course but this is my blog so my decision is final ), if you want to be considered don’t forget to leave contact details; email address, Twitter handle or similar will do. To start you off and to perhaps get the brain cells whirring, here are my answers to the questions above: Are you putting your SQL Server code into a source control system? As I think I’ve already said…yes. Always. If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? I move around a lot between many clients so it changes on a fairly regular basis; my current client uses Team Foundation Server (aka TFS) and as part of a separate project is trialing the use of Team Foundation Service. I have used SVN extensively in the past which I am a fan of (I generally prefer it to TFS) and am trying to get my head around Git by using it for ObjectStorageHelper. What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? On my current project, Team Explorer. In the past I have used Tortoise to connect to SVN. Why did you make those particular software choices? I generally use whatever the client uses and given that I work with SQL Server I find that the majority of my clients use TFS, I guess simply because they are Microsoft development shops. Any interesting anecdotes to share in regard to your use of source control and SQL Server? Not an anecdote as such but I am going to share some frustrations about TFS. In many ways TFS is a great product because it integrates many separate functions (source control, work item tracking, build agents) into one whole and I’m firmly of the opinion that that is a good thing if for no reason other than being able to associate your check-ins with a work-item. However, like many people there are aspects to TFS source control that annoy me day-in, day-out. 
Chief among them has to be the fact that it uses a file’s read-only property to determine if a file should be checked-out or not and, if it determines that it should, it will happily do that check-out on your behalf without you even asking it to. I didn’t realise how ridiculous this was until I first used SVN about three years ago – with SVN you make any changes you wish and then use your source control client to determine which files have changed and thus be checked-in; the notion of “check-out” doesn’t even exist. That sounds like a small thing but you don’t realise how liberating it is until you actually start working that way. Hoping to hear some more anecdotes and opinions in the comments. Remember….free software is up for grabs! @jamiet 

    Read the article

  • Mercurial hgwebdir configuration URL

    - by Jonathan Sternberg
    I'm setting up an hgwebdir configuration for the first time with Mercurial on apache2. I can see the three repositories I've set up on the front page, and I've figured out how to modify their names so they don't resemble the directory path. But when I click through to one of the repositories, the URL becomes http://localhost/hg/hgweb.cgi/path/to/repos. I would like the URL to be http://localhost/hg/name instead, as that is easier to remember for people who want to clone the repository. Is there any way to do that with hgwebdir?

    Read the article

  • Problem running mercurial against symlinked .hgrc file under Cygwin/Windows 7

    - by emptyset
    This is not a question about handling symlinks in the mercurial repository. I have this setup at work where I keep my dotfiles in a separate directory (.configuration) that I can use to synch my dotfiles between cygwin/windows and linux, then use symlinks instead of dotfiles in the home directory. So, I have the symlink ~/.hgrc -> .configuration/.hgrc in my home directory. After setting this up, mercurial complains thus: $ hg st hg: config error at C:\Users\aaf\.hgrc:1: '!<symlink>ÿþ.configuration/.hgrc' Removing the symlink and replacing it with the actual file works, so the contents of the .hgrc file are not at fault. I can live with that, I suppose, but I'd like to know why this happens. All other tools I've configured the same way work great with symlinked dotfiles.

    Read the article

  • Colour output piped to less

    - by mmacaulay
    Operating system: Mac OS 10.6.2 I'd like to be able to see colour output when piping certain commands through less. Two examples: I've got ls aliased to ls --color=auto, so I'd like to be able to see colour when I do this: ls -l | less I've also got the color extension turned on in Mercurial, so I'd like to see colour output from: hg diff | less and hg st | less After some googling, it seems like some versions of less support either -r or -R to make this work, but no dice for me. I can't see anything in the man page that looks like what I need. (-r or -R SEEM to be the right options, but again, they don't seem to work)

    Read the article

  • JDK8 New Build Infrastructure

    - by kto
    I unintentionally posted this before I verified everything, so once I have verified it all works, I'll update this post. But this is what should work... Most Interesting Builder in the World: "I don't always build the jdk, but when I do, I prefer The New JDK8 Build Infrastructure. Stay built, my friends." So the new Build Infrastructure changes have been integrated into the jdk8/build forest alongside the older Makefiles (newer ones in makefiles/ and older ones in make/). The default is still the older makefiles. Instructions can be found in the Build-Infra Project User Guide. The Build-Infra project's goal is to create the fastest build possible and correct many of the build issues we have been carrying around for years. I cannot take credit for much of this work, and wish to recognize the people who do so much work on this (and will probably still do more); see the New Build Infrastructure Changeset for a list of these talented and hard-working JDK engineers. A big "THANK YOU" from me. Of course, every OS and system is different, and the focus has been on Linux X64 to start, Ubuntu 11.10 X64 in particular. So there is at least a base set of system packages you need. On Ubuntu 11.10 X64, you should run the following after getting into a root permissions situation (e.g. having run "sudo bash"):
    apt-get install aptitude
    aptitude update
    aptitude install mercurial openjdk-7-jdk rpm ssh expect tcsh csh ksh gawk g++ build-essential lesstif2-dev
    Then get the jdk8/build sources:
    hg clone http://hg.openjdk.java.net/jdk8/build jdk8-build
    cd jdk8-build
    sh ./get_source.sh
    Then do your build:
    cd common/makefiles
    bash ../autoconf/configure
    make
    We still have lots to do, but this is a tremendous start. -kto

    Read the article

  • Is git / tortoisegit ready to replace svn / tortoisesvn on windows?

    - by opensas
    I've been using svn thru tortoise and the svn:// protocol on windows for a couple of years and I'm pretty comfortable with it. Nevertheless, there are a couple of features I'd like to have, mainly shelves (something like local commits) and easier branch / merge support. From my experience, svn / tortoise on windows is rock solid stuff. I was wondering if git / tortoisegit has achieved the level of maturity and stability so as to be a replacement for svn on windows. Any experience with it? saludos sas some links: a similar question, only a bit outdated http://stackoverflow.com/questions/1500400/is-tortoisegit-ready-for-prime-time-yet http://stackoverflow.com/questions/931105/tortoisegit-tortoisebzr-tortoisehg-are-any-solid-enough-to-switch-from-tortois (well, it seems like mercurial could be an option) http://code.google.com/p/tortoisegit/ http://www.syntevo.com/smartgit/index.html (free of charge for non-commercial use or to active members of the Open Source community)

    Read the article

  • TortoiseSVN trunk checkout "Server sent unexpected return value (403 Forbidden) in response to OPTIONS"

    - by S Bogdan
    First of all, please excuse my lack of knowledge (I'm new to Tortoise) and my bad English. My problem is that every time I do some operation with a URL like the following one: https://nttt.dttt.com:8443/svn/nttt/Med/trunk I get "Server sent unexpected return value (403 Forbidden) in response to OPTIONS". The user and password I supplied were correct, so no problem there. I don't know where the problem lies; I don't know if it is the server (on which I don't have any control) or my Tortoise client. Thanks in advance for any answer or guidance.

    Read the article

  • How to search SVN repository for a file when I'm not sure where I put it?

    - by Chris Thornton
    A co-worker is sure he checked in a file, foo_oustanding.dpr, but isn't sure when/where (we have lots of "tools" and "utility" ancillary branches, lots of project branches, etc.). I need a way to search the entire repository for this file. I could check the whole source tree out to my HD, but that would take several hours. Is there a faster way? I tried the Repo Browser (Tortoise) and it didn't seem to have a search. I also thought about dumping the log from the beginning of time, but that seemed silly. I have, at my disposal: TortoiseSVN 1.6, Subversion 1.5.6 running on Apache on a Windows 2003 server, and Remote Desktop access to the server with admin rights. Thanks for any ideas.
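
    Since the server is reachable over HTTP, one quick option is to list every path at the head revision with "svn list --recursive" against the repository URL and filter that, which avoids checking anything out. A rough Python sketch (the repository URL is a placeholder; this only searches HEAD, so a file that was later deleted would need "svn log -v" instead):

        # Rough sketch: find a file anywhere in the repository HEAD without a
        # working copy, by filtering the output of "svn list --recursive".
        import subprocess

        REPO_URL = "http://svnserver/svn/myrepo"   # placeholder
        NEEDLE = "foo_oustanding.dpr"

        listing = subprocess.check_output(
            ["svn", "list", "--recursive", REPO_URL],
            universal_newlines=True)

        for path in listing.splitlines():
            if path.endswith(NEEDLE):
                print(REPO_URL + "/" + path)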

    Read the article

  • TortoiseHg: Can commit by command line but not with contextual menu

    - by nicon
    I just installed TortoiseHg (and I'm new to mercurial). I haven't been able to execute any commit with the contextual menu from Tortoise. Every time I try, I get the following error: Commit: Abort: The system cannot find the specified file. I get the error no matter the changes in my repository: new files, modifications to existing files. I also took the time to configure Tortoise as shown here: http://tortoisehg.bitbucket.org/manual/1.0/quick.html (section 3.1). The strange thing is, everything is working well when I'm doing my commit from the command line. What should I look for?

    Read the article

  • Is it possible to create a branch from a tag in TortoiseSVN without first checking out the tag from the repository?

    - by Scott Vierregger
    Our trunk directory contains about 100mb of code and we create tags from the trunk directory. Normally, this is not an issue because a tag takes up no space until you need to use it for something. Since branches are created from tags in SVN, how can I create a branch from a tag without first checking out the tag? It appears I need to do a Tortoise Update from Windows Explorer to get the tag down to my local machine before I can use Tortoise Branch/Tag... to create a branch from it. This seems illogical since we don't make changes to tag folders, and it requires that I check out 100mb of code, only to create a branch, and then check out another 100mb of code in the branch folder, where the changes will actually be made. Ideally, I'd be able to create a branch directly in the repository via RepoBrowser - but I can't see an option for it there. Am I missing something?

    Read the article

  • Are there advantages to using a DVCS for a solo developer?

    - by SnOrfus
    Right now, I use visual svn on my server, and have ankhsvn/tortoise on my personal machine. It works fine enough, and I don't have to change, but if I can see some benefits of using a DVCS, then I might give it a go. However, if there's no point or difference using it without other people, then I won't bother. So again, I ask, are there any benefits to using a DVCS when you're the only developer?

    Read the article

  • Waterfall Model (SDLC) vs. Prototyping Model

    The characters in the fable of the Tortoise and the Hare can easily be used to demonstrate the similarities and differences between the Waterfall and Prototyping software development models. This children's fable is about a race between a consistently slow-moving but steadfast turtle and an extremely fast but unreliable rabbit. After closely comparing each character's attributes against both software development models, a pattern emerges: Waterfall closely resembles the Tortoise, in that the Waterfall Model is typically a slow-moving process broken up into multiple sequential steps that must be executed in a strict linear order. The Tortoise is quoted several times in the story saying "Slow and steady wins the race," which is the perfect mantra for the Waterfall Model, a model often seen as cumbersome and slow moving.

    Waterfall Model Phases

    Requirement Analysis & Definition: This phase focuses on defining the requirements for the project to be developed and determining whether the project is even feasible. Requirements are collected by analyzing existing systems and functionality against the needs of the business and the desires of the end users. The desired output of this phase is a list of specific business requirements to be designed and implemented in the subsequent steps. In addition, this phase is used to determine whether any value will be gained by completing the project.

    System Design: This phase focuses primarily on the architectural design of the system and how it will interact within itself and with other existing applications. Projects at this level should be viewed at a high level so that actual implementation details are decided in the implementation phase; however, major environmental decisions such as hardware and platform choices are typically made here. The basic goal of this phase is to design the application at the system level, where classes, interfaces, and interactions are defined. Decisions about scalability, distribution, and reliability should also be considered throughout. The desired output of this phase is a functional design document that records all of the architectural decisions made for the project, along with diagrams such as sequence and class diagrams.

    Software Design: This phase focuses primarily on refining the decisions found in the functional design document. Classes and interfaces are further broken down into logical modules based on the interfaces and interactions previously defined. The output of this phase is a formal design document.

    Implementation / Coding: This phase focuses primarily on implementing the previously defined modules as units of code. These units are developed independently and integrated as the system is put together into a whole.

    Software Integration & Verification: This phase focuses primarily on testing each of the units of code developed, as well as the system as a whole. The basic types of testing at this stage are unit tests and integration tests. Unit tests are built to exercise the functionality of a code unit and ensure that it performs its desired task. Integration testing tests the system as a whole, focusing on the results of combining specific units of code and validating them against expected results. The output of this phase is a test plan that records each test with its expected and actual results.
    System Verification: This phase primarily focuses on testing the system as a whole against the list of project requirements and the desired operating environment.

    Operation & Maintenance: This phase primarily focuses on handing the completed project over to the customer so that they can verify that all of their original requirements have been met. This phase also validates the correctness of those requirements and whether any changes need to be made. In addition, any problems not resolved in the previous phases are handled here.

    The Waterfall Model's linear and sequential methodology offers a project certain advantages and disadvantages.

    Advantages of the Waterfall Model:
    - Simple to implement and execute, per project or company-wide
    - Limited demand on resources
    - Large emphasis on documentation

    Disadvantages of the Waterfall Model:
    - Completed phases cannot be revisited, regardless of whether issues arise within a project
    - Accurate requirements are never gathered prior to the completion of the requirements phase, due to the lack of clarification of the client's desires
    - Small changes or errors that arise in applications may cause additional problems
    - The client cannot change any requirements once the requirements phase has been completed, leaving them no option to adapt as their desires change
    - Excess documentation
    - Phases are cumbersome and slow moving

    Learn more about the Major Processes in the Software Development Life Cycle and the Waterfall Model.

    Conversely, the Hare shares similar traits with the Prototyping software development model, in that ideas are rapidly converted into basic working examples, and subsequent changes are made to quickly realign the project with the customer's desires as they are formulated and as the software strays from the customer's vision. The basic concept of prototyping is to eliminate reliance on well-defined up-front project requirements. Projects are allowed to grow as the customer's needs and requests grow. Projects are initially designed according to basic requirements and are refined as those requirements become better understood. This process allows customers to feel their way around the application to ensure that they are getting exactly what they want. This model also works well for determining the feasibility of certain approaches to an application, since prototypes allow specific functionality to be implemented quickly and evaluated.

    Advantages of Prototyping:
    - Active participation from users and customers
    - Allows customers to change their minds when specifying requirements
    - Customers get a better understanding of the system as it is developed
    - Earlier bug/error detection
    - Promotes communication with customers
    - The prototype could be used as the final production version
    - Reduced time needed to develop applications compared to the Waterfall method

    Disadvantages of Prototyping:
    - Promotes constantly redefining project requirements, which can cause major system rewrites
    - Potential for increased complexity as the scope of the system expands
    - The customer could mistake the prototype for the finished working version
    - Implementation compromises can increase complexity when applying updates and/or fixes

    When companies are trying to decide between the Waterfall model and the Prototyping model, they need to weigh the benefits and disadvantages of both.
    Typically, smaller companies or projects with major time constraints head for more of a Prototyping approach because it can reduce the time needed to complete the project: the focus is on building the project rather than on defining requirements and scope before it starts. On the other hand, companies with well-defined requirements and the time to generate proper documentation should steer towards more of a Waterfall model, because they are in a position to obtain clarified requirements and to design an optimal solution before coding on the project begins.

    Read the article

  • See number of SVN Checkins per folder

    - by Farseeker
    I have a very large SVN repository (working copy of several gb) that has just reached its 20,000th checkin. As a bit of an interesting statistic for our team (and to partly celebrate our 20,000th checkin) I'd like to make a graph showing which folders in the repository have had the most checkins. Is there any way to do this? We mostly use integrated SVN clients in our IDE and Tortoise SVN, but I'm willing to get other tools for this one-off thing.
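
    One way to get those numbers without checking anything out is to parse "svn log --verbose --xml" for the whole repository and tally the changed paths by top-level folder (this can take a while over 20,000 revisions, but far less than a full checkout). A rough Python sketch, with the repository URL as a placeholder:

        # Rough sketch: count how many commits touched each top-level folder,
        # by parsing the XML output of "svn log -v" for the whole repository.
        import subprocess
        import xml.etree.ElementTree as ET
        from collections import Counter

        REPO_URL = "http://svnserver/svn/myrepo"   # placeholder

        xml_log = subprocess.check_output(
            ["svn", "log", "--verbose", "--xml", REPO_URL],
            universal_newlines=True)

        counts = Counter()
        for entry in ET.fromstring(xml_log).iter("logentry"):
            # Count each revision once per top-level folder it touched.
            folders = {p.text.strip("/").split("/")[0] for p in entry.iter("path")}
            counts.update(folders)

        for folder, n in counts.most_common():
            print(n, folder)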

    Read the article

  • Mercurial hook to disallow committing large binary files

    - by hekevintran
    I want to have a Mercurial hook that will run before committing a transaction that will abort the transaction if a binary file being committed is greater than 1 megabyte. I found the following code which works fine except for one problem. If my changeset involves removing a file, this hook will throw an exception. The hook (I'm using pretxncommit = python:checksize.newbinsize): from mercurial import context, util from mercurial.i18n import _ import mercurial.node as dpynode '''hooks to forbid adding binary file over a given size Ensure the PYTHONPATH is pointing where hg_checksize.py is and setup your repo .hg/hgrc like this: [hooks] pretxncommit = python:checksize.newbinsize pretxnchangegroup = python:checksize.newbinsize preoutgoing = python:checksize.nopull [limits] maxnewbinsize = 10240 ''' def newbinsize(ui, repo, node=None, **kwargs): '''forbid to add binary files over a given size''' forbid = False # default limit is 10 MB limit = int(ui.config('limits', 'maxnewbinsize', 10000000)) tip = context.changectx(repo, 'tip').rev() ctx = context.changectx(repo, node) for rev in range(ctx.rev(), tip+1): ctx = context.changectx(repo, rev) print ctx.files() for f in ctx.files(): fctx = ctx.filectx(f) filecontent = fctx.data() # check only for new files if not fctx.parents(): if len(filecontent) > limit and util.binary(filecontent): msg = 'new binary file %s of %s is too large: %ld > %ld\n' hname = dpynode.short(ctx.node()) ui.write(_(msg) % (f, hname, len(filecontent), limit)) forbid = True return forbid The exception: $ hg commit -m 'commit message' error: pretxncommit hook raised an exception: apps/helpers/templatetags/include_extends.py@bced6272d8f4: not found in manifest transaction abort! rollback completed abort: apps/helpers/templatetags/include_extends.py@bced6272d8f4: not found in manifest! I'm not familiar with writing Mercurial hooks, so I'm pretty confused about what's going on. Why does the hook care that a file was removed if hg already knows about it? Is there a way to fix this hook so that it works all the time? Update (solved): I modified the hook to filter out files that were removed in the changeset. def newbinsize(ui, repo, node=None, **kwargs): '''forbid to add binary files over a given size''' forbid = False # default limit is 10 MB limit = int(ui.config('limits', 'maxnewbinsize', 10000000)) ctx = repo[node] for rev in xrange(ctx.rev(), len(repo)): ctx = context.changectx(repo, rev) # do not check the size of files that have been removed # files that have been removed do not have filecontexts # to test for whether a file was removed, test for the existence of a filecontext filecontexts = list(ctx) def file_was_removed(f): """Returns True if the file was removed""" if f not in filecontexts: return True else: return False for f in itertools.ifilterfalse(file_was_removed, ctx.files()): fctx = ctx.filectx(f) filecontent = fctx.data() # check only for new files if not fctx.parents(): if len(filecontent) > limit and util.binary(filecontent): msg = 'new binary file %s of %s is too large: %ld > %ld\n' hname = dpynode.short(ctx.node()) ui.write(_(msg) % (f, hname, len(filecontent), limit)) forbid = True return forbid

    Read the article

  • Using sub-repo with hgwebdir difficulties in mercurial

    - by Ton
    All right, I got myself into a deadlock with Mercurial and sub-repos... Here's what happened: I had a large mercurial repo that I serve via apache and hgweb.cgi. Due to the size of the repo I decided to move to sub-repositories and share these with hgwebdir.cgi. Using the convert tool with the filemap option I created several sub-repositories: /main/foo /main/bar I then created an entry for the sub-repositories in .hgsub: foo = foo bar = bar And set hgwebdir.cgi up to show $/** as the root folder. Now when I went to my site (foo.com/hg) I saw my sub-repositories with one empty repository among them (no name, no content), but I could not download it (archive location unknown). That was all right until I added a new sub-repository. I could not push the new .hgsub file to foo.com/hg, since that page is served by hgwebdir. The only method that currently works for me is to switch from hgwebdir to hgweb, commit .hgsubste, and switch back to hgwebdir. Does someone have a good setup for such a mess?

    Read the article

  • XAMPP Mercurial installation on Windows Apache --> HgWebDir.cgi Script Error

    - by Tim
    I'm trying to publish multiple existing mercurial repository locations through XAMPP Apache via the CGI Python script hgwebdir.cgi ... as in this tutorial: http://mercurial.selenic.com/wiki/HgWebDirStepByStep I get the following error in the apache error logs when I try to access the repository path with a browser: Premature end of script headers: hgwebdir.cgi [Tue Apr 20 16:00:50 2010] [error] [client 91.67.44.216] Premature end of script headers: hgwebdir.cgi [Tue Apr 20 16:00:50 2010] [error] [client 91.67.44.216] File "C:/hostdir/xampp/cgi-bin/hg/hgwebdir.cgi", line 39\r [Tue Apr 20 16:00:50 2010] [error] [client 91.67.44.216] test = c:/hostdir/mercurial/test/\r [Tue Apr 20 16:00:50 2010] [error] [client 91.67.44.216] ^\r [Tue Apr 20 16:00:50 2010] [error] [client 91.67.44.216] SyntaxError: invalid syntax\r This is the [paths] section that the script fails on (and if I remove it, I get an empty HTML page with no visual elements in it): [paths] test = c:/hostdir/mercurial/test/ /hg = c:/hostdir/mercurial/** / = c:/hostdir/mercurial/ Does anybody have a clue for me?
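
    The traceback hints at the likely cause: line 39 of hgwebdir.cgi is "test = c:/hostdir/mercurial/test/", i.e. the ini-style [paths] lines appear to be inside the Python CGI script itself, where they are a SyntaxError. In the setup from that tutorial, the [paths] section belongs in a separate config file (commonly hgweb.config) that the script points at. A hedged sketch of what the CGI script itself looks like for Mercurial releases of that era (the config path is a placeholder):

        # Hedged sketch of hgwebdir.cgi: the script stays pure Python and only
        # points at a separate ini-style config file holding the [paths] section.
        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb.hgwebdir_mod import hgwebdir
        import mercurial.hgweb.wsgicgi as wsgicgi

        # hgweb.config contains [paths] entries such as "test = c:/hostdir/mercurial/test/".
        application = hgwebdir("c:/hostdir/xampp/cgi-bin/hg/hgweb.config")  # placeholder path
        wsgicgi.launch(application)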

    Read the article

  • How many repositories should I use to maintain my scripts under version control?

    - by romandas
    I mainly code small programs for myself, but recently, I've been starting to code for my peers on my team. To that end, I've started using a Mercurial repository to maintain my code in some form of version control (specifically, Tortoise-Hg on Windows). I have many small scripts, each in their own directory, all under one repository. However, while reading Joel's Hg Tutorial, I tried cloning a directory for one of my bigger scripts to create a "stable" version and found I couldn't do it because the directory wasn't itself a repository. So, I assume (and please correct me if I'm mistaken) that in order to use cloning properly, I'd have to create a repository for each script/directory. But.. would that be a "good idea" or a future maintenance nightmare waiting to happen? Succinctly, do I keep all my (unrelated) scripts in one repository, or should I create a repository for each? Or some unknown third option?

    Read the article

  • Mercurial: can't host on BitBucket.org with an error SSH, OpenSSH?

    - by raychenon
    For a new project, I created a new repo inside the project's folder. This is the first time I've seen this error. I'm following this guide, section 3.6 "Share the repository": http://tortoisehg.bitbucket.org/manual/1.0/quick.html With the destination path https://bitbucket.org/$myaccount/$myrepo I get this: abort: cannot create new http repository [command interrupted] On the command line I do the equivalent: hg push https://bitbucket.org/$myaccount/$myrepo error SSH-2.0-OpenSSH_5.3 Previously I cloned an HG project on bitbucket.org with no problem. I also changed the Global Settings without any results: Proxy Host: https://bitbucket.org/$myaccount user: password:

    Read the article

  • php proxy to local mercurial server

    - by naugtur
    I was wondering whether it is possible to create a PHP proxy to a server that listens only locally, so that the PHP gateway is public and directs everything to the server listening on localhost. This server would be mercurial's hg serve, which listens only on 127.0.0.1, and PHP would do the authentication. Do you think it's possible? Does anybody have an idea how to make a general proxy in PHP so that not only HTTP GET works, but also hg push? I know there are ways to host a mercurial repo with auth, but it's on a plug computer and I don't have a lot of space for more apps, etc.

    Read the article

  • set current user in asp.net mvc

    - by Tomh
    Hey guys, I'm not sure if this is the best way to do it, but I want to keep a user object alive during all requests of the current user. From reading several resources I learned that you should create your own IPrinciple which holds this. But I don't want to trigger the database every authentication request. Any recommendations on how to handle this? Is caching the db request a good idea? protected void Application_AuthenticateRequest(Object sender, EventArgs e) { HttpCookie authCookie = Request.Cookies[FormsAuthentication.FormsCookieName]; if (authCookie != null) { FormsAuthenticationTicket authTicket = FormsAuthentication.Decrypt(authCookie.Value); User user; using (HgDataContext hg = new HgDataContext()) { if (Session["user"] != null) { user = (from u in hg.Users where u.EmailAddress == authTicket.Name select u).Single(); } else { user = Session["user"] as User; } } var principal = new HgPrincipal(user); Context.User = principal; } }

    Read the article

< Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >