Search Results

Search found 2798 results on 112 pages for 'pull'.


  • Trying to create a git repo that does an automatic checkout every time someone updates origin

    - by Dane Larsen
    Basically, I have a server with a git repo 'origin'. I'm trying to have another repo auto-pull from origin every time someone pushes code to it. I've been using the hooks in origin, specifically post-receive. So far, my post-receive hook looks something like this:

        #!/bin/sh
        GIT_DIR=/home/<user>/<test_repo> git pull origin master

    But when I push to origin from another computer, I get the error:

        remote: fatal: Not a git repository: '/home/<user>/<test_repo>'

    However, test_repo most definitely is a git repo. I can cd into it and run 'git pull origin master' and it works fine. Is there an easier way to do what I'm trying to do? If not, what am I doing wrong with this approach? Thanks in advance. Edit, to clarify: the repo is a website in progress, and I'd like to have a fully up-to-date version of it available at all times.
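    For what it's worth, a minimal sketch of a hook that sidesteps this error (hedged; /home/<user>/<test_repo> stands in for the real working copy, as in the question): git runs hooks with GIT_DIR already set to the bare repository, so a plain cd is not enough, and pointing GIT_DIR at the work tree itself is exactly what produces "Not a git repository". Clearing GIT_DIR before pulling usually fixes it:

        #!/bin/sh
        # post-receive: refresh a checked-out copy after every push.
        unset GIT_DIR                    # hooks inherit GIT_DIR; clear it before leaving the bare repo
        cd /home/<user>/<test_repo> || exit 1
        git pull origin master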

    Read the article

  • Windows 7 Ult machine can see XP Pro but not vice versa

    - by Chadworthington
    My new Windows 7 Ult. PC can attach to my work laptop and pull files, but my work laptop cannot find my Windows 7 PC on the network. Is there some sort of limitation going on here? Before I got the new Windows 7 machine, my old XP Pro PC could pull files from the XP Pro laptop, but not vice versa. The common thread seems to be that the work laptop cannot see other PCs, Windows 7 or not. Could it be because that PC is on a work domain? When I pull files from the work PC, I am prompted for domain credentials, which I provide.

    Read the article

  • Why is my nginx alias not working?

    - by Rob
    I'm trying to set up an alias so that when someone accesses /phpmyadmin/, nginx will pull it from /home/phpmyadmin/ rather than from the usual document root. However, every time I pull up the URL, it gives me a 404 on all items not served through fastcgi. fastcgi seems to be working fine, whereas the rest is not. strace is telling me nginx is trying to pull everything else from the usual document root, yet I can't figure out why. Can anyone provide some insight? Here is the relevant part of my config:

        location ~ ^/phpmyadmin/(.+\.php)$ {
            include fcgi.conf;
            fastcgi_index index.php;
            fastcgi_pass unix:/tmp/php-cgi.sock;
            fastcgi_param SCRIPT_FILENAME /home$fastcgi_script_name;
        }

        location /phpmyadmin {
            alias /home/phpmyadmin/;
        }
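    One configuration worth trying (a guess without seeing the full server block, so treat it as a sketch): if any regex location elsewhere in the config matches static extensions, it wins over the /phpmyadmin prefix block and serves from the document root, which would match the strace output. A ^~ prefix match stops regex locations from overriding it, with the PHP handling nested inside so fastcgi keeps working:

        location ^~ /phpmyadmin/ {
            alias /home/phpmyadmin/;

            location ~ \.php$ {
                include fcgi.conf;
                fastcgi_index index.php;
                fastcgi_pass unix:/tmp/php-cgi.sock;
                fastcgi_param SCRIPT_FILENAME /home$fastcgi_script_name;
            }
        }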

    Read the article

  • Integrating different branches from external sources into a single Mercurial repository

    - by dukeofgaming
    I'm currently working in a company using Perforce, and am making way for distributed version control with Mercurial. I've had success importing Perforce history using perfarce (quite a suitable name, I laugh every time I see/say it); however, this only works with a single branch at a time. Here's how my P4 integration setup works:

        1. In Perforce, create a "client", which is kind of a description of what
           you will be constantly updating/checking out. This can only address
           one branch at a time (trunk or other).
        2. Once you do this, run: hg clone p4://<server>/<client_name>
        3. Go to .hg/hgrc and put in the perforce path line:
           perforce = p4://<server>/<client_name>
        4. Work normally with the code under Mercurial; do hg pull perforce to
           sync up, and hg push to export a changelist.

    What I'd like to be able to do is have a perforce path per branch and have everything work in the same repository. Now, pushing is not a problem; however, if I pull the history from another branch, it would end up on the default branch. I'd like to be able to do something like hg pull perforce-R5 and have it land in Mercurial's R5 branch. Even if I have no merging history, it would be sweet enough to be able to preserve it. There are also other plugins for CVCSs that let you integrate Mercurial, but AFAIK the Subversion one has the same problem. I don't think there is a straight-through way of doing this, but as long as I could automate the process with some hooks and scripts on a single Mercurial machine, that would be good enough.
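    For concreteness, the [paths] section such a setup implies would look like this (a hypothetical sketch; named paths are standard Mercurial, but whether perfarce will route each one onto its own named branch is exactly the open question here):

        [paths]
        perforce    = p4://<server>/<client_trunk>
        perforce-R5 = p4://<server>/<client_R5>

    with the hoped-for usage being that hg pull perforce-R5 lands on the R5 branch rather than default.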

    Read the article

  • Continuous Integration using Docker

    - by Leon Mergen
    One of the main advantages of Docker is the isolated environment it brings, and I want to leverage that advantage in my continuous integration workflow. A "normal" CI workflow goes something like this:

        1. Poll repository for changes
        2. Pull from repository
        3. Install dependencies
        4. Run tests

    In a Dockerized workflow, it would be something like this:

        1. Poll repository for changes
        2. Pull from repository
        3. Build Docker image
        4. Run Docker image as container
        5. Run tests
        6. Kill Docker container

    My problem is with the "run tests" step: since Docker is an isolated environment, intuitively I would like to treat it as one; this means the preferred method of communication is sockets. However, this only works well in certain situations (a webapp, for example). When testing a different kind of service (for example, a background service that only communicates with a database), a different approach would be required. What is the best way to approach this problem? Is it a problem with my application's design, and should I design it in a more TDD, service-oriented way that always listens on some socket? Or should I just give up on isolation, and do something like this:

        1. Poll repository for changes
        2. Pull from repository
        3. Build Docker image
        4. Run Docker image as container
        5. Open SSH session into container
        6. Run tests
        7. Kill Docker container

    SSH'ing into the container seems like an ugly solution to me, since it requires deep knowledge of the contents of the container, and thus breaks the isolation. I would love to hear SO's different approaches to this problem.
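    A middle ground that avoids sshd entirely (a sketch, assuming a Docker version with docker exec available, i.e. 1.3 or later; the image name and test script are hypothetical):

        # Build the image and start the service under test
        docker build -t myapp-ci .
        CID=$(docker run -d myapp-ci)

        # Run the test suite inside the already-running container
        docker exec "$CID" ./run_tests.sh

        # Tear down
        docker kill "$CID" && docker rm "$CID"

    This keeps the container isolated while still executing the tests inside it, without baking an SSH server into the image.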

    Read the article

  • Visual Basic link to SQL output to Word

    - by CLO_471
    I am in need of some advice/references. I am currently trying to develop a legal document interface. There are certain fields which I need to query out of my SQL db, and I need those fields output into a document that can be printed. I am trying to develop a user interface where people can enter fields that will output to a document template, but at the same time I need the template to be able to pull data from the SQL database. This is the reason why I think that VB might be my best choice, and because it is one of the only OOP languages I am familiar with presently. Does anyone know the best way to handle this type of job? I know that you can use VBA within MS Word and have the form output variables to a Word template. But is there a way to have the Word document also pull information from the SQL db? Is the best option to use VB linked to SQL, run queries to get the information from the database, and then have it output to a form within VB? Is it possible for VB to be linked to a SQL db and output variables and SQL fields to a Word template? I have looked into Mail Merge, and I see that it allows users to pull data from an Access query, but I don't think it would be easy to automate, and it seems that users would need advanced knowledge of MS Word and Access to handle this. I am not finding much useful information online, so I came here. Any advice or references would be greatly appreciated. If there is a better way, please let me know.

    Read the article

  • Release Notes for 6/14/2012

    Here are the notes for this week's release:

    Diffs in Pull Requests and Commits

    We altered the way we display diffs across commits and pull requests to maximize the amount of vertical real estate devoted to the diff. Before, the viewport for diffs was always snapped to the height of the browser, which meant that on lower resolutions, the amount of space for viewing diffs could become very tiny. Now, the majority of the browser's vertical space is devoted to viewing the diffs. Let us know what you think!

    Bug Fixes

    - Fixed an issue where returning to the list of files changed from a diff would sometimes not show the list of files.
    - Fixed the dialogs for approving and denying requests to join projects.
    - Fixed various issues around validation of project details when publishing a project.
    - Fixed an issue that caused the formatting of our tabs in pull requests to not display properly.
    - Fixed an issue where users browsing Unicode files in a Git project would see error pages.
    - Fixed various issues where the option to subscribe to notifications would not appear properly.

    Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • Mercurial Remote Subrepos

    - by Travis G
    I'm trying to set up my Mercurial repository system to work with multiple subrepos. I've basically followed these instructions to set up the client repo with Mercurial client v1.5, and I'm using HgWebDir to host my multiple projects. I have an HgWebDir with the following structure:

        http://myserver/hg
            fooproj
            mylib

    where mylib is some collection of common template library to be consumed by fooproj. The structure of fooproj looks like this:

        fooproj
            doc/
            src/
            .hgignore
            .hgsub
            .hgsubstate

    And .hgsub looks like:

        src/mylib = http://myserver/hg/mylib

    This should work, per my interpretation of the documentation: "The first 'nested' is the path in our working dir, and the second is a URL or path to pull from." So, let's say I pull down fooproj to my home folder with:

        ~$ hg clone http://myserver/hg/fooproj foo

    which pulls down the directory structure properly and adds the folder ~/foo/src/mylib, which is a local Mercurial repository. This is where the problems begin: the mylib folder is empty aside from the items in .hg. With 2 seconds of investigation, one can see that src/mylib/.hg/hgrc is:

        [paths]
        default = http://myserver/hg/fooproj/src/mylib

    which is completely wrong (attempting a pull from that repo will give a 404 because, well, that URL doesn't make any sense). Logically, the default value should be what I specified in .hgsub, or it would get the files from the repository in some way. None of the Mercurial commands return error codes (aside from a pull from within src/mylib), so it clearly believes that it is behaving properly (and just might be), although this does not seem logical at all. What am I doing wrong?

    Read the article

  • How to remove an accidental branch in TortoiseHg?

    - by msorens
    (I am a relative newcomer to TortoiseHg, so bear with me :-) I use TortoiseHg on two machines to talk to my remote source repository. I made changes on one machine, committed them, and attempted to push them to the remote repository, BUT I forgot to first do a pull to get the latest code. The push gave me a few lines of output suggesting I may have forgotten to pull first (true!), and mentioned something like "abort: push creates new remote branches...". So I did a pull, which added several nodes at the head of my graph in the repository explorer. The problem is that the push I tried to do is now showing as a branch in the repository explorer. Looking from the server side (CodePlex), it shows no sign of my attempted push, indicating this accidental branch is still local to my machine. How could I remove this accidental branch? I tried selecting that node in the graph and then doing "revert", but it did not seem to do anything. I am wondering if it would be simplest to just discard my directory tree on my local machine and do a completely new, clean pull from the server...?
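    Since the changesets are still local-only (as the CodePlex side confirms), one common route is Mercurial's strip command; a hedged sketch, assuming an older Mercurial where strip ships with the mq extension (REV is a placeholder for the first changeset of the unwanted branch):

        # Enable the extension once, in ~/.hgrc or Mercurial.ini
        [extensions]
        mq =

        # Then, from the repository root:
        hg strip REV

    strip saves the removed changesets as a backup bundle under .hg/strip-backup, so the operation can be undone with hg unbundle if needed.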

    Read the article

  • Amazon Product API ResponseGroups and Default results

    - by aboxy
    A. In our application, most of the data we work with is stored as free text, i.e. there is no categorization done as of now. We are using openNLP libraries to make sense of the data (extract keywords/classify) and do a query to Amazon web services to pull the results of the query. We use searchindex=All and keywords=. Results are not always returned, and we basically get 'AWS.ECommerceService.NoExactMatches'. How to avoid that?

    1) Is there a way to specify default results if no match is found? E.g. the Amazon carousel widget does that: if the search query does not return results, it basically shows some computer items.

    2) Should I always batch the request and add another search criterion to every request? If my first criterion does not pull any results, we can be sure that our 2nd query will always pull results (possibly caching?).

    Here is one search criterion:

        'Open Circle Hoop Earrings Polished Stainless Steel Open Circle Hoop Earrings Polished Stainless Steel DiamondShark'

    This returns no results via the API. On the Amazon site, I get alternative suggestions with some results which are pretty relevant. Is there a way to pull those results?

    B. We just need a thumbnail image, a title, and a description for our app. Which responseGroup is appropriate? We are using Medium right now, but there is an awful lot of information even with that responseGroup. Any help is appreciated. Thanks.

    Read the article

  • Ant: simplest way to copy over a relative list of files

    - by Derek Illchuk
    Using Ant, I want to copy a list of files from one project to another, where each project has the same directory structure. Is there a way to get the following to work?

        <project name="WordSlug" default="pull" basedir=".">
          <description>
            WordSlug: pull needed files
          </description>
          <property name="prontiso_home" location="../../prontiso/trunk"/>
          <!-- I know this doesn't work, what's the missing piece? -->
          <target name="pull" description="Pull needed files">
            <copy todir="." overwrite="true">
              <resources>
                <file file="${prontiso_home}/application/views/scripts/error/error.phtml"/>
                <file file="${prontiso_home}/application/controllers/CacheController.php"/>
                <!-- etc. -->
              </resources>
            </copy>
          </target>
        </project>

    Success is deriving the paths automatically:

        ${prontiso_home}/application/views/scripts/error/error.phtml copied to ./application/views/scripts/error/error.phtml
        ${prontiso_home}/application/controllers/CacheController.php copied to ./application/controllers/CacheController.php

    Thanks!
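    A sketch of the usual Ant idiom for this (hedged only in that it assumes your layout matches the question): a <fileset> rooted at ${prontiso_home} preserves each file's path relative to its dir attribute when copied, which is exactly the derivation wanted here and which plain <file> resources don't do:

        <target name="pull" description="Pull needed files">
          <copy todir="." overwrite="true">
            <fileset dir="${prontiso_home}">
              <include name="application/views/scripts/error/error.phtml"/>
              <include name="application/controllers/CacheController.php"/>
              <!-- etc. -->
            </fileset>
          </copy>
        </target>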

    Read the article

  • Keeping third-party libraries under a Mercurial project: Sub-repos or not?

    - by fraktal
    Hello, we are developing a closed-source project, versioned with Mercurial. We are using two libraries in our project:

    - One of those libraries is being developed by a third party. They are using git, and we usually just pull from their repo once a week to get the latest changes.
    - The other library is being developed by ourselves and is under active development. It must live in its own public Mercurial repository, as it is licensed under the LGPL. (It's a fork of a third-party LGPL component, ported to our platform.)

    So my question is: how should I organize the source to ensure that:

    - A developer from our team is able to get all the source (main project + libraries) with a single "clone" command
    - We are able to pull the latest changes from the libraries easily, even though one of them is managed by git

    Should we use Mercurial's sub-repos functionality, with hg-git to access the library under git? Is it well supported by TortoiseHg and BitBucket? (pros: easy to pull library changes / cons: does it work well?) Or should we keep only snapshots of the libraries under our project, so that when there are new upstream changes in the libraries, we pull them to a separate place and then copy the whole source to our project? (pros: will work / cons: pain in the ass, especially for the library that is being developed by ourselves, which is subject to a lot of daily changes)
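    For reference, a .hgsub mixing the two kinds could look like this (a sketch, assuming a Mercurial recent enough for native git subrepo support, roughly 1.8+; all paths and URLs here are made up):

        libs/ourlib   = http://hg.example.com/ourlib
        libs/theirlib = [git]git://github.com/them/theirlib.git

    A single hg clone of the main project then checks out both subrepos.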

    Read the article

  • Python threading question (Working with a method that blocks forever)

    - by Nix
    I am trying to wrap a thread around some receiving logic in Python. Basically we have an app that will have a thread in the background polling for messages. The problem I ran into is that the piece that actually pulls the messages waits forever for a message, making it impossible to terminate. I ended up wrapping the pull in another thread, but I wanted to make sure there wasn't a better way to do it. Original code:

        class Manager:
            def __init__(self):
                receiver = MessageReceiver()
                receiver.start()
                #do other stuff...

        class MessageReceiver(Thread):
            receiver = Receiver()

            def __init__(self):
                Thread.__init__(self)

            def run(self):
                #stop is a flag that i use to stop the thread...
                while(not stopped): #can never stop because pull below blocks
                    message = receiver.pull()
                    print "Message" + message

    What I refactored to:

        class Manager:
            def __init__(self):
                receiver = MessageReceiver()
                receiver.start()

        class MessageReceiver(Thread):
            receiver = Receiver()

            def __init__(self):
                Thread.__init__(self)

            def run(self):
                pullThread = PullThread(self.receiver)
                pullThread.start()
                #stop is a flag that i use to stop the thread...
                while(not stopped and pullThread.last_message == None):
                    pass
                message = pullThread.last_message
                print "Message" + message

        class PullThread(Thread):
            last_message = None

            def __init__(self, receiver):
                Thread.__init(self, target=get_message, args=(receiver))

            def get_message(self, receiver):
                self.last_message = None
                self.last_message = receiver.pull()
                return self.last_message

    I know the obvious locking issues exist, but is this the appropriate way to control a receive thread that waits forever for a message? One thing I did notice was that this thing eats 100% CPU while waiting for a message... If you need to see the stopping logic, please let me know and I will post it.
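    One common way out of both the unstoppable thread and the 100% CPU spin (a hedged sketch, not a drop-in replacement; Receiver/pull() is a stand-in for the blocking source in the question): make the pulling thread a daemon that feeds a Queue, and have the consumer block on the queue with a timeout so it can periodically re-check its stop flag:

        import threading
        try:
            import queue              # Python 3
        except ImportError:
            import Queue as queue     # Python 2

        class PullThread(threading.Thread):
            """Blocks forever on receiver.pull(), handing messages to a queue."""
            def __init__(self, receiver, out_queue):
                threading.Thread.__init__(self)
                self.daemon = True    # dies with the process, even while blocked
                self.receiver = receiver
                self.out_queue = out_queue

            def run(self):
                while True:
                    self.out_queue.put(self.receiver.pull())

        class MessageReceiver(threading.Thread):
            def __init__(self, receiver):
                threading.Thread.__init__(self)
                self.receiver = receiver
                self._stopped = threading.Event()

            def run(self):
                messages = queue.Queue()
                PullThread(self.receiver, messages).start()
                while not self._stopped.is_set():
                    try:
                        # Wake once a second to re-check the stop flag;
                        # no busy-wait, so no pegged CPU.
                        message = messages.get(timeout=1.0)
                    except queue.Empty:
                        continue
                    print("Message " + message)

            def stop(self):
                self._stopped.set()

    The daemon flag means the blocked pull can never keep the process alive, and the timeout on get() replaces the spinning while loop.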

    Read the article

  • Laptop Hard Drive Cable

    - by Josh Pennington
    I am about to have to dig into my wife's laptop to pull the hard drive out and copy the data off, since the rest of it is close to dying. I am curious what I should expect to have to deal with to get the data off the hard drive. I am assuming it's a SATA drive. Do I need a special cable to connect a laptop SATA hard drive to the computer, or should I be able to connect it to my desktop computer without issue? Thanks, Josh Pennington

    Read the article

  • Rackspace Cloud Sites API (Not Cloud Servers)

    - by Jeff
    I'm looking for a way to pull data from my Rackspace Cloud SITES account. The data I want to pull is bandwidth, disk space, and compute cycles (all available from the control panel). I'd like to set up my own warning system, to be notified if I'm close to my limits in any given month. Does anyone know of a way/API to do this?

    Read the article

  • How to deploy code using git

    - by luckytaxi
    I have a bare repository initialized on my webserver. I code on my workstation, and I'm able to push/pull/commit changes to it. Now I want to deploy this code on the webserver, but into my Apache directory (/var/www/html). If I want to track the changes (pull only, no push) from this directory, should I clone the repo first? This way I can make changes locally, then commit and push to the "central repo", and then pull from the documentroot folder to see the changes in "prod."
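    That is the usual shape of it; a minimal sketch, assuming the bare repo lives at a path like /home/git/project.git (hypothetical):

        # One-time: clone the bare repo into the Apache docroot
        git clone /home/git/project.git /var/www/html

        # Workstation: commit and push to the central repo as usual
        git push origin master

        # Server: deploy by pulling into the docroot clone
        cd /var/www/html && git pull

    (git clone will refuse to run into an existing non-empty /var/www/html, so start with an empty docroot or clone elsewhere and move the contents.)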

    Read the article

  • What user runs the git hook?

    - by Jasie
    I have a post-update hook on my server, such that when I git push, it does a pull on the live web directory. However, while the push always succeeds, the post-update hook sometimes fails. The hook is pretty simple:

        #!/bin/sh
        #
        # An example hook script to prepare a packed repository for use over
        # dumb transports.
        #
        # To enable this hook, rename this file to "post-update".

        cd /var/www
        env -i git pull

    I'm pushing updates from a variety of places, but sometimes I have to log in as root on the server and manually do a

        env -i git pull

    I only have to do it 20% of the time though. Any ideas why it would fail randomly? Thanks!

    Read the article

  • Pulling data from a text file to generate a report

    - by Edmond
    I have a program in Access, using VBA. I need to come up with an If statement to pull data from a text file. The data is a list of procedures and prices. I have to pull the prices from the text file to show in a report how much each procedure costs. The table in the text file is set up like this:

        ID  PID                   M1    M2    M3    Total
        1   11120390 (procedure)
        2   180 (price)           360   180   540   1080 (total price)
        3                         2     1     3     6 (units sold)
        4
        5   200 (price)           200   600   800   1600 (total price)
        6                         1     3     4     8 (units sold)
        7   11120390 (procedure)

    I need to pull the procedure number and the price of each procedure from the text file.

    Read the article

  • Mirror a git repository by pulling?

    - by corydoras
    I am wondering if there is an easy way, i.e. something like a simple cron job, to regularly pull from a remote git repository to a local read-only mirror for backup purposes. Ideally it would pull all branches and tags, but the master/trunk/head would be sufficient. I just need a way to make sure that if the master git server dies, we have a backup location that we could manually fail over to. (I have been googling and reading documentation for help on how to do this for quite some time now, and the furthest I have gotten is a bash script that does pulls at a regular interval.)
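    A sketch of the standard recipe (paths and URL are placeholders): clone once with --mirror, which sets up a bare copy of every branch and tag, then refresh it from cron:

        # One-time setup of the mirror
        git clone --mirror git://example.com/project.git /backups/project.git

        # Cron entry: refresh every 10 minutes
        */10 * * * *  git --git-dir=/backups/project.git remote update --prune

    Failing over is then a matter of cloning from /backups/project.git (or repointing origin at it), since a mirror clone is a complete bare repository.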

    Read the article

  • git: rename remote branch

    - by Albert
    I have the branch master, which tracks the remote branch origin/master. I want to rename them to master-old, both locally and remotely. Is that possible? For other users who tracked origin/master (and who always updated their local master branch just via 'git pull'), what would happen after I renamed the remote branch? Would their 'git pull' still work, or would it throw an error that it couldn't find origin/master anymore? Then, further on, I want to create a new master branch (both locally and remotely). Again, after I did this, what would happen now if the other users do a 'git pull'? I guess all this would result in a lot of trouble. Is there a clean way to get what I want? Or should I just leave master as it is, create a new branch master-new, and just work there further on?
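    For the mechanical half of the question, a hedged sketch (this does nothing to soften the impact on other users, who would still have to repoint their local branches; <start-point> is whatever the new master should begin from):

        git branch -m master master-old       # rename the local branch
        git push origin master-old            # publish the renamed branch
        git push origin :master               # delete the old remote master
        git checkout -b master <start-point>  # create the new local master
        git push origin master                # publish it

    Between the delete and the re-push, other users' 'git pull' would complain that origin/master is gone, and their local master would not follow the rename automatically.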

    Read the article

  • Git: What is a tracking branch?

    - by jerhinesmith
    Can someone explain a "tracking branch" as it applies to git? Here's the definition from git-scm.com:

        A 'tracking branch' in Git is a local branch that is connected to a remote
        branch. When you push and pull on that branch, it automatically pushes and
        pulls to the remote branch that it is connected with. Use this if you
        always pull from the same upstream branch into the new branch, and if you
        don't want to use "git pull" explicitly.

    Unfortunately, being new to git and coming from SVN, that definition makes absolutely no sense to me. I'm reading through "The Pragmatic Guide to Git" (great book, by the way), and they seem to suggest that tracking branches are a good thing: after creating your first remote (origin, in this case), you should set up your master branch to be a tracking branch. Unfortunately, the book doesn't cover why a tracking branch is a good thing or what benefits you get by setting up your master branch to be a tracking branch of your origin repository. Can someone please enlighten me (in English)?
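    In practice the connection looks like this (a small sketch; the URL and branch names are examples only):

        # Cloning sets up 'master' to track 'origin/master' automatically
        git clone git://example.com/project.git

        # Create a local branch that tracks an existing remote branch
        git checkout --track -b feature origin/feature

        # On a tracking branch, a bare 'git pull' or 'git push' already knows
        # which remote branch it belongs to:
        git pull          # instead of: git pull origin feature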

    Read the article

  • Error in pulling a trace file from the sdcard

    - by con_9
    Hi, can anybody help me out with Traceview? I created an sdcard image and mounted it on my emulator. After closing my app, when I try to pull the files to copy them (using the command: adb pull /sdcard/calc.trace /tmp), I get this error: "remote object /sdcard/calc.trace does not exist". I am listing down the commands in sequence:

        1) f:\> mksdcard 1024 ./myimage
        2) f:\> emulator -sdcard ./myimage -avd 1
        3) (After running my app and exiting the app) adb pull /sdcard/calc.trace /tmp

    In my activity I have start/stop method tracing with the filename calc.trace. And yes, I am running Windows. Regards

    Read the article

  • How can I remove my last commit in my local git repository?

    - by michael
    Hi, this is the output of my 'git log':

        commit 7826b25db424b95bae9105027edb7dcbf94d6e65
        commit 5d1970105e8fd2c7b30c232661b93f1bcd00bc96

    When I do a 'git pull', the top commit causes a conflict, so I did a 'git rebase --abort'. My question is: can I 'save' my commit to a patch and then do a git pull? I want to emulate the situation where:

    - I did not do a git commit, but did a 'git stash' instead
    - then did a git pull, so that I would not get any merge error

    So I need to somehow 'turn back the clock'. Is that possible in git?
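    A hedged sketch of the save-as-patch approach (assuming the top commit from the log above is the only local, unpushed one):

        git format-patch -1 HEAD    # write the last commit out as 0001-*.patch
        git reset --hard HEAD~1     # drop it from the local branch
        git pull                    # should now apply cleanly
        git am 0001-*.patch         # replay the saved commit on top

    (git stash itself only covers uncommitted changes, which is why it can't be used once the commit already exists.)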

    Read the article
