Search Results

Search found 686 results on 28 pages for 'hostile fork'.


  • How to force a rebase when the same changes were applied to both branches manually?

    - by Dmitry
    My repository looks like:

        X - Y - A - B - C - D - E      branch:master
             \   \    \       \
              \   \    (merges: master -> release)
               \   \    \       \
                M ---- BCDE ---- N     branch:release

    Here "M - BCDE - N" are manually (unfortunately!) applied changes that are approximately the same as the separate commits "A - B - C - D - E" (but Git does not seem to know that these changes are the same). I'd like to rebase and get the following structure:

        X - Y - A - B - C - D - E      branch:master
                                 \
                                  *    branch:release

    I.e. I want to make branch:release exactly the same as branch:master, forked from master's HEAD. But when I run "git rebase master" sitting on branch:release, Git reports lots of conflicts and refuses to rebase. How can I solve this? Another way to explain it: I'd like to "re-create" branch:release from scratch from master's HEAD. And a lot of other people have already run "git pull" on branch:release, so I cannot use git reset + git push -f.
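
    One hedged sketch of a way out (branch names as above; this is a common recipe, not the poster's own): since history on branch:release cannot be rewritten once others have pulled it, add a normal merge commit whose tree is exactly master's - an emulated "theirs" merge:

        git checkout release
        git merge -s ours --no-commit master   # open a merge commit, taking none of master's changes yet
        git read-tree -u --reset master        # then replace index and work tree with master's tree
        git commit -m "Reset release to match master"

    Everyone on branch:release can then pull this as an ordinary merge, with no forced push.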

    Read the article

  • How do I branch an individual file in SVN?

    - by Michael Carman
    The Subversion concept of branching appears to be focused on creating an [un]stable fork of the entire repository on which to do development. Is there a mechanism for creating branches of individual files? For a use case, think of a common header (*.h) file that has multiple platform-specific source (*.c) implementations. This type of branch is a permanent one. All of these branches would see ongoing development, with occasional cross-branch merging. This is in sharp contrast to unstable-development/stable-release branches, which generally have a finite lifespan. I do not want to branch the entire repository (cheap or not), as it would create an unreasonable amount of maintenance to continuously merge between the trunk and all the branches. At present I'm using ClearCase, which has a different concept of branching that makes this easy. I've been asked to consider transitioning to SVN, but this paradigm difference is important. I'm much more concerned about being able to easily create alternate versions of individual files than about things like cutting a stable release branch.
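
    A sketch of how this can look in Subversion, which branches by copying paths - any path, including a single file (paths here are illustrative; the ^/ syntax needs svn 1.6+):

        # branch only the header, not the whole tree
        svn copy ^/trunk/include/common.h ^/branches/win32/common.h \
            -m "Permanent win32 fork of common.h"

        # later, merge selected changes between the two files
        svn merge ^/branches/win32/common.h include/common.h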

    Read the article

  • Strategy for developing namespaced and non-namespaced versions of same PHP code

    - by porneL
    I'm maintaining a library written for PHP 5.2, and I'd like to create a PHP 5.3-namespaced version of it. However, I'd also like to keep the non-namespaced version up to date until PHP 5.3 becomes so old that even Debian stable ships it ;) I've got rather clean code: about 80 classes following the Project_Directory_Filename naming scheme (I'd change them to \Project\Directory\Filename, of course) and only a few functions and constants (also prefixed with the project name). The question is: what's the best way to develop namespaced and non-namespaced versions in parallel? Should I just create a fork in the repository and keep merging changes between branches? Are there cases where backslash-sprinkled code becomes hard to merge? Should I write a script that converts the 5.2 version to 5.3, or vice versa? Should I use the PHP tokenizer? sed? The C preprocessor? Is there a better way to use namespaces where available and keep backwards compatibility with older PHP?
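
    For the tokenizer route, a crude one-way sketch of a 5.2-to-5.3 pass (it assumes every class follows the Project_Directory_Filename scheme above, and it only rewrites references - it does not add namespace declarations or touch class names inside strings):

        <?php
        // Usage: php convert.php src/File.php  (writes src/File.php.ns)
        $src = file_get_contents($argv[1]);
        $out = '';
        foreach (token_get_all($src) as $t) {
            if (is_array($t) && $t[0] === T_STRING && strpos($t[1], 'Project_') === 0) {
                // Project_Directory_Filename -> \Project\Directory\Filename
                $out .= '\\' . str_replace('_', '\\', $t[1]);
            } else {
                $out .= is_array($t) ? $t[1] : $t;
            }
        }
        file_put_contents($argv[1] . '.ns', $out);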

    Read the article

  • What does this code do?

    - by bstullkid
    It looks like this just sends a ping, but what's the point of that when you can just use ping?

        int main(int argc, char *argv[])
        {
            unsigned int pid = 0;
            char buffer[2];
            char *args[] = { "/bin/ping", "-c", "5", NULL, NULL };

            if (argc != 2)
                return 0;
            args[3] = strdup(argv[1]);

            for (;;) {
                gets(buffer); /* FTW */
                if (buffer[0] == 0x6e)
                    break;

                switch (pid = fork()) {
                case -1:
                    printf("Error Forking\n");
                    exit(255);
                case 0:
                    execvp(args[0], args);
                    exit(1);
                default:
                    break;
                }
            }
            return 255;
        }

    Read the article

  • Stopping httpd causes a process started from a Perl CGI script to receive SIGTERM

    - by Pranav Pal
    I am running a shell script from a Perl CGI script:

        #!/usr/bin/perl
        my $command = "./script.sh &";
        my $pid = fork();
        if (defined($pid) && $pid == 0) {
            # background process
            system($command);
        }

    The shell script looks like this:

        #!/bin/sh
        trap 'echo trapped' 15
        tail -f test.log

    When I run the CGI script from a browser and then stop httpd using /etc/init.d/httpd stop, the script receives a SIGTERM signal. I was expecting the script to run as a separate process and not be tied in any way to httpd. Though I can trap the SIGTERM, I would like to understand why the script is receiving a SIGTERM at all. What am I doing wrong here? I am running RHEL 5.8 and Apache HTTP server 2.4. Thanks, Pranav
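
    A hedged explanation with a sketch: the forked child still belongs to Apache's session and process group, and the httpd init script signals the whole group on shutdown. Starting a new session usually detaches the job (script path as above):

        #!/usr/bin/perl
        use POSIX qw(setsid);

        my $pid = fork();
        if (defined($pid) && $pid == 0) {
            setsid() or die "setsid failed: $!";   # leave Apache's process group
            open STDIN,  '<',  '/dev/null';        # drop the inherited CGI descriptors
            open STDOUT, '>',  '/dev/null';
            open STDERR, '>&', \*STDOUT;
            exec('./script.sh');                   # run the job in the detached child
        }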

    Read the article

  • Any open source hosting site for abandoned projects?

    - by ssg
    I have some projects whose development I ceased a long time ago, but for which I still get code-access requests. I'm currently providing zipped packages from my personal web site. I think zipped packages are far from useful (e.g. you can't read the code right away, can't provide URLs to individual source files, can't fork easily, and their lifetime depends on my own web page's). I want that archaic code to be present on the net regardless of whether I keep my web page up or not. I saw the question "What's the best open source hosting site?". However, most sites require the project "to be active" - CodePlex, for instance. I didn't go through the EULAs of all providers to see if they allow abandoned projects. Are there elephants' graveyards for old code without activity restrictions? Which one would you pick, and why?

    Read the article

  • Kohana -- Command Line

    - by swt83
    I'm trying to "faux-fork" a process (an email being sent via SMTP) in my web application, and the application is built on Kohana.

        $command = 'test/email';
        exec('php index.php '.$command.' > /dev/null &', $errors, $response);

    I'm getting an error:

        Notice: Undefined index: SERVER_NAME

    When I look into Kohana's index.php file, I see that it is looking for a variable named SERVER_NAME, but I guess it is coming up NULL because Kohana couldn't detect this value and set it prior to the run. Any ideas how to get Kohana to run via the command line?
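
    A hedged workaround: a small CLI wrapper that seeds the server variables Kohana's index.php expects before handing off to it (file name and values are placeholders):

        <?php
        // cli.php - run as: php cli.php test/email
        if (PHP_SAPI === 'cli') {
            $_SERVER['SERVER_NAME'] = 'localhost';
            $_SERVER['HTTP_HOST']   = 'localhost';
            $_SERVER['REMOTE_ADDR'] = '127.0.0.1';
        }
        require 'index.php';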

    Read the article

  • Making an Ant macro more reusable

    - by 1ndivisible
    I have a simple macro (simplified version below). At the moment it assumes that there will be a single value for a single argument; however, there might be multiple values for that argument. How can I pass in zero or more values for that argument, so that the macro is usable in situations where I need more than a single value?

        <macrodef name="test">
            <attribute name="target.dir"/>
            <attribute name="arg.value"/>
            <sequential>
                <java jar="${some.jar}" dir="@{target.dir}" fork="true" failonerror="true">
                    <arg value="-someargname=@{arg.value}"/>
                </java>
            </sequential>
        </macrodef>
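
    A sketch of one way Ant supports this: expose a nested element instead of an attribute, so callers can supply zero or more <arg> children (element name and caller values are illustrative):

        <macrodef name="test">
            <attribute name="target.dir"/>
            <element name="extra.args" optional="true"/>
            <sequential>
                <java jar="${some.jar}" dir="@{target.dir}" fork="true" failonerror="true">
                    <extra.args/>
                </java>
            </sequential>
        </macrodef>

        <!-- a caller passes as many values as needed, or none at all -->
        <test target.dir="build">
            <extra.args>
                <arg value="-someargname=foo"/>
                <arg value="-someargname=bar"/>
            </extra.args>
        </test>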

    Read the article

  • Manipulating source packages from Hackage: how to easily deploy to several Windows boxes?

    - by Jonke
    Recently, when I have found good source packages for GHC 6.12/6.10 on Hackage, I've been forced to make minor or major changes to the .cabal files to make those packages work under Windows. Besides forking and merging my fixes on GitHub, what seems to be the best way, or a good-enough practice, to take these modified builds to a couple of other Windows boxes that have only a basic Haskell Platform installed? I'd prefer to somehow work with cabal-install, because that is what one normally uses. Should one put the modified build dirs on a shared/networked dir and mount it from the targeted Windows box? Say something like this:

        # on machine "prepare"
        cabal fetch foo
        cabal unpack foo
        cd foo
        # edit .cabal and .hs files
        cabal configure
        cabal build

        # on machine "useanddevelopnormal"
        cd machinepreparemount
        cd foo
        cabal install
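
    One hedged flow that stays close to cabal-install (assumes a tar is available on the target boxes): package the patched source once, then install that tarball everywhere:

        # on the prepare machine, inside the patched foo/
        cabal sdist                      # writes dist/foo-x.y.z.tar.gz

        # on each target box (tarball copied over, or reached via the mounted share)
        tar xzf foo-x.y.z.tar.gz
        cd foo-x.y.z
        cabal install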

    Read the article

  • GitHub SVN interface

    - by Fabio
    I recently moved a project to GitHub, and I still use this library in some SVN projects as an external. However, I can't get GitHub's SVN interface working at this time. If I run

        svn list http://svn.github.com/fabn/zle.git

    I obtain this error:

        svn: Server sent unexpected return value (500 Internal Server Error) in
        response to PROPFIND request for '/fabn/zle.git/!svn/bc/0'

    If I run the same command against a fork of my project, the list (and also the checkout) works fine and I obtain what I need (this is the command: svn list http://svn.github.com/JellyBelly/zle.git). Is there anything I can do to resolve this issue, or is it a GitHub problem?
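
    One hedged thing to try (an assumption, not from the post): GitHub has also served its SVN bridge from github.com itself, so pointing the client there may behave differently from the svn.github.com host:

        svn list https://github.com/fabn/zle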

    Read the article

  • Improving HTML scraper efficiency with pcntl_fork()

    - by Michael Pasqualone
    With the help from two previous questions, I now have a working HTML scraper that feeds product information into a database. What I am now trying to do is improve efficiency by wrapping my brain around getting my scraper working with pcntl_fork. If I split my php5-cli script into 10 separate chunks, I improve total runtime by a large factor, so I know I am not I/O or CPU bound but just limited by the linear nature of my scraping functions. Using code I've cobbled together from multiple sources, I have this working test:

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $hrefArray = array("http://slashdot.org", "http://slashdot.org",
                           "http://slashdot.org", "http://slashdot.org");

        function doDomStuff($singleHref, $childPid) {
            $html = new DOMDocument();
            $html->loadHtmlFile($singleHref);
            $xPath = new DOMXPath($html);
            $domQuery = '//div[@id="slogan"]/h2';
            $domReturn = $xPath->query($domQuery);
            foreach ($domReturn as $return) {
                $slogan = $return->nodeValue;
                echo "Child PID #" . $childPid . " says: " . $slogan . "\n";
            }
        }

        $pids = array();
        foreach ($hrefArray as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref, $childPid);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    Which raises the following questions:

    1) Given that my hrefArray contains 4 URLs - if the array were to contain, say, 1,000 product URLs, this code would spawn 1,000 child processes? If so, what is the best way to limit the number of processes to, say, 10, and - again with 1,000 URLs as an example - split the child workload to 100 products per child (10 x 100)?

    2) I've learned that pcntl_fork creates a copy of the process and all variables, classes, etc. What I would like to do is replace my hrefArray variable with a DOMDocument query that builds the list of products to scrape, and then feed them off to child processes to do the processing - so spreading the load across 10 child workers. My brain is telling me I need to do something like the following (obviously this doesn't work, so don't run it):

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $maxChildWorkers = 10;

        $html = new DOMDocument();
        $html->loadHtmlFile('http://xxxx');
        $xPath = new DOMXPath($html);
        $domQuery = '//div[@id=productDetail]/a';
        $domReturn = $xPath->query($domQuery);
        $hrefsArray[] = $domReturn->getAttribute('href');

        function doDomStuff($singleHref) {
            // Do stuff here with each product
        }

        // To figure out: split the href array into $maxChildWorkers
        // batches: workArray1, workArray2 ... workArray10.
        $pids = array();
        foreach ($workArray(1,2,3 ... 10) as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    But what I can't figure out is how to build my hrefsArray[] in the master/parent process only and feed it off to the child processes. Currently everything I've tried causes loops in the child processes; i.e. my hrefsArray gets built in the master, and in each subsequent child process. I am sure I am going about this all totally wrong, so I would greatly appreciate just a general nudge in the right direction.
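
    A hedged sketch for question 1, reusing the names above: build the URL list once in the parent, split it with array_chunk, and fork exactly $maxChildWorkers children, one per batch. This also answers question 2, since anything created before pcntl_fork() is called exists only in the master:

        $maxChildWorkers = 10;
        $batches = array_chunk($hrefsArray,
                               (int) ceil(count($hrefsArray) / $maxChildWorkers));

        $pids = array();
        foreach ($batches as $batch) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                $pids[] = $pid;                    // parent: remember the child
            } else {
                foreach ($batch as $singleHref) {  // child: work through its own batch
                    doDomStuff($singleHref, posix_getpid());
                }
                exit(0);                           // child must exit, or it re-enters the loop
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);          // reap all children
        }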

    Read the article

  • Execute a Java class with Ant

    - by cateof
    I want my Ant script to execute the command

        java -cp libs/a.jar:libs/b.jar org.stack.class1 --package pName --out classes new.wsdl

    How can I do it with an Ant script? The following does not work:

        <?xml version="1.0" encoding="UTF-8"?>
        <project name="class" default="compile">
            <target name="compile">
                <java classname="org.stack.class1" fork="true">
                    <classpath>
                        <pathelement location="libs/a.jar"/>
                        <pathelement location="libs/b.jar"/>
                    </classpath>
                    <arg value="--package pName --out classes new.wsdl"/>
                </java>
            </target>
        </project>
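
    A hedged guess at the fix: a single <arg value="..."/> reaches the class as one literal argument. <arg line="..."/> splits on whitespace (or use one <arg value> per token):

        <java classname="org.stack.class1" fork="true">
            <classpath>
                <pathelement location="libs/a.jar"/>
                <pathelement location="libs/b.jar"/>
            </classpath>
            <!-- line="..." becomes six separate arguments -->
            <arg line="--package pName --out classes new.wsdl"/>
        </java>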

    Read the article

  • Linux kernel - adding a field to task_struct

    - by Drex
    I'm playing around with the Linux kernel and added a struct field to the task_struct in sched.h. I know that can be costly, but my struct is very small. I then initialize the new struct in INIT_TASK() and also re-initialize it in the copy_process() function in fork.c, so that when the init task or any other task creates a new process, the process gets the init values. What then happens is that when I try to run the kernel I get a SEGFAULT. The gdb error is:

        Locating the bottom of the address space ...
        Program received signal SIGSEGV, Segmentation fault.
        0x08066ad7 in page_ok (page=) at arch/um/os-Linux/sys-i386/task_size.c:31
        31          n = *address;

    It looks like it fails in task_size. Is there anything else I need to do to add a field to task_struct?

    Read the article

  • How do I make a background thread in Java that allows the main application to exit completely?

    - by Bob
    I have a Java application that creates a new thread to do some work. I can launch the new thread with no problems. When the "main" program terminates, I want the thread I created to keep running - which it does... But the problem is, when I run the main application from Eclipse or from Ant under Windows, control doesn't return unless the background process is killed. If I fork the main Java process in Ant, I want control to return to Ant once the main thread is done with its work... But as it is, Ant continues to wait until both the main process and the created thread have terminated. How do I launch the thread in the background such that control returns to Ant when the "main" application is finished? (By the way, when I run the same application under Linux, I am able to do this with no problems.)
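
    A sketch of the usual workaround (class names are illustrative, not from the post): a thread can never outlive its JVM, so for Ant or Eclipse to get control back, the long-lived work has to move into its own process that the launcher does not wait on:

        import java.io.File;
        import java.io.IOException;

        public class Launcher {
            public static void main(String[] args) throws IOException {
                String java = System.getProperty("java.home")
                        + File.separator + "bin" + File.separator + "java";
                new ProcessBuilder(java,
                        "-cp", System.getProperty("java.class.path"),
                        "com.example.BackgroundWorker")  // hypothetical worker class
                    .start();                            // deliberately not waited on
                // main() returns here; the child JVM keeps running independently
            }
        }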

    Read the article

  • Advice for keeping a large C++ project modular?

    - by Jay
    Our team is moving into much larger projects, many of which use several open source projects within them. Any advice or best practices for keeping libraries and dependencies relatively modular and easily upgradable when new releases for them are out? To put it another way, let's say you make a program that is a fork of an open source project. As both projects grow, what is the easiest way to maintain and share updates to the core? Advice regarding what I'm asking only, please... I don't need "well, you should do this instead" or "why are you"... thanks.
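
    One concrete arrangement, assuming Git and the (contrib) git-subtree command - URLs and prefixes are illustrative: vendor each open source project under a prefix so that a new upstream release lands as a single squashed commit:

        git subtree add  --prefix=third_party/libfoo https://example.com/libfoo.git v1.0 --squash
        # when upstream cuts a new release:
        git subtree pull --prefix=third_party/libfoo https://example.com/libfoo.git v2.0 --squash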

    Read the article

  • SBT equivalent of Ant target

    - by adelbertc
    What is the SBT equivalent (if any) of Ant targets? For example, a snippet in a build.xml file for Ant would be:

        <target name="runClient" description="run client">
            <java classname="client.Client" fork="true">
                <jvmarg value="-Djava.rmi.server.codebase=${client_web_codebase}"/>
                <jvmarg value="-Djava.security.policy=policy"/>
                <arg value="localhost"/>
                <classpath>
                    <pathelement location="dist/client.jar"/>
                </classpath>
            </java>
        </target>

    And then I would do something like ant runClient to launch the application client.Client with the JVM args specified in the XML. Is there an SBT equivalent, or a way for SBT to hook into Ant to do this?
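
    A hedged sbt-side sketch (0.13-style keys; the codebase URL is a placeholder): a forked run with the same JVM arguments, after which `sbt "run localhost"` plays the role of `ant runClient`:

        fork in run := true

        javaOptions in run ++= Seq(
          "-Djava.rmi.server.codebase=http://localhost/client.jar",  // placeholder value
          "-Djava.security.policy=policy"
        )

        mainClass in (Compile, run) := Some("client.Client")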

    Read the article

  • How to deploy a web application directly from the git master branch

    - by mobile.linkr
    For educational purposes, I am writing a server instance in GCE (Google Compute Engine) to serve a few web apps mostly (to be) written in Dart and Polymer. My workflow is: when my students log in to the server above, they will automatically fork those web apps into their own repositories in their own server instances for further development. My issues are: How do I serve web applications (they are git repositories as well) from GCE like GitHub Pages? Is it possible to get GitHub Pages to serve web apps that mostly use Dart and Polymer packages? Thanks in advance.
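
    A hedged sketch for the first issue (paths are assumptions): a bare repository on the instance plus a post-receive hook that checks each push to master out into a directory the web server already serves - a rough self-hosted GitHub Pages:

        #!/bin/sh
        # /home/git/webapp.git/hooks/post-receive (marked executable)
        GIT_WORK_TREE=/var/www/webapp git checkout -f master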

    Read the article

  • Track someone's GitHub repo in a branch

    - by drhorrible
    I'm pretty new to Git, and like it a lot so far, but am not sure what to do here. I've forked a GitHub project, and am currently in the process of porting it to another language. For reference, I've created a branch of the code as it was when I made the fork. My problem now is that the original project has been updated, and I can't figure out how to pull those changes into my branch from the original master (because 'origin' points to my GitHub project). A follow-up question for my own education: what command will the owner of the original project have to run in order to pull a change from my branch into his master branch?
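
    The usual recipe, sketched with illustrative names and URLs: register the original project as a second remote and merge from it; the owner can symmetrically pull straight from the fork's URL:

        # in the fork: track the original project
        git remote add upstream git://github.com/original-owner/project.git
        git fetch upstream
        git checkout my-port-branch
        git merge upstream/master

        # what the original owner could run to take a change from the fork
        git pull git://github.com/forker/project.git my-port-branch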

    Read the article

  • In what circumstances can large pages produce a speedup?

    - by timday
    Modern x86 CPUs can support larger page sizes than the legacy 4K (i.e. 2MB or 4MB), and there are OS facilities (Linux, Windows) to access this functionality. The Microsoft link above states that large pages "increase the efficiency of the translation buffer, which can increase performance for frequently accessed memory" - which isn't very helpful in predicting whether large pages will improve any given situation. I'm interested in concrete, preferably quantified, examples of where moving some program logic (or a whole application) to huge pages has resulted in some performance improvement. Anyone got any success stories? There's one particular case I know of myself: using huge pages can dramatically reduce the time needed to fork a large process (presumably because the number of TLB records needing copying is reduced by a factor on the order of 1000). I'm interested in whether huge pages can also benefit more mundane applications, though.
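
    For experimenting on Linux, a minimal sketch of requesting huge-page-backed memory directly (assumes huge pages were reserved via /proc/sys/vm/nr_hugepages, and a 2.6.32+ kernel for MAP_HUGETLB):

        /* one 2 MB huge page instead of 512 4K pages -> one TLB entry */
        #define _GNU_SOURCE
        #include <sys/mman.h>
        #include <stdio.h>

        int main(void) {
            size_t len = 2 * 1024 * 1024;
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
            if (p == MAP_FAILED) { perror("mmap(MAP_HUGETLB)"); return 1; }
            /* ... place hot, frequently accessed data here ... */
            munmap(p, len);
            return 0;
        }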

    Read the article

  • Rails is not passing the "commit" button parameter

    - by Wayne M
    I'm reinstalling a Rails app on a new server. Part of the app can fork in one of two directions based on the button the user selects. This part isn't working, and when I look at the log I see the values that I gave the form, except for the commit portion of the params hash. This seems to be why the app isn't working as expected (since there's nothing in params[:commit]), but I have no idea why commit would not be passed in; the request is definitely a POST request, and all of the other parameters are there.

    Read the article

  • What Programming Book would you NOT recommend to Developers?

    - by Ender
    Like a lot of people on Stack Overflow, I love to read books about programming, almost as much as I love to read the lists of them that people add to their websites, blogs, and this very site. However, for every gem there are a thousand turds, and one developer's gem could just be a shiny turd to another. While there are hundreds of book questions on this website asking users to recommend books that they have loved, I have decided (after looking for a similar question and not finding one) to create a list of books that users have detested. After all, if we're going to fork out money for these books, it'd be a good idea to get both the positive and negative aspects out there. Please refer to a specific book, and with it add an image of either the latest version or the version you have read. Also, if you have the time, please comment on the answers to share your experiences with the books.

    Read the article

  • Rebasing a core repo in Git

    - by b. e. hollenbeck
    I have a customized fork of CodeIgniter that I use as a standard baseline for several projects. Recently, I've made significant improvements to this repo that I want to use to update the client projects built on it. What I can't seem to figure out is how to pull the changes into a client project. So I have:

        Baseline:             A--B--C--D--E
        Client (cloned @ C):  C'--D'--E'

    and I want to update the client repo to E from the Baseline project. I've tried rebase, and it erased the files not present in the baseline project (views and such) and created a bunch of conflicts that really don't need to be conflicts, over things like the default HTML5 boilerplate that I use. Is there an option for rebase that I should be using? Is there a different way to approach it? Do I need a bunch of .gitignores for the content directories?
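
    A hedged alternative to rebasing (remote name and URL are illustrative): track the baseline as a remote of each client and merge instead, letting conflicting hunks resolve in the client's favor so its customizations - boilerplate included - survive:

        git remote add baseline git://example.com/baseline.git
        git fetch baseline
        git merge -X ours baseline/master   # take the client's side of conflicting hunks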

    Read the article

  • What git gotchas have you been caught by?

    - by Bob Aman
    The worst one I've been caught by was with git submodules. I had a submodule for a project on github. The project was unmaintained, and I wanted to submit patches, but couldn't, so I forked. Now the submodule was pointing at the original library, and I needed it to point at the fork instead. So I deleted the old submodule and replaced it with a submodule for the new project in the same commit. Turns out that this broke everyone else's repositories. I'm still not sure what the correct way of handling this situation is, but I ended up deleting the submodule, having everyone pull and update, and then I created the new submodule, and had everyone pull and update again. It took the better portion of a day to figure that out. What have other people done to accidentally screw up git repositories in non-obvious ways, and how did you resolve it?

    Read the article

  • JUnit Ant task: output stack traces

    - by Benju
    I have a number of tests failing in the following JUnit task:

        <target name="test-main" depends="build.modules" description="Main Integration/Unit tests">
            <junit fork="yes" description="Main Integration/Unit Tests" showoutput="true"
                   printsummary="true" outputtoformatters="true">
                <classpath refid="test-main.runtime.classpath"/>
                <batchtest filtertrace="false" todir="${basedir}">
                    <fileset dir="${basedir}" includes="**/*Test.class"
                             excludes="**/*MapSimulationTest.class"/>
                </batchtest>
            </junit>
        </target>

    How do I tell JUnit to output the errors for each test so that I can look at the stack traces and debug the issues?
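
    A sketch of the stock answer, assuming the standard <junit> task: add a <formatter>, which writes a per-test report - stack traces included - into the batchtest's todir ("plain" produces readable text files; "xml" suits later <junitreport> processing):

        <junit fork="yes" printsummary="withOutAndErr" showoutput="true">
            <formatter type="plain"/>  <!-- TEST-*.txt files with full stack traces -->
            <classpath refid="test-main.runtime.classpath"/>
            <batchtest todir="${basedir}">
                <fileset dir="${basedir}" includes="**/*Test.class"
                         excludes="**/*MapSimulationTest.class"/>
            </batchtest>
        </junit>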

    Read the article

  • Java 7 New Features

    - by John W.
    I have done some good reading on the new java.util.concurrent features being introduced with the Java 7 release - for instance Phaser, TransferQueue, and the more exciting fork/join framework. I recently saw a PowerPoint by Josh Bloch about even more features that are going to be introduced; however, that link has been lost. For example, I remember one change is being able to build a Map the same way you can build an array:

        Map myMap = {"1,Dog", "2,Cat"};

    and so forth (this may not be 100% correct, but the idea is there). Does anyone know of a list, or can anyone just name some new things to look forward to? Note: I did see the question http://stackoverflow.com/questions/213958/new-features-in-java-7; however, it was asked ~2 years ago and I am sure the list of updates is more concrete now. Thanks!

    Read the article
