Search Results

Search found 35692 results on 1428 pages for 'build process'.

  • TFS 2010 Build Custom Activity for Merging Assemblies

    - by Jakob Ehn
    *** The sample build process template discussed in this post is available for download from here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/ILMerge.xaml ***

    In my previous post I talked about library builds that we use to build and replicate dependencies between applications in TFS. This is typically used for common libraries and tools that several other applications need to reference. As the libraries grow in size over time, so does the number of assemblies. Every solution that uses the common library must reference all the assemblies it needs, and if we for example do a refactoring and extract some code into a new assembly, all the clients must update their references to reflect these changes, otherwise they won't compile.

    To improve on this, we use a tool from Microsoft Research called ILMerge (download from here). It can be used to merge several assemblies into one assembly that contains all the types. If you haven't used this tool before, you should check it out. Previously I have implemented this in builds using a simple batch file that contains the full command, something like this:

        "%ProgramFiles(x86)%\microsoft\ilmerge\ilmerge.exe" /target:library /attr:ClassLibrary1.dll /out:MyNewLibrary.dll ClassLibrary1.dll ClassLibrary2.dll ClassLibrary3.dll

    This merges three assemblies (ClassLibrary1, 2 and 3) into a new assembly called MyNewLibrary.dll. The /attr switch makes it copy the attributes (file version, product version etc.) from ClassLibrary1.dll. For more info on the ILMerge command line tool, see the link above.

    This approach works, but requires a little too much knowledge from the developers creating builds. Therefore I have implemented a custom activity that wraps the use of ILMerge. This makes it much simpler to set up a new build definition and have the build do the merging automatically. The activity is then used as part of the Library Build process template mentioned in the previous post. For this article I have just created a simple build process template that only performs the ILMerge operation.

    Below is the code for the custom activity. To make it compile, you need to reference the ILMerge.exe assembly.

        /// <summary>
        /// Activity for merging a list of assemblies into one, using ILMerge
        /// </summary>
        public sealed class ILMergeActivity : BaseCodeActivity
        {
            /// <summary>
            /// A list of file paths to the assemblies that should be merged
            /// </summary>
            [RequiredArgument]
            public InArgument<IEnumerable<string>> InputAssemblies { get; set; }

            /// <summary>
            /// Full path to the generated assembly
            /// </summary>
            [RequiredArgument]
            public InArgument<string> OutputFile { get; set; }

            /// <summary>
            /// Which input assembly the attributes for the generated assembly should be copied from.
            /// Optional. If not specified, the first input assembly will be used.
            /// </summary>
            public InArgument<string> AttributeFile { get; set; }

            /// <summary>
            /// Kind of assembly to generate, dll or exe
            /// </summary>
            public InArgument<TargetKindEnum> TargetKind { get; set; }

            // If your activity returns a value, derive from CodeActivity<TResult>
            // and return the value from the Execute method.
            protected override void Execute(CodeActivityContext context)
            {
                string message = InputAssemblies.Get(context).Aggregate("", (current, assembly) => current + (assembly + " "));
                TrackMessage(context, "Merging " + message + " into " + OutputFile.Get(context));

                ILMerge m = new ILMerge();
                m.SetInputAssemblies(InputAssemblies.Get(context).ToArray());
                m.TargetKind = TargetKind.Get(context) == TargetKindEnum.Dll ? ILMerge.Kind.Dll : ILMerge.Kind.Exe;
                m.OutputFile = OutputFile.Get(context);
                m.AttributeFile = !String.IsNullOrEmpty(AttributeFile.Get(context))
                                      ? AttributeFile.Get(context)
                                      : InputAssemblies.Get(context).First();
                // E.g. "v2" plus the runtime directory of the build agent
                m.SetTargetPlatform(RuntimeEnvironment.GetSystemVersion().Substring(0, 2), RuntimeEnvironment.GetRuntimeDirectory());
                m.Merge();

                TrackMessage(context, "Generated " + m.OutputFile);
            }
        }

        [Browsable(true)]
        public enum TargetKindEnum
        {
            Dll,
            Exe
        }

    NB: The activity inherits from a BaseCodeActivity class, which is an internal helper class containing some methods and properties useful for most custom activities. In this case it uses the TrackMessage method for writing to the build log. You either need to remove the TrackMessage calls, or implement the method yourself (which is not very hard...).

    The custom activity has the following input arguments:

    InputAssemblies - A list with the (full) paths of the assemblies to merge
    OutputFile - The name of the resulting merged assembly
    AttributeFile - Which assembly to use as the template for the attributes of the merged assembly. This argument is optional, and if left blank, the first assembly in the input list is used
    TargetKind - Decides what type of assembly to create, can be either a dll or an exe

    Of course, there are more switches to ILMerge.exe, and these can be exposed as input arguments as well if you need them.

    To show how the custom activity can be used, I have attached a build process template (see the link at the top of this post) that merges the output of the projects being built (CommonLibrary.dll and CommonLibrary2.dll) into a merged assembly (NewLibrary.dll). The build process template has custom process parameters for this. The Assemblies To Merge parameter is passed into a FindMatchingFiles activity to locate all matching assemblies in the BinariesDirectory folder after the compilation has been performed by Team Build. The complete sequence of activities that performs the merge operation is located at the end of the Try, Compile, Test and Associate... sequence: it splits the AssembliesToMerge parameter, appends the full path (using the BinariesDirectory variable) and then enumerates the matching files using the FindMatchingFiles activity.

    When running the build, you can see that it merges two assemblies into a new one, and the merged assembly (and associated pdb file) is copied to the drop location together with the rest of the assemblies.
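
    If you don't have a BaseCodeActivity of your own, a minimal version could look something like the sketch below. It assumes the TrackBuildMessage extension method from the TFS 2010 build workflow assemblies; the implementation here is illustrative, not the exact class used in the post.

        using System.Activities;
        using Microsoft.TeamFoundation.Build.Client;
        using Microsoft.TeamFoundation.Build.Workflow.Activities;

        /// <summary>
        /// Minimal helper base class: writes messages to the TFS 2010 build log.
        /// </summary>
        [BuildActivity(HostEnvironmentOption.All)]
        public abstract class BaseCodeActivity : CodeActivity
        {
            // Logs at High importance so the message shows up with the
            // default build log verbosity.
            protected void TrackMessage(CodeActivityContext context, string message)
            {
                context.TrackBuildMessage(message, BuildMessageImportance.High);
            }
        }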

  • Build .deb package from source, without installing it

    - by Mechanical snail
    Suppose I have an installer program or source tarball for some program I want to install. (There is no Debian package available.) First I want to create a .deb package out of it, in order to be able to cleanly remove the installed program in the future (see Uninstalling application built from source, If I build a package from source how can I uninstall or remove completely?). Also, installing from a package prevents the program from clobbering files owned by other packages, which cannot be guaranteed if you run the installer or sudo make install.

    Checkinstall

    From reading the answers there and elsewhere, I gather the usual solution is to use checkinstall to build the package. Unfortunately, it seems checkinstall does not prevent make install from clobbering system files belonging to other packages. For example, according to Reverting problems caused by checkinstall with gcc build:

        I created a Debian package from the install using sudo checkinstall -D make install. [...] I removed it using Synaptic Package Manager. As it turns out, [removing] the package checkinstall created from make install tried to remove every single file the installation process touched, including shared gcc libraries like /lib64/libgcc_s.so.

    I then tried to tell checkinstall to build the package without installing it, in the hope of bypassing the issue. I created a dummy Makefile:

        install:
        	echo "Bogus" > /bin/qwertyuiop

    and ran sudo checkinstall --install=no. The file /bin/qwertyuiop was created, even though the package was not installed.

    In my case, I do not trust the installer / make install not to overwrite system files, so this use of checkinstall is ruled out. How can I build the package without installing it or letting it touch system files? Is it possible to run checkinstall in a fakechrooted debootstrap environment to achieve this? Preferably the build should be done as a normal user rather than root, which would prevent the process from overwriting system files if it goes wrong.
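
    One direction that satisfies these constraints, sketched under the assumption of a well-behaved build that honours DESTDIR (the package name and metadata below are invented for illustration): stage the install into a private directory as a normal user, then let fakeroot and dpkg-deb produce the .deb without anything ever touching the real filesystem.

        # Build and stage as a normal user; nothing is written outside $PWD.
        ./configure --prefix=/usr
        make
        make install DESTDIR=$PWD/pkgroot

        # Add minimal package metadata.
        mkdir -p pkgroot/DEBIAN
        cat > pkgroot/DEBIAN/control <<'EOF'
        Package: myprogram
        Version: 1.0-1
        Architecture: amd64
        Maintainer: Me <me@example.com>
        Description: Locally packaged build of myprogram
        EOF

        # fakeroot makes the packaged files appear root-owned
        # without needing actual root rights.
        fakeroot dpkg-deb --build pkgroot myprogram_1.0-1_amd64.deb

    This only works when the install step respects DESTDIR; for an opaque binary installer, running it inside a fakechrooted debootstrap, as suggested above, remains the safer route.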

  • Share code between check and process methods

    - by undu
    My job is to refactor an old library for GIS vector data processing. The main class encapsulates a collection of building outlines and offers different methods for checking data consistency. Those checking functions have an optional parameter that allows the caller to also perform some processing. For instance:

        std::vector<Point> checkIntersections(int process_mode = 0);

    This method tests whether some building outlines intersect, and returns the intersection points. But if you pass a non-null argument, the method will also modify the outlines to remove the intersections. I think this is pretty bad (at the call site, a reader not familiar with the code base will assume that a method called checkSomething only performs a check and doesn't modify data) and I want to change it. I also want to avoid code duplication, as the check and process methods are mostly similar. So I was thinking of something like this:

        // a private worker
        std::vector<Point> workerIntersections(int process_mode = 0)
        {
            // it's the equivalent of the current checkIntersections, it may perform
            // a process depending on process_mode
        }

        // public interfaces for check and process
        std::vector<Point> checkIntersections() /* const */
        {
            return workerIntersections(0);
        }

        std::vector<Point> processIntersections(int process_mode /* I have different process modes */)
        {
            return workerIntersections(process_mode);
        }

    But that forces me to break const correctness, as workerIntersections is a non-const method. How can I separate check and process, avoiding code duplication and keeping const-correctness?
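
    For what it's worth, one way to keep const-correctness with a shared worker is to make the worker a const query that only reports what it finds, and let the processing method apply the modifications itself. A sketch, not from the original code base:

        // Pure query: detects intersections, touches nothing, safely const.
        std::vector<Point> findIntersections() const
        {
            std::vector<Point> points;
            // ... detect intersections between outlines and fill 'points' ...
            return points;
        }

        std::vector<Point> checkIntersections() const
        {
            return findIntersections();
        }

        std::vector<Point> processIntersections(int process_mode)
        {
            std::vector<Point> points = findIntersections();
            // Non-const part: modify the outlines here, based on
            // 'points' and process_mode.
            return points;
        }

    The duplication disappears because the detection logic lives only in findIntersections, and checkIntersections can be declared const because the worker really is one.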

  • Why does Module::Build's testcover give me "use of uninitialized value" warnings?

    - by Kurt W. Leucht
    I'm kinda new to Module::Build, so maybe I did something wrong. Am I the only one who gets warnings when I change my dispatch from "test" to "testcover"? Is there a bug in Devel::Cover? Is there a bug in Module::Build? I probably just did something wrong. I'm using ActiveState Perl v5.10.0 with Module::Build version 0.31012 and Devel::Cover 0.64, and Eclipse 3.4.1 with EPIC 0.6.34 for my IDE.

    UPDATE: I upgraded to Module::Build 0.34 and the warnings are still output.

    UPDATE: Looks like a bug in B::Deparse. Hope it gets fixed someday.

    Here's my unit test build file:

        use strict;
        use warnings;
        use Module::Build;

        my $build = Module::Build->resume (
            properties => {
                config_dir => '_build',
            },
        );

        $build->dispatch('test');

    When I run this unit test build file, I get the following output:

        t\MyLib1.......ok
        t\MyLib2.......ok
        t\MyLib3.......ok
        All tests successful.
        Files=3, Tests=24,  0 wallclock secs ( 0.00 cusr +  0.00 csys =  0.00 CPU)

    But when I change the dispatch line to 'testcover', I get the following output, which always includes a bunch of "use of uninitialized value in bitwise and" warning messages:

        Deleting database D:/Documents and Settings/<username>/My Documents/<SNIP>/cover_db
        t\MyLib1.......ok
        Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252.
        Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252.
        t\MyLib2.......ok
        Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252.
        Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252.
        t\MyLib3.......ok
        Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252.
        Use of uninitialized value in bitwise and (&) at D:/Perl/lib/B/Deparse.pm line 4252.
        All tests successful.
        Files=3, Tests=24,  0 wallclock secs ( 0.00 cusr +  0.00 csys =  0.00 CPU)
        Reading database from D:/Documents and Settings/<username>/My Documents/<SNIP>/cover_db

        ---------------------------- ------ ------ ------ ------ ------ ------ ------
        File                           stmt   bran   cond    sub    pod   time  total
        ---------------------------- ------ ------ ------ ------ ------ ------ ------
        .../lib/ActivePerl/Config.pm    0.0    0.0    0.0    0.0    0.0    n/a    0.0
        ...l/lib/ActiveState/Path.pm    0.0    0.0    0.0    0.0  100.0    n/a    4.8
        <SNIP>
        blib/lib/<SNIP>/MyLib2.pm     100.0   90.0    n/a  100.0  100.0    0.0   98.5
        blib/lib/<SNIP>/MyLib3.pm     100.0   90.9  100.0  100.0  100.0    0.6   98.0
        Total                          14.4    6.7    3.8   18.3   20.0  100.0   11.6
        ---------------------------- ------ ------ ------ ------ ------ ------ ------

        Writing HTML output to D:/Documents and Settings/<username>/My Documents/<SNIP>/cover_db/coverage.html ... done.

  • PHP+Apache as forward/reverse proxy: how to process client requests and server responses in PHP?

    - by Lightworker
    Hi! I'm having a lot of trouble with the proper configuration of Apache mod_proxy.so to work as desired...

    The main idea is to create a proxy on a local machine in a network, which will have the ability to process a client request (from a client connected through this Apache proxy) in PHP, and also the capacity to process the server responses in PHP. Those are the two functionalities, and they are independent of each other. Let me present a little schema of what I need to achieve:

    As you can see here, there are two paths: a blue one and a red one. For the blue one, I basically connected a client (Machine B - cell phone) to my local network (home) and configured it to go through a proxy, which is Machine A (personal computer) on the exact same network. So let's say (not DHCP):

        Machine A: 192.168.1.40 -- Apache is running on this machine, configured to listen on port 80.
        Machine B (cell phone): 192.168.1.75 -- configured to go through a proxy, which is IP 192.168.1.40 and port 80 (basically, Machine A).

    Configuring Apache properly basically means removing the "#" in httpd.conf from the lines for mod_proxy.so (main worker), mod_proxy_connect.so (SSL, AllowCONNECT, ...) and mod_proxy_http.so (needed to handle HTTP requests/responses), and having, in my case, lines like this:

        # Implements a proxy/gateway for Apache.
        Include "conf/extra/httpd-proxy.conf"
        # Various default settings
        Include "conf/extra/httpd-default.conf"
        # Secure (SSL/TLS) connections
        Include "conf/extra/httpd-ssl.conf"

    which gives me the ability to configure the file httpd-proxy.conf to set up either a forward proxy or a reverse proxy. So I'm not sure whether what I need is a forward proxy or a reverse one.

    For a forward proxy I've done this:

        <IfModule proxy_module>
        <IfModule proxy_http_module>
            #
            # FORWARD Proxy
            #
            #ProxyRequests Off
            ProxyRequests On
            ProxyVia On

            <Proxy *>
                Order deny,allow
                # Allow from all
                Deny from all
                Allow from 192.168.1
            </Proxy>
        </IfModule>
        </IfModule>

    which basically passes all the packets normally to the server and back to the client. I can trace it perfectly (and verify that it works) by looking at Apache's "access.log". Any request I make with the cell phone then appears in the Apache log. So it works. But here comes the problem: I need to process those client requests, and I need to do it in PHP.

    I have read a lot about this. I've read in detail the official Apache site about mod_proxy, and I've searched a lot on forums, but without luck. So I thought about a first approximation:

    1) A forward proxy in Apache passes all the packets through, and it's not possible to process them. This seems to be true, so what about a reverse proxy? I envisioned something like:

        ProxyRequests Off

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

        ProxyPass http://www.google.com http://www.yahoo.com
        ProxyPassReverse http://www.google.com http://www.yahoo.com

    which is just a test, but it should mean that when my cell phone tries to navigate to Google, it ends up at Yahoo, shouldn't it? But no, it doesn't work. And as you can see, ALL the examples of an Apache reverse proxy go like:

        ProxyPass /foo http://foo.example.com/bar
        ProxyPassReverse /foo http://foo.example.com/bar

    which means that any request for a local path will be resolved at a remote location. But what I need is the inverse! When asking for a remote site on my phone, I want to resolve that request on my local (Apache) server, to process it with a PHP module.

    So, if it's a forward proxy, I need the requests to pass through PHP first. If it's a reverse proxy, I need to redirect the "outgoing" direction to my local server, to process the requests in PHP first. Then a second option comes to mind:

    2) I've seen something like:

        <Proxy http://example.com/foo/*>
            SetOutputFilter INCLUDES
        </Proxy>

    and I started looking into SetOutputFilter, SetInputFilter, AddOutputFilter and AddInputFilter. But I don't really know how to use them. They seem promising, because with something like this I should be able to add an input filter to process the client requests in PHP and send back to the client whatever I program/want (not the remote server response), which is the BLUE path in the schema; and I should be able to add an output filter, which seems to give me the ability to process the remote server response before sending it to the client, which would be the RED path in the schema.

    The red path is just for reading the server responses and playing with them, nothing more. The blue path is the important one, because I will send the client whatever I want after processing the requests.

    I'm so sorry for this amazingly big post, but I needed to explain it as well as I can. I hope someone will understand my problem and help me solve it! Lots of thanks in advance!! :)
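
    A possible direction, sketched as a suggestion rather than something from the Apache docs (whether plain mod_rewrite sees forward-proxy requests with absolute URIs depends on the Apache version and configuration, so treat it purely as a starting point): skip the mod_proxy filters entirely and push every request into a PHP script, which then plays proxy itself with cURL. The file name handler.php and the rules are invented for illustration.

        # httpd.conf (sketch): hand every request on this vhost to PHP
        RewriteEngine On
        RewriteCond %{REQUEST_URI} !=/handler.php
        RewriteRule .* /handler.php [L]

        <?php
        // handler.php (sketch): the interception point for the BLUE path.
        // Rebuild the URL the client originally asked for.
        $url = 'http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];

        // BLUE path: inspect or rewrite the client request here.

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        $response = curl_exec($ch);
        curl_close($ch);

        // RED path: inspect or rewrite the server response here.
        echo $response;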

  • Thoughts on moving to Maven in an enterprise environment

    - by Josh Kerr
    I'm interested in hearing from those who either A) use Maven in an enterprise environment or B) have tried to use Maven in an enterprise environment. I work for a large company that is contemplating bringing Maven into our environment. Currently we use OpenMake to build/merge, and home-grown software to deploy code to 100+ servers running various platforms (e.g. WAS and JBoss). OpenMake works fine for us; however, Maven does have some appealing features, most importantly dependency management. But is it viable in a large environment? Also, what headaches, if any, have you incurred in maintaining a Maven environment? Side note: I've read http://stackoverflow.com/questions/861382/why-does-maven-have-such-a-bad-rep, http://stackoverflow.com/questions/303853/what-are-your-impressions-of-maven, and a few other posts. It's interesting seeing the split between developers.

  • CruiseControl: How to read logs from exec task

    - by Marty
    I start an external groovy script via CruiseControl, which basically works. My problem is that if the groovy script fails, I only get "error string found" in my cruise webapp and email; the output is not even in the log files. The groovy script writes its output to stdout and to a logfile. How is it possible to display the output of an external script in the CruiseControl logs?

        <project name="proj">
          <schedule>
            <exec workingdir="/myscripts/folder"
                  command="//bin/groovy"
                  args="build.groovy -p ${project.name}.properties"
                  errorstr="Exception"/>
          </schedule>
        </project>
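
    One thing worth trying, sketched from memory of the CruiseControl config format (so check it against the plugin documentation): have the script write its log where a <merge> element inside <log> can pick it up. Note that <merge> expects XML log fragments, so the groovy script would need to emit XML; the file name below is an assumption.

        <project name="proj">
          <log>
            <!-- Pull the script's own logfile into the CruiseControl build log.
                 Assumes build.groovy writes /myscripts/folder/build-log.xml. -->
            <merge file="/myscripts/folder/build-log.xml"/>
          </log>
          <schedule>
            <exec workingdir="/myscripts/folder"
                  command="//bin/groovy"
                  args="build.groovy -p ${project.name}.properties"
                  errorstr="Exception"/>
          </schedule>
        </project>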

  • Building libopenmetaverse on CentOS 5

    - by Gary
    I'm trying to build libopenmetaverse on CentOS, but I get the following error. I'm not this kind of developer and am installing this for someone else to use. This is just the part of the build that fails. Any ideas?

        [nant] /opt/libomv/Programs/WinGridProxy/WinGridProxy.exe.build build
        Buildfile: file:///opt/libomv/Programs/WinGridProxy/WinGridProxy.exe.build
        Target framework: Mono 2.0 Profile
        Target(s) specified: build

        build:

        [echo] Build Directory is /opt/libomv/bin
        [csc] Compiling 15 files to '/opt/libomv/bin/WinGridProxy.exe'.
        [resgen] Error: Invalid ResX input.
        [resgen] Position: Line 2700, Column 5.
        [resgen] Inner exception: An exception was thrown by the type initializer for System.Drawing.GDIPlus

        BUILD FAILED
        External Program Failed: /tmp/tmp5a71a509.tmp/resgen.exe (return code was 1)
        Total time: 0.4 seconds.

        BUILD FAILED
        Nested build failed. Refer to build log for exact reason.
        Total time: 47 seconds.
        Build Exit Code: 1

  • Losing a programmer, what steps to take?

    - by Zak
    One of the programmers on our team is leaving for greener pastures, so we will be going from six developers to five. What steps should we take to ensure our development process continues to run smoothly, potentially while integrating new blood? We are currently working on a short release cycle with iterative development: design - code - review. The person leaving was the most senior dev on the team, and would often give lots of feedback to the rest of the team, especially during the design phase.

  • Do you use Phing?

    - by Sam McAfee
    Does anyone use Phing to deploy PHP applications, and if so, how do you use it? We currently have a hand-written "setup" script that we run whenever we deploy a new instance of our project. We just check out from SVN and run it. It sets some basic configuration variables, installs or reloads the database, and generates a v-host for the site instance. I have often thought that maybe we should be using Phing. I haven't used Ant much, so I don't have a real sense of what Phing is supposed to do other than script the copying of files from one place to another, much as our setup script does. What are some more advanced uses that you can give examples of, to help me understand why we would or would not want to integrate Phing into our process?
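
    To make the comparison concrete, here is a rough sketch of what such a setup script might look like as a Phing buildfile. The target, property and path names are invented, and the task list is from memory, so check the Phing docs before relying on it:

        <?xml version="1.0"?>
        <project name="mysite" default="deploy">
          <target name="deploy">
            <!-- pull the code, as the hand-written setup script does -->
            <exec command="svn checkout http://svn.example.com/project/trunk ${deploy.dir}" checkreturn="true"/>

            <!-- fill in configuration variables from properties -->
            <copy file="config.php.tpl" tofile="${deploy.dir}/config.php">
              <filterchain>
                <replacetokens begintoken="@@" endtoken="@@">
                  <token key="DB_HOST" value="${db.host}"/>
                </replacetokens>
              </filterchain>
            </copy>

            <!-- install or reload the database -->
            <pdosqlexec url="mysql:host=${db.host};dbname=${db.name}"
                        userid="${db.user}" password="${db.pass}"
                        src="schema.sql"/>
          </target>
        </project>

    The win over a plain shell script is mostly in the built-in tasks (token filtering, SQL execution, file sets) and in Phing being plain PHP, so the team can extend it without learning Ant.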

  • How (recipe) to build only one kernel module?

    - by Pro Backup
    I have a bug in a Linux kernel module that causes the stock Ubuntu 14.04 kernel to oops (crash). That is why I want to edit/patch the source of only that single kernel module, to add some extra debug output. The kernel module in question is mvsas, and it is not necessary for booting. For that reason I don't see any need to update any initrd images. I have read a lot of information (listed below) and find the setup and build process confusing. I need two recipes:

    1. how to set up/configure the build environment, once
    2. the steps to take after editing any source file of this kernel module (.c and .h) to turn that edit into a new kernel module (.ko)

    The sources that have been used are:

    build one kernel module - Google search
    http://www.linuxquestions.org/questions/linux-kernel-70/rebuilding-a-single-kernel-module-595116/
    http://stackoverflow.com/questions/8744087/how-to-recompile-just-a-single-kernel-module
    http://www.pixelbeat.org/docs/rebuild_kernel_module.html
    How do I build a single in-tree kernel module?
    http://ubuntuforums.org/showthread.php?t=1153067
    http://ubuntuforums.org/showthread.php?t=2112166
    http://ubuntuforums.org/showthread.php?t=1115593
    build one kernel module ubuntu - Google search
    'make +single +kernel +module' - Ask Ubuntu
    'make +kernel +module' - Ask Ubuntu
    My makefile results in: No rule to make target `arch/x86/tools/relocs.c', needed
    '"Invalid module format"' - Ask Ubuntu
    Driver installation: compiling source code for newer kernel
    Modprobe: 'Invalid module format', yet works after insmod
    "Symbol version dump" "is missing" - Google search
    http://stackoverflow.com/questions/9425523/should-i-care-that-the-symbol-version-dump-is-missing-how-do-i-get-one
    Where can I find the corresponding Module.symvers and .config files for 12.04.3 i386 server?
    "no symbol version for module_layout" when trying to load usbhid.ko
    Broken links inside Linux header file folder
    'make modules_install' - Ask Ubuntu
    'modules_install' - Ask Ubuntu
    Empty build directory in custom compiled kernel
    Not able to see pr_info output
    In which directory are the kernel source files and how can I recompile it?
    How can I compile and install that patched libata-eh.c file?
    'modules_install +depmod' - Ask Ubuntu
    modules_install depmod - Google search
    "make modules_install" - Google search
    http://www.csee.umbc.edu/courses/undergraduate/CMSC421/fall02/burt/projects/howto_build_kernel.html
    http://unix.stackexchange.com/questions/20864/what-happens-in-each-step-of-the-linux-kernel-building-process
    https://wiki.ubuntu.com/KernelCustomBuild
    http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html
    http://www.linuxforums.org/forum/kernel/170617-solved-make-modules_install-different-path.html
    "make prepare" - Google search
    "make prepare" "scripts/kconfig/conf --silentoldconfig Kconfig" - Google search
    http://ubuntuforums.org/showthread.php?t=1963515
    ubuntu "make prepare" version - Google search
    http://stackoverflow.com/questions/8276245/how-to-compile-a-kernel-module-against-a-new-source
    https://help.ubuntu.com/community/Kernel/Compile
    How do I compile a kernel module?
    How to add a custom driver to my kernel?
    Compile and loading kernel module without compiling the kernel
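
    For the record, the usual out-of-tree recipe for rebuilding a single in-tree module on Ubuntu looks roughly like the sketch below. It is untested here; the package names and the CONFIG_SCSI_MVSAS=m flag are assumptions, and the Module.symvers / "no symbol version" issues mentioned in the links above can still bite, in which case the module must be built from the exact Ubuntu kernel source matching the running kernel.

        # one-time setup: headers for the running kernel plus the module source
        sudo apt-get install build-essential linux-headers-$(uname -r)
        apt-get source linux-image-$(uname -r)   # needs a deb-src line in sources.list

        # after each edit of drivers/scsi/mvsas/*.c or *.h:
        cd linux-*/drivers/scsi/mvsas
        make -C /lib/modules/$(uname -r)/build M=$PWD CONFIG_SCSI_MVSAS=m modules

        # swap in the new module (mvsas is not needed for boot, so this is safe)
        sudo rmmod mvsas 2>/dev/null
        sudo insmod ./mvsas.ko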

  • How to get maven gwt 2.0 build working

    - by Pieter Breed
    EDIT: Added some of the output of the mvn -X -e commands at the end.

    My company is developing a GWT application. We've been using Maven 2 and GWT 1.7 successfully for quite a while. We recently decided to upgrade to GWT 2.0. We've already updated the Eclipse project and we are able to successfully run the application in dev mode. We are struggling to get the application built using Maven, though. I'm hoping somebody can tell me what I'm doing wrong here, since I'm running out of time on this. The exact bit of the output that worries me is the 'GWT compilation skipped' message:

        [INFO] Copying 119 resources
        [INFO] [compiler:compile {execution: default-compile}]
        [INFO] Compiling 704 source files to K:\iCura\assessor\target\classes
        [INFO] [gwt:compile {execution: default}]
        [INFO] using GWT jars for specified version 2.0.0
        [INFO] establishing classpath list (scope = compile)
        [INFO] com.curasoftware.assessor.Assessor is up to date. GWT compilation skipped
        [INFO] [jspc:compile {execution: jspc}]
        [INFO] Built File: \index.jsp

    I'm pasting the gwt-maven-plugin section below. If you need anything else, please ask.

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>gwt-maven-plugin</artifactId>
            <version>1.2</version>
            <configuration>
                <localWorkers>1</localWorkers>
                <warSourceDirectory>${basedir}/war</warSourceDirectory>
                <logLevel>ALL</logLevel>
                <module>${cura.assessor.module}</module>
                <!-- use style OBF for prod -->
                <style>OBFUSCATED</style>
                <extraJvmArgs>-Xmx2048m -Xss1024k</extraJvmArgs>
                <gwtVersion>${version.gwt}</gwtVersion>
                <disableCastChecking>true</disableCastChecking>
                <soyc>false</soyc>
            </configuration>
            <executions>
                <execution>
                    <goals>
                        <!-- plugin goals -->
                        <goal>clean</goal>
                        <goal>compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>

    I executed mvn clean install -X -e and this is some of the output that I get:

        [DEBUG] Configuring mojo 'org.codehaus.mojo:gwt-maven-plugin:1.2:compile' -->
        [DEBUG]   (f) disableCastChecking = true
        [DEBUG]   (f) disableClassMetadata = false
        [DEBUG]   (f) draftCompile = false
        [DEBUG]   (f) enableAssertions = false
        [DEBUG]   (f) extra = K:\iCura\assessor\target\extra
        [DEBUG]   (f) extraJvmArgs = -Xmx2048m -Xss1024k
        [DEBUG]   (f) force = false
        [DEBUG]   (f) gen = K:\iCura\assessor\target\.generated
        [DEBUG]   (f) generateDirectory = K:\iCura\assessor\target\generated-sources\gwt
        [DEBUG]   (f) gwtVersion = 2.0.0
        [DEBUG]   (f) inplace = false
        [DEBUG]   (f) localRepository = Repository[local|file://K:/iCura/lib]
        [DEBUG]   (f) localWorkers = 1
        [DEBUG]   (f) logLevel = ALL
        [DEBUG]   (f) module = com.curasoftware.assessor.Assessor
        [DEBUG]   (f) project = MavenProject: com.curasoftware.assessor:assessor:3.5.0.0 @ K:\iCura\assessor\pom.xml
        [DEBUG]   (f) remoteRepositories = [Repository[gwt-maven|http://gwt-maven.googlecode.com/svn/trunk/mavenrepo/], Repository[main-maven|http://www.ibiblio.org/maven2/], Repository[central|http://repo1.maven.org/maven2]]
        [DEBUG]   (f) skip = false
        [DEBUG]   (f) sourceDirectory = K:\iCura\assessor\src
        [DEBUG]   (f) soyc = false
        [DEBUG]   (f) style = OBFUSCATED
        [DEBUG]   (f) treeLogger = false
        [DEBUG]   (f) validateOnly = false
        [DEBUG]   (f) warSourceDirectory = K:\iCura\assessor\war
        [DEBUG]   (f) webappDirectory = K:\iCura\assessor\target\assessor
        [DEBUG] -- end configuration --

    and then this:

        [DEBUG] SOYC has been disabled by user
        [DEBUG] GWT module com.curasoftware.assessor.Assessor found in K:\iCura\assessor\src
        [INFO] com.curasoftware.assessor.Assessor is up to date. GWT compilation skipped
        [DEBUG] com.curasoftware.assessor:assessor:war:3.5.0.0 (selected for null)
        [DEBUG]   com.curasoftware.dto:dto-gen:jar:3.5.0.0:compile (selected for compile)
        ...

    It's finding the correct sourceDirectory. That folder has a 'com' folder within it, which ultimately contains the source of the application, organized in folders as per the package structure.
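
    One detail that stands out in the -X output above: the plugin runs with (f) force = false, so its up-to-date check is what produces "GWT compilation skipped". A sketch of the likely workaround (worth verifying against the gwt-maven-plugin 1.2 docs) is to force the compile in the plugin configuration:

        <configuration>
            <!-- the debug output shows "(f) force = false"; forcing bypasses
                 the up-to-date check behind "GWT compilation skipped" -->
            <force>true</force>
            <!-- ...existing settings from the plugin section above... -->
        </configuration>

    Alternatively, making sure mvn clean really removes the previously compiled module output should also defeat the stale up-to-date check.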

  • Grails and Flex build process integration

    - by Dan
    I plan to use Grails and Flex in my next project. I would like to use the Grails command line to build my project. This should include the Flex part as well: compiling the swf, executing FlexUnit tests, etc. I would like to compile the swf and add it to the war when I do "grails war". How can I accomplish this?
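
    One possible starting point, sketched from the Grails build-event mechanism (the paths, the FLEX_HOME variable and the event choice are assumptions, so adjust to taste): hook mxmlc into scripts/_Events.groovy so the swf is compiled and copied into the war staging directory before packaging.

        // scripts/_Events.groovy
        def flexHome = System.getenv('FLEX_HOME')

        // compile the Flex part whenever the Grails build compiles
        eventCompileStart = { kind ->
            ant.exec(executable: "${flexHome}/bin/mxmlc", failonerror: true) {
                arg(value: '-output=web-app/swf/Main.swf')
                arg(value: 'flex-src/Main.mxml')
            }
        }

        // make sure the swf ends up inside the war on "grails war"
        eventCreateWarStart = { warName, stagingDir ->
            ant.copy(file: 'web-app/swf/Main.swf', todir: "${stagingDir}/swf")
        }

    FlexUnit could be wired in the same way, by exec-ing its command line runner from an event (or a custom Gant script) and failing the build on a non-zero exit code.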

  • Multiple threads or processes with threads

    - by sergiobuj
    Hi, this is for an assignment, so I'm not looking for code. I have to simulate a game where each player has turns and needs to 'pay attention' to what's going on. So far, I know I'll need two threads for each player: one that will sleep until the player's turn, and another paying attention. My question is: should I make each player a 'fork' (a separate process) and put the threads in that process, or just create some threads for the player and associate them somehow? It's the first time I've worked with concurrency, semaphores and threads, so I'm not sure about good practices and programming style. Thanks!

  • Unable to build mercurial on OSX - Python.h not found

    - by Oscar Reyes
    From what I've read, I need Python-Dev; how do I install it on OSX? I think the problem is that my Xcode was not properly installed, and I don't have the paths where I should. This previous question: http://stackoverflow.com/questions/2685887/where-is-gcc-on-osx-i-have-installed-xcode-already was about not being able to find gcc; now I can't find Python.h. Should I just link my /Developer directory to somewhere else in /usr/? This is my output:

        $ sudo easy_install mercurial
        Password:
        Searching for mercurial
        Reading http://pypi.python.org/simple/mercurial/
        Reading http://www.selenic.com/mercurial
        Best match: mercurial 1.5.1
        Downloading http://mercurial.selenic.com/release/mercurial-1.5.1.tar.gz
        Processing mercurial-1.5.1.tar.gz
        Running mercurial-1.5.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-_7RaTq/mercurial-1.5.1/egg-dist-tmp-l7JP3u
        mercurial/base85.c:12:20: error: Python.h: No such file or directory
        ...

    Thanks in advance.
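
    For reference, on Mac OS X of this vintage the system Python keeps its headers inside the Python framework rather than under /usr/include, so a quick sanity check (the exact version number in the path may differ) is:

        # if the dev tools are installed correctly, Python.h should be here:
        ls /System/Library/Frameworks/Python.framework/Versions/2.*/include/python2.*/Python.h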

  • Worker process reached its allowed processing time

    - by Confused
    We are experiencing this issue approximately once a month. It is very hard to pinpoint the cause, so any help would be appreciated. It causes the app pool to stop, which brings the site down. We have gone through all the log files and have concluded nothing. We are using version 2.0.3 on IIS 6.

  • Hudson: how do I use a parameterized build to do svn checkout and svn tag?

    - by Derick Bailey
    I'm setting up a parameterized build in Hudson v1.362. The parameter I'm creating is used to determine which branch to check out in Subversion. I can set my svn repository URL like this:

        https://my.svn.server/branches/${branch}

    and it does the checkout and the build just fine. Now I want to tag the build after it finishes. I'm using the SVN tagging plugin for Hudson to do this. So I go to the bottom of the project config screen for the Hudson project and turn on "Perform Subversion tagging on successful build". Here, I set my Tag Base URL to

        https://my.svn.server/tags/${branch}-${BUILD_NUMBER}

    and it gives me errors about those properties not being found. So I change them to environment variable usages like this:

        https://my.svn.server/tags/${env['branch']}-${env['BUILD_NUMBER']}

    and the svn tagging plugin is happy. The problem now is that my svn repository URL at the top is using the ${branch} syntax, and the svn tagging plugin barfs on this:

        moduleLocation: Remote -https://my.svn.server/branches/$branch/
        Tag Base URL: 'https://my.svn.server/tags/thebranchiused-1234'.
        There was no old tag at https://my.svn.server/tags/thebranchiused-1234.
        ERROR: Publisher hudson.plugins.svn_tag.SvnTagPublisher aborted due to exception
        java.lang.NullPointerException
            at hudson.plugins.svn_tag.SvnTagPlugin.perform(SvnTagPlugin.java:180)
            at hudson.plugins.svn_tag.SvnTagPublisher.perform(SvnTagPublisher.java:79)
            at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:36)
            at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:601)
            at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:580)
            at hudson.model.AbstractBuild$AbstractRunner.performAllBuildSteps(AbstractBuild.java:558)
            at hudson.model.Build$RunnerImpl.cleanUp(Build.java:167)
            at hudson.model.Run.run(Run.java:1295)
            at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
            at hudson.model.ResourceController.execute(ResourceController.java:88)
            at hudson.model.Executor.run(Executor.java:124)
        Finished: FAILURE

    Notice the first line there: the svn tag step treats ${branch} as a literal part of the repository URL... it's not parsing out the property value. I tried to change my original Repository URL for svn to use the ${env['branch']} syntax, but that blows up on the original checkout, because this syntax is not parsed at all by the checkout. Help?! How do I use a parameterized build to set the svn URL for both checkout and tagging?!

  • C# Process flow - Datastream, XML and datagrid

    - by Farstucker
    I'm looking for some advice/suggestions on how I should set up the workflow of a small application I'm building. When the application is launched, the datagrid will be populated via the XML file. Once running, the application will receive a datastream that I hope to use to update both the file and the datagrid. So I'm curious what you would suggest for the workflow: should I split the data from the datastream and simultaneously populate the file and the grid, or would you suggest populating the XML file first and setting up a timer to have the grid read the file? I'm really looking for optimal performance.

  • Build 1 war from two separate web applications using ANT

    - by Pich
    Hi, is it possible, by using ANT, to create one war file out of two separate Eclipse Java web application projects? Besides just copying the right files to the right places, I would have to be able to create one single web.xml. Also, some other files that exist in both projects should be united into one file. Thanks, Pich
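
    A rough sketch of the packaging half (the project paths are invented, and merging web.xml is the hard part, which Ant does not do for you; here a hand-maintained merged copy is assumed at merged/web.xml):

        <target name="onewar">
            <!-- overlay both projects into one staging directory -->
            <copy todir="staging"><fileset dir="../ProjectA/WebContent"/></copy>
            <copy todir="staging"><fileset dir="../ProjectB/WebContent"/></copy>

            <!-- package, substituting the merged deployment descriptor -->
            <war destfile="dist/combined.war" webxml="merged/web.xml">
                <fileset dir="staging" excludes="WEB-INF/web.xml"/>
            </war>
        </target>

    For the files that exist in both projects, an XSLT step or a <concat> task would have to produce the united version before the <war> task runs.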

  • Benefits of 'Optimize code' option in Visual Studio build

    - by gt
    Much of our C# release code is built with the 'Optimize code' option turned off. I believe this is to allow code built in Release mode to be debugged more easily. Given that we are creating fairly simple desktop software that connects to back-end web services (i.e. not a particularly processor-intensive application), what sort of performance hit, if any, might be expected? And is any particular platform likely to be worse affected, e.g. multi-processor / 64-bit?

  • Excel process not ending in Cluster environment

    - by Vasanth
    When we try to close the Excel object, it fails to close in the cluster environment. The same code works fine in the QA and UAT environments.

        public bool KillExcelProcess()
        {
            try
            {
                object misValue = System.Reflection.Missing.Value;

                wbObj.Save();
                wbObj.Close(true, misValue, misValue);
                appC.Workbooks.Close();
                appC.Quit();

                System.Runtime.InteropServices.Marshal.ReleaseComObject(objSheet);
                System.Runtime.InteropServices.Marshal.ReleaseComObject(wbObj);
                System.Runtime.InteropServices.Marshal.ReleaseComObject(appC);

                wbObj = null;
                appC = null;
            }
            catch (Exception ex)
            {
                //throw ex;
            }
            finally
            {
                System.Threading.Thread.Sleep(5000);
                GC.Collect();
            }

            return true;
        }

    Calling function (truncated as posted):

        #endregion
        try
        {
            log.Info("CloseExcelService (MeasureSavingsComputeBO) Starts ...");
            exConverter.KillExcelProcess();
            while (true)
            {
                try
                {
                    File.Delete(strFilename);
                    break;
                }
                catch (Exception ex)
                {
                }
            }
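
    A common cleanup pattern for Excel interop that may be relevant here (a general suggestion, not specific to clusters): a single GC.Collect() often is not enough, because the runtime callable wrappers are only queued for finalization on the first pass. Collecting twice with a wait in between gives the COM references a chance to actually be released:

        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();                   // second pass frees RCWs queued by the finalizers
        GC.WaitForPendingFinalizers();

    If Excel still lingers, the usual culprit is an interop object that was never assigned to a variable (e.g. appC.Workbooks used inline), which leaves an unreleased RCW behind.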
