Search Results

Search found 24226 results on 970 pages for 'team foundation build'.

Page 13/970 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Visual Studio 2012 Release Candidate available, with .NET Framework 4.5 and Team Foundation Server 2012

    Visual Studio 2012 Release Candidate available with .NET Framework 4.5 and Team Foundation Server 2012. As has been customary since the publication of the Windows 8 Developer Preview, the OS always ships together with Microsoft's development tools. The company does not break with this rule: following the Windows 8 Release Preview, it has published the Release Candidate of Visual Studio 11 – under its official name, Visual Studio 2012 – along with the .NET Framework 4.5 and Team Foundation Server 2012. The development environment, which is entering the home stretch of its development cycle, sports...

    Read the article

  • CAM v2.0 ships – all-new foundation version

    - by drrwebber
    The latest release of the CAM editor toolset is now available on Sourceforge.net – search NIEM. In this all-new version, support from Oracle has enabled a transformation of the editor's underpinning Java framework, resulting in a 3x performance improvement and 50% better memory utilization. The results of nearly six months of improvements are catalogued in the release notes. http://sourceforge.net/projects/camprocessor/files/CAM%20Editor/Releases/2.0/CAM_Editor_2-0_Release_Notes.pdf/download

    Here, though, I'd like to talk about the strategic vision and highlight specific new go-to features that make a difference for exchange schema designers, with a focus on the NIEM community. So why is this a foundation version? Basically, the new drag-and-drop designer tool allows you to tailor your own dictionary collection of components and then simply select and position those into your resulting exchange structure. This is true global reuse, enabled from a canonical domain dictionary collection. So instead of grappling with XSD Schema syntax or UML model nuances, this is straightforward, direct WYSIWYG visual engineering using familiar sets of business components. The toolkit then writes the complex XSD Schema for you, along with test samples, documentation, XMI/UML models, mindmaps and more.

    So how do you get a set of business components? The toolkit allows you to harvest these from existing schema collections or enterprise data models, or, as in the case of NIEM, from existing domain dictionary collections. I've been using this for the latest IEEE/OASIS/NIST initiative on a Common Data Format (CDF) for elections management systems. You can download those from OASIS and see how this can transform how you build actual business exchanges – improving their quality, consistency and usability – and allowing automated generation of artifacts you only dreamed of before, such as a model of your entire major exchange collection components. http://www.oasis-open.org/committees/documents.php?wg_abbrev=election

    So what we have here is a foundation version – setting the scene and the basis for changing how people can generate and manage information exchanges. A foundation built using the OASIS CAM standard, combined with aspects of the NIEM Naming and Design Rules, the UN/CEFACT Core Components specifications, and emerging work on OASIS CIQ name and address and ANSI/ISO code list schema. We still have a raft of work to do to integrate this into SOA best practices and extend the dictionary capabilities to assist true community development, answering questions such as:
    - How good is my canonical component collection?
    - How much reuse is really occurring?
    - What inconsistencies and extensions are there in the dictionary components?
    Expect us to begin tackling these areas now that the foundation is in place. The immediate need is to develop training and self-start materials, so we will be focusing there for the next couple of months, especially leading up to the IJIS industry event in July in New Jersey and the NIEM NTE event in August in Philadelphia. http://sourceforge.net/projects/camprocessor

    Read the article

  • Must-read books for a programming team leader

    - by takeshin
    As a programming team leader, which books do you recommend? Books about HR, good programming practices, etc. I have recently seen PHP Team Development, but it is not mind-blowing for experienced developers. One of the best I can recommend is the Microsoft Manual of Style for Technical Publications, which helped me a lot in improving the language and practices of documenting code.

    Read the article

  • How do I set the Eclipse build path and class path from an Ant build file?

    - by Nels Beckman
    Hey folks, there's a lot of discussion about Ant and Eclipse, but no previous answer seems to help me. Here's the deal: I am trying to build a Java program that compiles successfully with Ant from the command line. (To confuse matters further, the program I am attempting to compile is Ant itself.) What I really want to do is to bring this project into Eclipse and have it compile in Eclipse such that the type bindings and variable bindings (nomenclature from Eclipse JDT) are correctly resolved. I need this because I need to run a static analysis on the code that is built on top of Eclipse JDT. The normal way I bring a Java project into Eclipse so that Eclipse will build it and resolve all the bindings is to just import the source directories into a Java project, and then tell it to use the src/main/ directory as a "source directory." Unfortunately, doing that with Ant causes the build to fail with numerous compile errors. It seems to me that the Ant build file is setting up the class path and build path correctly (possibly by excluding certain source files) and Eclipse does not have this information. Is there any way to take the class path and build path information embedded in an Ant build file, and give that information to Eclipse to put in its .project and .classpath files? I've tried creating a new project from an existing build file (an option in the File menu), but this does not help; the project still has the same compile errors. Thanks, Nels
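
    One caveat worth noting for anyone in the same spot (an editor's sketch, not from the question): Eclipse cannot read classpath or exclusion rules out of a build.xml, so they have to be restated by hand in the project's .classpath file. Assuming a src/main/ source directory and, hypothetically, a lib/ folder of jars referenced by the Ant <javac> task, that file might look like:

        <?xml version="1.0" encoding="UTF-8"?>
        <classpath>
            <!-- mirror the srcdir and excludes used by the Ant <javac> task -->
            <classpathentry kind="src" path="src/main"
                            excluding="org/apache/tools/ant/taskdefs/optional/**"/>
            <!-- restate each entry of the build file's compile classpath -->
            <classpathentry kind="lib" path="lib/example-dependency.jar"/>
            <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
            <classpathentry kind="output" path="build/classes"/>
        </classpath>

    The excluding attribute is usually the missing piece when importing Ant itself: Ant's own build skips optional task sources whose dependencies are absent, and Eclipse needs the same exclusions spelled out to avoid the compile errors described above.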

    Read the article

  • Xcode "Build and Archive" from command line

    - by Dan Fabulich
    Xcode 3.2 provides an awesome new feature under the Build menu, "Build and Archive", which generates an .ipa file suitable for Ad Hoc distribution. You can also open the Organizer, go to "Archived Applications," and "Submit Application to iTunesConnect." Is there a way to use "Build and Archive" from the command line (as part of a build script)? I'd assume that xcodebuild would be involved somehow, but the man page doesn't seem to say anything about this.

    UPDATE: Michael Grinich requested clarification; here's exactly what you can't do with command-line builds – features you can ONLY get from Xcode's Organizer after you "Build and Archive":
    - You can click "Share Application..." to share your IPA with beta testers. As Guillaume points out below, due to some Xcode magic, this IPA file does not require a separately distributed .mobileprovision file that beta testers need to install; that's magical. No command-line script can do it. For example, Arrix's script (submitted May 1) does not meet that requirement.
    - More importantly, after you've beta tested a build, you can click "Submit Application to iTunes Connect" to submit that EXACT same build to Apple, the very binary you tested, without rebuilding it. That's impossible from the command line, because signing the app is part of the build process; you can sign bits for Ad Hoc beta testing OR you can sign them for submission to the App Store, but not both. No IPA built on the command line can be beta tested on phones and then submitted directly to Apple.

    I'd love for someone to come along and prove me wrong: both of these features work great in the Xcode GUI and cannot be replicated from the command line.

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT

    A while back I did a little blogging on a system called JPRT, the hardware used and a summary on my java.net weblog. This is an update on the JPRT system.

    JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;^)

    Internally, when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or whether the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (merged with the very latest source base) will build on many if not all 8 platforms, and a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed.

    Anyone that has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be on everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!"? But multiply the number of platforms by 8, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html).

    We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, so 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic: the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', and trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past. Not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;^)

    Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although inconsistency of the build machines hasn't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up. Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence.

    I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/tests, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base, which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer – well, maybe 'evil' isn't the right word, I haven't met many 'evil' developers, more like 'error prone' developers. ;^) Humm, come to think about it, I may be one from time to time. :^{ But the point is that by spreading the build out over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often...

    Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT). Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes?

    So enough blabbering on about this JPRT system; tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;^)

    -kto

    Read the article

  • Blink build with Xcode failed

    - by Merci
    I found a GPL-ed SIP client for Mac, Blink. I'd like to build it from source, since the binaries are only available as a paid download. Just FYI, I'm studying programming at university but have no experience building complex applications from source. After downloading the content of the repository, I opened the Xcode project and tried to build on OS X 10.7, Xcode 4.2.1. Unfortunately the build fails with 1 error and many warnings.

    Most of the warnings are like this:

        Attribute Unavailable: Custom Identifiers in Interface Builder versions prior to 3.2

    The error message is:

        Apple Mach-O Linker (ld) Error: Command /Developer/usr/bin/clang failed with exit code 1

    preceded by the warning:

        Apple Mach-O Linker (ld) Warning: directory not found for option '-L/Users/Sergio/Downloads/Blink/devel.ag-projects.com/repositories/public/blink-cocoa/Distribution/Frameworks'

    I notice that in the list of required files the following are missing:

        Dependencies/Frameworks: libgcrypt.11.6.0.dylib, libgcrypt.11.dylib, libgnutls-extra.26.dylib, libgnutls.26.dylib, libgpg-error.0.dylib, libintl.8.dylib, liblzo.1.dylib, libtasn1.3.dylib
        Dependencies/Resources: lib
        Frameworks/Linked Frameworks: Sparkle.framework
        Products: Blink.app

    It should be possible to download these files somewhere. Unfortunately googling did not help. There's no documentation on the project site.

    Read the article

  • Some unit tests fail in automated Team Build task

    - by weenet
    I have an odd situation. I have a suite of unit tests that pass on my dev machine. They pass on the build machine if run from Visual Studio. But 5 of them reliably fail during the automated build. There is nothing noteworthy about the ones that fail that I can see (and I've stared at them for a long time). Has anyone seen anything like this? Is there a way to see the test output in the Team Build log? All I get is Passed or Failed messages, but not the Assert message. Thanks!

    Read the article

  • How to include ASP.NET assets when using Team Build 2008

    - by bonskijr
    I am able to configure our build server (Team Build 2008) to build our ASP.NET application. I've done so via:

        <ConfigurationToBuild Include="Debug|Mixed Platforms">
          <FlavorToBuild>Debug</FlavorToBuild>
          <PlatformToBuild>Mixed Platforms</PlatformToBuild>
        </ConfigurationToBuild>

    The problem, though, is that the ASP.NET assets (e.g. script folders, images, etc.) are not copied to the deployment folder. The output folder (_PublishedWebsites) only contains the binary references of the app plus the pre-compiled web services. Is there a way to include said folders/files in the deployment folder? Thanks
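
    For reference, one commonly suggested approach (a sketch under assumptions, not taken from the article): Team Build 2008 exposes an AfterDropBuild hook in TFSBuild.proj that runs after the drop folder is populated, and a Copy task there can pull the static assets into _PublishedWebsites. The project and folder names below are hypothetical:

        <Target Name="AfterDropBuild">
          <!-- gather the static assets the compile step leaves behind -->
          <ItemGroup>
            <WebAssets Include="$(SolutionRoot)\MyWebApp\Scripts\**\*.*" />
            <WebAssets Include="$(SolutionRoot)\MyWebApp\Images\**\*.*" />
          </ItemGroup>
          <!-- copy them into the published web site folder in the drop -->
          <Copy SourceFiles="@(WebAssets)"
                DestinationFiles="@(WebAssets->'$(DropLocation)\$(BuildNumber)\_PublishedWebsites\MyWebApp\%(RecursiveDir)%(Filename)%(Extension)')" />
        </Target>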

    Read the article

  • Scrum for Team Foundation Server 2010

    - by Martin Hinshelwood
    I will be presenting a session on "Scrum for TFS2010" not once, but twice! If you are going to be at the Aberdeen Partner Group meeting on 27th April, or DDD Scotland on 8th May, then you may be able to catch my session. Credit: I want to give special thanks to Aaron Bjork from Microsoft, who provided me with most of my material. He is a Scrum and PowerPoint genius. Updated 9th May 2010 – I have now presented at both of these sessions and posted about it.

    Scrum for Team Foundation Server 2010 Synopsis

    Visual Studio ALM (formerly Visual Studio Team System (VSTS)) and Team Foundation Server (TFS) are the cornerstones of development on the Microsoft .NET platform. These are the best tools for a team to have successful projects and for the developers to have a focused and smooth software development process. For TFS 2010, Microsoft is heavily investing in Scrum and has already started moving some teams across to using it. Martin will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.

    Come and see Martin Hinshelwood, Visual Studio ALM MVP and Solution Architect from SSW, show you:
    - How to successfully gather requirements with user stories
    - How to plan a project using TFS 2010 and Scrum
    - How to work with a product backlog in TFS 2010
    - The right way to plan a sprint with TFS 2010
    - Tracking your progress
    - The right way to use work items
    - What you can use from the built-in reporting, as well as the project portals available from the SharePoint dashboard
    - The important reports to give your Product Owner / Project Manager

    Walk away knowing how to see the project health and progress. Visual Studio ALM is designed to help address many of these traditional problems faced by teams. It does so by providing a set of integrated tools to help teams improve their software development activities and to help managers better support the software development processes. During this session we will cover the lifecycle of creating work items and how this fits into Scrum using Visual Studio ALM and Team Foundation Server.

    If you want to know more about how to do Scrum with TFS, there is a new course that has been created in collaboration with Microsoft and Scrum.org that is going to be the official course for working with TFS 2010. SSW has Professional Scrum Developer trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools. Ken Schwaber and Sam Guckenheimer: Professional Scrum Development

    Technorati Tags: Scrum,VS ALM,VS 2010,TFS 2010

    Read the article

  • The cost of Programmer Team Clustering

    - by MarkPearl
    I was recently involved in a conversation about the productivity of programmers and the seemingly wide range of abilities that different programmers have in this industry. Some of the comments made were reiterated a few days later when I came across a chapter in Code Complete (v2) which says, "In programming specifically, many studies have shown order-of-magnitude differences in the quality of the programs written, the sizes of the programs written, and the productivity of programmers". In line with this is another comment presented by Code Complete when discussing teams: "Good programmers tend to cluster, as do bad programmers."

    This is something I can personally relate to. I have come across some really good and bad programmers, and 99% of the time it turns out the team they work in is the same – really good or really bad. When I have found a mismatch, it hasn't stayed that way for long – the person has moved on, or the team has ejected the individual. Keeping this in mind, I would like to comment on the risks an organization faces when forcing teams to remain together regardless of the mix. When you have a situation where someone is not willing to be part of the team but still wants to get a pay check at the end of each month, it presents some interesting challenges and hard decisions to make.

    First of all, when this occurs you need to give them an opportunity to change – for someone to change, they need to know what the problem is and what is expected. It is unreasonable to expect someone to change when you have not indicated what they need to change and the consequences of not changing. If, after a reasonable time, an individual is aware of the problem and is not making an effort to improve, you need to do two things:
    - Follow through with the consequences of not changing.
    - Consider the impact that this behaviour will have on the rest of the team.

    What is the cost of not following through with the consequences? If there is no follow-through, it is often an indication to the individual that they can continue their behaviour. Why should they change if you don't care enough to keep your end of the agreement? In many ways I think it is very similar to the "Broken Windows" principle – if you allow the windows to break and don't fix them, more will get broken.

    What is the cost of keeping them on? When keeping a disruptive influence in a team, you risk losing the good in the team. As Code Complete says, good and bad programmers tend to cluster – they have a tendency to keep this balance. If you are not going to help keep the balance, they will: the cost of not removing a disruptive influence is that the good people in the team will eventually maintain the clustering themselves by leaving.

    Read the article

  • New functionality in TFS Build Manager – Managing Triggers and Build Resources

    - by Jakob Ehn
    Yesterday we pushed out a new release (August 2012) of the Community TFS Build Extension, including a new version of the Community TFS Build Manager (1.0.4.6). The two big new features in the Build Manager in this release are:

    Set Triggers
    It is now possible to select one or more build definitions and update the triggers for them in one simple operation. You'll note that we have started collapsing the context menu a bit; the list of commands is getting long! When selecting the Trigger command, you'll see a dialog where the options should be self-explanatory. The only thing missing here is the Scheduled trigger option; you'll have to do that using Team Explorer for now.

    Manage Build Resources
    The other feature is that it is now possible to view the build controllers and agents in your current collection and also perform some actions against them. The new functionality is available by selecting the Build Resources item in the drop-down menu. Selecting this, you'll see a (sort of) hierarchical view of the build controllers and their agents. In this view you can quickly see all the resources and their status. You can also view the build directory of each build agent and the tags that are associated with them. On the action menu, you can enable and disable both agents and controllers (several at a time), and you can also select to remove them. By selecting Manage, you'll be presented with the standard Manage Controller dialog from Visual Studio, where you can set the rest of the properties. Hopefully we'll be able to implement most of the existing functionality so that we can remove that menu option.

    Our plan is to add more functionality to this view, such as adding new agents/controllers, restarting build service hosts, and maybe viewing diagnostic information such as disk space and error logs. Hope you'll find the new functionality useful. Remember to log any bugs and feature requests on the CodePlex site. Happy building!

    Read the article

  • Installing Team Explorer 2008 on Windows Server 2008

    - by BriteShiny
    I am attempting to install Team Explorer 2008 on a Windows Server 2008 box, without success, which is why I am here. The error log reveals the following:

        [07/02/10,10:07:03] Microsoft Visual Studio 2008 Shell (integrated mode): d:.\wcu\ppe\vside.exe exited with return value 1
        [07/02/10,10:07:03] InstallReturnValue: GFN_MID VS PPE, 0x1
        [07/02/10,10:07:03] Setup.exe: AddGlobalCustomProperty
        [07/02/10,10:07:03] Microsoft Visual Studio 2008 Shell (integrated mode): ERRORLOG EVENT : Error code 1 for this component means "Incorrect function."
        [07/02/10,10:07:03] Setup.exe: AddGlobalCustomProperty
        [07/02/10,10:07:03] Microsoft Visual Studio 2008 Shell (integrated mode): ERRORLOG EVENT : Component Microsoft Visual Studio 2008 Shell (integrated mode) returned an unexpected value.
        [07/02/10,10:07:03] Setup.exe: AddGlobalCustomProperty
        [07/02/10,10:07:03] Microsoft Visual Studio 2008 Shell (integrated mode): ERRORLOG EVENT : Return from system messaging: Incorrect function.

    Apparently Team Explorer 2008 is incompatible with Windows Server 2008: if you right-click the setup.exe in the TFS Explorer ISO and run a compatibility check, it fails. There is a separate installer package to install the VS2008 Shell that is compatible with Windows Server 2008, but it fails too. Has anyone else been able to install Team Explorer 2008 on Windows Server 2008?

    Read the article

  • Best practice: How to team-split a Django project while still allowing code reuse

    - by Infinity
    I know this sounds kind of vague, but please let me explain. I'm starting work on a brand new project; it will have two main components: "ACME PRODUCT" (think Gmail, Meebo, etc.), and "THE SITE" (help, information, marketing stuff, promotional landing pages, etc. – lots of marketing-induced cruft). So basically the URL /acme/* will load stuff in the uber-cool Ajaxy application, and every other URI will load stuff in the other site. Problem: the "THE SITE" component is out of my hands and will be handled by a consultants team that will work closely with marketing, while I and my team will work solely on the ACME PRODUCT. Question: how do I set up the Django project in such a way that we can have:
    - Separate releases. (They can push new marketing pages and functionality without having to worry about the state of our code. Maybe even separate Subversion "projects".)
    - Minimal impact (on our product) of whatever flying-unicorns-hocus-pocus the other team codes into the site.
    - Still allow some code reuse.
    My main concern is that the ACME product needs to be rock solid, and therefore needs to be somewhat isolated from whatever mistakes/code bloopers the consultants make in their marketing side of the site. How have you handled this? Any ideas? Thanks!

    Read the article

  • Error during maven/ant build: "[java] Timestamp response not valid"

    - by fei
    My maven build started failing randomly with the following error, which I cannot make sense of, and googling it doesn't give me anything useful:

        [echo] Creating a full package...
        [java] Timestamp response not valid
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Failed to execute: Executing Ant script: /airtest.build.xml [package-admin-air]: Failed to execute. Java returned: 10
        [exec] [DEBUG] Trace
        [exec] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute: Executing Ant script: /airtest.build.xml [package-admin-air]: Failed to execute.
        [exec] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719)
        [exec] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:556)
        [exec] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:535)
        [exec] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
        [exec] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
        [exec] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
        [exec] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
        [exec] at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
        [exec] at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
        [exec] at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
        [exec] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        [exec] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        [exec] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        [exec] at java.lang.reflect.Method.invoke(Method.java:597)
        [exec] at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
        [exec] at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
        [exec] at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
        [exec] at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        [exec] Caused by: org.apache.maven.plugin.MojoExecutionException: Failed to execute: Executing Ant script: /airtest.build.xml [package-admin-air]: Failed to execute.
        [exec] at org.apache.maven.script.ant.AntMojoWrapper.execute(AntMojoWrapper.java:56)
        [exec] at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
        [exec] at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
        [exec] ... 17 more
        [exec] Caused by: org.codehaus.plexus.component.factory.ant.AntComponentExecutionException: Executing Ant script: /airtest.build.xml [package-admin-air]: Failed to execute.
        [exec] at org.codehaus.plexus.component.factory.ant.AntScriptInvoker.invoke(AntScriptInvoker.java:227)
        [exec] at org.apache.maven.script.ant.AntMojoWrapper.execute(AntMojoWrapper.java:52)
        [exec] ... 19 more
        [exec] Caused by: C:\Users\dev\plexus-ant-component4263631821803364095.build.xml:445: Java returned: 10
        [exec] at org.apache.tools.ant.taskdefs.Java.execute(Java.java:87)
        [exec] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275)
        [exec] at org.apache.tools.ant.Task.perform(Task.java:364)
        [exec] at org.apache.tools.ant.Target.execute(Target.java:341)
        [exec] at org.apache.tools.ant.Target.performTasks(Target.java:369)
        [exec] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1216)
        [exec] at org.apache.tools.ant.Project.executeTarget(Project.java:1185)
        [exec] at org.codehaus.plexus.component.factory.ant.AntScriptInvoker.invoke(AntScriptInvoker.java:222)

    This is a random error that pops up at various points during the build process; sometimes the build will succeed, and then the next one will fail again. This is really weird – has anyone seen this before? I'm using maven 2.2.1. BTW, error return code 10 in Windows means "Environment is invalid."
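
    An editor's note on the likely culprit (hedged, not part of the original question): the [java] task here is invoking the Adobe AIR packager (ADT), and "Timestamp response not valid" is the message ADT produces when the code-signing timestamp server cannot be reached or returns a bad response, which would explain the intermittent failures. ADT accepts a -tsa option, and -tsa none skips the timestamp request entirely; inside the Ant script that might look like the following sketch (the property names are assumptions):

        <java jar="${flex.sdk}/lib/adt.jar" fork="true" failonerror="true">
          <arg line="-package"/>
          <arg line="-storetype pkcs12 -keystore ${keystore.file}"/>
          <!-- skip the timestamp request that fails intermittently -->
          <arg line="-tsa none"/>
          <arg line="${output.air} ${app.descriptor} ${app.content}"/>
        </java>

    The trade-off is that an untimestamped signature stops validating when the certificate expires, so this is better treated as a diagnostic step than as a permanent fix.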

    Read the article

  • TeamCity - Build triggering on specific file, Mercurial

    - by Garrett
    Hi, I'm trying to get my build to trigger only when I create a tag in Mercurial. The way I'm trying to do this is by creating an additional build configuration (Tag Conf) for my project, where I set the VCS build trigger rules to:

        +:/.hgtags  -- trigger only when tags are updated
        -:.         -- do not trigger on any other files

    Whenever I push a changeset (without a tag), the overview for my build configuration (Tag Conf) says "X Pending"; I suspect these are the changesets. When I create a tag in Mercurial, a build is triggered and the "X Pending" goes away. Then all that is left for me to do is to update build/rev numbers in AssemblyInfo (somehow) and deploy the artifacts (somehow). Question: is this the correct way to do this, or is there another/better way? (I'm using the sln2010 runner + NUnit + Mercurial.) Kind regards

    Read the article

  • NetBeans behaves differently if project is run via "Run Project" or build.xml>run

    - by Rogach
    I slightly modified the build-impl.xml file of my NetBeans project. (Specifically, I made it insert the build time into the program code.) If I run the project via the build.xml "run" target, I get the behavior I expect – the program displays the build time and date. But if I run the project using the standard (and most obvious, always used) "Run Main Project" button, I get a totally different result (no build date). Moreover, whatever code I insert into build.xml, I still get the result if I run the target explicitly, and no result if it is run simply by NetBeans. This leads me to the conclusion that this button uses another method to run my application. My question is: what does that button do? What method does it call? And can it be configured to run the needed target of the build file?
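
    As an aside, the conventional place for this kind of customization is one of the empty hook targets (-pre-compile, -post-compile, etc.) that the generated build.xml reserves for overrides, since build-impl.xml is regenerated by the IDE. A minimal sketch, with the properties-file location assumed:

        <target name="-pre-compile">
            <!-- record the build time where the application can read it -->
            <tstamp>
                <format property="build.time" pattern="yyyy-MM-dd HH:mm:ss"/>
            </tstamp>
            <echo file="src/buildinfo.properties">build.time=${build.time}</echo>
        </target>

    Even then, if the IDE's Compile on Save feature is enabled, "Run Main Project" can deploy freshly compiled classes without running the Ant script at all, which would produce exactly the discrepancy described; disabling Compile on Save in the project properties forces the button back onto build.xml.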

    Read the article

  • Windows Workflow Foundation - Application-Integrated Debugging

    - by user292103
    I've got a typical n-tier app that has a heavy workflow component to it, so I'm interested in using WWF. There's a server-side piece that runs as a Windows Service, and there's the client-side piece written in Silverlight. To have a really great, seamlessly integrated experience for my users, what I want is to incorporate both a workflow designer and a workflow debugger into the application. Not Visual Studio, but something tightly integrated right into the app itself. Using the Silverlight client, the user (probably more of a power user) can design workflows. But not only that, they can open a debugger from within the Silverlight client, set breakpoints (which are really remote breakpoints back to the Windows Service), catch in-process workflows, and step through them. Wouldn't that be great? I think I have some idea how I might go about incorporating an integrated designer (use a Silverlight diagramming component, save the diagram to .XAML, parse the .XAML to re-create the diagram, etc., etc.) but how on Earth would I do the debugger? I have no idea how I would do that part. Is there some kind of debugging support engineered into WWF?

    Read the article

  • Apache Ant Build command "Access Denied"

    - by Luis Armando
    Hey! I am trying to get Ant installed – and actually already did, following these instructions – however, I get this error:

        Buildfile: build.xml does not exist!
        Build failed

    which it says there I might get, so I just tried executing the next command it says I should (since I'm under Windows, it's this one):

        build -Ddist.dir=<C:\Ant> dist

    Anyway, I get "access denied" when hitting enter, and I can't figure out why. I also tried build install and build install-lite, but I always get that message =/ Any ideas why? Or what am I doing wrong?

    Edit: Without the < I get:

        'build' is not recognized as an internal or external command, operable program or batch file.

    Edit 2: Well, my ANT_HOME is C:\Ant and I'm trying to run the command while placing myself in that folder; isn't that correct?
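
    A note on the command shown (an editor's guess, not from the thread): in the Ant install docs the angle brackets are placeholder syntax, and on Windows a literal < is treated as input redirection, which is one plausible source of the "access denied". The bootstrap script also expects to be run from the unpacked Ant source directory (where build.xml and build.bat live), not from ANT_HOME. A sketch of the documented sequence, with the paths assumed:

        set JAVA_HOME=C:\jdk1.6.0
        cd C:\apache-ant-src
        build -Ddist.dir=C:\ant-dist dist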

    Read the article

  • Workflow Foundation 4 - DeclarativeServiceLibrary - Error while calling second ReceiveAndSendReply

    - by dotnetexperiments
    Hi, I have created a DeclarativeServiceLibrary using VS2010 Beta 2. Please check this image of the sequential service. Following is the code used to call these two activities:

        int? data = 123;
        ServiceReference1.ServiceClient client1 = new ServiceReference1.ServiceClient();
        string result1 = client1.GetData(data);
        string result2 = client1.Operation1(); // this line shows the error :(
        Response.Write(result1 + " :: ::" + result2);

    client1.GetData works perfectly, but client1.Operation1 shows the following error. Please let me know how to fix this:

        There is no context attached to the incoming message for the service and the current operation is not marked with "CanCreateInstance = true". In order to communicate with this service check whether the incoming binding supports the context protocol and has a valid context initialized.
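
    For background, a hedged reading of that error: the second Receive in a ReceiveAndSendReply pair is correlated with the first, and over HTTP that correlation normally travels in a context header, so both the service and the generated client must use a context-aware binding (for example basicHttpContextBinding) unless the operation is marked CanCreateInstance = true. A configuration sketch, with the service and contract names assumed:

        <system.serviceModel>
          <services>
            <service name="DeclarativeServiceLibrary1.Service1">
              <!-- the context binding carries the instance id between the two calls -->
              <endpoint address="" binding="basicHttpContextBinding" contract="IService" />
            </service>
          </services>
        </system.serviceModel>

    The client config generated by Add Service Reference has to use the matching context binding as well, and the same proxy instance should be reused for both calls (as the snippet above already does) so the context established by GetData flows to Operation1.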

    Read the article

  • Windows Workflow Foundation 4 (WF4) Delay

    - by Russ Clark
    I'm working with the Release Candidate of Visual Studio 2010, using WF4 to write a new workflow for approving resource requests. In my workflow, I would like a request to expire after a few days if no approval has been made for it. We did this in WF 3.5 (Visual Studio 2008) by adding a Delay timer in an EventDrivenActivity parallel to the EventDrivenActivity that was awaiting an approver to come and approve the request. If the Delay expired before an approval was made, the EventDrivenActivity would terminate the request. Does anyone know if there is a similar mechanism for doing this in WF4?
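
    The WF4 counterpart of that WF 3.5 pattern is the Pick activity: one branch's trigger waits for the approval message, the other's is a Delay, and whichever fires first cancels the other branch. A minimal XAML sketch (the operation name, contract name, and three-day duration are assumptions):

        <Pick>
          <PickBranch DisplayName="Approval received">
            <PickBranch.Trigger>
              <Receive OperationName="Approve"
                       ServiceContractName="IApproval"
                       CanCreateInstance="False" />
            </PickBranch.Trigger>
            <!-- sequence that records the approval goes here -->
          </PickBranch>
          <PickBranch DisplayName="Request expired">
            <PickBranch.Trigger>
              <Delay Duration="3.00:00:00" />
            </PickBranch.Trigger>
            <!-- sequence that expires the request goes here -->
          </PickBranch>
        </Pick>

    For multi-day delays, the service would typically also be configured for persistence so the pending timer survives host restarts.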

    Read the article

  • Microsoft Team Foundation Server equivalent stack

    - by Nix
    I am looking for a free alternative to TFS. What would be the best alternative stack (source control, bug tracking, project management/planning, wiki, automated builds (CI))? Keep in mind that it would be nice if they all integrated well: for example, it would be nice to be able to link bugs to source control, link those to a project plan, and then be able to automate building. I have no issue with using Microsoft Project to manage project planning. I know I would like to use these:

    - SVN
    - TeamCity
    - NUnit

    But I am struggling to find a good wiki / project planning / bug tracking combination that would integrate well. Any questions, let me know.

    Read the article

  • Batch build using IAR tools

    - by Jim Tshr
    I am trying to do a batch build of a project using IAR tools. The processor is a CC2530, and it builds fine in the IDE. I have followed the documentation for batch builds (Project/Batch Build) and created a .cspy file that is supposed to be my batch file, but the comments in that file indicate that I need a debug file (.ubrof) to execute with it. I can't find a .ubrof file, and I have searched the whole project directory structure. Also, I want my batch build to build a production version without the debugging information. Where do I get a .ubrof file? How do I do a production batch build using IAR tools?

    Read the article

  • How to resolve CGDirectDisplayID changing issues on newer multi-GPU Apple laptops in Core Foundation

    - by Dave Gallagher
    In Mac OS X, every display gets a unique CGDirectDisplayID number assigned to it. You can use CGGetActiveDisplayList() or [NSScreen screens] to access them, among others. Per Apple's docs: "A display ID can persist across processes and system reboot, and typically remains constant as long as certain display parameters do not change."

    On newer mid-2010 MacBook Pros, Apple started using auto-switching Intel/nVidia graphics. These laptops have two GPUs: a low-powered Intel and a high-powered nVidia. Previous dual-GPU laptops (2009 models) didn't have auto-GPU switching and required the user to make a settings change, log off, and then log on again to make a GPU switch occur. Even older systems had only one GPU. There's an issue with the mid-2010 models where CGDirectDisplayIDs don't remain the same when a display switches from one GPU to the other. For example:

    - Laptop powers on. Built-in LCD screen is driven by the Intel chipset. Display ID: 30002
    - External display is plugged in. Built-in LCD screen switches to the nVidia chipset. Its display ID changes: 30004
    - External display is driven by the nVidia chipset. (At this point, the Intel chipset is dormant.)
    - User unplugs the external display. Built-in LCD screen switches back to the Intel chipset. Its display ID changes back to the original: 30002

    My question is: how can I match an old display ID to a new display ID when they change due to a GPU switch?

    Thought about: I've noticed that the display ID only changes by 2, but I don't have enough test Macs available to determine whether this is common to all new MacBook Pros or just mine. "Just check for display IDs which are +/-2 from one another" would be a kludge even if it works.

    Tried: CGDisplayRegisterReconfigurationCallback(), which notifies before and after displays change, has no matching logic. Putting something like this inside a method registered with it doesn't work:

        // Run before display settings change:
        CGDirectDisplayID directDisplayID = ...;
        io_service_t servicePort = CGDisplayIOServicePort(directDisplayID);
        CFDictionaryRef oldInfoDict = IODisplayCreateInfoDictionary(servicePort, kIODisplayMatchingInfo);

        // ...display settings change...

        // Run after display settings change:
        CGDirectDisplayID directDisplayID = ...;
        io_service_t servicePort = CGDisplayIOServicePort(directDisplayID);
        CFDictionaryRef newInfoDict = IODisplayCreateInfoDictionary(servicePort, kIODisplayMatchingInfo);

        BOOL match = IODisplayMatchDictionaries(oldInfoDict, newInfoDict, 0);
        if (match)
            NSLog(@"Displays are a match");
        else
            NSLog(@"Displays are not a match");

    What's happening above is I'm caching oldInfoDict before display settings change, letting them change, and then comparing it to newInfoDict using IODisplayMatchDictionaries(), which will say either "yes, both displays are the same!" or "no, both displays are not the same." Unfortunately, it does not return YES if the GPU has changed for a monitor. An example of the dictionaries it's comparing:

        // oldInfoDict (Display ID: 30002)
        oldInfoDict: {
            DisplayProductID = 40144;
            DisplayVendorID = 1552;
            IODisplayLocation = "IOService:/AppleACPIPlatformExpert/PCI0@0/AppleACPIPCI/IGPU@2/AppleIntelFramebuffer/display0/AppleBacklightDisplay";
        }

        // newInfoDict (Display ID: 30004)
        newInfoDict: {
            DisplayProductID = 40144;
            DisplayVendorID = 1552;
            IODisplayLocation = "IOService:/AppleACPIPlatformExpert/PCI0@0/AppleACPIPCI/P0P2@1/IOPCI2PCIBridge/GFX0@0/NVDA,Display-A@0/NVDA/display0/AppleBacklightDisplay";
        }

    As you can see, the IODisplayLocation key changes when GPUs are switched, hence IODisplayMatchDictionaries() doesn't work. I can, theoretically, compare just the DisplayProductID and DisplayVendorID keys, but I'm writing end-user software and am worried about a situation where users have two or more identical monitors plugged in. Any help is greatly appreciated! :)

    Read the article
