Search Results

Search found 22884 results on 916 pages for 'team build'.

  • Is the Subversion 'stack' a realistic alternative to Team Foundation Server?

    - by Robert S.
    I'm evaluating Microsoft Team Foundation Server for my customer, who currently uses Visual SourceSafe and nothing else. They have explicitly expressed a desire for a more rigid, process-driven environment, as their application is in production and they have future releases to consider. The particular areas I'm trying to cover are:

    - Configuration management (e.g., source control)
    - Change management (workflow and documentation for change requests, tasks)
    - Release management (builds and deployments)
    - Incident and problem management (issues and bugs)
    - Document management (similar to source control, but available via web)
    - Code analysis constraints on check-ins
    - A testing framework
    - Reporting
    - Visual Studio 2008 integration

    TFS does all of these things quite well, but it's expensive and complex to maintain, and the inexpensive Workgroup edition doesn't scale. We don't get TFS as part of our MSDN subscription. Those problems can be overcome, but before I tell my customer to go the TFS route, which in itself isn't a terrible thing, I wanted to evaluate the alternatives. I know Subversion is often suggested for its configuration management/source control, but what about the other areas? Would a combination of Subversion/NUnit/Wiki/CruiseControl/NAnt/something else satisfy all of these requirements? What tools do I need to include in my evaluation? Or should I just bite the bullet and go with TFS, since we're already invested in the Microsoft stack?
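
    For a sense of how the non-TFS pieces bolt together, here is a minimal ccnet.config sketch wiring Subversion and NAnt into CruiseControl.NET (purely illustrative; the project name, URL, and paths are hypothetical placeholders):

        <cruisecontrol>
          <project name="MyApp">
            <!-- Configuration management: poll Subversion for new commits -->
            <sourcecontrol type="svn">
              <trunkUrl>http://svnserver/repos/myapp/trunk</trunkUrl>
              <workingDirectory>C:\builds\myapp</workingDirectory>
            </sourcecontrol>
            <tasks>
              <!-- Release management: NAnt compiles the solution and runs the NUnit target -->
              <nant>
                <executable>C:\nant\bin\nant.exe</executable>
                <buildFile>myapp.build</buildFile>
                <targetList>
                  <target>test</target>
                </targetList>
              </nant>
            </tasks>
            <publishers>
              <xmllogger />
            </publishers>
          </project>
        </cruisecontrol>

    Issue tracking and the wiki would still sit alongside this stack rather than inside it; that looser integration, versus TFS's single store with reporting across all of it, is the real trade-off to evaluate.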

    Read the article

  • How best to present a security vulnerability to a web development team in your own company?

    - by BigCoEmployee
    Imagine the following scenario: you work at Big Co., and your coworkers down the hall are on the web development team for Big Co's public blog system, which a lot of Big Co employees and some public people use. The blog system allows any HTML and JavaScript, and you've been told this was a deliberate choice (not an accident), but you aren't sure they realize the implications. So you want to convince them that this is a bad idea.

    You write some demonstration code and plant an XSS script in your own blog, and then write some blog posts. Soon after, the head blog admin (down the hall) visits your blog post, and the XSS sends his cookies to you. You copy them into your browser and you are now logged in as him.

    Okay, now you're logged in as him... and you start realizing that maybe it wasn't such a good idea to go ahead and 'hack' the blog system. But you are a good guy! You don't touch his account after logging into it, and you definitely don't plan on publicizing this weakness; you just want to show them that the public is able to do this, so that they can fix it before someone malicious realizes the same thing. What is the best course of action from here?

    Read the article

  • SSAS Compare: an intern’s journey

    - by Red Gate Software BI Tools Team
    About a month ago, David mentioned an intern working in the BI Tools Team. That intern happens to be me! In five weeks' time, I'll start my second year of Computer Science at the University of Cambridge and be a full-time student again, but for the past eight weeks, I've been living a completely different life.

    As Jon mentioned before, the teams here at Red Gate are small and everyone (including the interns!) is responsible for the product as a whole. I've attended planning sessions, UX tests, daily meetings, and everything else a full-time member of the team would; I had as much say in where we would go next with the product as anyone; and the feedback from the UX tests showed me that what I was doing was an important part of the product. All these things almost made me forget that this is just an internship and not my full-time job.

    First steps at Red Gate

    Being based in Cambridge, Red Gate has many Cambridge university graduates working for them. They also hire some Cambridge undergraduates for internships each summer. With its popularity with university graduates and its great working environment, Red Gate has built up a great reputation. When I thought of doing an internship here in Cambridge, Red Gate just seemed to be the obvious choice for my first real work experience.

    On my first day at Red Gate, David, the lead developer for SSAS Compare, helped me settle in and explained what I'd be doing. My task was to improve the user experience of displaying differences between MDX scripts by syntax highlighting, script formatting, and improving the difference identification in the first place. David suggested how I should approach the problem, but left all the details and design decisions to me. That was when I realised how much independence and responsibility I'd have.

    What I've done

    If you launch the latest version of SSAS Compare and drill down to an MDX script difference, you can see the changes that have been made. In earlier versions, you could only see the scripts in plain text on both sides — either in black or grey, depending on whether they were the same or not. However, you couldn't see exactly where the scripts were different, which was especially annoying when the two scripts were large – as they often are. Furthermore, if parts of the two scripts were formatted differently, they seemed to be different but were actually the same, which caused even more confusion and made it difficult to see where the differences were.

    All these issues have been fixed now. The two scripts are automatically formatted by the tool so that if two things are syntactically equivalent, they look the same – including case differences in keywords! The actual difference is highlighted in grey, which makes it easy to spot. The difference identification has been improved as well, so two scripts aren't identified as different if there's just a difference in meaningless whitespace characters, or when you have "select" on one side and "SELECT" on the other. We also have syntax highlighting, which makes it easier to read the scripts.

    How I did it

    In order to do the formatting properly, we decided to parse the MDX scripts. After some investigation into parser builders, I decided to go with the GOLD Parser builder and the bsn-goldparser .NET engine. GOLD Parser builder provides a fairly nice GUI to write, build, and test grammars in. We also liked the idea of separating the grammar building from parsing a text.

    The bsn-goldparser is one of many .NET engines for GOLD, and although it doesn't support the newest features of GOLD Parser, it has "the ability to map semantic action classes to terminals or reduction rules, so that a completely functional semantic AST can be created directly without intermediate token AST representation, and without the need for glue code." That makes it much easier for us to change the implementation in our program when we change the grammar. As bsn-goldparser is open source, and I wanted some more features in it, I contributed two new features which have now been merged into the project.

    Unfortunately, there wasn't an MDX grammar already written for GOLD, so I had to write it myself. I referenced MSDN to get the formal grammar specification, but the specification was scattered all over the place, so it wasn't easy to find and implement. We're aware that we don't yet fully support all valid MDX, so sometimes you'll just see the MDX script difference displayed the old way. In that case, there is some grammar construct we don't yet recognise. If you come across something SSAS Compare doesn't recognise, we'd love to hear about it so we can add it to our grammar.

    When some MDX script gets parsed, a tree is produced. That tree can then be processed into a list of inlines which deal with the correct formatting and can be output to the screen. Doing all this has introduced me to many new technologies and projects I hadn't worked with before. This was my first experience with C# and Visual Studio, although I have done things in Java before. I have learnt how to unit test with NUnit, how to do dependency injection with Ninject, how to source-control code with SVN and Mercurial, how to build with TeamCity, how to use GOLD, and many other things.

    What's coming next

    Sadly, my internship comes to an end this week, so there will be less development on the MDX difference view for a while. But the team is going to work on marking the differences better and making it consistent with the difference indication in the top part of the comparison window, and will keep adding support for more MDX grammar so you can see the differences easily in every comparison you make. So long! And maybe I'll see you next summer!
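
    As a footnote to the formatting discussion above, here is a hypothetical pair of fragments (not taken from the product's test suite) that earlier versions would have flagged as different, but which now format to the same text and so compare as equal:

        -- Script A
        SELECT { [Measures].[Internet Sales Amount] } ON COLUMNS
        FROM [Adventure Works]

        -- Script B: the same query, differing only in keyword case and whitespace
        select  {[Measures].[Internet Sales Amount]}   on   columns
        from [Adventure Works]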

    Read the article

  • Advanced Continuous Delivery to Azure from TFS, Part 1: Good Enough Is Not Great

    - by jasont
    The folks over on the TFS / Visual Studio team have been working hard at releasing a steady stream of new features for their new hosted Team Foundation Service in the cloud. One of the most significant features released was simple continuous delivery of your solution into your Azure deployments. The original announcement from Brian Harry can be found here.

    Team Foundation Service is a great platform for .Net developers who are used to working with TFS on-premises. I've been using it since it became available at the //BUILD conference in 2011, and when I recently came to work at Stackify, it was one of the first changes I made. Managing work items is much easier than with the tool we were using previously, although there are some limitations (more on that in another blog post). However, when continuous deployment was made available, it blew my mind. It was the killer feature I didn't know I needed. Not to say that I wasn't previously an advocate for continuous delivery; just that it was always a pain to set up and configure. Having it hosted - and a one-click setup - well, that's just the best thing since sliced bread. It made perfect sense: my source code is in the cloud, and my deployment is in the cloud. Great! I can queue up a build from my iPad or phone and just let it go! I quickly tore through the quick setup and saw it all work… sort of.

    This will be the first in a three-part series on how to take the building block of Team Foundation Service continuous delivery and build a CD model that will actually work for any team deploying something more advanced than a "Hello World" example.

    - Part 1: Good Enough Is Not Great
    - Part 2: A Model That Works: Branching and Multiple Deployment Environments
    - Part 3: Other Considerations: SQL, Custom Tasks, Etc.

    Good Enough Is Not Great

    There. I've said it. I certainly hope no one on the TFS team is offended, but it's the truth. Let's take a look under the hood and understand how it works, and then why it's not enough to handle real-world CD as-is.

    How it works

    (Note that I've skipped a couple of steps; I already have my accounts set up and something deployed to Azure.) The first step is to establish some oAuth magic between your Azure management portal and your TFS instance. You do this via the management portal. Once it's done, you have a new build process template in your TFS instance. (Image lifted from the documentation.)

    From here, you'll get the usual prompts for security, allowing access, etc. But you'll also get to pick which solution in your source control to build. Here's what the bulk of the build definition looks like. All I've had to do is add in the solution to build (notice that mine is from a specific branch - Release - more on that later) and I've changed the configuration. I trigger the build, and voila! I have an Azure deployment a few minutes later. The beauty of this is that it's all in the cloud and I'm not waiting for my machine to compile and upload the package. (I also had to enable the build definition first - by default it is created in a disabled state, probably a good thing since it will trigger on every.single.checkin by default.)

    I get to see a history of deployments from the Azure portal, and can link into TFS to see the associated changesets and work items. You'll notice also that this build definition automatically put my code in the Staging slot of my Azure deployment - more on this soon. For now, I can VIP swap and be in production. (P.S. I hate VIP swap and "production" and "staging" in Azure. More on that later too.)

    That's it. That's the default out-of-box experience. Easy, right? But it's full of room for improvement, so let's get into that…

    The Problems

    Nothing is perfect (except my code - it's always perfect), and neither is Continuous Deployment without a bit of work to help it fit your dev team's process. So what are the issues?

    Issue 1: Staging vs QA vs Prod vs whatever other environments your team may have. This, for me, is the big hairy one. Remember how this automatically deployed to staging rather than prod for us? There are a couple of issues with this model: If I want to deliver to prod, it requires intervention on my part after deployment (via a VIP swap). If I truly want to promote between environments (i.e. Nightly Build -> Stable QA -> Production) I likely have configuration changes between each environment, such as database connection strings, and this process (and the VIP swap) doesn't account for this. Yet.

    Issue 2: Branching and delivering on every check-in. As I mentioned above, I have set this up to target a specific branch - Release - of my code. For the purposes of this example, I have adopted the "basic" branching strategy as defined by the ALM Rangers. This basically establishes a "Main" trunk where you branch off Dev and Release branches. Granted, the Release branch is usually the only thing you will deploy to production, but you certainly don't want to roll to production automatically when you merge to the Release branch and check in (unless you like the thrill of it, and in that case, I like your style, cowboy…). Rather, you have nightly build and QA environments, or if you've adopted the feature-branch model you have environments for those. Those are the environments you want to continuously deploy to. But that takes us back to Issue 1: we currently have a 1:1 solution-to-Azure-deployment target.

    Issue 3: SQL and other custom tasks. Let's be honest and address the elephant in the room: I need to get some sleep because I see an elephant in the room. But seriously, I can't think of an application I have touched in the last 10 years that doesn't need to consider SQL changes when deploying code and upgrading an environment. Microsoft seems perfectly content to ignore this elephant for now: yes, they've added Data Tier Applications. But let's be honest with ourselves again: no one really uses it, and it's not suitable for anything more complex than a Hello World sample project database. Why? Because it doesn't fit well into a great source control story. Developers make stored procedure and table changes all day long while coding complex applications, and if someone forgets to go update the DACPAC before the automated deployment, you have a broken build until it's completed. Developers - not just DBAs - also like to work with SQL in SQL tools, not in Visual Studio. I'm really picking on SQL because that's generally the biggest concern that I hear. But we need to account for any custom tasks as well in the build process.

    The Solutions… ?

    We've taken a look at how this all works, and addressed the shortcomings. In my next post (which I promise will be very, very soon), I will detail how I've overcome these shortcomings and used this foundation to create a mature, flexible model for deploying my app - any version, any time, to any environment.
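
    In the meantime, on the per-environment configuration problem from Issue 1: for the web.config side, build-configuration transforms are the standard mechanism. A minimal sketch, assuming a build configuration named Staging and hypothetical names throughout (Azure cloud-service settings would need the analogous per-environment ServiceConfiguration.*.cscfg files):

        <?xml version="1.0"?>
        <!-- Web.Staging.config: merged into web.config when building the Staging configuration -->
        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <add name="AppDb"
                 connectionString="Server=staging-sql;Database=AppDb;Integrated Security=True"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
          </connectionStrings>
        </configuration>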

    Read the article

  • Real World Nuget

    - by JoshReuben
    Why NuGet

    A higher level of granularity for managing references. When you have solutions of many projects that depend on solutions of many projects, etc. -> escape from Solution Hell.

    Links

    - Using a GUI (Package Explorer) to build packages - http://docs.nuget.org/docs/creating-packages/using-a-gui-to-build-packages
    - Creating a Nuspec file - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic2.aspx
    - Consuming a NuGet package - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic3
    - Nuspec reference - http://docs.nuget.org/docs/reference/nuspec-reference
    - Updating packages - http://nuget.codeplex.com/wikipage?title=Updating%20All%20Packages
    - Versioning - http://docs.nuget.org/docs/reference/versioning

    POC Folder Structure and Setup Steps

    - Install Package Explorer.
    - Source: create a source solution and configure the output directory for its projects (Project > Properties > Build > Output Path).
    - Package: add assemblies to the package from the output directory (drag and drop) and add a net folder. File > Export saves the .nuspec file and lib contents:

        <?xml version="1.0" encoding="utf-16"?>
        <package>
          <metadata>
            <id>MyPackage</id>
            <version>1.0.0.3</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <summary />
          </metadata>
        </package>

      File > Save saves the .nupkg file.
    - Create the target solution. In Tools > Options, configure the package source, then add the package to the selected projects. Output from the package manager (PowerShell console):

        ------- Installing...MyPackage 1.0.0 -------
        Added file 'NugetSource.AssemblyA.dll' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyA.pdb' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyB.dll' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyB.pdb' to folder 'MyPackage.1.0.0\lib'.
        Added file 'MyPackage.1.0.0.nupkg' to folder 'MyPackage.1.0.0'.
        Successfully installed 'MyPackage 1.0.0'.
        Added reference 'NugetSource.AssemblyA' to project 'AssemblyX'
        Added reference 'NugetSource.AssemblyB' to project 'AssemblyX'
        Added file 'packages.config'.
        Added file 'packages.config' to project 'AssemblyX'
        Added file 'repositories.config'.
        Successfully added 'MyPackage 1.0.0' to AssemblyX.
        ==============================

    - A packages folder is created at the solution level, and a packages.config file is generated in each project:

        <?xml version="1.0" encoding="utf-8"?>
        <packages>
          <package id="MyPackage" version="1.0.0" targetFramework="net40" />
        </packages>

      A local packages folder is created for the package versions installed. Each version folder contains the downloaded .nupkg file and its unpacked contents, e.g. the DLLs that the project references. Note: this folder is not checked in.

    Update Packages

    Configure the Package Manager to automatically check for updates, then browse packages; it automatically picks up the updates. The update procedure:

    - Modify the source.
    - Change the source version in the assembly info.
    - Build the source.
    - Open the last package in Package Explorer.
    - Increment the package version number and re-add the assemblies.
    - Save the package with the new version number and export its definition.
    - In the target solution: Tools > Manage NuGet Packages - click on All to trigger a refresh, then click on recent packages to see the updates.
    - If problematic, delete the packages folder.

    Versioning

        uninstall-package mypackage
        install-package mypackage -version 1.0.0.3
        uninstall-package mypackage
        install-package mypackage -version 1.0.0.4

    Dependencies

        <?xml version="1.0" encoding="utf-16"?>
        <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
          <metadata>
            <id>MyDependentPackage</id>
            <version>1.0.0</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <dependencies>
              <group targetFramework=".NETFramework4.0">
                <dependency id="MyPackage" version="1.0.0.4" />
              </group>
            </dependencies>
          </metadata>
        </package>

    Using NuGet without committing packages to source control

    See http://docs.nuget.org/docs/workflows/using-nuget-without-committing-packages. Right-click on the Solution node in Solution Explorer and select Enable NuGet Package Restore (recall that the packages folder is not part of the solution). If you get "downloading package 'Nuget.build' failed", configure the proxy to support the certificate for https://nuget.org/api/v2/ and allow unrestricted access to packages.nuget.org. To test connectivity:

        get-package -listavailable

    To test NuGet Package Restore, delete the packages folder and open Visual Studio as admin. In the NuGet MSBuild targets:

        <Import Project="$(SolutionDir)\.nuget\nuget.targets" />

    TFS Build Integration

    Modify the NuGet.targets file:

        <RestorePackages Condition=" '$(RestorePackages)' == '' ">
          True
        </RestorePackages>
        …
        <PackageSource Include="\\IL-CV-004-W7D\Packages" />

    Add the system environment variable EnableNuGetPackageRestore=true and restart the "Visual Studio Team Foundation Build Service Host" service. Important: ensure Network Service has access to the Packages folder.

    Nugetter TFS Build Integration

    Add the Nugetter build process templates to TFS source control, and for the build controller specify the location of the custom assemblies. Generate the .nuspec file from Package Explorer (File > Export) and edit the file elements to remove path info from the src and target attributes:

        <?xml version="1.0" encoding="utf-16"?>
        <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
          <metadata>
            <id>Common</id>
            <version>1.0.0</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <dependencies>
              <group targetFramework=".NETFramework3.5" />
            </dependencies>
          </metadata>
          <files>
            <file src="CommonTypes.dll" target="CommonTypes.dll" />
            <file src="CommonTypes.pdb" target="CommonTypes.pdb" />
            …

    Add the .nuspec file to the solution so that it is available for the build (Dev\NovaNuget\Common\NuSpec\common.1.0.0.nuspec), then add a build process definition based on the Nugetter build process template and configure the build process, specifying:

    - the .sln to build
    - the base path (output directory)
    - the NuGet.exe file path
    - the .nuspec file path

    Copy DLLs to a binary folder

    1) Set Copy Local to false for an assembly reference.
    2) Use the MSBuild Copy task by modifying the .csproj file (http://msdn.microsoft.com/en-us/library/3e54c37h.aspx):

        <ItemGroup>
          <MySourceFiles Include="$(MSBuildProjectDirectory)\..\SourceAssemblies\**\*.*" />
        </ItemGroup>
        <Target Name="BeforeBuild">
          <Copy SourceFiles="@(MySourceFiles)" DestinationFolder="bin\debug\SourceAssemblies" />
        </Target>

    3) Set the probing assembly search path from app.config (http://msdn.microsoft.com/en-us/library/823z9h8w(v=vs.80).aspx):

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <probing privatePath="SourceAssemblies"/>
            </assemblyBinding>
          </runtime>
        </configuration>

    Forcing 'Copy Local = false'

    The following generic PowerShell script was added to the package's install.ps1:

        param($installPath, $toolsPath, $package, $project)
        if( $project.Object.Project.Name -ne "CopyPackages") {
          $asms = $package.AssemblyReferences | %{$_.Name}
          foreach ($reference in $project.Object.References) {
            if ($asms -contains $reference.Name + ".dll") {
              $reference.CopyLocal = $false;
            }
          }
        }

    An empty project named "CopyPackages" was added to the solution; it references all the packages and is the only one set to CopyLocal="true". No MSBuild knowledge required.
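
    As an aside, the Package Explorer steps above can also be scripted with nuget.exe, which fits the TFS integration better. A hedged sketch, assuming nuget.exe is on the PATH and using the UNC share from the TFS section as the feed:

        rem Re-pack with a bumped version (-Version overrides the <version> in the .nuspec)
        nuget pack MyPackage.nuspec -Version 1.0.0.4

        rem A plain folder or UNC share works as a feed, so publishing is just a copy
        copy MyPackage.1.0.0.4.nupkg \\IL-CV-004-W7D\Packages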

    Read the article

  • Compiling OpenCV in Android NDK

    - by evident
    PLEASE SEE THE ADDITIONS AT THE BOTTOM! The first problem is solved on Linux, though not yet under Windows and Cygwin, but there is a new problem. Please see below!

    I am currently trying to compile OpenCV for the Android NDK so that I can use it in my apps. For this I tried to follow this guide: http://www.stanford.edu/~zxwang/android_opencv.html. But when compiling the downloaded stuff with ndk-build I get this error:

        $ /cygdrive/u/flori/workspace/android-ndk-r5b/ndk-build
        Compile++ thumb : opencv <= cvjni.cpp
        Compile++ thumb : cxcore <= cxalloc.cpp
        Compile++ thumb : cxcore <= cxarithm.cpp
        Compile++ thumb : cxcore <= cxarray.cpp
        Compile++ thumb : cxcore <= cxcmp.cpp
        Compile++ thumb : cxcore <= cxconvert.cpp
        Compile++ thumb : cxcore <= cxcopy.cpp
        Compile++ thumb : cxcore <= cxdatastructs.cpp
        Compile++ thumb : cxcore <= cxdrawing.cpp
        Compile++ thumb : cxcore <= cxdxt.cpp
        Compile++ thumb : cxcore <= cxerror.cpp
        Compile++ thumb : cxcore <= cximage.cpp
        Compile++ thumb : cxcore <= cxjacobieigens.cpp
        Compile++ thumb : cxcore <= cxlogic.cpp
        Compile++ thumb : cxcore <= cxlut.cpp
        Compile++ thumb : cxcore <= cxmathfuncs.cpp
        Compile++ thumb : cxcore <= cxmatmul.cpp
        Compile++ thumb : cxcore <= cxmatrix.cpp
        Compile++ thumb : cxcore <= cxmean.cpp
        Compile++ thumb : cxcore <= cxmeansdv.cpp
        Compile++ thumb : cxcore <= cxminmaxloc.cpp
        Compile++ thumb : cxcore <= cxnorm.cpp
        Compile++ thumb : cxcore <= cxouttext.cpp
        Compile++ thumb : cxcore <= cxpersistence.cpp
        Compile++ thumb : cxcore <= cxprecomp.cpp
        Compile++ thumb : cxcore <= cxrand.cpp
        Compile++ thumb : cxcore <= cxsumpixels.cpp
        Compile++ thumb : cxcore <= cxsvd.cpp
        Compile++ thumb : cxcore <= cxswitcher.cpp
        Compile++ thumb : cxcore <= cxtables.cpp
        Compile++ thumb : cxcore <= cxutils.cpp
        StaticLibrary : libstdc++.a
        StaticLibrary : libcxcore.a
        Compile++ thumb : cv <= cvaccum.cpp
        Compile++ thumb : cv <= cvadapthresh.cpp
        Compile++ thumb : cv <= cvapprox.cpp
        Compile++ thumb : cv <= cvcalccontrasthistogram.cpp
        Compile++ thumb : cv <= cvcalcimagehomography.cpp
        Compile++ thumb : cv <= cvcalibinit.cpp
        Compile++ thumb : cv <= cvcalibration.cpp
        Compile++ thumb : cv <= cvcamshift.cpp
        Compile++ thumb : cv <= cvcanny.cpp
        Compile++ thumb : cv <= cvcolor.cpp
        Compile++ thumb : cv <= cvcondens.cpp
        Compile++ thumb : cv <= cvcontours.cpp
        Compile++ thumb : cv <= cvcontourtree.cpp
        Compile++ thumb : cv <= cvconvhull.cpp
        Compile++ thumb : cv <= cvcorner.cpp
        Compile++ thumb : cv <= cvcornersubpix.cpp
        Compile++ thumb : cv <= cvderiv.cpp
        Compile++ thumb : cv <= cvdistransform.cpp
        Compile++ thumb : cv <= cvdominants.cpp
        Compile++ thumb : cv <= cvemd.cpp
        Compile++ thumb : cv <= cvfeatureselect.cpp
        Compile++ thumb : cv <= cvfilter.cpp
        Compile++ thumb : cv <= cvfloodfill.cpp
        Compile++ thumb : cv <= cvfundam.cpp
        Compile++ thumb : cv <= cvgeometry.cpp
        Compile++ thumb : cv <= cvhaar.cpp
        Compile++ thumb : cv <= cvhistogram.cpp
        Compile++ thumb : cv <= cvhough.cpp
        Compile++ thumb : cv <= cvimgwarp.cpp
        Compile++ thumb : cv <= cvinpaint.cpp
        Compile++ thumb : cv <= cvkalman.cpp
        Compile++ thumb : cv <= cvlinefit.cpp
        Compile++ thumb : cv <= cvlkpyramid.cpp
        Compile++ thumb : cv <= cvmatchcontours.cpp
        Compile++ thumb : cv <= cvmoments.cpp
        Compile++ thumb : cv <= cvmorph.cpp
        Compile++ thumb : cv <= cvmotempl.cpp
        Compile++ thumb : cv <= cvoptflowbm.cpp
        Compile++ thumb : cv <= cvoptflowhs.cpp
        Compile++ thumb : cv <= cvoptflowlk.cpp
        Compile++ thumb : cv <= cvpgh.cpp
        Compile++ thumb : cv <= cvposit.cpp
        Compile++ thumb : cv <= cvprecomp.cpp
        Compile++ thumb : cv <= cvpyramids.cpp
        Compile++ thumb : cv <= cvpyrsegmentation.cpp
        Compile++ thumb : cv <= cvrotcalipers.cpp
        Compile++ thumb : cv <= cvsamplers.cpp
        Compile++ thumb : cv <= cvsegmentation.cpp
        Compile++ thumb : cv <= cvshapedescr.cpp
        Compile++ thumb : cv <= cvsmooth.cpp
        Compile++ thumb : cv <= cvsnakes.cpp
        Compile++ thumb : cv <= cvstereobm.cpp
        Compile++ thumb : cv <= cvstereogc.cpp
        Compile++ thumb : cv <= cvsubdivision2d.cpp
        Compile++ thumb : cv <= cvsumpixels.cpp
        Compile++ thumb : cv <= cvsurf.cpp
        Compile++ thumb : cv <= cvswitcher.cpp
        Compile++ thumb : cv <= cvtables.cpp
        Compile++ thumb : cv <= cvtemplmatch.cpp
        Compile++ thumb : cv <= cvthresh.cpp
        Compile++ thumb : cv <= cvundistort.cpp
        Compile++ thumb : cv <= cvutils.cpp
        StaticLibrary : libcv.a
        SharedLibrary : libopencv.so
        U:/flori/workspace/android-ndk-r5b/toolchains/arm-linux-androideabi-4.4.3/prebuilt/windows/bin/../lib/gcc/arm-linux-androideabi/4.4.3/../../../../arm-linux-androideabi/bin/ld.exe: cannot find -lcxcore
        collect2: ld returned 1 exit status
        make: *** [/cygdrive/u/flori/workspace/android/testOpenCV/obj/local/armeabi/libopencv.so] Error 1

    I am trying to compile it on a Windows system with the newest NDK version. Does anybody have an idea what this linking error means and what I can do to have it work again? Would be great if anybody could help.

    After getting past that problem, I found that there is another way of compiling OpenCV for Android, using the current version of OpenCV (instead of the 1.1 one from above) and the modified Android NDK from crystax, which supports STL and exceptions and therefore supports the newest OpenCV version. All information on that can be found here: http://opencv.willowgarage.com/wiki/Android

    There it says to download the current svn trunk and the crystax-r4 android-ndk, as well as swig, which I did. I entered the folder, created the build directory, ran cmake and then built the static libs, which seemed to work; at least the make command ran successfully without errors. I then wanted to build the shared libraries, so I entered the android-jni folder and ran 'make' again, but got this error:

        % make -j4
        OPENCV_CONFIG = ../build/android-opencv.mk
        make clean-swig &&\
        mkdir -p jni/gen &&\
        mkdir -p src/com/opencv/jni &&\
        swig -java -c++ -package "com.opencv.jni" \
        -outdir src/com/opencv/jni \
        -o jni/gen/android_cv_wrap.cpp jni/android-cv.i
        OPENCV_CONFIG = ../build/android-opencv.mk
        make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni'
        make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule.
        rm -f jni/gen/android_cv_wrap.cpp
        make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni'
        /home/florian/android-ndk-r4-crystax/ndk-build OPENCV_CONFIG=../build/android-opencv.mk \
        PROJECT_PATH= ARM_TARGETS="armeabi armeabi-v7a" V=
        /home/florian/android-ndk-r4-crystax/ndk-build OPENCV_CONFIG=../build/android-opencv.mk \
        PROJECT_PATH= ARM_TARGETS="armeabi armeabi-v7a" V=
        make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni'
        /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory
        make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni'
        /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory
        /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory
        make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule.
        /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory
        make[1]: *** No rule to make target `../build/android-opencv.mk'. Stop.
        make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni'
        make: *** [libs/armeabi/libandroid-opencv.so] Error 2
        make: *** Waiting for unfinished jobs....
        make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule.
        make[1]: *** No rule to make target `../build/android-opencv.mk'. Stop.
        make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni'
        make: *** [libs/armeabi-v7a/libandroid-opencv.so] Error 2

    Does anybody have an idea what this means and what I can do to build the shared libraries?

    OK, after having a look at the error message, it seemed that something was missing in the build directory... but there wasn't even a build directory in the android folder, so I created one, ran 'cmake' in there and 'make' again, but got this error:

        Compile thumb : opencv_lapack <= /home/florian/android-opencv-willowgarage/3rdparty/lapack/sgetrf.c
        Compile thumb : opencv_lapack <= /home/florian/android-opencv-willowgarage/3rdparty/lapack/scopy.c
        Compile++ thumb: opencv_core <= /home/florian/android-opencv-willowgarage/modules/core/src/matrix.cpp
        cc1plus: error: /home/florian/android-opencv-willowgarage/android/../modules/index.rst/include: Not a directory
        make[3]: *** [/home/florian/android-opencv-willowgarage/android/build/obj/local/armeabi/objs/opencv_core/src/matrix.o] Error 1
        make[3]: *** Waiting for unfinished jobs....
        make[2]: *** [android-opencv] Error 2
        make[1]: *** [CMakeFiles/ndk.dir/all] Error 2
        make: *** [all] Error 2

    Anybody know what this means?
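
    On the first error: "cannot find -lcxcore" means ld was handed -lcxcore without a search path containing the freshly built libcxcore.a. A hedged Android.mk fragment for the module that links libopencv.so (the path is a hypothetical placeholder, and the module names are assumed to match the guide's):

        # In the Android.mk that produces libopencv.so:
        # point the linker at the directory holding libcxcore.a / libcv.a
        LOCAL_LDLIBS += -L$(LOCAL_PATH)/../libs/$(TARGET_ARCH_ABI) -lcxcore -lcv

    With NDK r5 and later, a cleaner option is to wrap each static library as a PREBUILT_STATIC_LIBRARY module and list the modules in LOCAL_STATIC_LIBRARIES, so ndk-build manages the paths and link order itself.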

    Read the article

  • How should I distribute a pre-built perl module, and what version of perl do I build for?

    - by Mike Ellery
    This is probably a multi-part question. Background: we have a native (C++) library that is part of our application, and we have managed to use SWIG to generate a perl wrapper for this library. We'd now like to distribute this perl module as part of our application.

    My first question: how should I distribute this module? Is there a standard way to package pre-built perl modules? I know there is PPM for the ActiveState distro, but I also need to distribute this for Linux systems. I'm not even sure which files are required to distribute, but I'm guessing it's the .pm and .so files, at a minimum.

    My next question: it looks like I might need to build my module project for each version of perl that I want to support. How do I know which perl versions I should build for? Are there any standard guidelines... or better yet, a way to build a package that will work with multiple versions of perl? Sorry if my questions make no sense; I'm fairly new to the compiled-module aspects of perl.

    CLARIFICATION: the underlying compiled source is proprietary (closed source), so I can't just ship source code and the appropriate make artifacts for the package. Wish I could, but it's not going to happen in this case. Thus, I need a sane scheme for packaging prebuilt binary files for my module.
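
    For reference, a binary distribution generally mirrors the installed layout of an XS/SWIG module, roughly like this (names are hypothetical; the auto/ directory is where DynaLoader looks for the compiled object):

        MyModule-1.0-5.10-x86_64-linux/
            lib/
                MyModule.pm              # the SWIG-generated wrapper
                auto/
                    MyModule/
                        MyModule.so      # compiled SWIG glue + native library

    The .so is tied to a specific perl version and architecture (compare `perl -V:version` and `perl -V:archname` on the target), which is why shipping one package per supported perl/platform pair is the usual compromise for closed-source modules.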

    Read the article

  • How do I upgrade a project built in Visual Studio 2005 to Visual Studio 2008?

    - by Shailesh Jaiswal
    I have an OPC (OLE for Process Control) server project which was developed in Visual Studio 2005, and I want to build it in Visual Studio 2008. The coding for the OPC server project is done in VC++, and I want to connect my OPC client to this OPC server.

    When I first opened the Visual Studio 2005 project in Visual Studio 2008, it launched the conversion wizard, which I went through and finished successfully. But when I build (by right-clicking on the project and choosing Build Solution), it gives lots of errors - around 64 of them. The errors are of just three kinds:

    - fatal error C1083: Cannot open type library file: 'msxml4.dll': No such file or directory
    - fatal error LNK1181: cannot open input file 'rpcndr.lib'
    - error C2051: case expression not constant

    These three errors repeat throughout the Error List, making up the 64 in total. Can you provide any suggestion, link, or other way through which I can resolve this issue?
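
    The msxml4.dll error, for what it's worth, usually traces back to a #import directive in the VC++ sources: MSXML 4 is not installed with the SDK that VS2008 uses. A hedged sketch of the common fix (your file names and #import attributes may differ; installing the MSXML 4.0 SP2 redistributable instead keeps the code unchanged):

        // Before: requires msxml4.dll to be present and registered on the build machine
        // #import "msxml4.dll" named_guids

        // After: MSXML 6 ships with the Windows SDK that Visual Studio 2008 installs
        #import "msxml6.dll" named_guids

    The LNK1181 on rpcndr.lib is likely a similar SDK/library-path issue rather than a code error, so compare the project's inherited library directories with the SDK paths VS2008 actually installed.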

    Read the article

  • Compiling C++ code in NetBeans produces errors; how do I solve them?

    - by Rupertt Wind
    I use NetBeans with MinGW and the MSYS make/debugger, but when I compile and run a basic C++ program it produces two errors. This is the code being run:

        #include <iostream>

        void main() {
            cout << "Hello World!" << endl;
            cout << "Welcome to C++ Programming" << endl;
        }

    The output is:

        /usr/bin/make -f nbproject/Makefile-Debug.mk SUBPROJECTS= .build-conf
        make[1]: Entering directory `/d/Users/Home/Documents/NetBeansProjects/newApp'
        /usr/bin/make -f nbproject/Makefile-Debug.mk dist/Debug/MinGW-Windows/newapp.exe
        make[2]: Entering directory `/d/Users/Home/Documents/NetBeansProjects/newApp'
        mkdir -p dist/Debug/MinGW-Windows
        g++.exe -o dist/Debug/MinGW-Windows/newapp build/Debug/MinGW-Windows/newmain.o build/Debug/MinGW-Windows/newfile.o build/Debug/MinGW-Windows/main.o
        build/Debug/MinGW-Windows/newfile.o: In function `main':
        D:/Users/Home/Documents/NetBeansProjects/newApp/newfile.cpp:5: multiple definition of `main'
        build/Debug/MinGW-Windows/newmain.o:D:/Users/Home/Documents/NetBeansProjects/newApp/newmain.c:15: first defined here
        build/Debug/MinGW-Windows/main.o: In function `main':
        D:/Users/Home/Documents/NetBeansProjects/newApp/main.cpp:13: multiple definition of `main'
        build/Debug/MinGW-Windows/newmain.o:D:/Users/Home/Documents/NetBeansProjects/newApp/newmain.c:15: first defined here
        collect2: ld returned 1 exit status
        make[2]: *** [dist/Debug/MinGW-Windows/newapp.exe] Error 1
        make[2]: Leaving directory `/d/Users/Home/Documents/NetBeansProjects/newApp'
        make[1]: *** [.build-conf] Error 2
        make[1]: Leaving directory `/d/Users/Home/Documents/NetBeansProjects/newApp'
        make: *** [.build-impl] Error 2
        BUILD FAILED (exit value 2, total time: 1s)

    How can I solve this?
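
    The log is the real clue: three files in the project (newmain.c, newfile.cpp, main.cpp) each define main, and the linker accepts only one per program, so remove or exclude two of them. Separately, the snippet as posted isn't standard C++; a minimal corrected version:

        #include <iostream>
        using namespace std;

        // Exactly one main() per program, returning int (void main is non-standard)
        int main() {
            cout << "Hello World!" << endl;
            cout << "Welcome to C++ Programming" << endl;
            return 0;
        }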

    Read the article

  • Codesign returns exit code of 11 for Xcode iPhone project -- can't find documentation on exit code 11

    - by Larry Freeman
    Hi Everyone, I'm new to iPhone development. I am trying to get Xcode to run an application on a phone. The app works fine in the simulator. Initially I hit the error:

        The executable was signed with invalid entitlements

    I followed the steps here: http://stackoverflow.com/questions/1074546/the-executable-was-signed-with-invalid-entitlements. But now I am getting an exit code of 11. I checked the documentation on codesign but I can't find any mention of an exit code 11 (http://gemma.apple.com/mac/library/documentation/Darwin/Reference/ManPages/man1/codesign.1.html). Below is the log I am getting. Any help is greatly appreciated. I am using iPhone OS 3.1.3.

        Build HubPages of project HubPages with configuration Debug

        CodeSign build/Debug-iphoneos/HubPages.app
            cd /Users/larryfreeman/src/hub/mobile/HubPages/build/iphone
            setenv IGNORE_CODESIGN_ALLOCATE_RADAR_7181968 /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/codesign_allocate
            setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
            /usr/bin/codesign -f -s "iPhone Developer: Larry Freeman (LT6G4W62Z2)" --resource-rules=/Users/larryfreeman/src/hub/mobile/HubPages/build/iphone/build/Debug-iphoneos/HubPages.app/Entitlements.plist --entitlements /Users/larryfreeman/src/hub/mobile/HubPages/build/iphone/build/HubPages.build/Debug-iphoneos/HubPages.build/HubPages.xcent /Users/larryfreeman/src/hub/mobile/HubPages/build/iphone/build/Debug-iphoneos/HubPages.app
        Command /usr/bin/codesign failed with exit code 11

    Thanks! -Larry

    Read the article

  • How do I reward my developers for the little things they get right?

    - by Nat
    I am in a tech lead role and my developers get stuff right most of the time. How do I communicate their value to me? (i.e., they have value because I do not have to go through and point out mistakes, which means I do not have to watch them like a hawk, which frees me to do more useful things.)

    In summary: for doing the mundane well on a day-to-day basis, it is good to recognise the developers' effort verbally to them. An honest thank-you that mentions the specific behaviour and its positive repercussions to you personally will be well received; adjust the language to suit each individual. (Note that other developers within earshot may also respond to this by increasing their efforts in this specific activity.) Other things that should be done regularly are:

    - Team drinks. In many cultures this is an entirely worthy way of giving the team some time to socialise and relax. Be sure that you do not exclude people who do not drink or are not keen on pub culture. Shared meals are another option.
    - Formal written (email) acknowledgment and praise to senior managers of the team's efforts and successes. (Note that acknowledging individuals alone may damage team spirit.)
    - Work the hours you expect your team to do. If they absolutely must work late for a deadline, be there in support.
    - Go to bat for the team. Refuse to let them be forced to work long periods of overtime without compensation. Protect them from politics and stress.
    - Give your team the best equipment you can afford. Good tools show respect and improve productivity.
    - Small or large team rewards, where appropriate, can consist of many interesting activities/items. If it allows the team to get together in a fun and even lightly competitive manner, it will work (foosball table, go-karting, darts board, video game console, etc.). Don't forget to listen to what the team wants; each team will have different ideas.
    - Ensure they are getting a fair deal financially from the company. While different people may have different expectations of their pay, someone being paid unfairly will rot morale for the entire team.

    Read the article

  • How do I integrate JUnit/PMD/FindBugs reports into the Hudson build email?

    - by fei
    My team is looking into using Hudson as our continuous integration software. One problem we are trying to figure out is how to integrate the JUnit/PMD/FindBugs reports into the build email that gets sent to the team. The graphs and reports on the dashboard are nice and all, but people usually want to just read the email rather than click through links. I tried to use the email-ext plugin, but that doesn't provide much help for this. Is there any way I can get that information into the build email? Thanks!
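
    One avenue: the email-ext plugin can render Jelly templates into the message body, and its bundled "html" template already inlines JUnit results. In the project's editable email configuration, the content would be just the token below; folding PMD/FindBugs numbers in would need a custom .jelly template dropped into the email-templates directory under the Hudson home (that extra template is an assumption about effort, not something the plugin ships):

        ${JELLY_SCRIPT,template="html"}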

    Read the article

  • How can I setup ANT with Subversion and ColdFusion Builder (eclipse) to check out a local build to work on?

    - by Smooth Operator
    I am not sure if there's an answer for this already -- I couldn't find one for this (hopefully common) setup. I recently converted one of my ColdFusion projects to deploy via ANT. I have a local Ant script that instructs a remote server to check out the code and run the application's specific build file remotely on the server. I have a few endpoints:

    - Live - production (on the production server)
    - Staging - on the production server, different datasource, etc.
    - Dev - on the local box

    What I have run into, it seems, is a simple and common problem: I now need ANT to create any build, even locally. Fine - I created a local endpoint and it configures for my box. The issue? How do I get the result to show up as a project (automatically if possible) in Eclipse/ColdFusion Builder?

    What I envision is that instead of checking out a branch via the Subversion plugin in CFBuilder/Eclipse, I now use ANT to do that for me. Since I use ColdFusion Builder (Eclipse + Adobe's plugin), I have all of Eclipse's tools and plugins available to solve the problem: how can I best call ANT from within Eclipse/ColdFusion Builder to set up the local build as a project that I can develop and work on?

    I think when I check the code back in from the local box, I'd have to be sure not to check in any files with local config paths, etc. I hope this is a detailed and clear enough explanation; if not, please ask. Thanks in advance!
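
    As a starting point, the checkout itself can live in an Ant target that Eclipse runs directly (right-click the build file > Run As > Ant Build). A minimal sketch using a plain <exec> around the svn client (svnant would work too); the URL and paths are hypothetical:

        <!-- Hypothetical target: check out a local working copy for the dev endpoint -->
        <target name="checkout-local">
          <exec executable="svn" failonerror="true">
            <arg value="checkout"/>
            <arg value="http://svnserver/repos/myapp/branches/dev"/>
            <arg value="${basedir}/../myapp-local"/>
          </exec>
        </target>

    Once the tree is on disk, File > Import > Existing Projects into Workspace (or pointing a new project at that folder) gets it into CFBuilder; I am not aware of a supported way to have Ant register the project in the workspace automatically.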

    Read the article

  • Is there only one distribution certificate per team leader or one per application?

    - by dbonneville
    I'm confused by the dev and dist certs. I got one app in the store, but I named my certs after my first app. Was this a mistake? I'm ready to go on my second app, but Xcode has selected the dist cert with the old app name on it. It built without error. Though I named it wrong, will it still work? Xcode is automatically picking this cert for me; is this right?

    You need a new App ID for each app so you can 1) put it in the plist, and 2) put it in the code signing section on the Build tab of the Target info, but you don't need a new dev and dist cert for each new app. Therefore, is this right too? For each app you develop, you only need your original dev and dist cert, but a new App ID for each app. This is so obscure! I wish Apple had done a better job!

    Read the article

  • The best way to build iPhone/iPad static libraries?

    - by Derek Clarkson
    Hi everyone, I have a couple of static iPhone/iPad libraries I am working on. The problem I am looking for advice on is the best way to package the libraries. My objective is to make it easy to use the libraries in other projects and include the correct one in a build with minimal problems. To make it more interesting, I currently build four versions of each library, as follows:

    - armv6/armv7 release (devices)
    - i386 release (simulator)
    - armv6/armv7 debug (devices)
    - i386 debug (simulator)

    The difference between the release and debug versions is that the debug versions contain a lot of NSLog(...) code, which lets people see what's going on internally as an aid to debugging. Currently, when I build the whole project, I arrange the libraries into two directories like this:

        release/
            lib-device.a
            lib-simulator.a
        debug/
            lib-device.a
            lib-simulator.a

    This works OK, except that when included in projects, both paths are added to the library search path, and switching a target from one to the other is a pain. Or, alternatively, I end up with two targets. The alternative I am thinking of is to change the directories to this:

        release/
            device/
                lib.a
            simulator/
                lib.a
        debug/
            device/
                lib.a
            simulator/
                lib.a

    In playing with Xcode, it appears that all Xcode uses a project's library references for is to get the name of the library file, which it then looks up in the library paths. Thus, by parameterising the library path with the current build type and target device, I can effectively auto-switch. What do you guys think? Is there a better way to do this? ciao Derek
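
    To flesh out the parameterised idea with the second layout: if the folders are named after Xcode's own variables (Debug/Release and iphoneos/iphonesimulator rather than "debug"/"device"), a single Library Search Paths entry can select the right lib.a. This is a sketch of the approach, not tested against every SDK:

        Library Search Paths = $(SRCROOT)/libs/$(CONFIGURATION)/$(PLATFORM_NAME)

    $(CONFIGURATION) expands to Debug or Release and $(PLATFORM_NAME) to iphoneos or iphonesimulator, so switching the build configuration or SDK re-resolves the same lib.a name to the matching binary without touching the target.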

    Read the article

  • Read attributes of MSBuild custom tasks via events in the Logger

    - by gt
    I am trying to write an MSBuild logger module which, on receiving TaskStarted events, logs information about the task and its parameters. The build is run with the command:

        MSBuild.exe /logger:MyLogger.dll build.xml

    Within build.xml is a sequence of tasks, most of which have been custom written to compile a (C++ or C#) solution, and are accessed with the following custom task:

        <DoCompile Desc="Building MyProject 1" Param1="$(Param1Value)" />
        <DoCompile Desc="Building MyProject 2" Param1="$(Param1Value)" />
        <!-- etc -->

    The custom build task DoCompile is defined as:

        public class DoCompile : Microsoft.Build.Utilities.Task
        {
            [Required]
            public string Description { set { _description = value; } }
            // ... more code here ...
        }

    While the build is running, as each task starts, the logger module receives IEventSource.TaskStarted events, subscribed to as follows:

        public class MyLogger : Microsoft.Build.Utilities.Logger
        {
            public override void Initialize(Microsoft.Build.Framework.IEventSource eventSource)
            {
                eventSource.TaskStarted += taskStarted;
            }

            private void taskStarted(object sender, Microsoft.Build.Framework.TaskStartedEventArgs e)
            {
                // write e.TaskName, attributes and e.Timestamp to log file
            }
        }

    The problem I have is that in the taskStarted() method above, I want to be able to access the attributes of the task for which the event was fired. I only have access to the logger code and cannot change either build.xml or the custom build tasks. Can anyone suggest a way I can do this?
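
    One workaround, with the caveat that it scrapes MSBuild's log text rather than using a public API: at high verbosity the engine raises a build message per task parameter, which the logger can intercept. A sketch, assuming the message format "Task Parameter:<name>=<value>" observed in MSBuild 3.5 diagnostic logs (not a documented contract), with the build run under /v:diag:

        // In Initialize(), alongside the TaskStarted subscription:
        //     eventSource.MessageRaised += messageRaised;
        private void messageRaised(object sender, Microsoft.Build.Framework.BuildMessageEventArgs e)
        {
            const string prefix = "Task Parameter:";
            if (e.Message != null && e.Message.StartsWith(prefix))
            {
                // e.g. "Desc=Building MyProject 1"; associate with the pending TaskStarted entry
                string nameValuePair = e.Message.Substring(prefix.Length);
            }
        }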

    Read the article

  • How to generate a build number from the SVN revision number with the Maven buildNumber plugin

    - by Binit jha
    Hi, I am using the Maven buildNumber plugin to generate a build number from the latest SVN revision number, but our version does not resolve ${buildNumber} when installing into the .m2 local repository. Here are the relevant pom details:

        <modelVersion>4.0.0</modelVersion>
        <groupId>com.hp.cloudprint</groupId>
        <artifactId>testutils</artifactId>
        <name>testutils</name>
        <version>6.3.rel.${buildNumber}</version>
        <description>This jar contains some helper classes which can simplify the writing of JUnit test cases.</description>

        <dependencies>
          …
        </dependencies>

        <build>
          <plugins>
            <plugin>
              <groupId>org.codehaus.mojo</groupId>
              <artifactId>buildnumber-maven-plugin</artifactId>
              <executions>
                <execution>
                  <id>useLastCommittedRevision</id>
                  <phase>validate</phase>
                  <goals>
                    <goal>create</goal>
                  </goals>
                </execution>
              </executions>
              <configuration>
                <doCheck>false</doCheck>
                <doUpdate>true</doUpdate>
                <getRevisionOnlyOnce>true</getRevisionOnlyOnce>
              </configuration>
            </plugin>
          </plugins>
        </build>

        <scm>
          <connection>scm:svn:https://acn-platform</connection>
          <developerConnection>scm:svn:https://abc-platform/trunk</developerConnection>
        </scm>

    The build output:

        Building jar: C:\Documents and Settings\hpadmin\workspace\testutils\target\testutils-6.3.rel.2930.jar
        [INFO] [install:install]
        [INFO] Installing C:\Documents and Settings\hpadmin\workspace\testutils\target\testutils-6.3.rel.2930.jar to C:\Documents and Settings\jhab\.m2\repository\com\hp\cloudprint\testutils\6.3.rel.${buildNumber}\testutils-6.3.rel.${buildNumber}.jar
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD SUCCESSFUL

    The target directory generated a correctly named jar (testutil-6.3.rel.2297.jar); it is only the install path that keeps the literal ${buildNumber}. Thanks in advance. Binit
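
    For what it's worth, the usual explanation is that Maven interpolates <version> before buildnumber-maven-plugin ever runs, which is exactly why the literal ${buildNumber} survives into the .m2 path. The common workaround is to keep <version> static and stamp the revision into the jar manifest instead; a hedged sketch:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <configuration>
            <archive>
              <manifestEntries>
                <!-- buildNumber is populated by buildnumber-maven-plugin in the validate phase -->
                <SCM-Revision>${buildNumber}</SCM-Revision>
              </manifestEntries>
            </archive>
          </configuration>
        </plugin>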

    Read the article

  • Visual Studio 2008 Unit test does not pick up code changes unless I build the entire solution

    - by Orion Edwards
    Here's the scenario:

    1. Change my code.
    2. Change my unit test for that code.
    3. With the cursor inside the unit test class/method, invoke VS2008's "Run tests in current context" command.
    4. The Visual Studio Output window indicates that the code DLL and the test DLL both build successfully (in that order).

    The problem, however, is that the unit test does not use the latest version of the DLL it has just built. Instead, it uses the previously built DLL (which doesn't have the updated code in it), so the test fails. When adding a new method, this results in a MethodNotImplementedException, and when adding a class, it results in a TypeLoadException, both because the unit test thinks the new code is there, and it isn't! If I'm just updating an existing method, then the test simply fails due to incorrect results.

    I can work around the problem by doing this:

    1. Change my code.
    2. Change my unit test for that code.
    3. Invoke VS2008's "Build Solution" command.
    4. With the cursor inside the unit test class/method, invoke VS2008's "Run tests in current context" command.

    The problem is that doing a full Build Solution (even though nothing has changed) takes upwards of 30 seconds, as I have approximately 50 C# projects, and VS2008 is not smart enough to realize that only 2 of them need to be looked at. Having to wait 30 seconds just to change 1 line of code and re-run a unit test is abysmal. Is there anything I can do to fix this? None of my code is in the GAC or anything funny like that; it's just ordinary old DLLs (building against .NET 3.5 SP1 on a Win7/64-bit machine). Please help!

    Read the article

  • What is an elegant way to set up a leiningen project that requires different dependencies based on the build platform?

    - by Savanni D'Gerinel
    In order to do some multi-platform GUI development, I have just switched from GTK + Clojure (because it looks like the Java bindings for GTK never got ported to Windows) to SWT + Clojure. So far, so good, in that I have gotten an uberjar built for Linux. The catch, though, is that I want to build an uberjar for Windows, and I am trying to figure out a clean way to manage the project.clj file.

    At first, I thought I would set the classpath to point to the SWT libraries and then build the uberjar. This would require that I set a classpath to the SWT libraries before running the jar, but I would likely need a launcher script anyway. However, leiningen seems to ignore the classpath in this instance. Currently, project.clj looks like this for me:

        (defproject alyra.mana-punk/character "1.0.0-SNAPSHOT"
          :description "FIXME: write"
          :dependencies [[org.clojure/clojure "1.2.0"]
                         [org.clojure/clojure-contrib "1.2.0"]
                         [org.eclipse/swt-gtk-linux-x86 "3.5.2"]]
          :main alyra.mana-punk.character.core)

    The relevant line is the org.eclipse/swt-gtk-linux-x86 one. If I want to make an uberjar for Windows, I have to depend on org.eclipse/swt-win32-win32-x86, and another one for x86-64, and so on and so forth. My current solution is to simply create a separate branch for each build environment, with a different project.clj in each. This seems kinda like using a semi to deliver a single gallon of milk, but I am using bazaar for version control, so branching and repeated integrations are easy.

    Maybe the better way is to have a project.linux.clj, project.win32.clj, etc., but I do not see any way to tell leiningen which project descriptor to use. What are other (preferably more elegant) ways to set up such an environment?
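
    One later development worth noting: Leiningen 2 profiles make this a single checked-in project.clj, though that postdates the 1.x tooling implied here. A sketch under that assumption:

        (defproject alyra.mana-punk/character "1.0.0-SNAPSHOT"
          :dependencies [[org.clojure/clojure "1.2.0"]
                         [org.clojure/clojure-contrib "1.2.0"]]
          ;; platform-specific SWT artifacts live in profiles, not in :dependencies
          :profiles {:linux {:dependencies [[org.eclipse/swt-gtk-linux-x86 "3.5.2"]]}
                     :win32 {:dependencies [[org.eclipse/swt-win32-win32-x86 "3.5.2"]]}}
          :main alyra.mana-punk.character.core)

    Then `lein with-profile linux uberjar` (or win32) selects the platform at build time from the one project file.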

    Read the article

  • MinGW-gcc PCH not speeding up wxWidget build times. Is my setup correct?

    - by Victor T.
    Hi all, I've been building wxMSW 2.8.11 with the latest stable release of MinGW GCC 4.5.1, and I'm trying to see if the build can be sped up using precompiled headers. My initial attempts don't seem to work. I basically followed the instructions given here. I created a precompiled wxprec.h with the following:

        g++ -O2 -mthreads -DHAVE_W32API_H -D__WXMSW__ -DNDEBUG -D_UNICODE -I..\..\lib\gcc_dll\mswu -I..\..\include -W -Wall -DWXBUILDING -I..\..\src\tiff -I..\..\src\jpeg -I..\..\src\png -I..\..\src\zlib -I..\..\src\regex -I..\..\src\expat\lib -DwxUSE_BASE=1 -DWXMAKINGDLL -Wno-ctor-dtor-privacy ../../include/wx/wxprec.h

    That does successfully create a wxprec.h.gch that's about 1.6 MB in size. Now I proceed to build wxMSW using the following make command from a cmd.exe shell:

        mingw32-make -f makefile.gcc

    While the build does succeed, I noticed no speedup whatsoever compared to building without the PCH. To make sure GCC was actually using the PCH, I added -H to config.gcc and did another rebuild. Indeed, the include list output does show a '!' next to wxprec.h, so GCC is supposedly using it.

    What's the reason for the PCH not helping? Did I set up the precompiled headers correctly, or am I missing a step? Just for reference, here are the compile times I get when building wxMSW 2.8.11 with the other compilers (Visual Studio 2010 and C++Builder 2007). The time savings there are pretty significant.

        |          | release, pch | release, nopch | debug, nopch |
        |----------|--------------|----------------|--------------|
        | gcc451   | 8min 33sec   | 8min 17sec     | 8min 49sec   |
        | msc_1600 | 2min 23sec   | 13min 11sec    | --           |
        | bcc593   | 0min 59sec   | 2min 29sec     | --           |

    Thanks
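
    A quick way to double-check a single translation unit, in case the gcc timings reflect a silently rejected PCH: GCC only uses a .gch whose compile options exactly match the current compile, and otherwise falls back to the plain header without complaint unless asked. Flags abbreviated; the source file is an arbitrary example:

        g++ -H -Winvalid-pch -O2 -mthreads -D__WXMSW__ [other flags as above] -c ../../src/common/appbase.cpp

    -H prints each opened header, with '!' marking a PCH actually in use, and -Winvalid-pch warns when a .gch exists but cannot be used. If the '!' genuinely shows up everywhere, the flat timings suggest re-parsing wxprec.h simply isn't the bottleneck in this build.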

    Read the article

  • Run AIR Debug Launcher (ADL) without a GUI for continuous build on Hudson CI and xvnc plugin

    - by jensendarren
    I cannot seem to get a headless FlexUnit build to run on Hudson CI with the xvnc plugin on Ubuntu 9.04. Here is what I have tried:

    - compiled using the -use-network=false switch
    - added a Global Flash Player Trust file /var/lib/hudson/.macromedia/Flash_Player/#Security/FlashPlayerTrust/security.cfg (with the content /)
    - commented out the last line "twm &" from /var/lib/hudson/.vnc/xstartup

    Read the article

  • Glassfish 3: How do I get and use a developers build so I can navigate a stack trace including Glassfish source?

    - by Thorbjørn Ravn Andersen
    I am migrating a JSF 1.1 application to the JEE 6 web profile, and doing it in steps. I am in the process of moving from JSP with JSF 1.1 to Facelets under JSF 1.2, using the jsf-facelets.jar for JSF 1.2, and received an "interesting" stack trace when trying to look up a key in a Map using "{Bean.foo.map.key}", where the stack trace complained about "key" not being a valid integer. (After code introspection I am working around it by using a number as the key.) That bug is not what this question is about.

    In such a situation it is essential to be able to navigate the source of every line in the stack trace. In Eclipse I normally attach a source jar to every jar on the build path, but in this particular case the Glassfish server adapter creates a library automatically containing the jars. Also, there is to my knowledge no debug build of Glassfish where sources are included in the bundle.

    Glassfish is a non-trivial Maven project, and a bit picky too. I am not very familiar with Maven, but have managed to check out the code from Subversion and build it for the 3.0 tag according to http://wiki.glassfish.java.net/Wiki.jsp?page=V3FullBuildInstructions#section-V3FullBuildInstructions-CheckoutTheWorkspace, and it appears to be the code corresponding to the officially released 3.0 version. After finishing the "mvn -U install" part, I then tried to create Eclipse projects by first using "mvn -DdownloadSources=true eclipse:eclipse" and then importing them in Eclipse JEE 3.5.2 and specifying the M2_REPO variable, but many of the projects still have compilation errors, and I cannot locate any instructions from Oracle about how to do this.

    I'd appreciate some help in just getting a functional IDE workspace reflecting the 3.0 version of Glassfish. I have Eclipse 3.5.2, NetBeans 6.8 and 6.9 beta, and IntelliJ IDEA 9, and Linux/Windows/OS X to do it on.

    Read the article

  • Is it feasible to build a home Plug PC cluster?

    - by Zubair
    Is it possible to build a performant cluster for the home based on something like a Plug PC? See http://www.marvell.com/platforms/plug_computer/. I realise the Marvell Plug PC is not very powerful, but in a couple of years I could imagine them starting to get dual cores and more RAM.

    Read the article
