Search Results

Search found 1098 results on 44 pages for 'continuous'.


  • Project life cycle management - Maven vs 'manual' approach

    - by jb10210
    I have a question concerning the life cycle management of a/multiple project(s), more specifically the advantages/disadvantages of using technologies such as Maven. Currently we work in a continuous-integration environment, but lots of things still need to be performed manually (dependency management, deploying, setting up documentation, generating stats, ...). My impression is that this approach often leads to errors, miscommunication, or things just being forgotten. I know Maven and have used it in the past, but only in smaller environments, and I was always really enthusiastic about it. I was wondering if someone could share some insights, experiences, pros, cons, ... about the use of Maven (or a similar technology) in larger environments and for multiple projects. I would like to use the suggestions made here to start the debate about moving to the next level in project management!
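    As a purely illustrative sketch (not from the original question), this is roughly what a multi-module parent POM looks like once Maven takes over dependency management; every group, artifact and version name here is hypothetical:

        <?xml version="1.0" encoding="UTF-8"?>
        <project xmlns="http://maven.apache.org/POM/4.0.0">
          <modelVersion>4.0.0</modelVersion>
          <groupId>com.example</groupId>
          <artifactId>parent</artifactId>
          <version>1.0.0-SNAPSHOT</version>
          <packaging>pom</packaging>

          <!-- Child modules built and released together by the CI server. -->
          <modules>
            <module>core</module>
            <module>web</module>
          </modules>

          <!-- Dependency versions declared once; child modules reference them without a version. -->
          <dependencyManagement>
            <dependencies>
              <dependency>
                <groupId>junit</groupId>
                <artifactId>junit</artifactId>
                <version>4.11</version>
                <scope>test</scope>
              </dependency>
            </dependencies>
          </dependencyManagement>
        </project>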

    Read the article

  • How should I set up UDK with Git and CruiseControl?

    - by Martin Sojka
    For a new project in UDK, I'd like to set up a Git repository for version control and a CruiseControl.NET-based continuous integration solution. The good news is that the first part seems easy enough, and CruiseControl.NET can work off Git repositories. The bad news is that, according to my searches, nobody has ever tried to do this. Ideally, I'm looking for a step-by-step guide on how to set up such a development environment, assuming more than one development computer, one central repository for the "master" branch, and one machine for building and packaging the binaries via CruiseControl.NET. Related:
    - Version control system for game development with UDK?
    - Options for UDK and version control repositories?
    - CruiseControl.NET and Git
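    Not from the original question, but as a rough, hedged sketch of the shape a ccnet.config project block might take for this setup (repository URL, paths and the UDK invocation are placeholders to verify against your CruiseControl.NET version and UDK install):

        <cruisecontrol xmlns:cb="urn:ccnet.config.builder">
          <project name="MyUDKGame">
            <!-- Poll the central Git repository for new commits on master. -->
            <sourcecontrol type="git">
              <repository>git://buildserver/udk-game.git</repository>
              <branch>master</branch>
            </sourcecontrol>
            <triggers>
              <intervalTrigger seconds="120" />
            </triggers>
            <tasks>
              <!-- UDK script compilation is normally driven from the command line. -->
              <exec>
                <executable>C:\UDK\Binaries\Win64\UDK.exe</executable>
                <buildArgs>make -full</buildArgs>
              </exec>
            </tasks>
          </project>
        </cruisecontrol>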

    Read the article

  • iOS: Using Jenkins for nightly internal builds (TestFlight), plus frequent client builds

    - by Amy Worrall
    I'm an iOS dev, working for a small agency. I'm currently on a few smaller projects where I'm the only developer. We recently acquired a Jenkins server, but each project is left to fend for itself as to how to use it. I'd like to use it for making and distributing builds. My ideal would be:
    - Every commit is built as a single IPA that is placed in an HTTP-accessible location. (It only needs to keep the latest one, otherwise the disk would fill up — some of our apps are 500MB or more.)
    - Once a day it makes a build, signs it with our internal provisioning profile, tacks a build number onto the end of the version number, and sends it to our internal TestFlight team.
    - When manually prompted, it builds the latest commit, gives it a manually specified version number, signs it with the client's provisioning profile and sends it to TestFlight.
    I'm pretty new to Jenkins. The developer who set up the server is running it on one of our projects, so I know it has the right stuff installed to do Xcode builds. I believe he's only using it to run unit tests though, not to do any of the code signing, IPA creation or TestFlight stuff. So my questions:
    - I've listed three distinct kinds of build. How does Jenkins cope with that? I see there's a "build triggers" section in the config for a Jenkins project, but it doesn't seem to mention different types of build. Should I just set up multiple Jenkins projects, called "App X (continuous)", "App X (nightly)" and "App X (client)"?
    - How do I specify the provisioning profile through Jenkins? If there isn't a way, I guess I could make different configurations in the Xcode project…
    - Has anyone else used Jenkins to actually do the release (i.e. build and push to TestFlight) of beta builds of their apps? Is it a good idea? Or should I continue just doing it manually?
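    For what it's worth, here is a rough, hedged sketch (not from the original question) of the kind of shell build step such a Jenkins job could run, circa Xcode 4/5 and the old testflightapp.com upload API; the target name, signing identity, profile path and tokens are all placeholders:

        #!/bin/bash
        set -e

        # Build the Release configuration for device; output lands in build/Release-iphoneos.
        xcodebuild -target MyApp -configuration Release -sdk iphoneos build

        # Wrap the .app into an IPA, signing with the chosen identity and provisioning profile.
        xcrun -sdk iphoneos PackageApplication -v "$PWD/build/Release-iphoneos/MyApp.app" \
            -o "$PWD/MyApp-$BUILD_NUMBER.ipa" \
            --sign "iPhone Distribution: My Agency Ltd" \
            --embed "$HOME/profiles/Internal.mobileprovision"

        # Upload to TestFlight (the original testflightapp.com API; tokens are placeholders).
        curl http://testflightapp.com/api/builds.json \
            -F file=@"MyApp-$BUILD_NUMBER.ipa" \
            -F api_token="$TF_API_TOKEN" \
            -F team_token="$TF_TEAM_TOKEN" \
            -F notes="Jenkins build $BUILD_NUMBER" \
            -F notify=True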

    Read the article

  • Can an internally developed fast evolving, agile, short sprint web application lend itself to offshoring?

    - by Gavin Howden
    I have recently been set a target to achieve readiness to successfully manage and deliver results through the use of offshore teams on our mainline development project within 12 months. Our mainline is a multi-thousand-user, highly available web application, plus various related SaaS components delivered through that web application. We work agile on the mainline with rapid one-week sprints using continuous integration. Our delivery platform is a bespoke PHP framework, although we have some .NET services and components in the mix. My view is: an offshore team could work if we either ship out an entire isolated project for offshore development, or we specify a component of our system in huge detail up front. But we don't currently work like that, it would conflict with the in-house method, and unless the offshore team is working within our team, with our development/deployment chain, it could be an integration nightmare. So my question is: given that we have a closed-source bespoke framework (private IP) which we train our developers to use, and we work agile, minimising documentation, maximising communication and responding to rapidly changing requirements, and much of the quality control is via team skills building and peer review, how can I make offshoring work on our mainline development?

    Read the article

  • Branching and CI Builds with Agile

    - by Bob Horn
    We follow many agile processes, including automated tests, continuous integration, sprint reviews, etc... We're currently having a debate about how often we should branch release builds. We've been doing two-week sprints and trying to deploy to production at the end of each sprint. Some of us think we should be branching every sprint. Some of us think that's overkill. If a project encompasses three Visual Studio solutions, and we branch every sprint, then that's three branches, and three CI builds to create every two weeks. If we do this for six months, we'll end up with 36 branches and 36 CI builds. There is overhead involved in that. For those of us that think that branching every sprint is overkill, we don't have a very good alternative. On my last project, we deployed some solutions from the Main trunk. Yeah, that's not good, but it saved on some of the overhead. What's the right way to manage branching/releasing and CI builds, using agile, when we have such short (two-week) sprint cycles?

    Read the article

  • Want a headless build server for SSDT without installing Visual Studio? You’re out of luck!

    - by jamiet
    An issue that regularly seems to rear its head on my travels is that of headless build servers for SSDT. What does that mean exactly? Let me give you my interpretation of it. A SQL Server Data Tools (SSDT) project incorporates a build process that will basically parse all of the files within the project and spit out a .dacpac file. Where an organisation employs a Continuous Integration process they will likely want to automate the building of that dacpac whenever someone commits a change to the source control repository. In order to do that the organisation will use a build server (e.g. TFS, TeamCity, Jenkins) and hence that build server requires all the pre-requisite software that understands how to build an SSDT project. The simplest way to install all of those pre-requisites is to install SSDT itself; however, a lot of folks don’t like that approach because it installs a lot of unnecessary components, not least Visual Studio itself. Those folks (of which I am one) are of the opinion that it should be unnecessary to install a heavyweight GUI in order to simply get a few software components required to do something that inherently doesn’t even need a GUI. The phrase “headless build server” is often used to describe a build server that doesn’t contain any heavyweight GUI tools such as Visual Studio, and is a desirable state for a build server.

    In his blog post Headless MSBuild Support for SSDT (*.sqlproj) Projects, Gert Drapers outlines the steps necessary to obtain a headless build server for SSDT:

        This article describes how to install the required components to build and publish SQL Server Data Tools projects (*.sqlproj) using MSBuild without installing the full SQL Server Data Tool hosted inside the Visual Studio IDE.
        http://sqlproj.com/index.php/2012/03/headless-msbuild-support-for-ssdt-sqlproj-projects/

    Frankly, however, going through these steps is a royal PITA, and folks like myself have longed for Microsoft to support headless builds for SSDT by providing a distributable installer that installs only the pre-requisites for building SSDT projects. Yesterday, in the MSDN forum thread Building a VS2013 headless build server - it's sooo hard, Mike Hingley complained about this very thing and it prompted a response from Kevin Cunnane from the SSDT product team:

        The official recommendation from the TFS / Visual Studio team is to install the version of Visual Studio you use on the build machine.

    I, like many others, would rather not have to install full-blown Visual Studio and so I asked:

        Is there any chance you'll ever support any of these scenarios:
        - Installation of all build/deploy pre-requisites without installing the VS shell?
        - TFS shipping with all of the pre-requisites for doing SSDT project build/deploys
        - 3rd party build servers (e.g. TeamCity) shipping with all of the pre-requisites for doing SSDT project build/deploys
        I have to say that the lack of a single installer containing all the pre-requisites for SSDT build/deploy puzzles me. Surely the DacFX installer would be a perfect vehicle for that?

    Kevin replied again:

        The answer is no for all 3 scenarios. We looked into this issue, discussed it with the Visual Studio / TFS team, and in the end agreed to go with their latest guidance which is to install Visual Studio (e.g. VS2013 Express for Web) on the build machine. This is how Visual Studio Online is doing it and it's the approach recommended for customers setting up their own TFS build servers.
        I would hope this is compatible with 3rd party build servers but have not verified whether this works with TeamCity etc.
        Note that DacFx MSI isn't a suitable release vehicle for this as we don't want to include Visual Studio/MSBuild dependencies in that package. It's meant to just include the core DacFx DLLs used by SSMS, SqlPackage.exe on the command line, etc.
        What this means is we won't be providing a separate MSI installer or nuget package with just the necessary build DLLs you need to run your build and tests. If someone wanted to create a script that generated a nuget package based on our DLLs and targets files, then release that somewhere on the web for easier integration with 3rd party build servers we've no problem with that.

    Again, here’s the link to the thread, and it’s worth reading in its entirety if this is something that interests you. So there you have it. Microsoft will not be providing support for headless build servers for SSDT, but if someone in the community wants to go ahead and roll their own, go right ahead.

    @Jamiet
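    (As an aside, and purely as an illustrative sketch rather than anything from the thread: once the pre-requisites are on a box, building and deploying a .sqlproj headlessly is an ordinary command-line step; the project, server and database names below are hypothetical.)

        REM Build the SSDT project from a build-server step; the .dacpac lands in bin\Release by default.
        msbuild.exe MyDatabase.sqlproj /t:Build /p:Configuration=Release

        REM Deploy the resulting dacpac with SqlPackage.exe (ships with DacFx).
        SqlPackage.exe /Action:Publish /SourceFile:bin\Release\MyDatabase.dacpac ^
            /TargetServerName:BUILDSQL01 /TargetDatabaseName:MyDatabase_CI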

    Read the article

  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allowed for rapid-fire rounds of development and refactoring, while always retaining a “production-ready” state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, much of his frustration directed toward Oracle DBAs in particular. In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and “still favored tools I’d have been embarrassed to use in the ’80′s“. DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the “mother ship”. I sat blinking in astonishment at the speaker’s vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent.

    Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we who work with SQL Server database professionals, or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen to be holding onto a particular product by the whites of our knuckles, to the exclusion of all else? If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don’t conform to standards by instinct, but by customer demand. Microsoft has made great strides to adopt the international SQL Standards, where possible, thanks to considerable lobbying by the community. The implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema. One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views. Developers who’ve attempted to apply a standard query that works on MySQL, or some other database, but doesn’t produce the expected results on SQL Server are advised to shun the Standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well-documented, and of course we all use those catalog views, out of necessity. And yet, as database professionals, committed to supporting the best databases for the business, whatever they are now and in the future, surely our heart should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause. And when we read messages in the Microsoft documentation informing us that we shouldn’t rely on INFORMATION_SCHEMA to reliably identify the schema of an object, in SQL Server!
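    To make the “guard clause” point concrete (my illustration, not the editorial’s, using a hypothetical dbo.Orders table), the Standards-based check and its vendor-specific equivalent look like this:

        -- Portable, Standards-based guard clause against the INFORMATION_SCHEMA views:
        IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES
                       WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'Orders')
            CREATE TABLE dbo.Orders (OrderID int NOT NULL);

        -- SQL Server-specific alternative using the catalog metadata functions:
        IF OBJECT_ID(N'dbo.Orders', N'U') IS NULL
            CREATE TABLE dbo.Orders (OrderID int NOT NULL);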

    Read the article

  • PostgreSQL continuous archiving not running archive_command

    - by Whatsit
    I've been trying to set up continuous archiving for a simple, test PostgreSQL 9.0 database, as per the documentation. In postgresql.conf I've set:

        wal_level = archive
        archive_mode = on
        archive_command = 'touch /home/myusername/backup/testtouch'
        archive_timeout = 30s

    ...and restarted PostgreSQL. The file listed by touch never appears. I can manually run the touch command and it works as expected. If I try to create a backup, it waits forever for the archive_command. In psql:

        postgres=# SELECT pg_start_backup('touchtest');
         pg_start_backup
        -----------------
         0/14000020
        (1 row)

        postgres=# SELECT pg_stop_backup();
        NOTICE:  pg_stop_backup cleanup done, waiting for required WAL segments to be archived
        WARNING:  pg_stop_backup still waiting for all required WAL segments to be archived (60 seconds elapsed)
        HINT:  Check that your archive_command is executing properly. pg_stop_backup can be cancelled safely, but the database backup will not be usable without all the WAL segments.

    What would cause this? How can I troubleshoot it? Additional info: Running on CentOS 5.4. PostgreSQL 9.0.2 installed as root.

    Read the article

  • TeamCity for continuous integration with Visual Studio 2010 solutions/projects

    - by JeffryEngberg
    I am running TeamCity build 5.1.1 on a virtual machine that also hosts our SVN environment. A team I support has recently made the move from Visual Studio 2008/Silverlight 3.0 to Visual Studio 2010/Silverlight 4.0 and when investigating how to do continuous integration with Visual Studio 2010 solutions/projects, it is not as cut and dried as it appeared to be in Visual Studio 2008. Previously I was using Web Deployment Projects and targeting different Release Configurations in TeamCity, which would use the Web Deployment Project to package/deploy the code to our various environments. However when checking out the new Publish ability in Visual Studio 2010 I cannot find a way to specify which location to deploy to. Does everything need to be done in MSBuild now (in the solution file or maybe the Web project file?). If anyone has any examples of how they've done Continuous Integration using TeamCity and Visual Studio 2010, it would be greatly appreciated as I am coming up blank at the moment.
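    In case it helps frame the question, here is a rough, hedged sketch (not from the original post) of the MSBuild-driven packaging and publish properties the VS2010 Web Publishing Pipeline exposes, which a TeamCity build step could invoke per environment; server names, site paths and credentials are placeholders:

        REM Create a Web Deploy package for the Release configuration (written under obj\Release\Package):
        msbuild MyWebApp.csproj /t:Package /p:Configuration=Release

        REM Or deploy straight to a target IIS server via Web Deploy (all values below are placeholders):
        msbuild MyWebApp.csproj /p:Configuration=Release ^
            /p:DeployOnBuild=true ^
            /p:DeployTarget=MSDeployPublish ^
            /p:MSDeployServiceURL=https://staging-server:8172/msdeploy.axd ^
            /p:DeployIisAppPath="Default Web Site/MyWebApp" ^
            /p:MSDeployPublishMethod=WMSVC ^
            /p:AllowUntrustedCertificate=True ^
            /p:UserName=deployuser /p:Password=secret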

    Read the article

  • SQL - Finding continuous entries of a given size.

    - by ByteMR
    I am working on a system for reserving seats. A user inputs how many seats they wish to reserve, and the database will return a set of suggested seats that are not already reserved and that matches the number of seats being reserved. For instance, if I had the table:

        SeatID | Reserved
        -----------------
             1 | false
             2 | true
             3 | false
             4 | false
             5 | false
             6 | true
             7 | true
             8 | false
             9 | false
            10 | true

    ...and the user inputs that they wish to reserve 2 seats, I would expect the query to return that seats (3, 4), (4, 5), and (8, 9) are not reserved and match the given number of input seats. Seats are organized into sections and rows; continuous seats must be in the same row. How would I go about structuring this query so that it finds all available continuous seats that match the given input?
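    Not part of the original question, but as a minimal sketch of one common approach for the two-seat case (ignoring the section/row constraint, and assuming a table named Seats shaped like the one above with a boolean Reserved column), a self-join on adjacent seat numbers returns exactly the pairs listed:

        SELECT a.SeatID AS first_seat,
               b.SeatID AS second_seat
        FROM   Seats a
        JOIN   Seats b ON b.SeatID = a.SeatID + 1   -- the physically adjacent seat
        WHERE  a.Reserved = false
          AND  b.Reserved = false;                  -- both free: returns (3,4), (4,5), (8,9)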

    Read the article

  • Web Site Performance and Assembly Versioning – Part 3 Versioning Combined Files Using Mercurial

    - by capgpilk
    1. Minification and Concatenation of JavaScript and CSS Files
    2. Versioning Combined Files Using Subversion
    3. Versioning Combined Files Using Mercurial – this post

    I have worked on a project recently where there was a need to version the system (library dll, css and javascript files) by date and Mercurial revision number. This was in the format:

        0.12.524.407
        {major}.{year}.{month}{date}.{mercurial revision}

    Each time there is an internal build using the CI server, it would label the files using this format. When it came time to do a major release, it became v1.{year}.{month}{date}.{mercurial revision}, with each public release having a major version increment. Also as a requirement, each assembly had to have a new GUID on each build. So, like in previous posts, we need to edit the csproj file and add a couple of default targets.

        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" DefaultTargets="Hg-Revision;AssemblyInfo;Build"
                 xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>

    Right below the closing tag of the entire project we add our two targets; the first is to get the Mercurial revision number. We first need to import the tasks for MSBuild, which can be downloaded from http://msbuildhg.codeplex.com/

        <Import Project="..\Tools\MSBuild.Mercurial\MSBuild.Mercurial.Tasks" />

        <Target Name="Hg-Revision">
          <HgVersion LocalPath="$(MSBuildProjectDirectory)" Timeout="5000"
                     LibraryLocation="C:\TortoiseHg\">
            <Output TaskParameter="Revision" PropertyName="Revision" />
          </HgVersion>
          <Message Text="Last revision from HG: $(Revision)" />
        </Target>

    ...with the main Mercurial files being located at c:\TortoiseHg. To get a valid GUID we need to escape from the csproj markup and call some C# code, which we put in a property group for later reference.

        <PropertyGroup>
          <GuidGenFunction>
            <![CDATA[
              public static string ScriptMain() {
                return System.Guid.NewGuid().ToString().ToUpper();
              }
            ]]>
          </GuidGenFunction>
        </PropertyGroup>

    Now we add in our target for generating the GUID.

        <Target Name="AssemblyInfo">
          <Script Language="C#" Code="$(GuidGenFunction)">
            <Output TaskParameter="ReturnValue" PropertyName="NewGuid" />
          </Script>
          <Time Format="yy">
            <Output TaskParameter="FormattedTime" PropertyName="year" />
          </Time>
          <Time Format="Mdd">
            <Output TaskParameter="FormattedTime" PropertyName="daymonth" />
          </Time>
          <AssemblyInfo CodeLanguage="CS" OutputFile="Properties\AssemblyInfo.cs"
                        AssemblyTitle="name" AssemblyDescription="description"
                        AssemblyCompany="none" AssemblyProduct="product"
                        AssemblyCopyright="Copyright ©"
                        ComVisible="false" CLSCompliant="true" Guid="$(NewGuid)"
                        AssemblyVersion="$(Major).$(year).$(daymonth).$(Revision)"
                        AssemblyFileVersion="$(Major).$(year).$(daymonth).$(Revision)" />
        </Target>

    So this will give us an AssemblyInfo.cs file like this, just prior to calling the Build task:

        using System;
        using System.Reflection;
        using System.Runtime.CompilerServices;
        using System.Runtime.InteropServices;

        [assembly: AssemblyTitle("name")]
        [assembly: AssemblyDescription("description")]
        [assembly: AssemblyCompany("none")]
        [assembly: AssemblyProduct("product")]
        [assembly: AssemblyCopyright("Copyright ©")]
        [assembly: ComVisible(false)]
        [assembly: CLSCompliant(true)]
        [assembly: Guid("9C2C130E-40EF-4A20-B7AC-A23BA4B5F2B7")]
        [assembly: AssemblyVersion("0.12.524.407")]
        [assembly: AssemblyFileVersion("0.12.524.407")]

    Therefore giving us the correct version for the assembly. This can be referenced within your project, whether web or Windows based, like this:

        public static string AppVersion()
        {
            return Assembly.GetExecutingAssembly().GetName().Version.ToString();
        }

    As mentioned in previous posts in this series, you can label css and javascript files using this version number and the GetAssemblyIdentity task from the main MSBuild task library built into the .NET Framework.

        <GetAssemblyIdentity AssemblyFiles="bin\TheAssemblyFile.dll">
          <Output TaskParameter="Assemblies" ItemName="MyAssemblyIdentities" />
        </GetAssemblyIdentity>

    Then use this to write out the files:

        <WriteLinesToFile
          File="Client\site-style-%(MyAssemblyIdentities.Version).combined.min.css"
          Lines="@(CSSLinesSite)" Overwrite="true" />

    Read the article

  • How do I set up pairing email addresses?

    - by James A. Rosen
    Our team uses the Ruby gem hitch to manage pairing. You set it up with a group email address (e.g. [email protected]) and then tell it who is pairing:

        $ hitch james tiffany

    Hitch then sets your Git author configuration so that our commits look like:

        commit 629dbd4739eaa91a720dd432c7a8e6e1a511cb2d
        Author: James and Tiffany <[email protected]>
        Date:   Thu Oct 31 13:59:05 2013 -0700

    Unfortunately, we've only been able to come up with two options:
    - [email protected] doesn't exist. The downside is that if Travis CI tries to notify us that we broke the build, we don't see it.
    - [email protected] does exist and forwards to all the developers. Now the downside is that everyone gets spammed with every broken build by every pair.
    We have too many possible pairs to do any of the following:
    - set up actual [email protected] email addresses or groups (n^2 email addresses)
    - set up forwarding rules for [email protected] (n^2 forwarding rules)
    - set up forwarding rules for [email protected] (n forwarding rules for each of n developers)
    Does anyone have a system that works for them?

    Read the article

  • Branching and Merging with TortoiseSVN

    - by capgpilk
    For this example I am using Visual Studio 2010, TortoiseSVN 1.6.6, Subversion 1.6.6 and AnkhSVN 2.1.7819.411, so if you are using different versions, some of these screenshots may differ. This assumes you have your code checked in to the trunk directory and have a standard SVN structure of trunk, branches and tags. There are a number of developers who prefer to develop solely in a branch and never touch the trunk, but the process is generally the same, and you may be on a small team and prefer to work in the trunk and branch occasionally. There are three steps to successful branching. First you branch; then, when you are ready, you reintegrate any changes that other developers may have made to the trunk into your branch. Finally, when your branch and the trunk are in sync, you merge it back into the trunk.

    Branch
    - Right click the project root in Windows Explorer > TortoiseSVN > Branch/Tag
    - Enter the branch label in the ‘To URL’ box, for example /branches/1.1
    - Choose Head revision
    - Check Switch working copy
    - Click OK
    - Make any changes to the branch
    - Make any changes to the trunk
    - Commit any changes
    For this example I copied the project to another location prior to branching and made changes to that using Notepad++, then committed it to SVN; as this directory is mapped to the trunk, that is what gets updated.

    Merge Trunk with Branch
    - Right click the project root in Windows Explorer > TortoiseSVN > Merge
    - Choose ‘Merge a range of revisions’
    - In ‘URL to merge from’ choose your trunk
    - Click Next, then the ‘test merge’ button. This will highlight any conflicts. Here we have one conflict we will need to resolve, because we made a change and checked in to trunk earlier.
    - Click Merge. Now we have the opportunity to edit that conflict. This will open up TortoiseMerge, which will allow us to resolve the issue. In this case I want both changes.
    - Perform an Update, then Commit
    Reloading in Visual Studio shows we have all changes that have been made to both trunk and branch.

    Merge Branch with Trunk
    - Switch the working copy by right clicking the project root in Windows Explorer > TortoiseSVN > Switch
    - Switch to the trunk, then OK
    - Right click the project root in Windows Explorer > TortoiseSVN > Merge
    - Choose ‘Reintegrate a branch’
    - In ‘From URL’ choose your branch, then Next
    - Click ‘Test merge’; this shouldn’t show any conflicts
    - Click Merge
    - Perform an Update, then Commit
    - Open the project in Visual Studio; we now have all changes.
    So there we have it: we are connected back to the trunk and have all the updates merged.
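    (Not part of the original post, but for reference, a rough command-line equivalent of the same three steps with Subversion 1.6; the repository URL is a placeholder.)

        # Branch: copy trunk to a new branch and switch the working copy to it
        svn copy http://server/repo/trunk http://server/repo/branches/1.1 -m "Create 1.1 branch"
        svn switch http://server/repo/branches/1.1

        # Merge trunk changes into the branch (run inside the branch working copy)
        svn merge http://server/repo/trunk
        svn commit -m "Merged trunk into 1.1"

        # Reintegrate the branch back into trunk (run inside a trunk working copy)
        svn switch http://server/repo/trunk
        svn merge --reintegrate http://server/repo/branches/1.1
        svn commit -m "Reintegrated 1.1 into trunk"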

    Read the article

  • What build tools do not depend on java (or Ruby)?

    - by Mohamed Meligy
    I'm wondering what generic build tools out there ship with their own binary runtimes and do not depend on another environment that isn't shipped with them. For example, Ant requires Java, Rake requires Ruby, etc. It would also be great to hear about target-platform-agnostic tools, where I'd just give whatever command for building, whatever command for testing, etc., and could then define my artifacts in CI or so. I would find something like that useful for building .NET projects (say, on both Windows .NET and Mono), and especially Node.js projects. I do not want to install Java and/or Ruby if what I want is a .NET build or a Node.js build. This is a bit of a general-awareness question rather than an exact problem I'm facing, which is why it's here and not on Stack Overflow. Update: To explain a bit more, what I'm after is a build script that would run MSBuild for compiling (in the .NET case, and maybe several Node/npm commands in the Node case, etc.) and then perform the remaining build/test steps, instead of setting all of these up in MSBuild (again, in the .NET case; I'm also wondering if there is an equivalent story in Node).
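    Purely as an illustrative sketch of the "just give a command for each step" idea (not a tool recommendation, and the project names are placeholders), even a plain shell script driven by the CI server covers the .NET-plus-Node.js case without Java or Ruby:

        #!/bin/sh
        set -e

        # Build step: compile the .NET solution (msbuild on Windows, xbuild on Mono).
        xbuild MySolution.sln /p:Configuration=Release

        # Node.js steps: install dependencies and run the test suite.
        npm install
        npm test

        # Collect artifacts for the CI server to pick up.
        mkdir -p artifacts
        cp -r MyApp/bin/Release artifacts/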

    Read the article

  • How to attach WAR file in email from jenkins

    - by birdy
    We have a case where a developer needs to access the last successfully built WAR file from Jenkins. However, they can't access the Jenkins server. I'd like to configure Jenkins such that on every successful build, it sends the WAR file to this user. I've installed the ext-email plugin and it seems to be working fine: emails are being received along with the build.log. However, the WAR file isn't being received. The WAR file lives at this path on the server: /var/lib/jenkins/workspace/Ourproject/dist/our.war So I configured it under Post-build Actions like this: The problem is that emails are sent but the WAR file isn't being attached. Do I need to do something else?

    Read the article

  • Configure Jenkins and Tomcat using Puppet on Vagrant

    - by ex3v
    I'm playing with setting up my first Spring + Jenkins + Tomcat CI dev environment. For now it's just a test/fun phase, but in the near future I'll be starting a new project with my coworkers. That's the reason I want the development environment virtualized and exactly the same on every development machine, as well as on the production server. I chose to use Vagrant and to try to write Puppet scripts that not only install everything, but also configure everything, so each of us will have the same Jenkins plugins, the same Jenkins and Tomcat login and password, and, literally, after calling vagrant up we are ready to work. What I've managed to do so far is installation of the needed software and port forwarding. My Vagrantfile looks like this (comments stripped):

        VAGRANTFILE_API_VERSION = "2"

        Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
          config.vm.box = "precise32"
          config.vm.box_url = "http://files.vagrantup.com/precise32.box"
          config.vm.network :forwarded_port, guest: 80, host: 8090
          config.vm.network :forwarded_port, guest: 8080, host: 8091
          config.vm.network :private_network, ip: "192.168.33.10"
          config.vm.provision :puppet do |puppet|
            puppet.manifests_path = "puppet/"
            puppet.manifest_file = "default.pp"
            puppet.options = ['--verbose']
          end
        end

    And this is my Puppet file:

        Exec { path => [ "/bin/", "/sbin/", "/usr/bin/", "/usr/sbin/" ] }

        class system-update {
          exec { 'apt-get update':
            command => 'apt-get update',
          }

          $sysPackages = [ "build-essential" ]
          package { $sysPackages:
            ensure  => "installed",
            require => Exec['apt-get update'],
          }
        }

        class tomcat {
          package { "tomcat":
            ensure  => present,
            require => Class["system-update"],
          }
          service { "tomcat":
            ensure  => "running",
            require => Package["tomcat"],
          }
        }

        class jenkins {
          package { "jenkins":
            ensure  => present,
            require => Class["system-update"],
          }
          service { "jenkins":
            ensure  => "running",
            require => Package["jenkins"],
          }
        }

        include system-update
        include tomcat
        include jenkins

    Now, when I hit vagrant provision and go to http://localhost:8091/ I can see Jenkins running, so the above script works. The next step is configuring Jenkins and Tomcat by extending the above Puppet scripts. I'm pretty green when it comes to CI. After wandering around the web I've found a few tutorials about Jenkins configuration (here's one of them). I really want to move the whole setup described in that tutorial to Puppet, so when I spread my Vagrantfile and Puppet file between my coworkers, I will be sure that everyone has exactly the same setup. Unfortunately I'm also green about using Puppet, and I don't know how to do this. Any help will be appreciated.
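    (Not from the original question, but as a hedged sketch of the general direction: Jenkins plugins and configuration files can themselves be managed as Puppet resources. The plugin URL, paths and the assumption that wget is available on the base box are all illustrative only.)

        # Hypothetical sketch: fetch the Git plugin straight into Jenkins' plugin
        # directory and bounce the service so it gets picked up.
        exec { 'install-jenkins-git-plugin':
          command => 'wget -q -O /var/lib/jenkins/plugins/git.hpi http://updates.jenkins-ci.org/latest/git.hpi',
          creates => '/var/lib/jenkins/plugins/git.hpi',
          require => Service['jenkins'],
          notify  => Service['jenkins'],
        }

    The same pattern (a file or exec resource followed by a service restart) can drop in a pre-baked config.xml with the shared logins, which is roughly how the tutorial's manual steps translate into a repeatable manifest.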

    Read the article

  • Code Measuring and Metrics Tools?

    - by David
    I'm in the process of setting up a build server for personal projects. This server will handle all the normal CI stuff, including running large suites of tests (unit, integration, automated UI). While I'm working out the kinks for including code coverage output with MSTest, it occurs to me that there may be lots of tools out there which give me additional metrics other than just code coverage. FxCop comes to mind as an example. Though I'm sure there are others. Anything that can generate useful reportable data and metrics would be good. Whether it's class dependency charts (looking for Law of Demeter violations, for example), analyses of the uses of classes/functions (looking for a function that isn't used in the system other than just the tests, for example), and so on. I'm not sure the right way to formulate the question, since polling questions or "What's your favorite code analysis tool" aren't very good. But I'm essentially just looking for recommendations on what metrics to gather and the tools that can gather them. The eventual vision for something like this is to have the CI server run a bunch of automated tests and analysis tools and track performance metrics over time. Imagine a dashboard full of graphs plotting these metrics over time. The lines should all relatively be at an equilibrium, and if one starts to stray toward the negative then it's an early indication of problems with the code. In the age old struggle to quantify code quality with management, this sounds like a potentially helpful means of doing just that.
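    As a purely illustrative sketch of wiring one such tool into the CI run (the install path and assembly name are placeholders, and this is not an endorsement of any particular tool), FxCop has a command-line runner whose XML report a build server can archive and graph over time:

        REM Run FxCop's command-line analyzer over the build output and keep the XML report as a build artifact.
        "%ProgramFiles(x86)%\Microsoft Fxcop 10.0\FxCopCmd.exe" /file:build\MyApp.dll /out:reports\fxcop.xml /summary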

    Read the article

  • Databases and the CI server

    - by mlk
    I have a CI server (Hudson) which merrily builds, runs unit tests and deploys to the development environment, but I'd now like to get it running the integration tests. The integration tests will hit a database, and that database will constantly be changed to contain the data relevant to the test in question. This however leads to a problem - how do I make sure the database is not being splatted with data for one test and then that data being overridden by a second project before the first set of tests completes? I am currently using the "hope" method, which is not working out too badly at the moment, but mostly due to the fact that we only have a small number of integration tests set up on CI. As I see it I have the following options:
    - Test-local (in-memory) databases: I'm not sure if any in-memory databases handle all the scariness of Oracle's triggers and packages etc., and anything less I don't feel would be a worthwhile test.
    - CI executor-local databases: A fair amount of work would be needed to set this up and keep 'em up to date, but definitely an option (most of the work is already done to keep the current CI database up-to-date).
    - A single "integration test" executor: Likely the easiest to implement, but would mean the integration tests could fall quite far behind.
    - Locking the database (or set of tables).
    I'm sure I've missed some ways (please add them). How do you run database-based integration tests on the CI server? What issues have you had and what method do you recommend? (Note: while I use Hudson, I'm happy to accept answers for any CI server; the ideas I'm sure will be portable, even if the details are not.) Cheers, Mlk

    Read the article

  • In which cases build artifacts will be different in different environments

    - by Sundeep
    We are working on automating deployment using Jenkins. We have different environments - DEV, UAT, PROD. In SVN, we are tagging each release and placing the same binaries in DEV, UAT and PROD. The artifacts already contain config files for each environment, but I don't understand why we are storing the binaries in each environment folder again. Are there any scenarios where deployment might be different for different environments?

    Read the article

  • Configure Jenkins and Tomcat using Puppet

    - by ex3v
    I'm trying to set up a Spring dev environment (Jenkins, Tomcat) on Vagrant. What I really want to achieve is to limit the config to Puppet scripts only, so I can share it with my colleagues and we can work together on the same environment. So far I've managed to set up simple scripts to install Jenkins, Tomcat and so on, and they work fine. What about Jenkins configuration though? I'm pretty green in Jenkins usage and configuration, and not sure if I'm doing it the right way... I found this article and I want to migrate the whole setup described in it to Puppet. Any ideas? Thanks.

    Read the article

  • Anyone used Jenkins with Sparkle (the cocoa app update framework)?

    - by Amy Worrall
    Pushing new Sparkle releases of our internal apps is a pain. I have to make the build, make the release notes file, sign the .zip with the private key, and add a new entry to the appcast file tying everything together. I'd love it if Jenkins could help: use the commit messages for the release notes, and automatically do the rest of it. Should I be looking at writing a new Jenkins plugin, or using shell scripting, or is there something already that will do what I want? (A quick Google didn't find anything.) I'm new to Jenkins, and am still trying to get a feel for what it can do. Do any other maintainers of Cocoa apps have any tips?
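    Not from the original question, but as a hedged sketch of what a Jenkins shell build step could script for the Sparkle 1.x workflow (the app name, key path and the location of Sparkle's sign_update.rb script are placeholders, and the commit range for release notes is simplified to the last ten commits):

        #!/bin/sh
        set -e

        # Zip the built app (assumes the .app was produced by an earlier build step).
        ditto -c -k --keepParent build/Release/MyApp.app MyApp-$BUILD_NUMBER.zip

        # Generate rough release notes from recent commit messages.
        git log -10 --pretty=format:'* %s' > notes.txt

        # Sign the archive with the Sparkle DSA private key; the output string goes into
        # the appcast item's sparkle:dsaSignature attribute.
        ruby tools/sign_update.rb MyApp-$BUILD_NUMBER.zip dsa_priv.pem > signature.txt

    The remaining piece, appending a new <item> to the appcast XML with the version, URL and signature, is a small templating step that the same script can do before publishing the files to the HTTP location.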

    Read the article

  • Automating release management and CI on python projects under mercurial VCS

    - by ms4py
    I have a set of Python projects which are under the Mercurial VCS. I would like to automate the following tasks:
    - Run the test suite for every commit (CI).
    - Make a source distribution for every commit which has a tag in Mercurial. This is regarded as a new release.
    - Copy the distribution to a special repository.
    Jenkins has been proposed in answers to similar questions, but I'm not sure if it can handle the release management as intended.
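    As a hedged illustration of how small that logic is once a CI job checks out the tagged revision (paths and the setup.py test configuration are assumptions, not from the question), a post-build shell step could look roughly like this:

        #!/bin/sh
        set -e

        # Run the test suite on every commit (assumes setup.py is configured for it).
        python setup.py test

        # Only commits that carry a real tag (not just 'tip') become releases.
        TAG=$(hg log -r . --template '{tags}')
        if [ -n "$TAG" ] && [ "$TAG" != "tip" ]; then
            python setup.py sdist
            cp dist/*.tar.gz /path/to/release/repository/
        fi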

    Read the article
