Search Results

Search found 4276 results on 172 pages for 'integration certificati'.


  • Trying to set up phpUnderControl but missing phpcb

    - by blcArmadillo
    Hi, I'm trying to set up phpUnderControl on Ubuntu 10.04 following this tutorial: http://techportal.ibuildings.com/2009/03/03/getting-started-with-phpundercontrol/. I get to the part where it says to type sudo phpuc example /opt/cruisecontrol but receive the following error: Missing cli tool 'phpcb', check the PATH variable. I did a little googling but can't seem to find any reference to phpcb anywhere. Does anyone know what this is and where I might get it? Thanks.
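
    A hedged pointer rather than a definitive answer: phpcb is the command-line front end of PHP_CodeBrowser, one of the QA tools phpUnderControl expects to find on the PATH. At the time it was usually installed through PEAR; the channel and package names below are assumptions from memory, so check them against the phpUnderControl documentation before relying on them.

        # Assumed PEAR channel/package names for PHP_CodeBrowser - verify first
        sudo pear channel-discover pear.phpunit.de
        sudo pear channel-discover components.ez.no   # dependency channel, if asked for
        sudo pear install --alldeps phpunit/PHP_CodeBrowser
        which phpcb   # should now resolve on the PATH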

    Read the article

  • Jenkins shows same information for all projects

    - by SuperCabbage
    I've been using Jenkins for the last month or so, and what started out as a small issue has gotten worse and worse. I have 10 projects in Jenkins, all polling from different Git repos and building to different environments, but they all show the same details on the dashboard. I can still build the projects, but I have to manually enter the URL to see any console output, etc. I'm running 1.536 under Ubuntu 12.04. There's not much in the logs other than the following: Oct 22, 2013 2:21:19 PM WARNING jenkins.model.lazy.AbstractLazyLoadRunMap search JENKINS-15652 Assertion error #1: failing to load /data/builds #20 EXACT: lo=23,hi=9,size=23,size2=23 – Any ideas?

    Read the article

  • Is there a way to "redirect" a click on a URL in a VirtualBox guest to open in the host OS browser?

    - by Graeme Donaldson
    I'm using VirtualBox OSE on Ubuntu 10.04. I have a Windows 7 guest VM which I use almost exclusively for MS Outlook to access my Exchange mailbox. If I click a URL in Outlook it obviously opens in IE in the guest VM; is there any way to have it perform a redirect of some sort? If I click a URL inside the VM, I want it to load in my default browser on the Ubuntu host.

    Read the article

  • Splitting builds across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across networked machines? Use case: We are an average software development company. We own around 50 development workstations (Quad Core 2.66 GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any single moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any given moment. Obviously all of them are continuously built on the server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes. The problem: Whenever we build 5 projects in a row, the last project is going to be ready after around 25 - 50 minutes. Building in parallel does not solve the problem (the build is only a part of the game; then you need to deploy, run tests, etc.). YES, the correct solution is to add another build server, but "That involves buying new Expensive hardware, and we already spent a lot!". Yeah, right (damn them)! Anyway, what about splitting the build among developer workstations? Let's say whenever we need to build project "A" we check 5 workstations and start the build on all that are not overloaded. The build can be canceled by a developer if he really needs all the power of his machine, as long as there is at least 1 machine that is still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Anyone tried something like this? Are there any good practices? Any helpful software?

    Read the article

  • Jenkins CI fails to initialise

    - by jackweirdy
    I've just installed Jenkins-CI on Ubuntu 11.10 according to the instructions found here. However, the service fails to start. The error log shows this: Running from: /usr/share/jenkins/jenkins.war 10 Jun 2012 16:24:06 winstone.Logger logInternal INFO: Beginning extraction from war file 10 Jun 2012 16:24:10 hudson.WebAppMain contextInitialized SEVERE: Failed to initialize Jenkins java.lang.ExceptionInInitializerError The entire error log can be found on this pastebin. I've tried looking for similar errors but I can't find anything. Any ideas would be appreciated.

    Read the article

  • Any web interface to show deployment history with mcollective?

    - by Jason
    I'm using mcollective & puppet for deployments. I would like to know if there is any user interface which allows me to choose a specific package/version and deploy it, and any user interface I can already use which shows the deployment history and its status. (I saw that glu has a nice user interface; I'm looking for something like it: http://linkedin.github.com/glu/docs/latest/html/tutorial.html ) I wondered if I could use glu (so that I can get their deployment history GUI if they have a good one...) with mcollective, but from what I understand they are parallel frameworks.

    Read the article

  • Execute build task in Hudson with root privileges

    - by jensendarren
    I have a build script which executes apt-get and therefore requires root privileges. What is the best way to run this script in Hudson? Currently the only solution I have found that works is to add an entry to the sudoers file for the user hudson like so: hudson ALL=(ALL) NOPASSWD:ALL However, although my build script now runs without error in Hudson, I am not entirely comfortable with this solution. Is there a better way?
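
    One common tightening of that approach, sketched below on the assumption that apt-get is the only command the build script needs root for: grant NOPASSWD for that single command instead of ALL, so the hudson account cannot sudo arbitrary commands.

        # A narrower sudoers entry (add it with 'visudo'); a sketch, assuming
        # /usr/bin/apt-get is the only thing the build needs root for:
        #   hudson ALL=(root) NOPASSWD: /usr/bin/apt-get
        # The build script then invokes exactly that path:
        sudo /usr/bin/apt-get install -y some-package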

    Read the article

  • Using the promoted builds plugin to tag a Subversion repository in Jenkins

    - by mark
    We have a task which builds based on data from 4 different SVN repositories. I want to allow QA to promote a build, so that the revisions participating in the build are tagged with the build number and an optional label. I have encountered the following problem: the promoted build may not be the most recent build. How do I know the SVN revision of each of the four repositories used during that build? I know that each build has this information in the revision.txt and build.xml files associated with the build, but how does it become available in the context of promotion? Thanks. P.S. Asked here before, but did not get a satisfying answer.
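
    One hedged sketch of how that data can be surfaced in a promotion step, assuming the promotion's "Execute shell" action runs on the master and that the Subversion plugin's revision.txt holds one URL/REVISION entry per module (both assumptions worth checking on your installation). $PROMOTED_JOB_NAME and $PROMOTED_NUMBER come from the promoted builds plugin; the trunk/tags layout is a placeholder convention.

        # Hypothetical promotion shell step: tag each module at the revision
        # the promoted build actually used. Assumes each module URL ends in
        # /trunk and that a sibling tags/ directory exists in the same repo.
        BUILD_DIR="$JENKINS_HOME/jobs/$PROMOTED_JOB_NAME/builds/$PROMOTED_NUMBER"
        while read entry; do
          url="${entry%/*}"    # module URL (everything before the last slash)
          rev="${entry##*/}"   # revision recorded for that module
          svn copy --parents "$url@$rev" \
              "${url%/trunk}/tags/build-$PROMOTED_NUMBER" \
              -m "Tagging build $PROMOTED_NUMBER at r$rev"
        done < "$BUILD_DIR/revision.txt"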

    Read the article

  • Tagging does not work with the Subversion plugin.

    - by mark
    I have exactly the same problem as the fellow from this post - http://jenkins.361315.n4.nabble.com/Tag-this-build-not-working-subversion-td384218.html - except that I use build 1.413. Unfortunately, the post does not provide any workarounds except downgrading to 1.310 (from 1.315). I would gladly provide the logs if I knew the logger names. Please help. P.S. I have posted this issue both on the Jenkins issues site - https://issues.jenkins-ci.org/browse/JENKINS-9961 - and in the respective Google group - https://groups.google.com/d/topic/jenkinsci-users/4UVKFxXA9Jo/overview - to no avail. So this site is my last hope; thanks to all in advance. EDIT: Upgraded to 1.417 - tagging still does nothing.

    Read the article

  • Best format for hard drive for Windows and Mac?

    - by Neil
    I have a 500 GB USB external hard drive. I need four partitions on it, for the following purposes: 160 GB for a bootable backup of my Mac. 160 GB for a bootable backup of my Windows. 11 GB for a bootable Snow Leopard install disk. The rest for file storage. Now I need a partition table which will be recognised on both Windows and Mac without needing extra software on Windows, which will let me keep bootable copies of both OSes, but let me access the file storage from both OSes. Currently, I have a GUID Partition Table, with Mac OS Extended (Journaled) partitions for the two backups, Mac OS Extended for the install disk, and NTFS for the file storage. While this gets recognised perfectly on my Mac, thanks to an NTFS for Mac driver from Paragon, when connected to Windows the drive is detected by the machine (listed in Safely Remove USB) but not recognised in Windows Explorer unless I install MacDrive, which is not feasible for me to install on public Windows machines where I might want to access my storage area. Can someone recommend the best combination of formats and software/drivers to get this done seamlessly?

    Read the article

  • Code-First Database Creation During TFS 2010 CI Build

    - by jedimindtrickster
    I would like to automate code-first database generation during the automated CI build of a web project in Team Foundation Server 2010. When run locally, the tests create a code-first database specified by the connection string in the app.config of the test project. How do I configure the TFS Build Configuration to mimic this behaviour on the TFS build server? Edit: The problem, it turns out, was that the TFS build server was successfully running the tests using the default connection string in app.config, which pointed to the local SQL Server, not where I expected it. The solution was to use SlowCheetah on the TFS server as a means to transform the app.config file using the QA transform, as per this blog article.

    Read the article

  • Integration tests - A potentially dangerous Request.Path value was detected from the client

    - by stacker
    I get this error: "A potentially dangerous Request.Path value was detected from the client (?)." when requesting this URI: http://www.site.com/%3f. How can I write an integration test for this type of error? I want to test against all of these errors: A potentially dangerous Request.Path value was detected from the client; A potentially dangerous Request.Cookies value was detected from the client; A potentially dangerous Request.Form value was detected from the client; A potentially dangerous Request.QueryString value was detected from the client.
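
    Not the in-process NUnit harness the poster is presumably after, but one framework-agnostic way to cover at least the Request.Path case is to drive the deployed site from the outside and assert on the status code; the base URL, the offending paths, and the expected 400 below are all assumptions, not verified behaviour.

        #!/usr/bin/env bash
        # Sketch: request each URI that should trip request validation and
        # check that the application answers with the status we expect.
        BASE="http://www.site.com"
        paths=( "/%3f" "/%3Cscript%3E" )
        fail=0
        for p in "${paths[@]}"; do
          status=$(curl -s -o /dev/null -w "%{http_code}" "$BASE$p")
          if [ "$status" = "400" ]; then
            echo "PASS $p -> $status"
          else
            echo "FAIL $p -> $status"
            fail=1
          fi
        done
        exit $fail

    The Request.Cookies, Request.Form and Request.QueryString variants can be exercised the same way with curl's -b (cookie), -d (form data) and query-string options.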

    Read the article

  • Doubt about adopting CI (Hudson) into an existing automated Build Process (phing, svn)

    - by maraspin
    OUR CURRENT BUILD PROCESS We're a small team of developers (2 to 4 people depending on the project) who currently use Phing to deploy code to a staging environment before going live. We keep our code in an SVN repo, where the trunk holds current active development and, at certain times, we make branches that we test and then (if successful) tag and export to the staging env. If everything goes well there too, we finally deploy them to production servers. Actions are highly automated, but always triggered by human intervention. THE DOUBT We'd now like to introduce Continuous Integration (with Hudson) into the process; unfortunately we have a few doubts about activity syncing, since we're afraid that CI could somehow interfere with our build process and cause certain problems. Considering that an automated CI cycle has a certain frequency of automatically executed actions, we in fact only see 2 possible cases for "integration", each with its own problems. Case A: each CI cycle produces a new branch with its own name; we use such a name to manually (through phing, as happens now) export the code from SVN to the staging env. The problem I see here is that (unless specific countermeasures are taken) the number of branches we have can grow out of control (let's suppose we commit often, so that we have a fresh new build/branch every N minutes). Case B: each CI cycle creates a new branch named 'current', for instance, which is tagged with a unique name only when we manually decide to export it to staging; the 'current' branch, in any case, is then deleted as soon as the next CI cycle starts up. The problem we see here is that a new cycle could kick in while someone is tagging/exporting the 'current' branch to staging, thus creating an inconsistent build (but maybe here I'm just too pessimistic, since I confess I don't know whether SVN offers some built-in protection against this). With all this being said, I was wondering if anyone with similar experiences could be so kind as to give us some hints on the subject, since none of the approaches depicted above looks completely satisfying to us. Is there something important we just completely left out of the overall picture? Thanks for your attention &, in advance, for your help!
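
    On the Case B worry specifically: a Subversion copy is atomic and can be pinned to the exact revision the CI cycle built, so a new cycle starting mid-export cannot make the tag inconsistent. A rough sketch, with placeholder URLs and REV standing in for however you obtain the revision from the CI build:

        # REV is the revision the CI build reports for the 'current' branch
        REV=1234
        svn copy "http://svn.example.com/repo/branches/current@$REV" \
                 "http://svn.example.com/repo/tags/staging-$REV" \
                 -m "Tag the exact revision CI built, unaffected by later commits"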

    Read the article

  • Bash scripting: knowing the result of a command

    - by Fork
    Hi, I am writing a bash script to run an integration test of a tool I am writing. Basically I run the application with a set of inputs and compare the results with expected values using the diff command-line tool. It's working, but I would like to enhance it by knowing the result of the diff command and printing "SUCCESS" or "FAIL" depending on the result of the diff. How can I do it?
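
    For reference, diff exits with status 0 when the inputs match, 1 when they differ, and 2 on trouble, so a script can branch on that directly; a minimal sketch with placeholder file names:

        if diff -q expected_output.txt actual_output.txt > /dev/null; then
            echo "SUCCESS"
        else
            echo "FAIL"
        fi

        # Or keep the exit status around for later use:
        diff expected_output.txt actual_output.txt > /dev/null
        result=$?
        [ "$result" -eq 0 ] && echo "SUCCESS" || echo "FAIL"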

    Read the article

  • Convert p4 move to p4 integrate

    - by pmarden
    Is there a simple way to convert a (large) list of p4 moves to p4 integrates? There are a lot of pending modifications to the moved files, so just reverting and instead integrating isn't an option. Perforce won't let you just revert the deleted file (which would leave the desired integration behind).

    Read the article

  • Can Hudson branch promotion be based on project stability?

    - by Wayne
    Hudson CI server displays stability "weather", which is cool. And it allows one project build to kick off based on the successful build of another. However, how can you make that secondary project additionally dependent on the stability of multiple builds of the first project? Specifically, project "stable_deploy" needs to kick off to promote a version to "stable" only if project "integrate" with version 8.3.4.1233 has built and tested successfully at least 8 times in a row. Until then, it's still in integration mode. IMPORTANT: A significant caveat is that a single set of Hudson projects gets used as a "pipeline" to process each new version through to release. So a project may have built successfully 8 times in a row, but the latest version 8.3.4.1233 may account for only the 2 most recent builds; the builds prior to that may be an earlier version. We're open to completely reorganizing this, but the pipeline idea seemed to greatly reduce the amount of manual project creation and deletion. Is there a better way to track a version release "pipeline"? In particular, we will have multiple versions in this pipeline simultaneously in the future due to fixes or patches to older versions. We don't see how to do that yet, except to create new pipeline projects for each version, which is a real hassle. Here are some background details: The TickZoom application has some very complete unit tests, some of which simulate real-time trading environments. Add to that, TickZoom makes elaborate use of parallelization to leverage multi-core computers. Needless to say, during development of a new version there can be stability issues during integration testing which get uncovered by running the build and auto tests repeatedly. A version which builds and tests cleanly 8 times in a row without change, plus has undergone some real-world testing by users, can be deemed "stable" and promoted to the stable branch. Our Hudson projects look like this: test - Only for testing a build, zero user visibility. integrate_deploy - Promotes a test project build to the integrate branch and makes it available to the public for UA testing. integrate - Repeatedly builds the integrate branch to determine if it's stable enough to promote to the stable branch. This runs the builds and tests hourly throughout every night. stable_deploy - Promotes an integrate project build to the stable branch and makes it public for users who want the latest and greatest. stable - Builds the stable branch once every night. After 2 weeks of successful builds (14 builds) it can go to "release candidate". And so on... it continues with "release candidate" and then "release".

    Read the article

  • JavaScript build management - "must have" tools?

    - by lajuette
    Are there any must-have tools for JavaScript (RIA) development - something like maven, JUnit, Emma, link4j, etc., but for JavaScript? What is the best way to set up a continuous integration system for a bigger application or framework? How do projects like jQuery test their code? How do you manage dependencies and different project configurations? Tools I know so far: javascript-maven-tools (is maven the right choice?), jslint, yuicompressor, sprockets (found it 5 mins ago), jsunit, selenium.

    Read the article

  • How do I configure an extreme feedback device to notify CI build status?

    - by Gishu
    Trying to save the next guy/gal some trouble in finding out what is needed to set up lava lamps or traffic lights or what have you (the term I believe is eXtreme Feedback Devices) as a BIG VISIBLE INDICATOR of your continuous integration build status. Ensure your post includes (and please don't mess this question up with imaginative responses, although it may be insanely funny at the point of conception): the XFD, what 'helper' hardware is needed, the software that you managed to hook it up with, and detailed instructions on how to set it up.

    Read the article

  • Accessing different connection strings at runtime in ASP.NET MVC 1

    - by Neil T.
    I'm trying to implement integration testing in my ASP.NET MVC 1.0 solution. The technologies in use are LINQ-to-SQL, NUnit and WatiN. I recently discovered a pattern that will allow me to create a testing version of the database on the fly without modifying the development version of the database. I needed this behavior in order to run my user interface tests in WatiN that may modify the database. The plan is to modify the connection string in the Web.config file, and pass that new connection string to the DataContext constructor. This way, I don't have to add routes or modify my URLs in order to perform the integration testing. I've set up the project so that the test setup can modify the connection string to point to the test database when the tests are running. The connection string is stored in Web.config. The problem I'm having is that when I try to run the tests, I get a NullReferenceException when trying to access the HttpContext. From everything that I have read so far, the HttpContext is only available within the context of a controller. Here is the code for the property that is supposed to give me the reference to the Web.config file: private System.Configuration.Configuration WebConfig { get { ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap(); // NullReferenceException occurs on this line. fileMap.ExeConfigFilename = HttpContext.Current.Server.MapPath("~\\web.config"); System.Configuration.Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None); return config; } } Is there something that I am missing in order to make this work? Is there a better way to accomplish what I'm trying to achieve? UPDATE: I decided to abandon the modification of Web.config in favor of a "request-scoped DataContext" pattern that I found here. From the looks of it, I believe it should give me the results I'm looking for. However, during the TestFixtureSetUp, I try to create a new copy of the database for testing purposes, and it fails silently. When I get to the tests, the repository still uses the production database connection string to load data.

    Read the article

  • Avoid writing SQL queries altogether in SSIS

    - by Jonn
    Working on a data warehouse project, the guy that gave us the tutorial advised that we stick to using SQL queries over defining a lot of data flow transformations, citing points like: it'll consume a lot of memory on the ETL box, so we'd rather leave the processing to the DB box. Is this really advisable? Where's the balance between relying on GUI tools and executing a bunch of SQL scripts in your Integration package? And honestly, I'd like to avoid writing SQL queries as much as I can.

    Read the article

  • .NET Automated Build Server Software

    - by KevinDeus
    What good .NET continuous integration and automated build and deployment software is out there? We have been using CruiseControl.NET, but it is really starting to get on our nerves with the amount of maintenance it needs. We're looking for something that virtually anybody can manage, and it would also be really good not to have to write a NAnt build script. We use Subversion for source control.

    Read the article

  • How to Sync CI (Hudson) Activity into an existing automated Build Process (phing, svn)?

    - by maraspin
    OUR CURRENT BUILD PROCESS We're a small team of developers (2 to 4 people depending on the project) who currently use Phing to deploy code to a staging environment before going live. We keep our code in an SVN repo, where the trunk holds current active development and, at certain times, we make branches that we test and then (if successful) tag and export to the staging env. If everything goes well there too, we finally deploy them to production servers. Actions are highly automated, but always triggered by human intervention. THE DOUBT We'd now like to introduce Continuous Integration (with Hudson) into the process; unfortunately we have a few doubts about activity syncing, since we're afraid that CI could somehow interfere with our build process and cause certain problems. Considering that an automated CI cycle has a certain frequency of automatically executed actions, we see 2 possible cases for "integration", each with its own problems. Case A: each CI cycle produces a new branch with its own name; we use such a name to manually (through phing, as happens now) export the code from SVN to the staging env. The problem I see here is that (unless specific countermeasures are taken, i.e. deletion) the number of branches we have can easily grow out of control (let's suppose we commit often, so that we have a fresh new build/branch every N minutes). Case B: each CI cycle creates a new branch named 'current', which is then tagged with a unique name only when we manually decide to export it to staging; the 'current' branch, in any case, is then deleted as soon as the next CI cycle starts up. The problem we see here is that a new cycle could kick in while someone is tagging/exporting the 'current' branch to staging, thus creating an inconsistent build (but maybe here I'm just too pessimistic, since I confess I don't know whether SVN offers some built-in protection against this). With all this being said, I was wondering if anyone with similar experiences could be so kind as to give us some hints on the subject, since none of the approaches depicted above looks completely satisfying to us. Is there something important we just completely left out of the overall picture? Thanks for your attention & (in advance) for your help!

    Read the article
