Search Results

Search found 7122 results on 285 pages for 'continuous forms'.


  • How do I set up pairing email addresses?

    - by James A. Rosen
    Our team uses the Ruby gem hitch to manage pairing. You set it up with a group email address (e.g. [email protected]) and then tell it who is pairing:

        $ hitch james tiffany

    Hitch then sets your Git author configuration so that our commits look like:

        commit 629dbd4739eaa91a720dd432c7a8e6e1a511cb2d
        Author: James and Tiffany <[email protected]>
        Date:   Thu Oct 31 13:59:05 2013 -0700

    Unfortunately, we've only been able to come up with two options:

    - [email protected] doesn't exist. The downside is that if Travis CI tries to notify us that we broke the build, we don't see it.
    - [email protected] does exist and forwards to all the developers. Now the downside is that everyone gets spammed with every broken build by every pair.

    We have too many possible pairs to do any of the following:

    - set up actual [email protected] email addresses or groups (n^2 email addresses)
    - set up forwarding rules for [email protected] (n^2 forwarding rules)
    - set up forwarding rules for [email protected] (n forwarding rules for each of n developers)

    Does anyone have a system that works for them?
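
    For context only: what hitch automates can be reproduced with plain git config. Below is a minimal Python sketch of that idea; the plus-addressed mailbox scheme and example.com domain are placeholders, since the real addresses are redacted above.

        import subprocess

        def set_pair(*names, domain="example.com"):
            """Point git's author settings at a combined pair identity (what hitch automates)."""
            author = " and ".join(n.capitalize() for n in names)
            # Plus-addressed group mailbox, e.g. dev+james+tiffany@<domain> (placeholder).
            email = "dev+" + "+".join(sorted(names)) + "@" + domain
            # Set the author identity for the current repository only.
            subprocess.run(["git", "config", "user.name", author], check=True)
            subprocess.run(["git", "config", "user.email", email], check=True)
            print(f"Commits will now be authored as: {author} <{email}>")

        set_pair("james", "tiffany")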

    Read the article

  • Branching and Merging with TortoiseSVN

    - by capgpilk
    For this example I am using Visual Studio 2010, TortoiseSVN 1.6.6, Subversion 1.6.6 and AnkhSVN 2.1.7819.411, so if you are using different versions, some of these screenshots may differ. This assumes you have your code checked in to the trunk directory and have a standard SVN structure of trunk, branches and tags. There are a number of developers who prefer to develop solely in a branch and never touch the trunk, but the process is generally the same, and you may be on a small team and prefer to work in the trunk and branch occasionally. There are three steps to successful branching. First you branch; then, when you are ready, you reintegrate any changes that other developers may have made to the trunk into your branch; finally, when your branch and the trunk are in sync, you merge the branch back into the trunk.

    Branch
    - Right click the project root in Windows Explorer > TortoiseSVN > Branch/Tag
    - Enter the branch label in the ‘To URL’ box, for example /branches/1.1
    - Choose Head revision
    - Check Switch working copy
    - Click OK
    - Make any changes to the branch
    - Make any changes to the trunk
    - Commit any changes
    For this example I copied the project to another location prior to branching and made changes to that copy using Notepad++, then committed it to SVN; as this directory is mapped to the trunk, that is what gets updated.

    Merge Trunk with Branch
    - Right click the project root in Windows Explorer > TortoiseSVN > Merge
    - Choose ‘Merge a range of revisions’
    - In ‘URL to merge from’ choose your trunk
    - Click Next, then the ‘test merge’ button. This will highlight any conflicts. Here we have one conflict we will need to resolve, because we made a change and checked it in to the trunk earlier.
    - Click Merge. Now we have the opportunity to edit that conflict. This opens up TortoiseMerge, which allows us to resolve the issue; in this case I want both changes.
    - Perform an Update, then Commit
    Reloading in Visual Studio shows we have all changes that have been made to both trunk and branch.

    Merge Branch with Trunk
    - Switch the working copy by right clicking the project root in Windows Explorer > TortoiseSVN > Switch
    - Switch to the trunk, then click OK
    - Right click the project root in Windows Explorer > TortoiseSVN > Merge
    - Choose ‘Reintegrate a branch’
    - In ‘From URL’ choose your branch, then click Next
    - Click ‘Test merge’; this shouldn’t show any conflicts
    - Click Merge
    - Perform an Update, then Commit
    Open the project in Visual Studio; we now have all changes. So there we have it: we are connected back to the trunk and have all the updates merged.
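
    For reference, the same branch/sync/reintegrate workflow can be driven from the command line. The sketch below wraps the equivalent svn commands from Python; the repository URL is a made-up placeholder and is not part of the original article.

        import subprocess

        REPO = "https://svn.example.com/myproject"   # placeholder repository URL
        TRUNK = f"{REPO}/trunk"
        BRANCH = f"{REPO}/branches/1.1"              # matches the article's example label

        def svn(*args):
            # Echo and run an svn command, stopping on the first failure.
            print("$ svn " + " ".join(args))
            subprocess.run(["svn", *args], check=True)

        # 1. Branch: a cheap server-side copy of trunk.
        svn("copy", TRUNK, BRANCH, "-m", "Create 1.1 branch")

        # 2. Switch the working copy (run inside the checkout) to the branch.
        svn("switch", BRANCH)

        # 3. Merge trunk with branch: pull trunk changes into the branch and commit.
        svn("merge", TRUNK)
        svn("commit", "-m", "Merge trunk changes into the 1.1 branch")

        # 4. Merge branch with trunk: switch back and reintegrate when in sync.
        svn("switch", TRUNK)
        svn("merge", "--reintegrate", BRANCH)
        svn("commit", "-m", "Reintegrate the 1.1 branch into trunk")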

    Read the article

  • What build tools do not depend on java (or Ruby)?

    - by Mohamed Meligy
    I'm wondering which generic build tools out there ship with their own binary runtimes and do not depend on another environment that isn't shipped with them. For example, Ant requires Java, Rake requires Ruby, etc. It would also be great to hear about target-platform-agnostic tools, where I'd just give whatever command for building, whatever command for testing, etc., and could then define my artifacts in CI or so. I would find something like that useful for building .NET projects (say, on both Windows .NET and Mono), and Node JS projects especially. I do not want to install Java and/or Ruby if what I want is a .NET build or a Node JS build. This is a general-awareness question, not an exact problem I'm facing; that's why it's here and not on StackOverflow. Update: To explain a bit more, what I'm after is the build script that would run MSBuild for compiling (in the .NET case, and then maybe several Node/NPM commands in the Node case, etc.), and then handle the remaining build/test steps, instead of setting these all up in MSBuild (again, in the .NET case; I'm also wondering if there is an equivalent story in Node).

    Read the article

  • How to attach WAR file in email from jenkins

    - by birdy
    We have a case where a developer needs to access the last successfully built WAR file from jenkins. However, they can't access the jenkins server. I'd like to configure jenkins such that on every successful build, jenkins sends the WAR file to this user. I've installed the ext-email plugin and it seems to be working fine. Emails are being received along with the build.log. However, the WAR file isn't being received. The WAR file lives on this path on the server: /var/lib/jenkins/workspace/Ourproject/dist/our.war So I configured it under Post-build Actions (the screenshot isn't reproduced here). The problem is that emails are sent but the WAR file isn't being attached. Do I need to do something else?
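
    If the plugin's attachment option keeps misbehaving, one fallback (a sketch, not the plugin's documented behaviour) is to mail the artifact yourself from a post-build script. The SMTP host, sender and recipient below are placeholders.

        import smtplib
        from email.message import EmailMessage
        from pathlib import Path

        WAR = Path("/var/lib/jenkins/workspace/Ourproject/dist/our.war")

        msg = EmailMessage()
        msg["Subject"] = "Latest successful build: our.war"
        msg["From"] = "jenkins@example.com"      # placeholder sender
        msg["To"] = "developer@example.com"      # placeholder recipient
        msg.set_content("Attached is the WAR from the last successful build.")

        # Attach the WAR as a generic binary payload.
        msg.add_attachment(WAR.read_bytes(),
                           maintype="application",
                           subtype="octet-stream",
                           filename=WAR.name)

        with smtplib.SMTP("smtp.example.com") as smtp:   # placeholder SMTP host
            smtp.send_message(msg)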

    Read the article

  • Configure Jenkins and Tomcat using Puppet on Vagrant

    - by ex3v
    I'm playing with setting up my first Spring + jenkins + Tomcat CI dev environment. For now it's just a test/fun phase, but in the near future I'll be starting a new project with my coworkers. That's the reason that I want the development environment virtualized and exactly the same on every development machine, as well as on the production server. I chose to use Vagrant and to try to write puppet scripts that not only install everything, but also configure everything, so each of us will have the same jenkins plugins, the same jenkins and tomcat login and password, and literally after calling vagrant up we are ready to work. What I've managed to do so far is the installation of the needed packages and port forwarding. My Vagrantfile looks like this (comments stripped):

        VAGRANTFILE_API_VERSION = "2"

        Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
          config.vm.box = "precise32"
          config.vm.box_url = "http://files.vagrantup.com/precise32.box"
          config.vm.network :forwarded_port, guest: 80, host: 8090
          config.vm.network :forwarded_port, guest: 8080, host: 8091
          config.vm.network :private_network, ip: "192.168.33.10"
          config.vm.provision :puppet do |puppet|
            puppet.manifests_path = "puppet/"
            puppet.manifest_file = "default.pp"
            puppet.options = ['--verbose']
          end
        end

    And this is my puppet file:

        Exec { path => [ "/bin/", "/sbin/", "/usr/bin/", "/usr/sbin/" ] }

        class system-update {
          exec { 'apt-get update':
            command => 'apt-get update',
          }

          $sysPackages = [ "build-essential" ]
          package { $sysPackages:
            ensure  => "installed",
            require => Exec['apt-get update'],
          }
        }

        class tomcat {
          package { "tomcat":
            ensure  => present,
            require => Class["system-update"],
          }
          service { "tomcat":
            ensure  => "running",
            require => Package["tomcat"],
          }
        }

        class jenkins {
          package { "jenkins":
            ensure  => present,
            require => Class["system-update"],
          }
          service { "jenkins":
            ensure  => "running",
            require => Package["jenkins"],
          }
        }

        include system-update
        include tomcat
        include jenkins

    Now, when I hit vagrant provision and go to http://localhost:8091/ I can see jenkins running, so the above script works. The next step is configuring jenkins and tomcat by extending the above puppet scripts. I'm pretty green when it comes to CI. After wandering around the web I've found a few tutorials about jenkins configuration (here's one of them). I really want to move the configuration presented in that tutorial to the puppet file, so when I spread my Vagrantfile and puppet file between my coworkers, I will be sure that everyone has exactly the same setup. Unfortunately I'm also green about using puppet, and I don't know how to do this. Any help will be appreciated.
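
    One common pattern (a sketch of an approach, not the poster's method) is to keep the desired Jenkins job definitions as config.xml files in the repository and push them through Jenkins' REST API once provisioning is done, typically wrapped in a Puppet exec or handled by a Jenkins-specific Puppet module. The job name, credentials and file path below are placeholders; recent Jenkins versions may also require a CSRF crumb header.

        import base64
        import urllib.request

        JENKINS = "http://localhost:8091"        # Jenkins port forwarded in the Vagrantfile
        JOB = "spring-app"                       # placeholder job name
        USER, TOKEN = "admin", "changeme"        # placeholder credentials

        # Job definition kept under version control alongside the Puppet code.
        with open("puppet/files/spring-app.config.xml", "rb") as f:
            config_xml = f.read()

        # Jenkins creates a job when its config.xml is POSTed to /createItem?name=<job>.
        auth = base64.b64encode(f"{USER}:{TOKEN}".encode()).decode()
        req = urllib.request.Request(
            f"{JENKINS}/createItem?name={JOB}",
            data=config_xml,
            method="POST",
            headers={"Content-Type": "application/xml",
                     "Authorization": "Basic " + auth},
        )
        with urllib.request.urlopen(req) as resp:
            print("Jenkins responded:", resp.status)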

    Read the article

  • Code Measuring and Metrics Tools?

    - by David
    I'm in the process of setting up a build server for personal projects. This server will handle all the normal CI stuff, including running large suites of tests (unit, integration, automated UI). While I'm working out the kinks of including code coverage output with MSTest, it occurs to me that there may be lots of tools out there which give me additional metrics other than just code coverage. FxCop comes to mind as an example, though I'm sure there are others. Anything that can generate useful reportable data and metrics would be good: class dependency charts (looking for Law of Demeter violations, for example), analyses of the uses of classes/functions (looking for a function that isn't used in the system other than by the tests, for example), and so on. I'm not sure of the right way to formulate the question, since polling questions or "What's your favorite code analysis tool" aren't very good. But I'm essentially just looking for recommendations on what metrics to gather and the tools that can gather them. The eventual vision for something like this is to have the CI server run a bunch of automated tests and analysis tools and track performance metrics over time. Imagine a dashboard full of graphs plotting these metrics over time. The lines should all stay relatively at equilibrium, and if one starts to stray toward the negative then it's an early indication of problems with the code. In the age-old struggle to quantify code quality with management, this sounds like a potentially helpful means of doing just that.
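
    To make the "track metrics over time" idea concrete, here is a rough sketch of a CI step that appends each build's numbers to a history file a dashboard could later plot. The file names and report formats are assumptions (a Cobertura-style coverage XML and a plain-text analysis report), not the output of any specific tool mentioned above.

        import csv
        import datetime
        import os
        import xml.etree.ElementTree as ET

        HISTORY = "metrics-history.csv"    # hypothetical trend file kept by the CI server

        # Cobertura-style coverage reports expose a line-rate attribute on the root element.
        coverage = float(ET.parse("coverage.xml").getroot().get("line-rate", 0))

        # Crude stand-in for a static-analysis metric: one issue per report line.
        with open("analysis-report.txt") as f:
            issues = sum(1 for _ in f)

        row = [datetime.date.today().isoformat(),
               os.environ.get("BUILD_NUMBER", "local"),   # set by most CI servers
               f"{coverage:.3f}",
               issues]

        new_file = not os.path.exists(HISTORY)
        with open(HISTORY, "a", newline="") as f:
            writer = csv.writer(f)
            if new_file:
                writer.writerow(["date", "build", "line_coverage", "analysis_issues"])
            writer.writerow(row)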

    Read the article

  • Databases and the CI server

    - by mlk
    I have a CI server (Hudson) which merrily builds, runs unit tests and deploys to the development environment, but I'd now like to get it running the integration tests. The integration tests will hit a database, and that database will consistently be changed to contain the data relevant to the test in question. This however leads to a problem - how do I make sure the database is not being splatted with data for one test and then that data being overwritten by a second project before the first set of tests completes? I am currently using the "hope" method, which is not working out too badly at the moment, but mostly due to the fact that we only have a small number of integration tests set up on CI. As I see it I have the following options:

    - Test-local (in-memory) databases: I'm not sure if any in-memory databases handle all the scariness of Oracle's triggers and packages etc., and anything less I don't feel would be a worthwhile test.
    - CI executor-local databases: A fair amount of work would be needed to set this up and keep 'em up to date, but definitely an option (most of the work is already done to keep the current CI database up-to-date).
    - A single "integration test" executor: Likely the easiest to implement, but would mean the integration tests could fall quite far behind.
    - Locking the database (or set of tables)

    I'm sure I've missed some ways (please add them). How do you run database-based integration tests on the CI server? What issues have you had and what method do you recommend? (Note: While I use Hudson, I'm happy to accept answers for any CI server; the ideas I'm sure will be portable, even if the details are not.) Cheers, Mlk
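
    One isolation tactic worth sketching (not Hudson-specific, and the file and variable names are placeholders): derive a unique schema or database name from the job and build number, so concurrent builds never share test data.

        import os

        # Hudson/Jenkins expose the job name and build number to the build environment.
        job = os.environ.get("JOB_NAME", "localdev").replace("-", "_")
        build = os.environ.get("BUILD_NUMBER", "0")

        schema = f"it_{job}_{build}"    # e.g. it_myproject_142

        # Hand the schema name to the integration tests, e.g. via an env var or a
        # generated properties file that the test suite reads at start-up.
        os.environ["TEST_DB_SCHEMA"] = schema
        with open("integration-test.properties", "w") as f:
            f.write(f"db.schema={schema}\n")

        print(f"Integration tests will use schema {schema}; "
              f"create it before the run and drop it afterwards.")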

    Read the article

  • Configure Jenkins and Tomcat using Puppet

    - by ex3v
    I'm trying to set up a Spring dev environment (jenkins, tomcat) on Vagrant. What I really want to achieve is to limit the config to only Puppet scripts, so I can share it with my colleagues and we can work together on the same environment. So far I've managed to set up simple scripts to install jenkins, tomcat and so on, and they work fine. What about jenkins configuration though? I'm pretty green in jenkins usage and configuration, and not sure if I'm doing it the right way... I found this article and I want to migrate the whole setup described in it to Puppet. Any ideas? Thanks.

    Read the article

  • In which cases build artifacts will be different in different environments

    - by Sundeep
    We are working on automation of deployment using Jenkins. We have different environments - DEV, UAT, PROD. In SVN, we are tagging each release and placing the same binaries in DEV, UAT and PROD. The artifacts already contain config files for each environment, but I don't understand why we are storing the binaries in an environment folder again. Are there any scenarios where deployment might be different for different environments?

    Read the article

  • Anyone used Jenkins with Sparkle (the cocoa app update framework)?

    - by Amy Worrall
    Pushing new Sparkle releases of our internal apps is a pain. I have to make the build, make the release notes file, sign the .zip with the private key, and add a new entry to the appcast file tying everything together. I'd love it if Jenkins could help: use the commit messages for the release notes, and automatically do the rest of it. Should I be looking at writing a new Jenkins plugin, or using shell scripting, or is there something already that will do what I want? (A quick Google didn't find anything.) I'm new to Jenkins, and am still trying to get a feel for what it can do. Do any other maintainers of Cocoa apps have any tips?
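
    For the release-notes half of this, a post-build step can collect the commit subjects since the last tag. Below is a hedged Python sketch of that idea only; the tag convention and output file name are assumptions, and the zip signing and appcast entry would still need Sparkle's own tooling.

        import subprocess

        def git(*args):
            return subprocess.run(["git", *args], check=True,
                                  capture_output=True, text=True).stdout.strip()

        # Most recent tag on this branch, then every commit subject made since it.
        last_tag = git("describe", "--tags", "--abbrev=0")
        log = git("log", f"{last_tag}..HEAD", "--pretty=format:- %s")

        # Hypothetical output file picked up by a later packaging step.
        with open("release-notes.md", "w") as f:
            f.write(f"Changes since {last_tag}\n\n{log}\n")

        print(f"Wrote release notes covering commits since {last_tag}")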

    Read the article

  • Automating release management and CI on python projects under mercurial VCS

    - by ms4py
    I have a set of Python projects which are under the mercurial VCS. I would like to automate the following tasks:

    - Run the test suite for every commit (CI).
    - Make a source distribution for every commit which has a tag in mercurial. This is regarded as a new release.
    - Copy the distribution to a special repository.

    Jenkins has been proposed in answers to similar questions, but I'm not sure if it can handle the release management as intended.
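
    As a rough sketch of the tag-triggered release step (independent of which CI server invokes it; the destination path is a placeholder), a post-build script could check whether the built revision is tagged and, if so, build a source distribution and copy it over:

        import shutil
        import subprocess
        from pathlib import Path

        REPO_DIR = Path("/srv/py-releases")    # placeholder "special repository"

        def hg(*args):
            return subprocess.run(["hg", *args], check=True,
                                  capture_output=True, text=True).stdout.strip()

        # "hg log -r . --template {tags}" prints the tags on the working revision.
        tags = [t for t in hg("log", "-r", ".", "--template", "{tags}").split()
                if t != "tip"]    # "tip" is automatic, not a release tag

        if tags:
            # Tagged commit: build the sdist and publish it as a new release.
            subprocess.run(["python", "setup.py", "sdist"], check=True)
            for dist in Path("dist").glob("*.tar.gz"):
                shutil.copy(dist, REPO_DIR / dist.name)
                print(f"Released {dist.name} for tag {tags[0]}")
        else:
            print("No tag on this revision; skipping the release steps.")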

    Read the article

  • Getting Started with jqChart for ASP.NET Web Forms

    - by jqChart
    Official Site | Samples | Download | Documentation | Forum | Twitter

    Introduction
    jqChart takes advantage of the HTML5 Canvas to deliver high-performance client-side charts and graphs across browsers (IE 6+, Firefox, Chrome, Opera, Safari) and devices, including iOS and Android mobile devices. Some of the key features are:
    - High-performance rendering.
    - Animations.
    - Scrolling/Zooming.
    - Support for an unlimited number of data series and data points.
    - Support for an unlimited number of chart axes.
    - True DateTime axis.
    - Logarithmic and reversed axis scales.
    - A large set of chart types - Bar, Column, Pie, Line, Spline, Area, Scatter, Bubble, Radar, Polar.
    - Financial charts - Stock Chart and Candlestick Chart.
    - The different chart types can be easily combined.

    System Requirements

    Browser Support
    jqChart supports all major browsers: Internet Explorer 6+, Firefox, Google Chrome, Opera and Safari.

    jQuery version support
    The jQuery JavaScript framework is required. We recommend using the latest official stable version of the jQuery library.

    Visual Studio Support
    jqChart for ASP.NET does not require using Visual Studio; you can use your favourite code editor. Still, the product has been tested with several versions of Visual Studio .NET, and you can find the list of supported versions below:
    - Visual Studio 2008
    - Visual Studio 2010
    - Visual Studio 2012

    ASP.NET Web Forms support
    Supported versions - ASP.NET Web Forms 3.5, 4.0 and 4.5

    Installation
    Download and unzip the contents of the archive to any convenient location. The package contains the following folders:
    - [bin] - Contains the assembly DLLs of the product (JQChart.Web.dll) for Web Forms 3.5, 4.0 and 4.5. This is the assembly that you can reference directly in your web project (or better yet, add it to your ToolBox and then drag & drop it from there).
    - [js] - The JavaScript files of jqChart and jqRangeSlider (and the needed libraries). You need to include them in your ASPX page in order to gain the client-side functionality of the chart. The first file is "jquery-1.5.1.min.js" - this is the official jQuery library (jqChart is built upon jQuery library version 1.4.3). The second file you need is "excanvas.js"; it is used by the versions of IE which don't support canvas graphics. The third is the jqChart JavaScript code itself, located in "jquery.jqChart.min.js". The last one is the jqRangeSlider JavaScript, located in "jquery.jqRangeSlider.min.js"; it is used when chart zooming is enabled.
    - [css] - Contains the CSS files that jqChart and jqRangeSlider need.
    - [samples] - Contains some examples that use jqChart. For the full list of samples please visit - jqChart for ASP.NET Samples.
    - [themes] - Contains the themes shipped with the product; they are used by the jqRangeSlider. Since jqRangeSlider supports jQuery UI ThemeRoller, any theme compatible with jQuery UI ThemeRoller will work for jqRangeSlider as well. You can download any additional themes directly from jQuery UI's ThemeRoller site available here: http://jqueryui.com/themeroller/ or reference them from Microsoft's / Google's CDN: <link rel="stylesheet" type="text/css" media="screen" href="http://ajax.aspnetcdn.com/ajax/jquery.ui/1.8.21/themes/smoothness/jquery-ui.css" />

    The final result you will have in an ASPX page containing jqChart would be something similar to the following (assuming you have copied the [js] files to the Scripts folder and the [css] files to the Content folder of your ASP.NET site respectively):
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="samples_cs.Default" %>
<%@ Register Assembly="JQChart.Web" Namespace="JQChart.Web.UI.WebControls" TagPrefix="jqChart" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head runat="server">
    <title>jqChart ASP.NET Sample</title>
    <link rel="stylesheet" type="text/css" href="~/Content/jquery.jqChart.css" />
    <link rel="stylesheet" type="text/css" href="~/Content/jquery.jqRangeSlider.css" />
    <link rel="stylesheet" type="text/css" href="~/Content/themes/smoothness/jquery-ui-1.8.21.css" />
    <script src="<% = ResolveUrl("~/Scripts/jquery-1.5.1.min.js") %>" type="text/javascript"></script>
    <script src="<% = ResolveUrl("~/Scripts/jquery.jqRangeSlider.min.js") %>" type="text/javascript"></script>
    <script src="<% = ResolveUrl("~/Scripts/jquery.jqChart.min.js") %>" type="text/javascript"></script>
    <!--[if IE]><script lang="javascript" type="text/javascript" src="<% = ResolveUrl("~/Scripts/excanvas.js") %>"></script><![endif]-->
</head>
<body>
    <form id="form1" runat="server">
        <asp:ObjectDataSource ID="ObjectDataSource1" runat="server" SelectMethod="GetData" TypeName="SamplesBrowser.Models.ChartData"></asp:ObjectDataSource>
        <jqChart:Chart ID="Chart1" Width="500px" Height="300px" runat="server" DataSourceID="ObjectDataSource1">
            <Title Text="Chart Title"></Title>
            <Animation Enabled="True" Duration="00:00:01" />
            <Axes>
                <jqChart:CategoryAxis Location="Bottom" ZoomEnabled="true">
                </jqChart:CategoryAxis>
            </Axes>
            <Series>
                <jqChart:ColumnSeries XValuesField="Label" YValuesField="Value1" Title="Column">
                </jqChart:ColumnSeries>
                <jqChart:LineSeries XValuesField="Label" YValuesField="Value2" Title="Line">
                </jqChart:LineSeries>
            </Series>
        </jqChart:Chart>
    </form>
</body>
</html>

Official Site | Samples | Download | Documentation | Forum | Twitter

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will be mainly about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from php territory, where things (in terms of quality) tend to look different than in the java world. Facts:
    - there will be 2-3 developers,
    - at least one of the developers uses Windows, the rest use Linux,
    - there is one remote linux-based machine, which should handle test and production instances,
    - after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on),
    - client: internal, frequent business logic changes, scrum, daily deployments.
    What I want to achieve is a good workflow on as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far is:
    - developers code locally,
    - there is a vagrant instance on every development machine, managed by puppet; it contains the same linux, jenkins and tomcat versions as the production machine,
    - while coding, the developer deploys to the vagrant machine,
    - after a local merge to the test branch, jenkins on vagrant handles tests,
    - when everything is fine, the developer pushes commits and merges,
    - jenkins on the remote machine pulls the commit from the test branch, runs tests and so on,
    - if everything looks green, jenkins deploys to the test tomcat instance.
    Deployment to production is manual (although it can be done using helper scripts) when the business logic has been tested by other divisions and everything looks fine to the client. Now, the real question: does the above make any sense? Things that I'm not sure about:
    - Remote machine: won't there be any problems with two (or even three, as jenkins might need one) instances of the same app on tomcat?
    - Using vagrant to develop on a php environment is just wise. Isn't it overkill when using Tomcat? I mean, is there a higher probability that tomcat will act the same on every machine?
    - Is there any sense in having local jenkins on vagrant?

    Read the article

  • CI tests to enforce specific development rules - good practice?

    - by KeithS
    The following is all purely hypothetical and any particular portion of it may or may not accurately describe real persons or situations, whether living, dead or just pretending. Let's say I'm a senior dev or architect in charge of a dev team working on a project. This project includes a security library for user authentication/authorization of the application under development. The library must be available for developers to edit; however, I wish to "trust but verify" that coders are not doing things that could compromise the security of the finished system, and because this isn't my only responsibility I want it to be done in an automated way. As one example, let's say I have an interface that represents a user which has been authenticated by the system's security library. The interface exposes basic user info and a list of things the user is authorized to do (so that the client app doesn't have to keep asking the server "can I do this?"), all in an immutable fashion of course. There is only one implementation of this interface in production code, and for the purposes of this post we can say that all appropriate measures have been taken to ensure that this implementation can only be used by the one part of our code that needs to be able to create concretions of the interface. The coders have been instructed that this interface and its implementation are sacrosanct and any changes must go through me. However, those are just words; the security library's source is open for editing by necessity. Any of my devs could decide that this secured, private, hash-checked implementation needs to be public so that they could do X, or alternately they could create their own implementation of this public interface in a different library, exposing the hashing algorithm that provides the secure checksum, in order to do Y. I may not be made aware of these changes so that I can beat the developer over the head for it. An attacker could then find these little nuggets in an unobfuscated library of the compiled product, and exploit it to provide fake users and/or falsely-elevated administrative permissions, bypassing the entire security system. This possibility keeps me awake for a couple of nights, and then I create an automated test that reflectively checks the codebase for types deriving from the interface, and fails if it finds any that are not exactly what and where I expect them to be. I compile this test into a project under a separate folder of the VCS that only I have rights to commit to, have CI compile it as an external library of the main project, and set it up to run as part of the CI test suite for user commits. Now, I have an automated test under my complete control that will tell me (and everyone else) if the number of implementations increases without my involvement, or an implementation that I did know about has anything new added or has its modifiers or those of its members changed. I can then investigate further, and regain the opportunity to beat developers over the head as necessary. Is this considered "reasonable" to want to do in situations like this? Am I going to be seen in a negative light for going behind my devs' backs to ensure they aren't doing something they shouldn't?
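
    The question is about .NET, but purely to make the reflective-audit idea concrete (an illustration, not the author's actual test or APIs), here is a self-contained Python sketch that fails whenever an implementation of the "authenticated user" interface appears that the architect has not blessed:

        import unittest

        class AuthenticatedUser:
            """Stand-in for the sacrosanct 'authenticated user' interface."""

        class HashCheckedUser(AuthenticatedUser):
            """The single implementation the architect has signed off on."""

        # Implementations the audit expects to exist, identified by module and class name.
        EXPECTED = {(HashCheckedUser.__module__, HashCheckedUser.__name__)}

        class SecurityImplementationAudit(unittest.TestCase):
            def test_no_rogue_implementations(self):
                # __subclasses__() reflects over every loaded subclass, so any extra
                # implementation a developer sneaks in makes this set differ.
                found = {(cls.__module__, cls.__name__)
                         for cls in AuthenticatedUser.__subclasses__()}
                self.assertEqual(found, EXPECTED,
                                 f"Unexpected implementations: {found - EXPECTED}")

        if __name__ == "__main__":
            unittest.main()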

    Read the article

  • VCS strategy with TeamCity and CI

    - by Luke Puplett
    I'm planning a strategy which seeks to allow automated deployment of a website codebase into QA and production on check-in. We're using the fabulous TeamCity. We want to control release to live production; i.e. not have every check-in on Trunk go live. So my plan is to use Trunk as QA. Committing to Trunk triggers deployment to QA. I will then have a Production branch which also triggers deployment on commit, to the live site. The idea is simply that Trunk represents the mainline codebase but it hasn't gone live yet. We can branch features and do daily pulls from Trunk into those feature branches as per normal and merge/re-integrate into Trunk when we're happy for it to go to QA. When the BAs give the nod, we then smash a bottle of champagne and merge Trunk to Production and out she goes. I've never seen it done like this. Other greenfield CI strategies involve hiding features and code from production via config - this codebase can't cope with that - or just having CI on QA and taking cuts and manually pushing to live. Does my plan sound alright?

    Read the article

  • How should I expand Jenkins to help me release? [closed]

    - by Amy Worrall
    Pushing new Sparkle releases of our internal apps is a pain. I have to make the build, make the release notes file, sign the .zip with the private key, and add a new entry to the appcast file tying everything together. I'd love it if Jenkins could help: use the commit messages for the release notes, and automatically do the rest of it. Should I be looking at writing a new Jenkins plugin, or using shell scripting, or is there something already that will do what I want? (A quick Google didn't find anything.)

    Read the article

  • What is Continuous Integration (CI) and how is it useful?

    - by Geek
    Can someone explain to me the concept of Continuous Integration and how it works, in an easy-to-understand way? And why should a company adopt CI in their code delivery workflow? I am a developer and my company (mainly the build team) uses TeamCity. As a developer I always check out, update and commit code to SVN, but I never really had to bother about TeamCity or CI in general. So I would like to understand: what is the usefulness of CI? Is CI a part of Agile methodologies?

    Read the article

  • Where do the responsibilities of build tools end and those of CI tools start?

    - by BrandonV
    In the delivery of software, and within the sense of the deployment pipeline, where do the responsibilities of build tools, like Maven, end, and the responsibilities of CI start? As a rough example of a problem that arises: should build tools have any responsibility for the configuration and execution of acceptance tests when those are further down the pipeline than actually building the artifact? I'd like an answer that addresses this in terms of deployment lifecycle phases rather than specifics like my example, although examples would help bolster the answer.

    Read the article

  • Coping with build order requirements in automated builds

    - by Derecho
    I have three Scala packages being built as separate sbt projects in separate repos with a dependency graph like this:

        M---->D
        ^     ^
        |     |
        +--+--+
           ^
           |
           S

    S is a service. M is a set of message classes shared between S and another service. D is a DAL used by S and the other service, and some of its model appears in the shared messages. If I make a breaking change to all three, and push them up to my Git repo, a build of S will be kicked off in Jenkins. The build will only be successful if, when S is pushed, M and D have already been pushed. Otherwise, Jenkins will find it doesn't have the right dependent package versions available. Even pushing them simultaneously wouldn't be enough -- the dependencies would have to be built and published before the dependent job was even started. Making the jobs dependent in Jenkins isn't enough, because that would just cause the previous version to be built, resulting in an artifact that doesn't have the needed version. Is there a way to set things up so that I don't have to remember to push things in the right order? The only way I can see it working is if there was a way that a build could go into a pending state if its dependencies weren't available yet. I feel like there's a simple solution I'm missing. Surely people deal with this a lot?
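
    Just to make the required ordering explicit (an illustration of the dependency ordering only, not a Jenkins feature), a topological sort over the three repos gives the order in which the builds would have to run and publish:

        from graphlib import TopologicalSorter   # standard library, Python 3.9+

        # Each project maps to the projects that must be built and published before it.
        deps = {
            "M": {"D"},          # the shared messages reference DAL model types
            "D": set(),
            "S": {"M", "D"},     # the service needs both of its dependencies first
        }

        order = list(TopologicalSorter(deps).static_order())
        print("Build/publish order:", " -> ".join(order))   # D -> M -> S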

    Read the article

  • CI - How long is continuous?

    - by Andy
    We currently are using CCNet as our continuous integration server. Most projects check for changes every 30 seconds (the default) and if needed perform a build (unit tests, StyleCop, FxCop, etc). We've got quite a few projects now, and the server spends most of its time near 100% CPU utilization. This has alarmed some of the development team, even though the server is responsive and builds still take about the same length of time they always have. It's been suggested that we lower the check interval to about five minutes. To me that seems too long, and we risk people committing code and then going home for the weekend, and now there's a broken build possibly holding up others. In response, the suggestion is that if someone needs to know the results they can force the build. But that seems to defeat the purpose of CI, as I thought it was supposed to be automated. My proposed solution is just to get another build server and split the builds amongst the servers. Am I thinking about this the wrong way, or is there a point where, if integration isn't happening often enough, you're not really doing CI anymore?

    Read the article

  • System.Windows.Forms.DataVisualization Namespace Fine in One Class but Not in Another

    - by jxpx777
    Hi. I'm getting this error:

        The type or namespace name 'DataVisualization' does not exist in the namespace 'System.Windows.Forms' (are you missing an assembly reference?)

    Here is my using section of the class:

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Windows.Forms.DataVisualization.Charting;
        using System.Windows.Forms.DataVisualization.Charting.Borders3D;
        using System.Windows.Forms.DataVisualization.Charting.ChartTypes;
        using System.Windows.Forms.DataVisualization.Charting.Data;
        using System.Windows.Forms.DataVisualization.Charting.Formulas;
        using System.Windows.Forms.DataVisualization.Charting.Utilities;

        namespace myNamespace
        {
            public class myClass
            {
                // Usual class stuff
            }
        }

    The thing is that I am using the same DataVisualization includes in another class. The only thing that I can think of that is different is that the classes giving this missing-namespace error are Solution Items rather than specific to a project; the projects reference them by link. Anyone have thoughts on what the problem is? I've installed the chart component, .NET 3.5 SP1, and the Chart Add-in for Visual Studio 2008.

    UPDATE: I moved the items from Solution Items to be regular members of my project and I'm still seeing the same behavior.

    UPDATE 2: Removing the items from the Solution Items and placing them under my project worked. Another project was still referencing the files, which is why I didn't think it worked previously. I'm still curious, though, why I couldn't use the namespace when the classes were Solution Items, but moving them underneath a project (with no modifications, mind you) instantly made them recognizable. :\

    Read the article

  • Insert Special Characters & Coding in Online Forms in Firefox

    - by Asian Angel
    If you are active in forums or comment areas on different websites then you most likely use some type of special characters, HTML, or other code throughout the day. Now you can easily insert commonly used “items” with the SKeys extension for Firefox.

    Your New Special Text Edit Bar
    After installing the extension you will see the new toolbar that has been added to your browser. These are the kinds of text that can be added to online comment areas, forums, or other website areas that allow their use: special characters, HTML tags, BB codes, and wiki characters. All that you will need to do is click on the appropriate special character or code to insert it into the website text area. The first two toolbar items are each singular in their function and insert the following types of text. A look at the special characters available for your use. The wiki code menu. The HTML menu… And the BB code menu. Here is a quick sample using the HTML menu…much better than doing it manually. This should definitely help speed things up throughout the day. Our only disappointment during testing was not being able to add additional items (i.e. characters, tags) to the toolbar at this time.

    Conclusion
    While a new toolbar may not be for everyone, this extension can certainly prove useful when you need to quickly add special characters or coding in website text areas.

    Links
    Download the SKeys extension (Mozilla Add-ons)

    Read the article

  • Does Google submit HTML forms?

    - by Saeed Neamati
    I have a web page, say http://domain/purchase, and in this page I have a web form. A user, on submitting this form (which has validation, both client-side and server-side, and won't be validated until fields are filled appropriately), is redirected to another page, where (s)he can choose other things, specify other settings and then purchase our product. Say the second page is http://domain/options. So the user comes to our site, visits http://domain/purchase, fills in the form, submits it, and is then redirected to the second page, http://domain/options?parameter1=value1&parameter2=value2, which contains parameters from the first page. This is very common when passing parameters between web pages (or technically, between URLs). Now I was reviewing my website, and saw that Google had indexed some of my redirected web pages and URLs, like:

        http://domain/options?parameter1=value1&parameter2=value2
        http://domain/options?parameter1=value3&parameter2=value4
        http://domain/options?parameter1=value5&parameter2=value6
        http://domain/options?parameter1=value7&parameter2=value8
        http://domain/options?parameter1=value9&parameter2=value10

    This means that the Google Bot has visited our http://domain/purchase page, filled in our form, submitted it, and been redirected to the other URL with the corresponding parameters. This is the only way that makes sense to me. Does Google really fill forms?

    PS: All parameters are meaningful, meaning that they are not filled arbitrarily. For example, the phone parameter in the indexed pages has correct phone numbers. How is this possible?

    Read the article

  • C++ Windows Forms application unhandled exception error when textbox empty

    - by cmorris1441
    I'm building a temperature conversion application in Visual Studio for a C++ course. It's a Windows Forms application and the code that I've written is below. There's other code too, of course, but I'm not sure you need it to help me. My problem is, when I run the application, if I don't have anything entered into either the txtFahrenheit or txtCelsius2 textboxes I get the following error:

        "An unhandled exception of type 'System.FormatException' occurred in mscorlib.dll"

    The application only works right now when a number is entered into both of the textboxes. I was told to try and use Double::TryParse(), but I'm brand new to C++ and can't figure out how to use it, even after checking the MSDN library. Here's my code:

        private: System::Void btnFtoC_Click(System::Object^ sender, System::EventArgs^ e)
        {
            // Convert the input in the Fahrenheit textbox to a double datatype named fahrenheit for manipulation
            double fahrenheit = Convert::ToDouble(txtFahrenheit->Text);
            // Set the result string to F * (5/9) - 32
            double result = fahrenheit * .5556 - 32;
            // Set the Celsius text box to display the result string
            txtCelsius->Text = result.ToString();
        }

        private: System::Void btnCtoF_Click(System::Object^ sender, System::EventArgs^ e)
        {
            // Convert the input in the Celsius textbox to a double datatype named celsius for manipulation
            double celsius = Convert::ToDouble(txtCelsius2->Text);
            // Set the result2 string to C * (9/5) + 32
            double result2 = celsius * 1.8 + 32;
            // Set the Fahrenheit text box to display the result2 string
            txtFahrenheit2->Text = result2.ToString();
        }

    Read the article
