Search Results

Search found 19376 results on 776 pages for 'buy or build'.


  • CruiseControl.Net Build Publisher - Only publish compiled files

    - by FlySwat
    While setting up CruiseControl, I added a buildpublisher block to the publisher tasks: <buildpublisher> <sourceDir>C:\MyBuild\</sourceDir> <publishDir>C:\MyBuildPublished\</publishDir> <alwaysPublish>false</alwaysPublish> </buildpublisher> This works, but it copies the entire file contents of the build. I only want to copy the DLLs and .aspx pages; I don't need the source code to be published. Does anyone know of a way to filter this, or do I need to set up a task to run a RoboCopy script instead?
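
    As far as I know the stock <buildpublisher> has no include/exclude filtering, so one workaround people use is an <exec> task that calls RoboCopy with file masks instead. A minimal sketch with placeholder paths and masks (note that RoboCopy returns non-zero exit codes such as 1 for a successful copy, which CruiseControl.NET treats as a failure unless you wrap the call or allow those codes):

        <!-- hypothetical sketch: copy only DLLs and .aspx pages after the build -->
        <exec>
          <executable>C:\Windows\System32\robocopy.exe</executable>
          <buildArgs>C:\MyBuild C:\MyBuildPublished *.dll *.aspx /S</buildArgs>
        </exec>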

    Read the article

  • Help -- trying to build Apache ODE source code with Buildr

    - by Moon13
    Hi, I am trying to build the Apache ODE source code with Buildr using Ruby. I installed Ruby and then installed Buildr with it, but when I run the command rake package in the root of the Apache ODE source code it gives me this error: C:\workspace2\APACHE_ODE_1.X>rake package --trace (in C:/workspace2/APACHE_ODE_1.X) rake aborted! uninitialized constant Gem::Requirement::OP_RE The gems I have installed: C:\workspace2\APACHE_ODE_1.X>gem list *** LOCAL GEMS *** Antwrap (0.7.0) archive-tar-minitar (0.5.2) builder (2.1.2) buildr (1.3.5) highline (1.5.1) hoe (2.3.3) net-sftp (2.0.2) net-ssh (2.0.15) rake (0.8.7) rjb (1.1.6) rspec (1.2.8) rubyforge (1.0.5) rubygems-update (1.3.6) rubyzip (0.9.1) xml-simple (1.0.12)
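
    If this is the known incompatibility between Buildr 1.3.5 and RubyGems 1.3.6 (which dropped the Gem::Requirement::OP_RE constant), one possible workaround is to roll RubyGems back to 1.3.5, or alternatively move to a Buildr release that supports the newer RubyGems. A sketch, assuming the RubyGems version really is the cause:

        REM roll RubyGems back to 1.3.5, then retry the build
        gem install rubygems-update -v 1.3.5
        update_rubygems
        gem --version
        rake package --trace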

    Read the article

  • Build Issue with multi module project

    - by vijay.shad
    Hi, I have a multi-module web project. Four modules of the project are packaged as jars and added as dependencies to the fifth module, which is packaged as a war. When it is time to deploy the application I just run package on the war project and my war is created with all the dependencies. Now there is a problem. One of my modules has heavy changes, but when I created the war for my project these changes were not reflected in the output war file (the jar in the war's lib folder still has the old code). Can you please point out what I am missing from the release process? Why is the old code being packaged into the war? Can you also point me to a good resource on a real-life build process using Maven? Regards, Vijay
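
    A common cause is packaging the war module on its own, so Maven resolves the sibling modules from the local repository where an old jar is still installed. Rebuilding from the parent (reactor) usually avoids that; a sketch, assuming a standard aggregator pom (the module name is a placeholder):

        # from the parent directory: rebuild and install every module, then the war
        mvn clean install
        # or build only the war plus the modules it depends on (Maven 2.1+):
        mvn clean install -pl my-war-module -am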

    Read the article

  • Team build of Web Projects generates App_web_xxxx.dll files and TFSBuild.Proj Script

    - by Steve Johnson
    Hi all, I have a web application that also has some non-web projects. When using Web Deployment, a single assembly is generated for all the aspx.vb files. When using Team Build (TFS 2008), a large number of App_Web_xxx.dll files are generated instead of a single assembly. How can I solve this problem and change the TFSBuild.proj file so that it generates a single web assembly instead of a large number of assemblies? Please help. Thanks. Edit: I guess that's because the MERGE operation is not occurring as it does for the Web Deployment Project in my solution. How can I enable merging of App_Web_*.dll files into a single Web.dll assembly and delete the satellite assemblies? Here is my code from the TFSBuild.proj file (my web project is in the Release|.NET configuration and all other projects within the solution are in Release|Any CPU; note that the XML element names have been stripped here, leaving only the values): true .\Debug true true Web true false .\Release true true Web true Please tell me what corrections I need to make.
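
    One approach that is sometimes used when Team Build precompiles the site with aspnet_compiler is to run aspnet_merge.exe over the precompiled output in an AfterCompile target. This is only a rough sketch; the SDK path, site folder and assembly name below are assumptions, not values taken from your TFSBuild.proj:

        <!-- hypothetical sketch for TFSBuild.proj: merge App_Web_*.dll into one assembly -->
        <Target Name="AfterCompile">
          <Exec Command="&quot;C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin\aspnet_merge.exe&quot; &quot;$(OutDir)_PublishedWebsites\MyWebSite&quot; -o MyWebSite.Web -copyattrs" />
        </Target>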

    Read the article

  • Using Kate with Simple Build Tool (SBT)

    - by Stefan
    Hello, I am working with the Kate editor because of the lack of other good tools for Scala development. I am also using IntelliJ, but it still has some bugs and is slow enough to make me impatient. I have just started using both Kate and SBT, and in that regard I have a little challenge I hope there is an answer for out there on "The Internet". I am using the standard "Build plugin" in Kate and have changed the commands from make to sbt. This works fine, and I am also getting an error report when sbt fails at compile time. However, I would really like to know whether it is possible to integrate the compile errors into Kate so that they are shown under "Errors and Warnings" instead of just in the output tab, where I have to search manually for the compile errors. I'm guessing that it has something to do with the format of the output; if that is the case, maybe it is "just" a small adjustment I need to make to the output parsing.

    Read the article

  • Build Environment setup - Using .net, java, hudson, and ruby - Could really use a critique

    - by Jeff D
    I'm trying to figure out the best way to stitch together a fast, repeatable, unbreakable build process for the following environment. I've got a plan for how to do it, but I'd really appreciate a critique. (I'd also appreciate some sample code, but more on that later.) Ecosystem - Logical: Website - asp.net MVC 2, .net 3.5, Visual Studio 2010, IIS 6, Facebook iframe application. This website/facebook app uses a few services: an internal search api, an internal read/write api, facebook, and an IP geolocation service. More details on these below. Internal search api - .net, restful, built using old school .ashx handlers. The api uses lucene, and a sql server database behind the scenes. My project won't touch the lucene code, but does potentially touch the database and the web services. Internal read/write api - java, restful, running on Tomcat. Facebook web services. A mocking site that emulates the internal read/write api, and parts of the facebook api. Hudson - Runs unit tests on checkin, and creates some installers that behave inconsistently. Ecosystem - Physical: All of these machines can talk to one another, except for Hudson. Hudson can't see any of the target machines. So code must be pulled, rather than pushed. (Security thing) 1. Web Server - Holds the website, and the read/write api. (The api itself writes to a replicated sql server environment.) 2. Search Server - Houses the search api. 3. Hudson Server - Does not have permissions to push to any environment. They have to pull. 4. Lucene Server 5. Database Server Problem: I've been trying to set this site up to run in a stress environment, but the number of setup steps, the amount of time it takes to update a component, the black-box nature of the current installers, and the time it takes to generate data into the test system are absolutely destroying my productivity. I tweak one setting, have to redeploy, restart in a certain order, redo some of the settings, and rebuild test data. Errors result in head-scratching, and then basically starting over. Very bad. This problem is complicated further by my stress testing. I need to be able to turn different external components on and off, so that I can effectively determine the scalability of each piece. I've got strategies in place for how to do that for each dependency, but it further complicates my setup strategy, because now each component has 2 options: a mock version, or a real version. Configurations everywhere must be updated accordingly. Goals: Fast - I want to drop this from a 20 minute exercise when things go perfectly, to a 3 minute one. Stupid simple - I want to tell the environment what to do with as few commands as possible, and not have to remember how to stitch the environments together. Repeatable - I want the script to be idempotent. Kind of a corollary to the Stupid Simple thing. The Plan So Far: Here's what I've come up with so far, and what I've come looking for feedback on: Use Visual Studio's new web.config transformations to permit easily altering configs based on environment. This solution isn't really sufficient though. I will leave web.config set up to let the site run locally, but when deploying elsewhere, I have as many as 6 different possible outputs for the stress environment alone (because of the mocks of the various dependencies), let alone the settings for prod, QA, and dev. Each of these would then require its own setup, or a setup that would then post-process the configs.
    So I'm currently leaning toward just having the dev version, and a version that converts key configuration values into a ruby string interpolation syntax. ({#VAR_NAME} kinda thing.) Create a ruby script for each server that is essentially a bootstrapping script. That is to say, it will do nothing but load the ruby code that does the 'real' work from hudson/subversion, so that the script's functionality can evolve with the application, making it easy to build the site at any point in time by referencing the appropriate version of the script. So in a nutshell, this script loads another script, and runs it. The 'real' ruby script will then accept command-line parameters that describe how the environment should look. From there, one configuration file can be used, and ruby will download the current installers, run them, post-process the configs, restart IIS/Tomcat, and kick off any data setup code that is needed. So that's it. I'm in a real time crunch to get this site stress-tested, so any feedback that you think could shorten the time this might take would be appreciated. That includes a shameless request for sample ruby code. I've not gotten much further than puts "Hello World". :-) Just guidance would be helpful. Is this something that Rake would be useful for? How would you recommend I write tests for this animal? (I use interfaces and automocking frameworks to mock out things like http requests in .net. With duck typing, it seems that this might be easier, but I don't know how to tell my code to use a fake duck in test, but a real one in practice.) Thanks all. Sorry for such a long-winded, open-ended question.
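
    Since sample Ruby was requested, here is a minimal, hedged sketch of the bootstrapper idea only: it exports the current 'real' deploy script from Subversion and hands over the command-line arguments. The URL, file names and flags are invented placeholders:

        #!/usr/bin/env ruby
        # bootstrap.rb -- fetch the latest deploy script and run it with the given args.
        require 'fileutils'

        SVN_URL  = "http://svn.example.com/build/deploy.rb"   # placeholder
        WORK_DIR = File.join(Dir.pwd, "bootstrap_work")

        FileUtils.rm_rf(WORK_DIR)
        FileUtils.mkdir_p(WORK_DIR)

        script = File.join(WORK_DIR, "deploy.rb")
        # 'svn export' grabs the file without creating a working copy
        system("svn", "export", "--force", SVN_URL, script) or abort("svn export failed")

        # e.g. ruby bootstrap.rb --env stress --search mock --readwrite real
        exec("ruby", script, *ARGV)

    Rake would fit the 'real' script well (one task per component, with dependencies between tasks), and for testing, passing collaborator objects into your classes and substituting fakes is the usual duck-typing answer to "fake duck in test, real duck in production".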

    Read the article

  • TFS 2008 Build Script

    - by pm_2
    In TFS 2008, I am trying to modify a build script (TFSBuild.proj). I get the following warning: The element 'PropertyGroup' in namespace 'http://schemas.microsoft.com/developer/msbuild/2003' has invalid child element 'TeamProject' in namespace 'http://schemas.microsoft.com/developer/msbuild/2003'. The warning is accurate in one sense: the PropertyGroup element does indeed have a child called TeamProject. I assume this is caused by the following line: <Project DefaultTargets="DesktopBuild" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5"> The XML namespace doesn't appear to exist as far as I can tell, although it looks like a standard one. Can anyone tell me whether this is a standard XML namespace, how or where I can view its contents, and whether the warning I am seeing may be caused by it?

    Read the article

  • TFS build problem: missing assembly in test output folder

    - by Herman
    Hi all, I am trying to integrate unit test cases with a TFS build in our new solution. I've included the following configuration in my TFSBuild.proj: <ItemGroup> <TestContainer Include="$(OutDir)\%2aTest.dll" /> </ItemGroup> which I think is the correct configuration, since I only have one test project. However, when I do this, a DLL is missing from the test run's output folder, which causes most of my test cases to fail. Has anyone run into this problem before? Thanks!

    Read the article

  • Build Machine Configuration Recommendations?

    - by IPX Ares
    We have a new build machine to start using for our programming team. We are still trying to figure out how we want to organize everything to get the best configuration for building EXEs and DLLs. We are using VB6, VB.NET 2005, and VSS 2005. We were thinking of setting up working folders for each project, release, and support ticket. Does anyone have experience with a similar setup? What were your likes/dislikes? Any recommendations (new VSS IDs, folder configuration, setting working folders, updating/building files)?

    Read the article

  • CI Build status output alternatives

    - by Brett Rigby
    We currently use Cradiator to display the status of our continuous integration (CI) builds from CruiseControl.NET, on a 42" Samsung television mounted high up in our IT department. Cradiator is a great starting place, but we're getting to the point where we have lots of projects on there and it's starting to get a bit 'full'. What I'd like to know is: what do you use to display your build statuses? Custom software? Off-the-shelf products? Alternatively, I'm looking for ideas on how we can improve on Cradiator.

    Read the article

  • VB.NET generates properties in the release build

    - by dattebayo
    I have a form and I drag and drop a control in VB.NET. I have a line, say: private WithEvents radioButton RadioButton Also, I have a handler like: private void click(.....) Handles radioButton.Click { ... } Now, when I build this with .NET 3.5 in release mode and look at the generated code in the Reflector tool, the code is something like: Private Overridable Property radioButton As RadioButton . . . <AccessedThroughProperty("radioButton")> _ Private _radioButton As RadioButton Can someone tell me what is going on here? And how do I avoid the generation of these new properties and fields? -datte

    Read the article

  • Hudson build fails when a user logs out of RDP session

    - by sjohnston
    We are using Hudson to build mixed C++/Java projects with an Ant script. It is running in Tomcat 6, on a Win XP virtual machine. I have noticed recently that when a user logs off the machine (from a remote desktop session), builds that are currently running tend to suddenly fail without an error message. Has anyone encountered anything similar or have an idea what might be causing this effect? I can post additional information about our setup if needed, I'm just not sure what's relevant in this case.

    Read the article

  • 2-D Codes in Retail

    - by David Dorf
    The UPC you find on packaging is a one-dimensional barcode that's been in use, in one form or another, since the 1970s. While it's a good symbology for encoding numbers like a product identifier, it's not really big enough to hold much more. It also requires a barcode scanner (like those connected to the POS), although iPhone apps like RedLaser have proved a mobile camera can be made to work in many situations. The next-generation barcodes are two-dimensional and therefore capable of holding much more information, as well as being better suited to cameras. The most popular format is the QR Code, widely used in Japan because almost every mobile phone has a built-in reader. A typical use for QR Codes is to embed a URL so that a mobile phone can quickly navigate to the specified web page. QR Codes can be found on posters, billboards, catalogs, and circulars. Speaking of which, Best Buy recently put a QR Code in their circular, as shown below. In fact, they even updated their iPhone application to include a QR Code reader. I was able to scan the barcode above right from the screen with my iPhone without issues, even though it's fairly small in this image. Clearly they are planning to incorporate more QR Codes in their stores and advertising. If you haven't seen QR Codes before, you're not looking hard enough. They are around and will continue to spread.
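
    As a toy illustration of the URL-embedding use case, the third-party Python qrcode package (together with Pillow) can generate such a code in a couple of lines; the package choice and the URL here are assumptions, not anything from the article:

        # pip install qrcode[pil]
        import qrcode

        # encode a (placeholder) circular URL and write it out as an image
        img = qrcode.make("http://www.example.com/weekly-ad")
        img.save("weekly-ad-qr.png")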

    Read the article

  • Solution: Testing Web Services with MSTest on Team Build

    - by Martin Hinshelwood
    Guess what. About 20 minutes after I fixed the build, Allan broke it again! Update: 4th March 2010 – After having huge problems getting this working I read Billy Wang's post, which showed me the light. The problem here is that even though the test passes locally it will not pass during an automated build. When you send your tests to the build server it does not understand that you want to spin up the web site and run tests against that! When you run the test in Visual Studio it spins up the web site anyway, but would you expect your test to pass if you told the website not to spin up? Of course not. So, when you send the code to the build server you need to tell it what to spin up. First, the best way to get the parameters you need is to right-click on the method you want to test and select "Create Unit Test". This will detect whether you are running in IIS or the ASP.NET Development Server or None, and create the relevant tags. Figure: Right-clicking on "SaveDefaultProjectFile" will produce a context menu with "Create Unit tests…" on it. If you use this option it will auto-detect most of the attributes that are required. /// <summary> ///A test for SSW.SQLDeploy.SilverlightUI.Web.Services.IProfileService.SaveDefaultProjectFile ///</summary> // TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example, // http://.../Default.aspx). This is necessary for the unit test to be executed on the web server, // whether you are testing a page, web service, or a WCF service. [TestMethod()] [HostType("ASP.NET")] [AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")] [UrlToTest("http://localhost:3100/")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] public void SaveDefaultProjectFileTest() { IProfileService target = new ProfileService(); // TODO: Initialize to an appropriate value string strComputerName = string.Empty; // TODO: Initialize to an appropriate value bool expected = false; // TODO: Initialize to an appropriate value bool actual; actual = target.SaveDefaultProjectFile(strComputerName); Assert.AreEqual(expected, actual); Assert.Inconclusive("Verify the correctness of this test method."); } Figure: Auto-created code that shows the attributes required to run correctly in IIS or, in this case, the ASP.NET Development Server. If you are a purist and don't like creating unit tests like this then you just need to add the three attributes manually. HostType – This attribute specifies what host to use. It's an extensibility point, so you could write your own, or you could just use "ASP.NET". UrlToTest – This specifies the start URL. For most tests it does not matter which page you call, as long as it is a valid page; otherwise your test may not run on the server, but may pass anyway. AspNetDevelopmentServerHost – This is a nasty one; it is only used if you are using the ASP.NET Development Server and is unnecessary if you are using IIS. This sets the host settings, and the first value MUST be the physical path to the root of your web application. OK, so all that was rubbish and I could not get anything working using the MSDN documentation. Google provided very little help until I ran into Billy Wang's post and I heard that heavenly music that all developers hear when understanding dawns that what they have been doing up until now is just plain stupid. I am sure that the above will work when I am doing web unit tests, but there is a much easier way when doing web services.
    You need to add the AspNetDevelopmentServer attribute to your code. This will tell MSTest to spin up an ASP.NET Development Server to host the service. Specify the path to the web application you want to use. [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: This AspNetDevelopmentServer will make sure that the specified web application is launched. Now we can run the test and have it pass, but if the dynamically assigned ASP.NET Development Server port changes, what happens to the details in your app.config that were generated when creating a reference to the web service? Well, they would be wrong and the test would fail. This is where Billy's helper method comes in. Once you have created an instance of your service client, and it has loaded the config, but before you make any calls to it, you need to go in and dynamically set the endpoint address to the same address as your dynamically hosted web application. using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.VisualStudio.TestTools.UnitTesting; using System.Reflection; using System.ServiceModel.Description; using System.ServiceModel; namespace SSW.SQLDeploy.Test { class WcfWebServiceHelper { public static bool TryUrlRedirection(object client, TestContext context, string identifier) { bool result = true; try { PropertyInfo property = client.GetType().GetProperty("Endpoint"); string webServer = context.Properties[string.Format("AspNetDevelopmentServer.{0}", identifier)].ToString(); Uri webServerUri = new Uri(webServer); ServiceEndpoint endpoint = (ServiceEndpoint)property.GetValue(client, null); EndpointAddressBuilder builder = new EndpointAddressBuilder(endpoint.Address); builder.Uri = new Uri(endpoint.Address.Uri.OriginalString.Replace(endpoint.Address.Uri.Authority, webServerUri.Authority)); endpoint.Address = builder.ToEndpointAddress(); } catch (Exception e) { context.WriteLine(e.Message); result = false; } return result; } } } Figure: This fixes a problem with the URL in your web.config not being the same as the dynamically hosted ASP.NET Development Server port. We can now add a call to this method after we have created the proxy object and change the endpoint for the service to the correct one. This process is wrapped in an assert, as if it fails there is no point in continuing. [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1")); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: Editing the endpoint from the app.config on the fly to match the dynamically hosted ASP.NET Development Server URL and port is now easy. As you can imagine, AspNetDevelopmentServer poses some problems if you have multiple developers. What are the chances of everyone using the same location to store the source?
    What if you are using a build server? How do you tell MSTest where to look for the files? To the rescue comes a property called "%PathToWebRoot%", which is always right on the build server. It will always point to your build drop folder for your solution's web sites, which will be "\\tfs.ssw.com.au\BuildDrop\[BuildName]\Debug\_PrecompiledWeb\" or whatever your build drop location is. So let's change the code above to add this. [AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] [TestMethod] public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True() { ProfileServiceClient target = new ProfileServiceClient(); Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1")); bool isTrue = target.SaveDefaultProjectFile("Mav"); Assert.AreEqual(true, isTrue); } Figure: Adding %PathToWebRoot% to the AspNetDevelopmentServer path makes it work everywhere. Now we have another problem… this will ONLY run on the build server and will fail locally, as %PathToWebRoot%'s default value is "C:\Users\[profile]\Documents\Visual Studio 2010\Projects". Well, this sucks… How do we get the test to run on any build server and any developer laptop? Open "Tools | Options | Test Tools | Test Execution" in Visual Studio and you will see a field called "Web application root directory". This is where you override that default. Figure: You can override the default website location for tests. In my case I would put in "D:\Workspaces\SSW\SSW\SqlDeploy\DEV\Main", and all the developers working with this branch would put in the folder that they have mapped. Can you see a problem? What if I create a "$/SSW/SqlDeploy/DEV/34567" branch from Main and I want to run tests in there? Well… I would have to change the value above. This is not ideal, but as you can put your projects anywhere on a computer, it has to be done. Conclusion: Although this looks convoluted and complicated, there are real problems being solved here that mean you have a test ANYWHERE solution. Any build server, any developer workstation. Resources: http://billwg.blogspot.com/2009/06/testing-wcf-web-services.html http://tough-to-find.blogspot.com/2008/04/testing-asmx-web-services-in-visual.html http://msdn.microsoft.com/en-us/library/ms243399(VS.100).aspx http://blogs.msdn.com/dscruggs/archive/2008/09/29/web-tests-unit-tests-the-asp-net-development-server-and-code-coverage.aspx http://www.5z5.com/News/?543f8bc8b36b174f Technorati Tags: VS2010, MSTest, Team Build 2010, Team Build, Visual Studio, Visual Studio 2010, Visual Studio ALM, Team Test, Team Test 2010

    Read the article

  • Good scanner to buy as of January 2010

    - by AlexV
    I'm planning to buy a scanner soon, and since I haven't purchased one in years I wanted to know which brands/models are the "best". The ability to scan negatives is a nice-to-have but not 100% required. If you are suggesting a particular model, please note that it must work under a 64-bit OS (I'm using Windows 7 Ultimate 64-bit). I'm not looking for an all-in-one solution like a scanner/fax/printer. I want to use this to scan "old" photos from the non-digital era :) And sometimes to scan magazines and books. I am located in Canada, so it would be nice if your suggestion is available here. I usually buy online at NCIX and DirectCanada, if it helps you see what's available there. Edit: My search has led me to two particular models; I'm hesitating between the Canon CanoScan 8800F and the Epson V600. Does anyone have good/bad words on either of them?

    Read the article

  • What type of SATA cable will I buy?

    - by Mehper C. Palavuzlar
    I've bought a new optical drive. It's a Samsung SH-S223C with a SATA connection. Since it is OEM with no box, no cables came with it, so I need two cables now. I already have a spare SATA power cable, so that's covered, but I have to buy the data cable. What type of cable should I buy? If you could include a picture of it along with its type, I'd be happy. My motherboard is a Gigabyte P43-ES3G. TIA.

    Read the article

  • Which power supply should I buy?

    - by arayman
    I am going to buy a new Corsair PSU for my IBM ThinkCentre desktop. My original PSU was 230 watts. I have not added any hardware to the machine; it's in the same condition as when I bought it. On that basis, please suggest which Corsair power supply, and of what wattage, would be suitable for me to buy, referring to the link below. http://www.corsair.com/en/power-supply-units.html Thanks.

    Read the article

  • Repair bad sectors or buy a new HDD?

    - by Nehal J. Wani
    I have a Seagate internal hard disk drive. I recently opened up my laptop [Dell Inspiron N5010] [warranty has expired], cleaned it, and it worked normally after waking up from hibernation. However, when I restarted it, it got stuck on the Windows loading screen, then tried to boot from the Dell recovery partition but failed. It gave the error: Windows has encountered a problem communicating with a device connected to your computer. This error can be caused by unplugging a removable storage device such as an external USB drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM drive that is failing. Make sure any removable storage is properly connected and then restart your computer. If you continue to receive this error message, contact the hardware manufacturer. Status: 0xc00000e9 Info: An unexpected I/O error has occurred. While cleaning, I had mistakenly touched the round silvery thing at the bottom of the HDD. I don't know whether this caused the problem or not. Since I also have Fedora installed on the same HDD, I can boot from it, but it shows weird read errors when I ask it to mount the Windows partitions. The disk utility also says that the hard disk has many bad sectors and needs to be replaced. I downloaded SeaTools from Seagate's website and used it. In the long test, I gave it permission to repair the first 100 errors, which it did successfully. Now I am confused about what I should do. Costs: a. Internal HDD 500GB: Rs3518 b.1 External HDD 500GB: Rs3472 b.2 External HDD 1TB: Rs5500 c. Internal-to-external converter: Rs650 I have the following options: (i) Buy an external HDD and back up my data. Try to repair the bad sectors of the HDD. Then two cases arise: (a) my internal HDD gets repaired [almost] (b) my internal HDD doesn't get repaired. Then I need to buy another internal HDD and replace the damaged one, OR break the seal of the external one and put it inside my laptop as the internal drive. Breaking the case involves risks. (ii) Buy an internal HDD and an internal-to-external converter case [not very reliable], and back up my data. Try to repair the bad sectors of the HDD. Then two cases arise: (a) my internal HDD gets repaired [almost] (b) my internal HDD doesn't get repaired. Then I just need to put in the new internal HDD I bought. Experts, please guide me as to which will be the best value-for-money option. Also, if an HDD is failing, should I avoid reading from it too, since there is a chance of other sectors failing? What I mean is, is it wrong to read from the HDD without taking a backup first?
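
    Whichever option is chosen, the usual advice for the backup step is to image the failing drive with as little re-reading as possible rather than copying files one by one. A sketch using GNU ddrescue from the Fedora side, assuming the failing disk is /dev/sda and the external drive is mounted at /mnt/external (the device names are placeholders; double-check them before running anything):

        # first pass: copy everything readable, skipping bad areas quickly
        ddrescue -f -n /dev/sda /mnt/external/laptop-disk.img /mnt/external/laptop-disk.log
        # optional second pass: retry the bad areas a few more times
        ddrescue -f -r3 /dev/sda /mnt/external/laptop-disk.img /mnt/external/laptop-disk.log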

    Read the article

  • Automated test, build and deploy

    - by mike79
    I have Visual Studio Team Suite 2008. I was unable to meet the requirements to set up TFS, so I'm using TortoiseSVN and VisualSVN as my version control in VSTS. I need the system set up to do the following: I need to be able to create and track work items. When updates are made to the current project worked on in VSTS, the updates will be committed back to version control. Tests will be run to see that updates don't break the application. If there's a problem with the update, it will be reported back to the developer. If there's no problem with the app, which is a ClickOnce application, it will automatically be built and deployed to an FTP server. I've never worked with version control, build servers, automated testing, or continuous integration. I need to know what needs to be put in place for this type of system. I don't know which combination/stack I should be using: CC.net, TeamCity, Hudson, NAnt, NUnit, MsTest, Trac, BugTracker.net, NDepend, VisualSVN Server, Perforce, MSDeploy, SCM. I want something that is free/open source and relatively easy to set up and use. Please suggest a setup that will fit my needs. Any help appreciated.
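
    As one possible starting point, a minimal CruiseControl.NET project block for that pipeline (Subversion checkout, MSBuild compile, NUnit run) might look roughly like the sketch below; every path, URL and name is a placeholder, and a ClickOnce publish/FTP step would still need to be added as a further task or publisher:

        <!-- hypothetical ccnet.config fragment -->
        <project name="MyClickOnceApp">
          <sourcecontrol type="svn">
            <trunkUrl>http://svn.example.com/myapp/trunk</trunkUrl>
            <workingDirectory>C:\Builds\MyClickOnceApp</workingDirectory>
          </sourcecontrol>
          <tasks>
            <msbuild>
              <projectFile>MyClickOnceApp.sln</projectFile>
              <buildArgs>/p:Configuration=Release</buildArgs>
            </msbuild>
            <nunit>
              <path>C:\Program Files\NUnit\bin\nunit-console.exe</path>
              <assemblies>
                <assembly>Tests\bin\Release\MyClickOnceApp.Tests.dll</assembly>
              </assemblies>
            </nunit>
          </tasks>
        </project>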

    Read the article

  • Python script problem once built and packaged

    - by Apache
    Hi experts, I've written a Python script to scan wifi and send data to the server. I set an interval value so it keeps scanning and sending the data; it reads from a config.txt file where I set the interval value for scanning. I also added a yes/no flag to my config file, so if it's 'no' it will scan only once, and if it's 'yes' it will scan according to the interval. My code is as below: import time,..... from threading import Event, Thread class RepeatTimer(Thread): def __init__(self, interval, function, iterations=0, args=[], kwargs={}): Thread.__init__(self) self.interval = interval self.function = function self.iterations = iterations self.args = args self.kwargs = kwargs self.finished = Event() def run(self): count = 0 while not self.finished.is_set() and (self.iterations <= 0 or count < self.iterations): self.finished.wait(self.interval) if not self.finished.is_set(): self.function(*self.args, **self.kwargs) count += 1 def cancel(self): self.finished.set() def scanWifi(self): #scanning process and sending data done here obj = JW() if status == "yes": t = RepeatTimer(int(intervalTime),obj.scanWifi) t.start() else: obj.scanWifi() Once I package my code, it only runs when my config file is set to 'no', where it scans only once; but when I set my config file to 'yes', there is no progress at all. So I found that there is a problem with my RepeatTimer class once it is built, but I don't know how to solve it. Can anyone help me? Thanks

    Read the article

  • Build Explorer version 1.1 for Visual Studio Team Explorer is released

    - by terje
    Our free extension to Visual Studio, the folder-based Build Explorer version 1.1, has now been released and uploaded to the Visual Studio Gallery and CodePlex. We have collected a few changes and bug fixes, as follows: Changes: Queue Default Builds can now be optionally fully enabled, fully disabled, or enabled just for leaf nodes (= disabled for folders). If you have a large number of builds it was pretty scary to be able to launch all of them with just one click. However, it is nice to avoid having the dialog box pop up when you just want to run a single build. That's the reasoning behind the third choice here. Auto fill-in of the builds at startup and refresh. This was a request that came up a lot, and which was also irritating to us. When the Team Project is opened, the Build Explorer will start by itself and fill up its tree, so you don't need to click the node anymore. There was also quite a bit of flashing when the tree filled up; this has been reduced to just a single top-level fill before it collapses the node. The speed of the build-up of the tree has also been increased. The "All Build Definitions" node is now shown at the top of the list. A login box appeared in certain cross-domain situations. This was a fix for the TF30063 authentication problem we had in the beginning. Hopefully the new code has that fixed properly so that both the login box and the TF30063 are gone forever. Our testing so far seems to indicate it works. If anyone hits a real problem here there are two workarounds: 1) turn off the auto refresh to reduce the issue; if this doesn't fix it, then 2) reinstall the former version (go to the CodePlex download site if you don't have it anymore), write a comment on this blog post with a description of what happens, and I will send a temporary fix ASAP. Bug fixes: The folder name matching was case sensitive, so "Application.CI" and "application.CI" created two different folders. "View All Builds" was not shown for leaf nodes, and "View Builds" didn't work in all cases; there were some inconsistencies here which have been fixed. Partly fixed: the context menu to queue a new build should be removed for disabled builds, but that was a difficult one and is still on the list; however, the command will not do anything for a disabled build. Using Queue Default Builds on a folder that had some disabled builds below it made an error box appear and ruined the whole experience. As a result of these fixes some new options have been introduced, as shown below: The first two settings, the separator symbol and the option for how to handle queuing of default builds, are set per Team Project and are stored in TFS source control under the BuildProcessTemplates folder, with the name Inmeta.VisualStudio.BuildExplorer.Settings.xml. The next two settings need some explanation. They handle the behavior of the auto update of the build folders. First, these are stored in the local registry per user, at the key HKEY_CURRENT_USER\Software\Inmeta\BuildExplorer. The first option is Use Timed Refresh at Startup; if it is turned off, you will need to click the node as in version 1.0. The second option is a time value: the delay between the Build Explorer node being created and the scanning of the build folders starting. It is assumed that this is enough, and the tests so far indicate this. If you have very many builds and you see that the explorer doesn't get them all, try to increase this value, and of course notify me of your case, either here or on the Visual Studio Gallery site.

    Read the article

  • Company wants to write custom project management tool, rather than use a third-party product.

    - by Jason Evans
    At the company where I work, we really want to get into the agile methodology for developing software. One thing that I'm not excited about is the fact that management wants us to build a custom project management feature inside the company's Intranet. I think this is a total waste of time. There are many great third-party tools available (e.g. Axosoft OnTime) that can do everything we need, and more. For the amount of development time it would cost us to build our own project management module, we could buy numerous licences for a third-party product. One concern is that, whilst we are writing code for a client and using our custom Intranet project management module, we find bugs in the module that need fixing ASAP. That means having to stop work on the client code to fix the Intranet. That just sends shivers down my spine. Another worry I have is lack of functionality. This custom module is going to be so basic that it will just feel really crap to use. That might sound a bit snooty, but for goodness sake, many third-party tools are so feature-rich that the idea of having to write our own tool makes me feel very uneasy. In fact, I can't be bothered. What do you guys think? I'm going to raise this issue with my boss, since I feel it's such an important topic to talk about. EDIT: Thanks for the great responses, much appreciated. To summarize some of them: Money: Naturally my boss does want to save money by not forking out a few hundred pounds for licences. However, for us to write a custom tool, it will take x number of days multiplied by approx £500, which is what it costs us. I don't see the business value in this. Management have mentioned that they want to sell the Intranet as a product in the future, but it's so customized to our needs (and downright basic) that in order to give it to another client, I can see us having to fork a version of the code and rebuild the majority of it anyway. So it's not like we're gaining anything there in reuse. Features: Having our own custom module means no feature bloat; only the functionality we require will be in the product. My issue is that there are plenty of free, open-source project management tools out there with minimal features already. So even if cost is an issue, we could look into open source. Again it all boils down to the fact that I don't see the point in writing a project management tool in this day and age. It's a bit like writing your own web browser: why? What's the point? Although management are asking for this tool, that does not mean I'm going to please them and do it just because they asked for it. If something does not make sense, then I will raise it as a concern. At the end of the day, it's the developers who write the code, and it's the developers who make money for a business. Thus, as far as I'm concerned, the devs have a very big role in deciding how a company should manage projects and what tools are used. "I am Spartan, argh!" :) Hmm, I've not been able to make this question a wiki for some reason, so I'm going to have to pick an answer to accept. Cheers. Jas.

    Read the article

  • Unit test and Code Coverage of Ant build scripts

    - by pablaasmo
    In our development environment we have more and more Ant build scripts that perform the build tasks for several different build jobs. These build scripts sometimes become large, do a lot of things, and are basically source code in and of themselves. So in a "TDD world" we should have unit tests and coverage reports for that source code. I found AntUnit and BuildFileTest.java for writing the unit tests, but it would also be interesting to know the code coverage of those tests. I have been searching Google but have not found anything. Does anyone know of a code coverage tool for Ant build scripts?

    Read the article

  • How to create a base build and two simultaneous child builds?

    - by thoughts
    Using Visual SourceSafe, how can I create a base build and, on top of that, two inherited child builds of a C# project? Base Build - Base Project - base source version [I should be able to modify a source file and have the change reflected in the child builds]. Child Build - First Project. Child Build - Second Project. I should be able to modify the individual source code of both child builds and also of the base build. If I change any source file in the base build, the change should also be reflected in the child builds. Is this possible with VSS? If yes, how can I achieve it? If any other source control system provides this kind of facility, please let me know.

    Read the article
