Search Results

Search found 4724 results on 189 pages for 's unit'.

Page 136/189 | < Previous Page | 132 133 134 135 136 137 138 139 140 141 142 143  | Next Page >

  • What to look for in estimating a PowerBuilder Conversion Project?

    - by tekiegreg
    Hi there, I've been trying to write a spec for a PowerBuilder 9 to 11.5 migration of a relatively complex application. Granted, PowerBuilder is not really my specialty, so I'm having trouble justifying an estimate for this part of the project (and the PowerBuilder people I've been talking with have had some personal issues lately and are out of communication). These are some of the metrics that we have available and can evaluate:

    - PBL files
    - Main windows
    - Data windows
    - Functions (no, we don't have the source available on this project)

    Which metrics in particular are helpful, and how long would any given "unit", such as a data window, take?

  • How do I generate a coverage XML report for a single package?

    - by Wraith
    I'm using nose and coverage to generate coverage reports. I only have one package right now, ae, so I specify to only cover that:

        nosetests -w tests/unit --with-xunit --with-coverage --cover-package=ae

    And here are the results, which look good:

        Name      Stmts   Exec   Cover   Missing
        -----------------------------------------
        ae            1      1    100%
        ae.util     253    224     88%   39, 63-65, 284, 287, 362, 406
        -----------------------------------------
        TOTAL       263    234     88%
        ----------------------------------------------------------------------
        Ran 68 tests in 5.292s

    However, when I run coverage xml, coverage pulls in more packages than necessary, including the Python email and logging packages, which have nothing to do with my code. If I run coverage xml ae, I get this error:

        No source for code: '/home/wraith/dev/projects/trimurti/src/ae':
        [Errno 21] Is a directory: '/home/wraith/dev/projects/trimurti/src/ae'

    Is there a way to generate the XML for just the ae package?

  • Situations to prefer Apache Lucene over Solr?

    - by Karussell
    There are several advantages to using Solr (out-of-the-box faceted search, grouping, replication, HTTP administration vs. Luke, ...). Even if I embed search functionality in my Java application, I could use SolrJ to avoid the HTTP overhead of using Solr. So, when would you recommend using "pure Lucene"? Does it have better performance or require less RAM? Is it easier to unit test? PS: I am aware of this question.

  • 'Programming by Coincidence' Exercise: Java File Writer

    - by Tapas
    I just read the article Programming by Coincidence. At the end of the page there are exercises: a few code fragments that are cases of "programming by coincidence". But I can't figure out the error in this piece:

        This code comes from a general-purpose Java tracing suite. The function
        writes a string to a log file. It passes its unit test, but fails when
        one of the Web developers uses it. What coincidence does it rely on?

        public static void debug(String s) throws IOException {
            FileWriter fw = new FileWriter("debug.log", true);
            fw.write(s);
            fw.flush();
            fw.close();
        }

    What is wrong with this piece of code?

  • BackgroundWorker not working with TeamCity NUnit runner

    - by Catalin DICU
    I'm using NUnit to test view models in a WPF 3.5 application, and I'm using the BackgroundWorker class to execute asynchronous commands. The unit tests run fine with the NUnit runner or the ReSharper runner, but fail on a TeamCity 5.1 server. How it is implemented: I use a view-model property named IsBusy and set it to false on the BackgroundWorker.RunWorkerCompleted event. In my test I use this method to wait for the BackgroundWorker to finish:

        protected void WaitForBackgroundOperation(ViewModel viewModel)
        {
            int count = 0;
            while (viewModel.IsBusy)
            {
                RunBackgroundWorker();
                if (count++ >= 100)
                {
                    throw new Exception("Background operation too long");
                }
                Thread.Sleep(100);
            }
        }

        private static void RunBackgroundWorker()
        {
            Dispatcher.CurrentDispatcher.Invoke(DispatcherPriority.Background,
                new ThreadStart(delegate { }));
            System.Windows.Forms.Application.DoEvents();
        }

    Well, sometimes it works and sometimes it hangs the build. I suspect the Application.DoEvents() call, but I don't know why...
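
    For illustration, a minimal sketch of one alternative that avoids pumping messages with DoEvents: block the test on a wait handle signaled from RunWorkerCompleted (in the real test, the view model would signal where it sets IsBusy back to false). The test and work here are hypothetical stand-ins.

        using System;
        using System.ComponentModel;
        using System.Threading;
        using NUnit.Framework;

        [TestFixture]
        public class BackgroundWorkerWaitTests
        {
            [Test]
            public void Worker_signals_completion_without_DoEvents()
            {
                var worker = new BackgroundWorker();
                var done = new ManualResetEvent(false);

                worker.DoWork += (s, e) => Thread.Sleep(100);      // stand-in for the real work
                worker.RunWorkerCompleted += (s, e) => done.Set(); // signal instead of polling IsBusy

                worker.RunWorkerAsync();

                Assert.IsTrue(done.WaitOne(TimeSpan.FromSeconds(10)),
                              "Background operation too long");
            }
        }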

  • UnitOfWork & StructureMap & Desktop Application

    - by Afshin Gh
    When developing a web app with StructureMap, I normally use HybridHttpOrThreadLocalScoped for my UnitOfWork part (the unit of work starts per web request, and so on). What about a desktop app (Windows service, console, ...)? There is no session or request in a desktop app. How should I manage the UoW in this situation? Any good reference or article? I don't want to make it a singleton. What should I do?
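
    For illustration, a minimal sketch of one common approach (not an official StructureMap recipe): in a desktop process, scope the unit of work to a logical operation by injecting a factory and creating/disposing the UoW explicitly. The IUnitOfWork and OrderService names are hypothetical.

        using System;

        public interface IUnitOfWork : IDisposable
        {
            void Commit();
        }

        public class OrderService
        {
            private readonly Func<IUnitOfWork> _createUnitOfWork;

            // The container (or a hand-wired factory) supplies the Func.
            public OrderService(Func<IUnitOfWork> createUnitOfWork)
            {
                _createUnitOfWork = createUnitOfWork;
            }

            public void PlaceOrder()
            {
                // One unit of work per logical operation, disposed deterministically.
                using (var uow = _createUnitOfWork())
                {
                    // ... repository calls enlisted in this unit of work ...
                    uow.Commit();
                }
            }
        }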

  • Experiences with UI Automation and WPF

    - by soren.enemaerke
    We are developing a rather large WPF-based application and would like to include some automated UI testing in our test suite (which already contains a number of unit tests). The UI Automation framework from Microsoft sounds, at least in part, like a perfect fit for programmatically launching and interacting with the application in a test setup. However, I've struggled to find solid references for samples and experiences with the technology; the articles and small samples available on MSDN are not enough to convince me that it is a solid choice. So, does anybody have real-world experience using the UI Automation framework in their test suite? What are the caveats and the gotchas? Any best practices when writing test scripts? Can you "record and replay" to a scriptable format? How much should you facilitate the testing from the application? How did you incorporate it into the automated build? Should we be looking in another direction than the UI Automation framework? Feel free to post your experiences here or link to some good references I might have missed.
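
    For illustration, a minimal sketch of what a UI Automation client test can look like (assuming references to UIAutomationClient and UIAutomationTypes; MyApp.exe and the "OK" button name are placeholders):

        using System.Diagnostics;
        using System.Threading;
        using System.Windows.Automation;

        class UiAutomationSmokeTest
        {
            static void Main()
            {
                var process = Process.Start("MyApp.exe");
                Thread.Sleep(2000); // a real harness would poll until the main window appears

                // Find the application's top-level window by process id.
                var window = AutomationElement.RootElement.FindFirst(
                    TreeScope.Children,
                    new PropertyCondition(AutomationElement.ProcessIdProperty, process.Id));

                // Locate a button by its automation name and click it.
                var okButton = window.FindFirst(
                    TreeScope.Descendants,
                    new PropertyCondition(AutomationElement.NameProperty, "OK"));
                var invoke = (InvokePattern)okButton.GetCurrentPattern(InvokePattern.Pattern);
                invoke.Invoke();
            }
        }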

  • How can I run NUnit (Selenium Grid) tests in parallel?

    - by Benjamin Lee
    My current project uses NUnit for unit tests and to drive UATs written with Selenium. Developers normally run tests using ReSharper's test runner in VS.NET 2003, and our build box kicks them off via NAnt. We would like to run the UAT tests in parallel so that we can take advantage of Selenium Grid/RCs and they will run much faster. Does anyone have any thoughts on how this might be achieved, and/or best practices for running Selenium tests against multiple browser environments without having to write duplicate tests? Thank you.
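
    For illustration, a minimal sketch of one approach (assuming the UATs are tagged with an NUnit category per browser; UatTests.dll is a placeholder assembly name): launch one nunit-console process per category so the Selenium RCs execute in parallel.

        using System.Diagnostics;
        using System.Threading.Tasks;

        class ParallelUatRunner
        {
            static void Main()
            {
                var browsers = new[] { "Firefox", "IE" };

                // One runner process per browser category, run concurrently.
                Parallel.ForEach(browsers, browser =>
                {
                    var psi = new ProcessStartInfo("nunit-console.exe",
                        "UatTests.dll /include:" + browser)
                    {
                        UseShellExecute = false
                    };
                    using (var runner = Process.Start(psi))
                    {
                        runner.WaitForExit();
                    }
                });
            }
        }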

  • Anyone have an XSL to convert Boost.Test XML logs to a presentable format?

    - by Stuart Lange
    I have some C++ projects running through CruiseControl.NET. As a part of the build process, we compile and run Boost.Test unit test suites, which I have configured to dump XML log files. While the format is similar to JUnit/NUnit's, it's not quite the same (and lacks some information), so CruiseControl.NET is unable to pick the logs up. I am wondering if anyone has created (or knows of) an existing XSL transform that will convert Boost.Test results to JUnit/NUnit format or, alternatively, directly to a presentable (HTML) format. Thanks!

  • LINQ to SQL add/update in different methods with different DataContexts

    - by Kurresmack
    I have two methods, Add() and Update(), which both create a DataContext and return the object created/updated. In my unit test I first call Add(), do some stuff, and then call Update(). The problem is that Update() fails with the exception:

        System.Data.Linq.DuplicateKeyException: Cannot add an entity with a key
        that is already in use.

    I understand the issue but want to know what to do about it. I've read a bit about how to handle multiple DataContext objects, and from what I've heard this way is OK. I understand that the entity is still attached to the DataContext in Add(), but I need to find out how to solve this. Thanks in advance.
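
    For illustration, a minimal sketch of one common fix (Customer and MyDataContext are placeholders for the generated LINQ to SQL entity and context): attach the detached entity to the new DataContext as modified instead of re-inserting it.

        using System.Data.Linq;

        public class CustomerRepository
        {
            public Customer Update(Customer customer)
            {
                using (var context = new MyDataContext())
                {
                    // Attach as modified rather than calling InsertOnSubmit again;
                    // Attach(entity, true) needs a timestamp/rowversion column or
                    // UpdateCheck.Never on the mapped members.
                    context.Customers.Attach(customer, true);
                    context.SubmitChanges();
                    return customer;
                }
            }
        }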

  • Handling changes in an interface shared across multiple solutions?

    - by Anthony Mastrean
    Our "main" solution is the development code: shared libraries, services, UI projects, etc. The other solution is an integration and automated tests solution. It references several of the development projects. The reason it is separate is to avoid interference with the development solution's unit test VSMDI file. And to allow us to play with different execution methods (other test runners, like Gallio or StoryTeller) without interfering with the development solution. Recently, an interface changed in the development solution, one of our test mocks implemented that interface. But, it was not updated because there was no warning at compile time because it was in another solution. This broke our CI build. Does anyone have a similar setup? How do you handle these issues, do you follow a strict procedure or is there some kind of technical answer?

  • Viewing Code Coverage Results outside of Visual Studio

    - by Wonchance
    I've got some unit tests and some code coverage data. Now I'd like to be able to view that code coverage data outside of Visual Studio, say in a web browser. But when I export the code coverage to an XML file, I can't do anything with it. Are there readers out there for this? Do I have to write an XML parser and then display the data how I want it (which seems like a waste, since Visual Studio already does this)? It seems kind of silly to have to take a screenshot of my code coverage results as my "report". Suggestions?

  • Eclipse refresh taking too long

    - by Nash0
    I am doing TDD on a large Java project in Eclipse and am finding it frustrating, because every time I run a test I have to wait 30+ seconds for Eclipse to compile and refresh. I estimate that 80%+ of that time is spent refreshing. Is there a way I can drastically reduce the amount of refreshing it is doing? I have looked at several other similar questions but could not see anything that helps. One way I reduced the compile/refresh time was to split the unit tests and code into separate projects; there are 4,700 classes in the src project and 300 in the tests. I am running Eclipse 3.5.1 on Java 1.6.0_17-b04 (eclipse.vm). My computer is running Windows XP with 3.1 GB of usable RAM. The only plugin I have installed is Subclipse.

  • Problems running NUnit

    - by prosseek
    I have NUnit installed in this directory:

        C:\Program Files\NUnit 2.5.5\bin\net-2.0

    When I try to run my unit tests (mut.dll) from some random directory, I get the following error, and I have to copy mut.dll under the NUnit directory in order to run it:

        ProcessModel: Default
        DomainUsage: Single
        Execution Runtime: net-2.0
        Could not load file or assembly 'nunit.framework, Version=2.5.5.10112,
        Culture=neutral, PublicKeyToken=96d09a1eb7f44a77' or one of its
        dependencies. The system cannot find the file specified.

    What's wrong? Is there anything I have to configure to run NUnit from any directory?

  • How much of Grails GORM to test?

    - by Lloyd Meinholz
    Is there a "best practice" or defacto standard with how much of the GORM functionality one should test in the unit/functional tests? My take is that one should probably do most of the domain testing as functional tests so that you get the full grails environment. But what do you test? Inserts, updates, deletes? Do you test constraints even though they were probably more thoroughly tested by the grails release? Or do you just assume that GORM does what it is supposed to do and move to other parts of the application?

  • C++ linking issue in Visual Studio 2008 when cross-linking different projects in the same solution

    - by Luís Guilherme
    I'm using the Google Test framework to set up some unit tests. I have three projects in my solution:

    - FN (my project)
    - FN_test (my tests)
    - gtest (Google Test framework)

    I set FN_test to have FN and gtest as references (dependencies), and then I think I'm ready to set up my tests (I've already set every project to /MTd; not doing this was leading me to linker errors before). In particular, I define a class called Embark in FN that I would like to test from FN_test. So far, so good. Thus I write a class called EmbarkTest using googletest, declare an Embark* member, and write inside the constructor:

        EmbarkTest() {
            e = new Embark(900, 2010);
        }

    Then, F7 pressed, I get the following:

        1>FN_test.obj : error LNK2019: unresolved external symbol
          "public: __thiscall Embark::Embark(int,int)" (??0Embark@@QAE@HH@Z)
          referenced in function "protected: __thiscall EmbarkTest::EmbarkTest(void)"
          (??0EmbarkTest@@IAE@XZ)
        1>D:\Users\lg\Product\code\FN\Debug\FN_test.exe : fatal error LNK1120:
          1 unresolved externals

    Does someone know what I have done wrong and/or what I can do to resolve this?

  • UnitTest++ creates cmd windows, which can't be closed

    - by Simon
    Hello, I have a setup for using UnitTest++ like this in VS2008. Sometimes the cmd window that shows the console output of the unit tests just hangs. I can move the window, resize it and so on, but I'm unable to close it. I see the window in the Applications tab of the Task Manager, but not in the Processes tab, and "Switch to process" doesn't work either. Stopping debugging or closing VS is no help either; it seems VS has lost control over this window. If this cmd window is lost, I'm unable to shut down my computer, which is pretty annoying. Any hints?

  • How should Moq's VerifySet be called in VB.NET?

    - by Bender
    I am trying to test that a property has been set, but when I write this as a unit test:

        moqFeed.VerifySet(Function(m) m.RowAdded = "Row Added")

    Moq complains that "Expression is not a property setter invocation". My complete code is:

        Imports Gallio.Framework
        Imports MbUnit.Framework
        Imports Moq

        <TestFixture()>
        Public Class GUI_FeedPresenter_Test
            Private moqFeed As Moq.Mock(Of IFeedView)

            <SetUp()>
            Sub Setup()
                moqFeed = New Mock(Of IFeedView)
            End Sub

            <Test()>
            Public Sub New_Presenter()
                Dim pres = New FeedPresenter(moqFeed.Object)
                moqFeed.VerifySet(Function(m) m.RowAdded = "Row Added")
            End Sub
        End Class

        Public Interface IFeedView
            Property RowAdded() As String
        End Interface

        Public Class FeedPresenter
            Private _FeedView As IFeedView

            Public Sub New(ByVal feedView As IFeedView)
                _FeedView = feedView
                _FeedView.RowAdded = "Row Added"
            End Sub
        End Class

    I can't find any examples of Moq in VB; I would be grateful for any examples.

  • Setting up gcov in Xcode 3.1

    - by Algorithmic
    I'm trying to set up my Xcode project to be instrumented with gcov so I can determine the code coverage of my unit tests. All of the documentation I find online talks about settings that I don't find in Xcode 3.1, though. An example:

        To work with CoverStory, first you need to set up your target to work
        with gcov. This requires turning on "Instrument Program Flow" and
        "Generate Test Coverage Files", and linking with the gcov library.
        (Using CoverStory)

    The closest thing I can find to "Instrument Program Flow" and "Generate Test Coverage Files" in my build settings is "Generate Profiling Code", which doesn't appear to do what I want. Am I looking in the wrong place for these settings, or are all of the examples I'm finding online stale?

  • Passing an instantiated class to a concrete class resolved by Castle Windsor

    - by Tr1stan
    I have a system that I'm using to test some new architecture. I have the following setup (in ASP.NET MVC 2, C#):

        View < Controller < Service < Repository < DB

    I'm using Castle Windsor as my DI (IoC) container, and this is working just fine in both the Service and Repository layers. However, I'm now at a point where I would like to pass an Entity Framework context (DatabaseNameEntity) to the constructor of the Service, and then on to the Repository, so that I have something similar to a unit-of-work pattern per request (this feels like what I'm trying to achieve, anyway), and I'm having trouble working out how this can be done using Castle Windsor. Am I going off on a silly tangent? Any pointers appreciated.
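
    For illustration, a minimal sketch of one way to express this with Windsor's fluent registration API (assuming Windsor 3; DatabaseNameEntity, IMyService, and MyService are placeholders): give the context a per-web-request lifestyle so the Service and Repository resolved for one request share the same instance.

        using Castle.MicroKernel.Registration;
        using Castle.Windsor;

        public static class ContainerBootstrapper
        {
            public static IWindsorContainer Build()
            {
                var container = new WindsorContainer();

                container.Register(
                    // One context per HTTP request; this lifestyle also needs
                    // Castle's PerWebRequestLifestyleModule in web.config.
                    Component.For<DatabaseNameEntity>()
                             .LifestylePerWebRequest(),
                    Component.For<IMyService>()
                             .ImplementedBy<MyService>()
                             .LifestyleTransient());

                return container;
            }
        }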

  • IronPython/C# Float data comparison

    - by Gopalakrishnan Subramani
    We have a WPF-based application using the Prism framework. We have embedded IronPython and use the Python unit test framework to automate our application's GUI testing. It works very well, but we have difficulties comparing two float numbers. Example:

        class MyClass
        {
            public object Value { get; set; }

            public MyClass()
            {
                Value = (float)12.345;
            }
        }

    In IronPython, when I compare the MyClass instance's Value property with the Python float value 12.345, it says they are not equal. This statement raises an assertion error:

        self.assertEqual(myClassInstance.Value, 12.345)

    This statement works fine:

        self.assertEqual(float(myClassInstance.Value.ToString()), 12.345)

    When I check the type of myClassInstance.Value, it returns Single in Python, whereas type(12.345) returns float. How do I handle the C# float-to-Python comparison without explicit conversions?
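
    For illustration, a minimal C# sketch of why the straight comparison fails: a Python float is a double, and widening (float)12.345 back to double does not reproduce the double literal 12.345.

        using System;

        class FloatWideningDemo
        {
            static void Main()
            {
                float single = (float)12.345;  // what MyClass stores
                double widened = single;       // what the equality check compares

                Console.WriteLine(widened);           // ~12.3450002670288, not 12.345
                Console.WriteLine(widened == 12.345); // False
            }
        }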

  • Simple object creation with DIY-DI?

    - by Runcible
    I recently ran across this great article by Chad Perry entitled "DIY-DI", or "Do-It-Yourself Dependency Injection". I'm in a position where I'm not yet ready to use an IoC framework, but I want to head in that direction, and it seems like DIY-DI is a good first step. However, after reading the article, I'm still a little confused about object creation. Here's a simple example. Using manual constructor dependency injection (not DIY-DI), this is how one must construct a Hotel object:

        PowerGrid powerGrid;     // only one in the entire application
        WaterSupply waterSupply; // only one in the entire application
        Staff staff;
        Rooms rooms;
        Hotel hotel(staff, rooms, powerGrid, waterSupply);

    Creating all of these dependency objects makes it difficult to construct the Hotel object in isolation, which means that writing unit tests for Hotel will be difficult. Does using DIY-DI make it easier? What advantage does DIY-DI provide over manual constructor dependency injection?
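
    For illustration, a minimal C# sketch of the DIY-DI idea (hypothetical types mirroring the example above): the application builds the graph through one hand-written injector function, while a test bypasses the injector and calls the Hotel constructor directly with fakes.

        public class PowerGrid { }
        public class WaterSupply { }
        public class Staff { }
        public class Rooms { }

        public class Hotel
        {
            public Hotel(Staff staff, Rooms rooms, PowerGrid grid, WaterSupply water) { }
        }

        public static class Injectors
        {
            // The only place that knows how to assemble a Hotel; tests don't use it.
            public static Hotel InjectHotel(PowerGrid grid, WaterSupply water)
            {
                return new Hotel(new Staff(), new Rooms(), grid, water);
            }
        }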

  • Managing library dependencies with Boost.Build and C++

    - by user931794
    I want to develop a project that can be built on a bunch of different platforms. The project code will be in C++; what's the best way to manage libraries? I plan on using bjam as the build system, because I'm going to be depending on Boost and its unit testing framework as well. The two dependent libraries are Boost itself and FLTK. The possibilities that come to mind for library management are:

    1. include build artifacts (binaries) and headers for all supported platforms in-tree
    2. include complete source for all dependent libraries in-tree, and somehow script them as dependencies
    3. a combination of 1 and 2, like node.js does with v8
    4. inform the user that they need to build the libraries themselves and then have them on the PATH or in some special directory, like libcurl does with its dependencies

    What is the best approach here? The project will probably not grow beyond a few thousand lines over the next six months, but I want to make the right choice now so that I don't have to come back and switch everything around later.

  • CSharpCodeProvider: Why is the result of compilation out of context when debugging?

    - by epitka
    I have the following code snippet that I use to compile a class at run time:

        // now compile the runner
        var codeProvider = new CSharpCodeProvider(
            new Dictionary<string, string>() { { "CompilerVersion", "v3.5" } });

        string[] references = new string[] { "System.dll", "System.Core.dll", "System.Core.dll" };

        CompilerParameters parameters = new CompilerParameters();
        parameters.ReferencedAssemblies.AddRange(references);
        parameters.OutputAssembly = "CGRunner";
        parameters.GenerateInMemory = true;
        parameters.TreatWarningsAsErrors = true;

        CompilerResults result = codeProvider.CompileAssemblyFromSource(parameters, template);

    Whenever I step through the code to debug the unit test and try to see the value of "result", I get an error that the name "result" does not exist in the current context. Why?

  • BigDecimal precision not persisted with javax.persistence annotations

    - by dkaczynski
    I am using the javax.persistence API and Hibernate to create annotations and persist entities and their attributes in an Oracle 11g Express database. I have the following attribute in an entity:

        @Column(precision = 12, scale = 9)
        private BigDecimal weightedScore;

    The goal is to persist a decimal value with a maximum of 12 digits, a maximum of 9 of them to the right of the decimal point. After calculating the weightedScore, the result is 0.1234, but once I commit the entity to the Oracle database, the value displays as 0.12. I can see this either by using an EntityManager object to query the entry or by viewing it directly in the Oracle Application Express (Apex) interface in a web browser. How should I annotate my BigDecimal attribute so that the precision is persisted correctly? Note: we use an in-memory HSQL database to run our unit tests, and it does not exhibit the precision issue, with or without the @Column annotation.
