Search Results

Search found 10078 results on 404 pages for 'smoke testing'.

Page 73/404

  • Ruby on rails generates tests for you. Do those give a false sense of a safety net?

    - by Hamish Grubijan
    Disclaimer: I have not used RoR, and I have not generated tests. But, I will still dare to post this question. Quality Assurance is theoretically impossible to get 100% right in general (Undecidable problem ;), and it is hard in practice. So many developers do not understand that writing good automated tests is an art, and it is hard. When I hear that RoR generates the tests for you, I get very skeptical. It cannot be that easy. Testing is a general concept; it applies across languages. So does the concept of code contracts, it is similar for languages that support it. Code contracts do not generate themselves. The programmer must add the requirements and the promises manually, after doing some thinking about the algorithm / function. If a human gets it wrong, then the tools will propagate the error. Similarly with testing - it takes human judgement about what should happen. Tests do not write themselves, and we are far from the day when a business analyst can just have a conversation with a computer and tell it informally what the requirements are and have the computer do all the work. There is no magic ... how can RoR generate good tests for you? Please shed some light on this. Opinions are ok, for this is a community wiki. Thanks!

    Read the article

  • How do I run JUnit from NetBeans?

    - by FarmBoy
    I've been trying to understand how to start writing and running JUnit tests. When I'm reading this article: http://junit.sourceforge.net/doc/testinfected/testing.htm I get to the middle of the page and they write, "JUnit comes with a graphical interface to run tests. Type the name of your test class in the field at the top of the window. Press the Run button." I don't know how to launch this program. I don't even know which package it is in, or how you run a library class from an IDE. Being stuck, I tried this NetBeans tutorial: http://www.netbeans.org/kb/docs/java/junit-intro.html It seemed to be going OK, but then I noticed that the menu options in this tutorial for testing a Java Class Library are different from those for a regular Java application, or for a Java web app. So the instructions in this tutorial don't apply generally. I'm using NetBeans 6.7, and I've imported JUnit 4.5 into the libraries folder. What would be the normal way to run JUnit, after having written the tests? The JUnit FAQ describes the process from the console, and I'm willing to do that if that is what is typical, but given all that I can do inside NetBeans, it seems hard to believe that there isn't an easier way. Thanks much. EDIT: If I right-click on the project and select "Test", the output is: init: deps-jar: compile: compile-test: test-report: test: BUILD SUCCESSFUL (total time: 0 seconds) This doesn't strike me as the desired output of a test, especially since it doesn't change whether the test condition is true or not. Any ideas?
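
    For what it's worth, the graphical runner that the testinfected article mentions (junit.swingui.TestRunner) shipped with JUnit 3.x and is no longer part of JUnit 4, which is probably why it can't be found. A minimal sketch of the console route the FAQ describes, assuming JUnit 4.x is on the classpath; the class and method names here are made up, not from the question:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class MyFirstTest {
            @Test
            public void twoTimesThreeIsSix() {
                assertEquals(6, 2 * 3);  // trivial assertion, just to see the runner work
            }
        }

        // then, from a console (';' is the Windows classpath separator, use ':' elsewhere):
        //   javac -cp junit-4.5.jar MyFirstTest.java
        //   java -cp junit-4.5.jar;. org.junit.runner.JUnitCore MyFirstTest

    org.junit.runner.JUnitCore prints a dot per test plus failures, so a passing or failing run is visible regardless of what the IDE's build script reports.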

    Read the article

  • Will IOC solve our problems?

    - by user127954
    Just trying to introduce unit testing into a brownfield-type system. Be aware I'm relatively new to the unit testing world. It's going to be a gradual migration, of course, because there are just so many areas of pain. The current problem I'm trying to solve is that we followed a lot of bad practices from our VB6 days and in the conversion of our app to .NET. We have LOTS AND LOTS of shared/static functions which call other shared functions, and those call others, and so on. Sometimes dependencies are passed in as parameters and sometimes they are just newed up within the calling function. I've already instructed our developers to stop creating shared functions and instead create instance members and only use those instance members off of interfaces, but that doesn't alleviate the current situation. So you must recursively pass in each and every dependency at the top layer for each function in your code path, and method signatures are turning into a mess. I'm hoping this is something that IoC will fix. Currently we are using NUnit/Moq and I'm starting to investigate StructureMap. So far I understand that you pretty much tell StructureMap that for interface x you want to default to the concrete class y: ObjectFactory.Initialize(x=>{x.ForRequestedType<IInterface>().TheDefaultIsConcreteType<MyClass>()}); Then at runtime: var mytype = ObjectFactory.GetInstance<IInterface>(); and the IoC container will initialize the correct type for you. Not sure yet how to swap a fake in for the concrete type, but hopefully that's simple. Again, will IoC solve the problems I was talking about above? Is there a specific IoC framework that will do it better than StructureMap, or can they all handle this situation? Any help would be much appreciated.

    Read the article

  • Externalizing JUnit stub objects

    - by Ajay
    Hi! In my project we created stub objects for JUnit tests in Java itself (factories). However, we have to externalize these stubs. After looking at a number of serializers/deserializers, we settled on XStream to serialize and deserialize these stub objects. XStream works like a charm; it's pretty good at what it claims to be. Previously, we had a single factory class, say AFactory, which produced all the stubs needed for the different test cases. Now, when externalizing each of the stubs generated, we hit a road block: we had to create one XML file for each stub produced by the factory. For example, public final class AFactory{ public static A createStub1(){ /*Code here */} public static A createStub2(){ /*Code here */} public static A createStub3(){ /*Code here */} } Now, when trying to move these stubs to external files, we had to create one XML file per stub (A-stub1.xml, A-stub2.xml and A-stub3.xml). The problem with this approach is that it leads to a proliferation of XML stub files. I was thinking: how about keeping all the stubs related to a single bean class in a single XML file? <?xml version="1.0"?> <stubs class="A"> <stub id="stub1"> <!-- Here comes the externalized xml stub representation --> </stub> <stub id="stub2"> </stub> </stubs> Is there a framework which allows you to keep all the stubs in XML representation in a single XML file, as above? Or what do you suggest the right approach would be?
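
    As one illustration of the idea (not an existing framework): nothing stops you from wrapping the per-stub XStream payloads in your own envelope and splitting them back out with the JDK's DOM API before handing each one to XStream. The StubRepository class, its loadStubs method and the A-stubs.xml file name below are invented for the sketch:

        import java.io.File;
        import java.io.StringWriter;
        import java.util.HashMap;
        import java.util.Map;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.OutputKeys;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;
        import com.thoughtworks.xstream.XStream;

        public class StubRepository {
            /** Reads <stubs class="A"><stub id="...">...</stub>...</stubs> and returns id -> deserialized bean. */
            public static Map<String, Object> loadStubs(File stubFile) throws Exception {
                Map<String, Object> stubs = new HashMap<String, Object>();
                XStream xstream = new XStream();
                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(stubFile);
                Transformer serializer = TransformerFactory.newInstance().newTransformer();
                serializer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
                NodeList stubNodes = doc.getElementsByTagName("stub");
                for (int i = 0; i < stubNodes.getLength(); i++) {
                    Element stub = (Element) stubNodes.item(i);
                    // the first child element of <stub> is the usual XStream representation of the bean
                    for (Node n = stub.getFirstChild(); n != null; n = n.getNextSibling()) {
                        if (n.getNodeType() == Node.ELEMENT_NODE) {
                            StringWriter payload = new StringWriter();
                            serializer.transform(new DOMSource(n), new StreamResult(payload));
                            stubs.put(stub.getAttribute("id"), xstream.fromXML(payload.toString()));
                            break;
                        }
                    }
                }
                return stubs;
            }
        }

    A test could then do something like A stub1 = (A) StubRepository.loadStubs(new File("A-stubs.xml")).get("stub1"); so all stubs for one bean class live in one file.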

    Read the article

  • java.lang.IllegalStateException: missing behavior definition for the preceding method call getMessage

    - by user362199
    Hi All, I'm using EasyMock (version 2.4) and TestNG for writing unit tests. I have the following scenario, and I cannot change the way the class hierarchy is defined. I'm testing ClassB, which extends ClassA. ClassB looks like this: public class ClassB extends ClassA { public ClassB() { super("title"); } @Override public String getDisplayName() { return ClientMessages.getMessages("ClassB.title"); } } ClassA code: public abstract class ClassA { private String title; public ClassA(String title) { this.title = ClientMessages.getMessages(title); } public String getDisplayName() { return this.title; } } ClientMessages class code: public class ClientMessages { private static MessageResourse messageResourse; public ClientMessages(MessageResourse messageResourse) { this.messageResourse = messageResourse; } public static String getMessages(String code) { return messageResourse.getMessage(code); } } MessageResourse class code: public class MessageResourse { public String getMessage(String code) { return code; } } Testing ClassB: import static org.easymock.classextension.EasyMock.createMock; import org.easymock.classextension.EasyMock; import org.testng.Assert; import org.testng.annotations.Test; public class ClassBTest { private MessageResourse mockMessageResourse = createMock(MessageResourse.class); private ClassB classToTest; private ClientMessages clientMessages; @Test public void testGetDisplayName() { EasyMock.expect(mockMessageResourse.getMessage("ClassB.title")).andReturn("someTitle"); clientMessages = new ClientMessages(mockMessageResourse); classToTest = new ClassB(); Assert.assertEquals("someTitle", classToTest.getDisplayName()); EasyMock.replay(mockMessageResourse); } } When I run this test I get the following exception: java.lang.IllegalStateException: missing behavior definition for the preceding method call getMessage("title") While debugging, what I found is that it's not considering the mocked method call mockMessageResourse.getMessage("ClassB.title"), as getMessage has already been called from the constructor (during ClassB object creation). Can anyone please help me with how to test in this case? Thanks.
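
    Not an authoritative answer, but that exception usually means the mock was called while it was still in record mode: here the call happens inside ClassB's constructor (super("title") ends up in ClientMessages.getMessages("title")) before replay() has been reached. A possible reordering of the same test, under the assumption that the static messageResourse field is wired to the mock before ClassB is constructed, looks like this:

        @Test
        public void testGetDisplayName() {
            // record every call the mock will receive, including the one made
            // from ClassA's constructor via super("title")
            EasyMock.expect(mockMessageResourse.getMessage("title")).andReturn("title");
            EasyMock.expect(mockMessageResourse.getMessage("ClassB.title")).andReturn("someTitle");
            EasyMock.replay(mockMessageResourse);   // switch to replay mode *before* the mock is used

            clientMessages = new ClientMessages(mockMessageResourse);
            classToTest = new ClassB();

            Assert.assertEquals("someTitle", classToTest.getDisplayName());
            EasyMock.verify(mockMessageResourse);
        }

    The key point is simply that every expectation is recorded and replay() is called before any production code touches the mock; verify() at the end then checks that both recorded calls actually happened.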

    Read the article

  • Basic jUnit Questions

    - by Epitaph
    I was testing a String multiplier class with a multiply() method that takes 2 numbers as inputs (as String) and returns the result number (as String): public String multiply(String num1, String num2); I have done the implementation and created a test class with the following test cases involving the input String parameter: 1) valid numbers 2) characters 3) special symbols 4) empty string 5) null value 6) 0 7) negative number 8) float 9) boundary values 10) numbers that are valid but whose product is out of range 11) numbers with a + sign (+23) 1) I'd like to know if "each and every" assertEquals() should be in its own test method? Or can I group similar test cases, like testInvalidArguments(), to contain all asserts involving invalid characters, since ALL of them throw the same NumberFormatException? 2) If testing an input value like a character ("a"), do I need to include test cases for ALL scenarios? "a" as the first argument, "a" as the second argument, "a" and "b" as the 2 arguments 3) As per my understanding, the benefit of these unit tests is to find out the cases where the input from a user might fail and result in an exception. And then we can give the user a meaningful message (asking them to provide valid input) instead of an exception. Is that correct? And is it the only benefit? 4) Are the 11 test cases mentioned above sufficient? Did I miss something? Did I overdo it? When is enough enough? 5) Following from the above point, have I successfully tested the multiply() method?
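
    As an illustration of one common convention for question 1 (one behaviour per test method, expected exceptions declared on the annotation rather than asserted), here is a sketch assuming JUnit 4 and assuming multiply() throws NumberFormatException for non-numeric input; the class and method names are invented:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class StringMultiplierTest {

            private final StringMultiplier multiplier = new StringMultiplier();

            @Test
            public void multipliesTwoValidNumbers() {
                assertEquals("115", multiplier.multiply("5", "23"));
            }

            // invalid input: one small test per argument position keeps a failure easy to read
            @Test(expected = NumberFormatException.class)
            public void rejectsNonNumericFirstArgument() {
                multiplier.multiply("a", "23");
            }

            @Test(expected = NumberFormatException.class)
            public void rejectsNonNumericSecondArgument() {
                multiplier.multiply("23", "a");
            }
        }

    Grouping several invalid inputs into one method also works, but then the first failing case hides the rest, which is why many people prefer the one-case-per-method style above.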

    Read the article

  • Defining jUnit Test cases Correctly

    - by Epitaph
    I am new to Unit Testing and therefore wanted to do some practical exercise to get familiar with the jUnit framework. I created a program that implements a String multiplier public String multiply(String number1, String number2) In order to test the multiplier method, I created a test suite consisting of the following test cases (with all the needed integer parsing, etc) @Test public class MultiplierTest { Multiplier multiplier = new Multiplier(); // Test for 2 positive integers assertEquals("Result", 5, multiplier.multiply("5", "1")); // Test for 1 positive integer and 0 assertEquals("Result", 0, multiplier.multiply("5", "0")); // Test for 1 positive and 1 negative integer assertEquals("Result", -1, multiplier.multiply("-1", "1")); // Test for 2 negative integers assertEquals("Result", 10, multiplier.multiply("-5", "-2")); // Test for 1 positive integer and 1 non number assertEquals("Result", , multiplier.multiply("x", "1")); // Test for 1 positive integer and 1 empty field assertEquals("Result", , multiplier.multiply("5", "")); // Test for 2 empty fields assertEquals("Result", , multiplier.multiply("", "")); In a similar fashion, I can create test cases involving boundary cases (considering numbers are int values) or even imaginary values. 1) But, what should be the expected value for the last 3 test cases above? (a special number indicating error?) 2) What additional test cases did I miss? 3) Is assertEquals() method enough for testing the multiplier method or do I need other methods like assertTrue(), assertFalse(), assertSame() etc 4) Is this the RIGHT way to go about developing test cases? How am I "exactly" benefiting from this exercise? 5)What should be the ideal way to test the multiplier method? I am pretty clueless here. If anyone can help answer these queries I'd greatly appreciate it. Thank you.

    Read the article

  • TDD test data loading methods

    - by Dave Hanson
    I am a TDD newb and I would like to figure out how to test the following code. I am trying to write my tests first, but I am having trouble creating a test that touches my DataAccessor; I can't figure out how to fake it. I've done the "extend the Shipment class and override the Load() method" trick to keep testing the object, but I feel as though I end up unit testing my mock objects/stubs and not my real objects. I thought in TDD the unit tests were supposed to hit ALL of the methods on the object; however, I can never seem to test that Load() code, only the overridden mock Load. My task was to write an object that contains a list of orders based off of a shipment number, so I have an object that loads itself from the database. public class Shipment { //member variables protected List<string> _listOfOrders = new List<string>(); protected string _id = ""; //public properties public List<string> ListOrders { get{ return _listOfOrders; } } public Shipment(string id) { _id = id; Load(); } //PROBLEM METHOD // whenever I write code that needs this Shipment object, this method tries // to hit the DB and fubars my tests // the only way to get around it is to have all my tests run on a fake Shipment object. protected void Load() { _listOfOrders = DataAccessor.GetOrders(_id); } } I created a fake Shipment class to test the rest of the class's methods, but I can't ever test the real Load method without having an actual DB connection: public class FakeShipment : Shipment { protected new void Load() { _listOfOrders = new List<string>(); } } Any thoughts? Please advise. Dave

    Read the article

  • asp.net mvc - How to create fake test objects quickly and efficiently

    - by Simon G
    Hi, I'm currently testing the controller in my MVC app and I'm creating a fake repository for testing. However, I seem to be writing more code and spending more time on the fakes than I do on the actual repositories. Is this right? The code I have is as follows: Controller public partial class SomeController : Controller { IRepository repository; public SomeController(IRepository rep) { repository = rep; } public virtual ActionResult Index() { // Some logic var model = repository.GetSomething(); return View(model); } } IRepository public interface IRepository { Something GetSomething(); } Fake Repository public class FakeRepository : IRepository { private List<Something> somethingList; public FakeRepository(List<Something> somethings) { somethingList = somethings; } public Something GetSomething() { return somethingList; } } Fake Data class FakeSomethingData { public static List<Something> CreateSomethingData() { var somethings = new List<Something>(); for (int i = 0; i < 100; i++) { somethings.Add(new Something { value1 = String.Format("value{0}", i), value2 = String.Format("value{0}", i), value3 = String.Format("value{0}", i) }); } return somethings; } } Actual Test [TestClass] public class SomethingControllerTest { SomeController CreateSomethingController() { var testData = FakeSomethingData.CreateSomethingData(); var repository = new FakeRepository(testData); SomeController controller = new SomeController(repository); return controller; } [TestMethod] public void SomeTest() { // Arrange var controller = CreateSomethingController(); // Act // Some test here // Assert } } All this seems to be a lot of extra code, especially as I have more than one repository. Is there a more efficient way of doing this? Maybe using mocks? Thanks

    Read the article

  • Web Automation Tool

    - by Aaron
    I've realized I need a full-fledged browser automation tool for testing user interactions with our JavaScript widget library. I was using qunit, starting with unit testing, and then I unwisely started incorporating more and more functional tests. That was a bad idea: trying to simulate a lot of user actions with JavaScript. The timing issues have gotten out of control and have made the suite too brittle. Now I spend more time fixing the tests than I do developing. Is it possible to find a browser automation tool that works in: Windows XP: IE6,7,8, FF3 OSX: Safari, FF3 ? I've looked into SeleniumIDE and RC, but there seem to be some IE8 problems. I've also seen some things about Google's WebDriver, which confusingly seems to work with Selenium. Our organization has licenses for IBM's Rational Functional Tester, but I don't think that will work on the Mac. The idea is to try to run tests on all the browsers our organization supports. Doable? Are my requirements unrealistic? Any recommendations as far as software to try? Thanks!

    Read the article

  • How can I test this SQL Server performance Utility?

    - by Martin Smith
    As part of my MSc I need to do a three month project later this year. I have decided to do something which will likely be useful for me in the workplace and spend the time getting to understand SQL Server internals. The deliverable for this project will be a performance advisor looking at a variety of different rules. Some static such as finding redundant indexes, some more dynamic such as using XEvents to find outlying invocations of stored procedure execution times when certain parameters are passed. I am struggling to come up with a good way of testing this though. I can obviously design a "bad" database and a synthetic workload that my tool will pick up issues on but I also need to demonstrate that it has real world utility. Looking at the self tuning database literature it is common to use TPC benchmarks but I've had a look at the TPCC site and it looks very time consuming to implement and not that good a fit to my project's testing needs in any event (I would still be able to "rig" it by the decisions I made on indexing or physical architecture). Plan A would be to find willing beta tester(s) but in the event that isn't possible I will need a fallback plan. The best idea I have come up with so far is to use the various MS sample applications as examples of real world applications. e.g. http://msftdpprodsamples.codeplex.com/ http://www.asp.net/community/projects/ Does anyone have any better suggestions?

    Read the article

  • Why not lump all service classes into a Factory method (instead of injecting interfaces)?

    - by Andrew
    We are building an ASP.NET project, and encapsulating all of our business logic in service classes. Some is in the domain objects, but generally those are rather anemic (due to the ORM we are using, that won't change). To better enable unit testing, we define interfaces for each service and utilize D.I.. E.g. here are a couple of the interfaces: IEmployeeService IDepartmentService IOrderService ... All of the methods in these services are basically groups of tasks, and the classes contain no private member variables (other than references to the dependent services). Before we worried about Unit Testing, we'd just declare all these classes as static and have them call each other directly. Now we'll set up the class like this if the service depends on other services: public EmployeeService : IEmployeeService { private readonly IOrderService _orderSvc; private readonly IDepartmentService _deptSvc; private readonly IEmployeeRepository _empRep; public EmployeeService(IOrderService orderSvc , IDepartmentService deptSvc , IEmployeeRepository empRep) { _orderSvc = orderSvc; _deptSvc = deptSvc; _empRep = empRep; } //methods down here } This really isn't usually a problem, but I wonder why not set up a factory class that we pass around instead? i.e. public ServiceFactory { virtual IEmployeeService GetEmployeeService(); virtual IDepartmentService GetDepartmentService(); virtual IOrderService GetOrderService(); } Then instead of calling: _orderSvc.CalcOrderTotal(orderId) we'd call _svcFactory.GetOrderService.CalcOrderTotal(orderid) What's the downfall of this method? It's still testable, it still allows us to use D.I. (and handle external dependencies like database contexts and e-mail senders via D.I. within and outside the factory), and it eliminates a lot of D.I. setup and consolidates dependencies more. Thanks for your thoughts!

    Read the article

  • N-tier architecture and unit tests (using Java)

    - by Alexandre FILLATRE
    Hi there, I'd like to have your expert explanations about an architectural question. Imagine a Spring MVC webapp with the validation API (JSR 303). So for a request, I have a controller that handles the request, then passes it to the service layer, which passes it to the DAO layer. Here's my question: at which layer should validation occur, and how? My thought is that the controller has to handle basic validation (are mandatory fields empty? Is the field length OK? etc.). Then the service layer can do some trickier stuff that involves other objects. The DAO does no validation at all. BUT, if I want to implement some unit testing (i.e. test layers below the service, not the controllers), I'll end up with unexpected behavior, because some validations should have been done in the controller layer, and since those tests don't exercise it, there is a problem. What is the best way to deal with this? I know there is no universal answer, but your personal experience is very welcome. Thanks a lot. Regards.
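
    One hedged illustration of a common split, assuming a JSR 303 provider such as Hibernate Validator is on the classpath; the class names below are invented. The controller relies on the declarative constraints via @Valid, while the service re-runs the same Validator so that unit tests which bypass the controller still see the basic checks:

        import java.util.Set;
        import javax.validation.ConstraintViolation;
        import javax.validation.Validation;
        import javax.validation.Validator;
        import javax.validation.constraints.NotNull;
        import javax.validation.constraints.Size;

        public class CustomerForm {
            @NotNull
            @Size(min = 1, max = 50)
            private String name;

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        class CustomerService {
            private final Validator validator =
                    Validation.buildDefaultValidatorFactory().getValidator();

            public void save(CustomerForm form) {
                // re-check the declarative constraints so tests that skip the
                // controller still exercise them
                Set<ConstraintViolation<CustomerForm>> violations = validator.validate(form);
                if (!violations.isEmpty()) {
                    throw new IllegalArgumentException(violations.toString());
                }
                // trickier, cross-object business rules would go here
            }
        }

    Whether re-validating in the service is worth the duplication is a judgment call, but it does make tests of the lower layers independent of the controller.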

    Read the article

  • Best way to test a Delphi application

    - by Osama ALASSIRY
    I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI. My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually) and used to work. I have set up the application to use command-line switches to open up a specific form that could be tested, and I can create a set of values and clicks that need to be done. But I have a few questions before I do anything drastic... (and before purchasing anything) Is it worth it? Would this be a good way to test? The result of the test should be in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)? I would need to set up a test database to do all the automated testing; would there be an easy way to automate resetting the test DB, other than drop user cascade, create user, ..., impdp? Is there a way in TestComplete to specify command-line parameters for an exe? Does anybody have any similar experiences?

    Read the article

  • how often should the entire suite of a system's unit tests be run?

    - by gerryLowry
    Generally, I'm still very much a unit testing neophyte. BTW, you may also see this question on other forums like xUnit.net, et cetera, because it's an important question to me. I apologize in advance for my cross-posting; your opinions are very important to me and not everyone in this forum belongs to the other forums too. I was looking at a large, decade-old legacy system which has had over 700 unit tests written recently (700 is just a small beginning). The tests happen to be written in MSTest, but this question applies to all testing frameworks AFAIK. When I ran "ALL TESTS" via VS2008, the final count was only seven tests. That's about 1% of the total tests that have been written to date. MORE INFORMATION: The ASP.NET MVC 2 RTM source code, including its unit tests, is available on CodePlex; those unit tests are also written in MSTest even though (an irrelevant fact) Brad Wilson later joined the ASP.NET MVC team as its Senior Programmer. All 2000-plus tests get run, not just a few. QUESTION: given that AFAIK the purpose of unit tests is to identify breakages in the SUT, am I correct in thinking that the "best practice" is to always, or at least very frequently, run all of the tests? Thank you. Regards, Gerry (Lowry)

    Read the article

  • Unit tests - The benefit from unit tests with contract changes?

    - by Stefan Hendriks
    Recently I had an interesting discussion with a colleague about unit tests. We were discussing when maintaining unit tests becomes less productive, once your contracts change. Perhaps someone can enlighten me on how to approach this problem. Let me elaborate: let's say there is a class which does some nifty calculations. The contract says that it should calculate a number, or return -1 when it fails for some reason. I have contract tests that test that. And in all my other tests I stub this nifty calculator thingy. So now I change the contract: whenever it cannot calculate, it will throw a CannotCalculateException. My contract tests will fail, and I will fix them accordingly. But all my mocked/stubbed objects will still use the old contract rules. Those tests will succeed, while they should not! The question that arises is: given this faith in unit testing, how much faith can be placed in such changes... The unit tests succeed, but bugs will occur when testing the application. The tests using this calculator will need to be fixed, which costs time, and it may even be stubbed/mocked in a lot of places... How do you think about this case? I have never thought about it thoroughly. In my opinion, these changes to unit tests would be acceptable. If I did not use unit tests, I would also see such bugs arise in the test phase (found by testers). Yet I am not confident enough to point out which will cost more time (or less). Any thoughts?

    Read the article

  • How to (unit-)test data intensive PL/SQL application

    - by doom2.wad
    Our team is willing to unit-test new code written under a running project extending an existing huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is not an exception. Most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it) or transform one complicated data structure into another (complicated in a different way). What is the best approach to unit-testing such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" stuff and the necessary data transformations) are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a testing data set seems to be impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there). I'd be glad for any suggestion or reference to help us. Some team members tend to be tired of figuring out how to even start, because our experience with unit testing does not cover PL/SQL data-intensive legacy systems (only those "from-the-book" greenfield Java projects).

    Read the article

  • Prove correctness of unit test

    - by Timo Willemsen
    I'm creating a graph framework for learning purposes. I'm using a TDD approach, so I'm writing a lot of unit tests. However, I'm still figuring out how to prove the correctness of my unit tests. For example, I have this class (not including the implementation, and I have simplified it): public class SimpleGraph { //Returns true on success public boolean addEdge(Vertex v1, Vertex v2) { ... } //Returns true on success public boolean addVertex(Vertex v1) { ... } } I have also created this unit test: @Test public void SimpleGraph_addVertex_noSelfLoopsAllowed(){ SimpleGraph g = new SimpleGraph(); Vertex v1 = new Vertex("Vertex 1"); g.addVertex(v1); boolean expected = false; boolean actual = g.addEdge(v1, v1); Assert.assertEquals(expected, actual); } Okay, awesome, it works. There is only one crux here: I have proved that the functions work for this case only. However, in my graph theory courses, all I'm doing is proving theorems mathematically (induction, contradiction, etc.). So I was wondering, is there a way I can prove my unit tests mathematically correct? Is there a good practice for this, so that we're testing the unit for correctness instead of testing it for one certain outcome?
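
    Not a proof, but one step towards checking a property over many inputs rather than a single outcome is JUnit's experimental Theories runner (available since JUnit 4.4). The sketch below assumes the SimpleGraph and Vertex classes from the question and is only meant to show the shape of the idea:

        import static org.junit.Assert.assertFalse;
        import org.junit.experimental.theories.DataPoints;
        import org.junit.experimental.theories.Theories;
        import org.junit.experimental.theories.Theory;
        import org.junit.runner.RunWith;

        @RunWith(Theories.class)
        public class SimpleGraphTheories {

            @DataPoints
            public static String[] vertexNames = { "Vertex 1", "Vertex 2", "a", "" };

            // property under test: a self-loop is rejected for *every* vertex, not just one example
            @Theory
            public void selfLoopsAreAlwaysRejected(String name) {
                SimpleGraph g = new SimpleGraph();
                Vertex v = new Vertex(name);
                g.addVertex(v);
                assertFalse(g.addEdge(v, v));
            }
        }

    The data points are still only samples, so a real proof still needs pencil and paper (or a model checker), but theories at least separate the property from the example data.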

    Read the article

  • CodeIgniter and SimpleTest -- How to make my first test?

    - by Smandoli
    I'm used to web development using LAMP, PHP5, MySQL plus NetBeans with Xdebug. Now I want to improve my development by learning how to use (A) proper testing and (B) a framework. So I have set up CodeIgniter, SimpleTest and the easy Xdebug add-on for Firefox. This is great fun because maroonbytes provided me with clear instructions and a configured setup ready for download. I am standing on the shoulders of giants, and very grateful. I've used SimpleTest a bit in the past. Here is the kind of thing I wrote: <?php require_once('../simpletest/unit_tester.php'); require_once('../simpletest/reporter.php'); class TestOfMysqlTransaction extends UnitTestCase { function testDB_ViewTable() { $this->assertEqual(1,1); // a pseudo-test } } $test = new TestOfMysqlTransaction(); $test->run(new HtmlReporter()); ?> So I hope I know what a test looks like. What I can't figure out is where and how to put a test in my new setup. I don't see any sample tests in the maroonbytes package, and Google so far has led me to posts that assume unit testing is already functionally available. What do I do?

    Read the article

  • How to map a test onto a list of numbers

    - by Arthur Ulfeldt
    I have a function with a bug: user> (-> 42 int-to-bytes bytes-to-int) 42 user> (-> 128 int-to-bytes bytes-to-int) -128 user> Looks like I need to handle overflow when converting back... Better write a test to make sure this never happens again. This project is using clojure.contrib.test-is, so I write: (deftest int-to-bytes-to-int (let [lots-of-big-numbers (big-test-numbers)] (map #(is (= (-> % int-to-bytes bytes-to-int) %)) lots-of-big-numbers))) This should test that converting to a seq of bytes and back again produces the original result for a list of 10000 random numbers. Looks OK in theory? Except none of the tests ever run: Testing com.cryptovide.miscTest Ran 23 tests containing 34 assertions. 0 failures, 0 errors. Why don't the tests run? What can I do to make them run?

    Read the article

  • Unit test approach for generic classes/methods

    - by Greg
    Hi, What's the recommended way to cover off unit testing of generic classes/methods? For example (referring to my example code below). Would it be a case of have 2 or 3 times the tests to cover testing the methods with a few different types of TKey, TNode classes? Or is just one class enough? public class TopologyBase<TKey, TNode, TRelationship> where TNode : NodeBase<TKey>, new() where TRelationship : RelationshipBase<TKey>, new() { // Properties public Dictionary<TKey, NodeBase<TKey>> Nodes { get; private set; } public List<RelationshipBase<TKey>> Relationships { get; private set; } // Constructors protected TopologyBase() { Nodes = new Dictionary<TKey, NodeBase<TKey>>(); Relationships = new List<RelationshipBase<TKey>>(); } // Methods public TNode CreateNode(TKey key) { var node = new TNode {Key = key}; Nodes.Add(node.Key, node); return node; } public void CreateRelationship(NodeBase<TKey> parent, NodeBase<TKey> child) { . . .

    Read the article

  • How best to organize projects folders for unit tests in .NET?

    - by Dan Bailiff
    So I'm trying to introduce unit testing to my group. I've successfully upgraded a VS'05 web site project to a VS'08 web application, and now have a solution with the web app project and a unit test project. The issue now is how to fit this back into the source repository such that we don't break the build system and the unit test projects are persisted as well. Right now we have something like this: c:\root c:\root\projectA c:\root\projectB c:\root\projectC where projectA contains the sln file and all other related files/folders for the project. Now I have this new solution that looks like this: c:\root\projectA (parent folder) c:\root\projectA\projectA (the production code project) c:\root\projectA\projectA_Test (the unit test project) c:\root\projectA\TestResults c:\root\projecta\projectA.sln How do I integrate this new structure back into the code repository? I'd really prefer to keep the production code folder where it was in the source repository for the sake of the build, but is this necessary? If I keep the production code project in its usual place then where do I keep my unit test projects and how do I connect them with a sln file? Is it better to use this new structure and adjust the build process? I'd love to hear how other people are dealing with this issue of upgrading legacy projects to unit testing.

    Read the article

  • apt-get install and update fail

    - by sepehr
    I've got a problem with the apt-get update and apt-get install commands: every time, updating or installing fails, and the errors look like this:
    Get:1 http://dl.google.com stable Release.gpg [198B]
    Ign http://dl.google.com/linux/chrome/deb/ stable/main Translation-en_US
    Get:2 http://dl.google.com stable Release [1,347B]
    Get:3 http://dl.google.com stable/main Packages [1,227B]
    Err http://32.repository.backtrack-linux.org revolution Release.gpg
    Could not connect to 32.repository.backtrack-linux.org:80 (37.221.173.214). - connect (110: Connection timed out)
    Err http://32.repository.backtrack-linux.org/ revolution/main Translation-en_US
    Unable to connect to 32.repository.backtrack-linux.org:http:
    [the same "Could not connect" / "Unable to connect" errors repeat for the main, microverse, non-free and testing components of 32.repository.backtrack-linux.org, all.repository.backtrack-linux.org and source.repository.backtrack-linux.org, first as Err/Ign lines and then again as "W: Failed to fetch ..." warnings]
    Fetched 2,772B in 1min 3s (44B/s)
    E: Some index files failed to download, they have been ignored, or old ones used instead.
    I don't know how to get out of this! I want to install the rpm and yum packages on my BackTrack box. I have also searched the internet for an answer, in the BackTrack forums and on other sites and weblogs, and couldn't find a good one. Can anyone help?

    Read the article

  • Too much memory consumed during TFS automated build

    - by Bernard Chen
    We're running TFS 2010 Standard Edition, and we've set up an automated build to run whenever someone checks in code. We run through all of the automated tests (built with MSTest) as part of the build. We've configured the build to run the tests as a 64-bit process, but the QTAgent.exe that runs the tests grows in memory while the tests are running. It is currently reaching 8GB for the ~650 tests we have, and the process has slowed significantly when we went from 450 tests to 650 tests. When we run all of the tests in the local development environment, memory seems to be freed at least with each TestClass and never exceeds a certain level. The process of running all tests has not increased significantly in the local development environment. Is there a way to configure the build service to free up memory with each Test or each TestClass? With the way things are currently running, the build process gets very slow when we start to run out of memory on the machine. Edit: I found the MSTest invocation in the build log and ran it manually and saw the same behavior of runaway memory. I removed the /publish, /publishbuild, /teamproject, /platform, and /flavor parameters from the invocation of MSTest, in case the test runner was holding onto results until the end, but the behavior didn't change. I ran the same command line on a dev box, separate from the build server, and the memory freed up frequently. It seems there must be something wrong/different about the build server that is causing it to behave different, but I'm stumped where to look. I've looked at qtagent.exe.config, mstest.exe.config, versions of both executables. What else might affect this?

    Read the article

  • Why are 32-bit application pools more efficient in IIS? [closed]

    - by mhenry1384
    I've been running load tests with two different ASP.NET web applications in IIS. The tests are run with 5, 10, 25, and 250 user agents, on a box with 8 GB RAM, Windows 7 Ultimate x64. The same box runs both IIS and the load test project. I did many runs, and the data is very consistent. For every load, I see a lower "Avg. Page Time (sec)" and a lower "Avg. Response Time (sec)" if I have "Enable 32-bit Applications" set to True in the Application Pools. The difference gets more pronounced the higher the load. At very high loads, the web applications start to throw errors (503) if the application pools are 64-bit, but they can keep up if set to 32-bit. Why are 32-bit app pools so much more efficient? Why isn't the default for application pools 32-bit?

    Read the article
