Search Results

Search found 11675 results on 467 pages for 'parallel testing'.

Page 64/467

  • Oracle 64-bit assembly throws BadImageFormatException when running unit tests

    - by pjohnson
    We recently upgraded to the 64-bit Oracle client. Since then, Visual Studio 2010 unit tests that hit the database (I know, unit tests shouldn't hit the database--they're not perfect) all fail with this error message: Test method MyProject.Test.SomeTest threw exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess, Version=4.112.3.0, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format. I resolved this by changing the test settings to run tests in 64-bit. From the Test menu, go to Edit Test Settings and pick your settings file. Go to Hosts, and change the "Run tests in 32 bit or 64 bit process" dropdown to "Run tests in 64 bit process on 64 bit machine". Now your tests should run. This fix makes me a little nervous. Visual Studio 2010 and earlier seem to change that file for no apparent reason, add more settings files, etc. If you're not paying attention, you could have TestSettings1.testsettings through TestSettings99.testsettings sitting there and never notice the difference. So it's worth making a note of how to change it in case you have to redo it, and being vigilant about files VS tries to add. I'm not entirely clear on why this was even a problem. Isn't that the point of an MSIL assembly, that it's not specific to the hardware it runs on? An IL disassembler can open the Oracle.DataAccess.dll in question, and in its Runtime property I see the value "v4.0.30319 / x64". So I guess the assembly was specifically built to target 64-bit platforms only, possibly due to a 64-bit-specific difference in the external Oracle client upon which it depends. Most other assemblies, especially in the .NET Framework, list "msil", and a couple list "x86". So I guess this is another entry in the long list of ways Oracle refuses to play nice with Windows and .NET. If this doesn't solve your problem, you can read others' research into this error, and where to change the same test setting in Visual Studio 2012.

    Read the article

  • Alternatives to Google Website Optimizer

    - by yahelc
    What (affordable) alternatives are there to Google Website Optimizer for A/B and multivariate tests? The pros with GWO are basically that it's free and that it integrates with Google Analytics. The cons: the relatively high time cost of setting up a test. Some alternatives I've seen so far: Optimizely.com, VisualWebsiteOptimizer.com, Genetify (wiki.github.com/gregdingle/genetify/), which is free and open-source, but it seems like there's no developer community committed to the project.

    Read the article

  • Unit test and Code Coverage of Ant build scripts

    - by pablaasmo
    In our development environment we have more and more Ant build scripts performing the build tasks for several different build jobs. These build scripts sometimes become large and do a lot of things, and are basically source code in their own right. So in a "TDD world" we should have unit tests and coverage reports for that source code. I found AntUnit and BuildFileTest.java for doing unit tests. But it would also be interesting to know the code coverage of those unit tests. I have been searching Google, but have not found anything. Does anyone know of a code coverage tool for Ant build scripts?
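
    A minimal sketch of what a test built on Ant's own BuildFileTest helper (mentioned above) can look like, assuming a hypothetical build.xml with a "compile" target that sets a "build.done" property; the file, target and property names are placeholders to adapt to your own scripts:

        import org.apache.tools.ant.BuildFileTest;

        public class CompileTargetTest extends BuildFileTest {

            @Override
            protected void setUp() throws Exception {
                // Load the build script under test before each test method.
                configureProject("build.xml");
            }

            public void testCompileSetsMarkerProperty() {
                // Run the target exactly as Ant would from the command line.
                executeTarget("compile");
                // Assert on an observable effect of the target (a property it sets).
                assertEquals("true", getProject().getProperty("build.done"));
            }
        }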

    Read the article

  • How to deal with static utility classes when designing for testability

    - by Benedikt
    We are trying to design our system to be testable, and to develop most parts of it using TDD. Currently we are trying to solve the following problem: in various places we need to use static helper methods like ImageIO and URLEncoder (both standard Java API) and various other libraries that consist mostly of static methods (like the Apache Commons libraries). But it is extremely difficult to test the methods that use such static helper classes. I have several ideas for solving this problem: Use a mock framework that can mock static classes (like PowerMock). This may be the simplest solution but somehow feels like giving up. Create instantiable wrapper classes around all those static utilities so they can be injected into the classes that use them. This sounds like a relatively clean solution, but I fear we'll end up creating an awful lot of those wrapper classes. Extract every call to these static helper classes into a function that can be overridden, and test a subclass of the class I actually want to test. But I keep thinking that this has to be a problem that many people face when doing TDD - so there must already be solutions for it. What is the best strategy to keep classes that use these static helpers testable?
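
    A minimal sketch of the second idea above (an instantiable wrapper that can be injected), using URLEncoder as the static helper; the UrlEncoding interface and the other class names are invented for illustration:

        import java.io.UnsupportedEncodingException;
        import java.net.URLEncoder;

        public interface UrlEncoding {
            String encode(String value);
        }

        // Production implementation: a thin shell that only delegates to the static helper.
        class JdkUrlEncoding implements UrlEncoding {
            @Override
            public String encode(String value) {
                try {
                    return URLEncoder.encode(value, "UTF-8");
                } catch (UnsupportedEncodingException e) {
                    throw new IllegalStateException("UTF-8 is always available", e);
                }
            }
        }

        // A class under test receives the wrapper, so a test can pass a stub instead.
        class SearchLinkBuilder {
            private final UrlEncoding encoding;

            SearchLinkBuilder(UrlEncoding encoding) {
                this.encoding = encoding;
            }

            String linkFor(String query) {
                return "/search?q=" + encoding.encode(query);
            }
        }

    The wrapper itself stays so thin that it barely needs tests of its own, which is what keeps the number of such classes bearable.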

    Read the article

  • Jasmine BDD vs Integration Tests

    - by lfender6445
    Let's say I need to write a test for the front end. A user visits buysomething.com, saves something to their wishlist, and a saved-item count is updated. The DOM gets manipulated. In my heart I feel this is better suited to an integration test - but my team is currently using Jasmine to load fixtures and test such interactions. This leads to extremely brittle tests, as they rely on a static fixture instead of the actual markup. Are we misusing Jasmine here?

    Read the article

  • Should we exclude code for the code coverage analysis?

    - by romaintaz
    I'm working on several applications, mainly legacy ones. Currently, their code coverage is quite low: generally between 10 and 50%. For several weeks now, we have had recurrent discussions with the Bangalore teams (the main part of the development is done offshore in India) regarding the exclusion of packages or classes for Cobertura (our code coverage tool, even though we are currently migrating to JaCoCo). Their point of view is the following: as they will not write any unit tests on some layers of the application (1), these layers should simply be excluded from the code coverage measure. In other words, they want to limit the code coverage measure to the code that is tested or should be tested. Also, when they work on unit tests for a complex class, the benefit - purely in terms of code coverage - goes unnoticed in a large application. Reducing the scope of the code coverage would make this kind of effort more visible... The interest of this approach is that we would have a code coverage measure that indicates the current status of the part of the application we consider testable. However, my point of view is that we are somehow faking the figures. This solution is an easy way to reach a higher level of code coverage without any effort. Another point that bothers me is the following: if we show a coverage increase from one week to another, how can we tell whether this good news is due to the good work of the developers, or simply due to new exclusions? In addition, we will not be able to know exactly what is considered in the code coverage measure. For example, if I have a 10,000-line application with 40% code coverage, I can deduce that 40% of my code base is tested (2). But what happens if we set exclusions? If the code coverage is now 60%, what can I deduce exactly? That 60% of my "important" code base is tested? As far as I am concerned, I prefer to keep the "real" code coverage value, even if we can't be cheerful about it. In addition, thanks to Sonar, we can easily navigate our code base and know, for any module / package / class, its own code coverage. But of course, the global code coverage will still be low. What is your opinion on the subject? How do you handle this on your projects? Thanks. (1) These layers are generally related to the UI / Java beans, etc. (2) I know that's not true. In fact, it only means that 40% of my code base

    Read the article

  • Is Test Driven Development viable in game development?

    - by Will Marcouiller
    Being Scrum certified, I tend to lean toward Agile methodologies when developing a system, and I even use some canvases from the Scrum framework to manage my day-to-day work. I am wondering whether TDD is an option in game development, and whether it is viable. If I believe this GD question, TDD is not of much use in game development: Why are MVC & TDD not employed more in game architecture? I come from industrial programming, where big projects with big budgets need to work flawlessly, as it could result in catastrophic scenarios if the code wasn't thoroughly tested inside and out. Plus, following Scrum rules encourages meeting the due dates of your work, while every single action in Scrum is time-boxed! So I agree when, in the question linked above, they say to stop trying to build a system and start writing the game. It is pretty much what Scrum says: don't try to build the perfect system first; make it work by the Sprint's end. Then refactor the code while working in the second Sprint if needed! I understand that if not all departments responsible for the game development use Scrum, Scrum becomes useless. But let's consider for a moment that all the departments do use Scrum... I think that TDD would be good for writing bug-free code, though you do not want to write the "perfect" system/game. So my question is the following: Is TDD viable in game development anyhow?

    Read the article

  • Microsoft SDET position

    - by Mark
    I was curious about MS's SDET position. I've heard a lot of people speak negatively and positively about this position. I was wondering if any current or previous SDETs could comment on a couple of issues. 1) Is career development in any way hurt by this position within and outside of MS? 1.5) Is it harder to get hired as a developer at another company after being an SDET? 2) Within MS culture, how is the SDET position viewed with respect to PM or SDE? Is it respected or looked down upon? 3) If you worked as an SDET, did you like it?

    Read the article

  • Oracle UPK and IBM Rational Quality Manager

    - by marc.santosusso
    Did you know that you can import UPK topics into IBM Rational Quality Manager (RQM) as Test Scripts? Attached below is a ZIP of files which contains a customized style (for all supported languages) for creating spreadsheets that are compatible with IBM Rational Quality Manager, a sample IBM Rational Quality Manager mapping file, and a best practice document. UPK_Best_Practices_-_IBM_Rational_Quality_Manager_Integration.zip Extract the files and open the best practice document (PDF file) to get started. Please note that the IBM Rational Quality Manager publishing style (the ODARC file) included with the above download was created using the customization instructions found within the UPK documentation. That said, it is not currently an "official" feature of the product, but rather an example of what can be created through style customization. Stay tuned for more details. We hope that you find this useful and welcome your feedback!

    Read the article

  • Getting from a user-story to code while using TDD (scrum)

    - by Ittai
    I'm getting into Scrum and TDD and I think I have some confusion which I'd like your feedback about. Let's assume I have a user story in my backlog; in order for me to start developing it as part of TDD I need to have requirements, right so far? Is it true to say that the product manager and QA should be responsible for taking the user story and breaking it down into acceptance tests? I think the above is true, since the acceptance tests need to be formal, so they can be used as tests, but also human-readable, so that the product owner can approve that they are the requirements, right? Is it also true that I later take these acceptance tests and use them as my requirements, i.e. they are a set of use cases which I implement (through TDD)? I hope I'm not making too much of a mess, but that's the current flow I have in mind right now. Update I think my initial intentions were unclear, so I'll try to rephrase. I want to know more details about the Scrum flow of turning a user story into code while using TDD. The starting point is obvious: a user surfaces a need (or the user's representative, such as the product owner does), which is a short 1-2 line description in the known format, and that is added to the product backlog. When there is a sprint planning meeting, user stories are taken from the backlog and assigned to developers. In order for a developer to write code they need requirements (especially in TDD, since the requirements are what the tests are derived from). When, by whom and in which format are the requirements compiled? What I had in mind was that the product owner and QA define the requirements via acceptance tests (I'm thinking of automating them using FitNesse or the like, but that's not the core issue) which serve two purposes at the same time: They define "Done" properly. They give a developer something to derive tests from. I wasn't sure when these should be written (if before the sprint in which the story is picked, that might be a waste since additional information will arrive or the story won't be picked; if during the iteration, the developer might get stuck waiting for them...)
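
    As a minimal, hypothetical sketch of the flow being described (story first, executable acceptance test next, then TDD the code until it passes), here is what an acceptance test for an invented story "the owner of a project can rename it" might look like in JUnit; the Project class is a tiny stand-in so the example compiles on its own:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class RenameProjectAcceptanceTest {

            // Stand-in domain class; in a real sprint this is the production code
            // the developer grows through TDD until the acceptance test passes.
            static class Project {
                private String name;
                private final String owner;

                Project(String name, String owner) {
                    this.name = name;
                    this.owner = owner;
                }

                void rename(String newName, String requestedBy) {
                    if (owner.equals(requestedBy)) {
                        name = newName;
                    }
                }

                String getName() {
                    return name;
                }
            }

            @Test
            public void ownerCanRenameTheirProject() {
                // Given: a project owned by a user (the story's precondition)
                Project project = new Project("Old name", "alice");

                // When: the owner renames it (the behaviour the story asks for)
                project.rename("New name", "alice");

                // Then: the new name is visible (the "Done" criterion)
                assertEquals("New name", project.getName());
            }
        }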

    Read the article

  • Returning a mock object from a mock object

    - by Songo
    I'm trying to return an object when mocking a parser class. This is the test code, using PHPUnit 3.7:

        //set up the result object that I want to be returned from the call to the parse method
        $parserResult = new ParserResult();
        $parserResult->setSegment('some string');

        //set up the stub Parser object
        $stubParser = $this->getMock('Parser');
        $stubParser->expects($this->any())
                   ->method('parse')
                   ->will($this->returnValue($parserResult));

        //injecting the stub into my client class
        $fileHeaderParser = new FileWriter($stubParser);
        $output = $fileHeaderParser->writeStringToFile();

    Inside my writeStringToFile() method I'm using $parserResult like this:

        writeStringToFile() {
            //Some code...
            $parserResult = $parser->parse();
            $segment = $parserResult->getSegment(); //that's why I set the segment in the test
        }

    Should I mock ParserResult in the first place, so that the mock returns a mock? Is it good design for mocks to return mocks? Is there a better approach to do all this?

    Read the article

  • How do I make code bound to an ORM testable?

    - by RPK
    In Test Driven Development, how do I make code that is bound to an ORM testable? I am using a micro-ORM (PetaPoco), and I have several methods that interact with the database, like AddCustomer, UpdateRecord, etc. I want to know how to write tests for these methods. I searched YouTube for videos on writing tests for a DAL, but I didn't find any. I want to know which method or class is testable and how to write a test before writing the code itself.

    Read the article

  • Learning a new language using broken unit tests

    - by Brian MacKay
    I was listening to a .NET Rocks episode the other day where they mentioned, almost in passing, a really intriguing tool for learning new languages -- I think they were specifically talking about F#. It's a solution you open up that contains a bunch of broken unit tests; fixing them walks you through the steps of learning the language. I want to check it out, but I was driving in my car and I have no idea what the name of the project is or which .NET Rocks episode it was. Google hasn't helped much. Any idea?

    Read the article

  • Programming Practice/Test Contest?

    - by Emmanuel
    My situation: I'm on a programming team, and this year we want to weed out the weak link by running a competition to pick the best coder from our group of candidates. The focus is on IEEEXtreme-like contests. What I've done: I've been trying for 2 weeks already to find a practice or test site, like UVa or CodeChef. The plan once I find one: send the candidates a list of direct links to the problems (making that the contest's problem list), get them to email me their correct answers' code at the time the judge says they have solved it, and accept the fastest one into the team. Issues: We had already practiced on UVa (and on Programming Challenges too), so our former teammate (who will be in the candidate group) has an advantage if we use it. CodeChef has all its answers public, and since it shows the latest ones it will be extremely hard to verify whether an answer was copied. And I've found other sites, like SPOJ, but they share at least some problems with CodeChef, so they inherit CodeChef's issue. So, what alternatives do you think there are? Any site that may work? Any place to get all the stuff to set up a Mooshak or similar contest (as in the stuff to get the problems; instructions to set up the server itself are easy to Google)? Any other idea?

    Read the article

  • TDD vs. Productivity

    - by Nairou
    In my current project (a game, in C++), I decided that I would use Test Driven Development 100% during development. In terms of code quality, this has been great. My code has never been so well designed or so bug-free. I don't cringe when viewing code I wrote a year ago at the start of the project, and I have gained a much better sense for how to structure things, not only to be more easily testable, but to be simpler to implement and use. However... it has been a year since I started the project. Granted, I can only work on it in my spare time, but TDD is still slowing me down considerably compared to what I'm used to. I read that the slower development speed gets better over time, and I definitely do think up tests a lot more easily than I used to, but I've been at it for a year now and I'm still working at a snail's pace. Each time I think about the next step that needs work, I have to stop every time and think about how I would write a test for it, to allow me to write the actual code. I'll sometimes get stuck for hours, knowing exactly what code I want to write, but not knowing how to break it down finely enough to fully cover it with tests. Other times, I'll quickly think up a dozen tests, and spend an hour writing tests to cover a tiny piece of real code that would have otherwise taken a few minutes to write. Or, after finishing the 50th test to cover a particular entity in the game and all aspects of its creation and usage, I look at my to-do list and see the next entity to be coded, and cringe in horror at the thought of writing another 50 similar tests to get it implemented. It's gotten to the point that, looking over the progress of the last year, I'm considering abandoning TDD for the sake of "getting the damn project finished". However, giving up the code quality that came with it is not something I'm looking forward to. I'm afraid that if I stop writing tests, then I'll slip out of the habit of making the code so modular and testable. Am I perhaps doing something wrong to still be so slow at this? Are there alternatives that speed up productivity without completely losing the benefits? TAD? Less test coverage? How do other people survive TDD without killing all productivity and motivation?

    Read the article

  • Colleague unwilling to use unit tests "as it's more to code"

    - by m.edmondson
    A colleague is unwilling to use unit tests, instead opting for a quick test, passing it to the users, and publishing it live if all is well. Needless to say, some bugs do get through. I mentioned we should think about using unit tests - but she was all against it once she realised more code would have to be written. This leaves me in the position of modifying something and not being sure the output is the same, especially as her code is spaghetti and I try to refactor it when I get a chance. So what's the best way forward for me?

    Read the article

  • How can I unit test a class which requires a web service call?

    - by Chris Cooper
    I'm trying to test a class which calls some Hadoop web services. The code is pretty much of the form: method() { ...use Jersey client to create WebResource... ...make request... ...do something with response... } e.g. there is a create directory method, a create folder method etc. Given that the code is dealing with an external web service that I don't have control over, how can I unit test this? I could try and mock the web service client/responses but that breaks the guideline I've seen a lot recently: "Don't mock objects you don't own". I could set up a dummy web service implementation - would that still constitute a "unit test" or would it then be an integration test? Is it just not possible to unit test at this low a level - how would a TDD practitioner go about this?
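
    One common answer, sketched below under the assumption that you own the calling code: hide the remote calls behind your own small interface, unit test your logic against a hand-rolled fake, and cover the real Jersey-backed implementation with a thin integration test against the dummy or real service. The HadoopFileSystemClient interface and the other names are invented for illustration:

        import java.util.ArrayList;
        import java.util.List;

        // The seam you own: only the operations your code actually needs.
        interface HadoopFileSystemClient {
            void createDirectory(String path);
        }

        // Test double used by unit tests; it records calls instead of making HTTP requests.
        class FakeHadoopClient implements HadoopFileSystemClient {
            final List<String> createdDirectories = new ArrayList<>();

            @Override
            public void createDirectory(String path) {
                createdDirectories.add(path);
            }
        }

        // The logic under unit test: which directories get created and why.
        class JobWorkspace {
            private final HadoopFileSystemClient client;

            JobWorkspace(HadoopFileSystemClient client) {
                this.client = client;
            }

            void prepareFor(String jobId) {
                client.createDirectory("/jobs/" + jobId + "/input");
                client.createDirectory("/jobs/" + jobId + "/output");
            }
        }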

    Read the article

  • How does TDD address interaction between objects?

    - by Gigi
    TDD proponents claim that it results in better design and decoupled objects. I can understand that writing tests first enforces the use of things like dependency injection, resulting in loosely coupled objects. However, TDD is based on unit tests - which test individual methods and not the integration between objects. And yet, TDD expects design to evolve from the tests themselves. So how can TDD possibly result in a better design at the integration (i.e. inter-object) level when the granularity it addresses is finer than that (individual methods)?
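
    One way to see how the design pressure reaches the inter-object level is an interaction-style (mockist) test: the expectation in the test is itself a design decision about how two objects talk to each other. A minimal sketch, assuming Mockito and JUnit; all the domain names are invented:

        import static org.mockito.Mockito.mock;
        import static org.mockito.Mockito.verify;
        import org.junit.Test;

        public class AccountTest {

            // The collaborator's interface is driven out by the test below,
            // before any concrete implementation exists.
            interface AuditLog {
                void recordWithdrawal(String accountId, int amountInCents);
            }

            static class Account {
                private final String id;
                private final AuditLog audit;
                private int balanceInCents;

                Account(String id, int balanceInCents, AuditLog audit) {
                    this.id = id;
                    this.balanceInCents = balanceInCents;
                    this.audit = audit;
                }

                void withdraw(int amountInCents) {
                    balanceInCents -= amountInCents;
                    audit.recordWithdrawal(id, amountInCents);
                }
            }

            @Test
            public void withdrawalIsReportedToTheAuditLog() {
                AuditLog audit = mock(AuditLog.class);
                Account account = new Account("acc-1", 10_000, audit);

                account.withdraw(2_500);

                // The verification is the inter-object contract: Account talks to
                // AuditLog through exactly this method with these arguments.
                verify(audit).recordWithdrawal("acc-1", 2_500);
            }
        }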

    Read the article

  • The use of Test-Driven Development in Non-Greenfield Projects?

    - by JHarley1
    So here is a question for you, having read some great answers to questions such as "Test-Driven Development - Convince Me". My question is: "Can Test-Driven Development be used effectively on non-Greenfield projects?" To be specific: I would really like to know if people have had experience using TDD in projects where non-TDD elements were already present, and the problems they then faced.

    Read the article

  • How to test the render speed of my solution in a web browser?

    - by Cuartico
    OK, I need to test the rendering speed of my solution in a web browser, but I have some problems. There are 2 versions of the web solution: the original one on server A, and the "fixed" version on server B. I have VS2010 Ultimate, so I can create a web and load test on solution B, but I can't load the A solution in my IDE. I was trying to use Fiddler2 and JMeter, but they only give me the times of the requests and responses between the browser and the server; I also want the time it takes the browser to render the whole page. Maybe I'm misusing some of these tools... I don't know if this could be useful, but: Solution A is in VB 6.0, Solution B is in VB.NET. Thanks in advance!

    Read the article

  • Is there an established or defined best practice for source control branching between development and production builds?

    - by Matthew Patrick Cashatt
    Thanks for looking. I struggled with how to phrase my question, so let me give an example in hopes of making clearer what I am after: I currently work on a dev team responsible for maintaining and adding features to a web application. We have a development server and we use source control (TFS). Each day everyone checks in their code, and when the code (running on the dev server) passes our QA/QC program, it goes to production. Recently, however, we had a bug in production which required an immediate production fix. The problem was that several of us developers had code checked in that was not ready for production, so we had to either quickly complete and QA the code, or roll back everything, undo pending changes, etc. In other words, it was a mess. This made me wonder: is there an established design pattern that prevents this type of scenario? It seems like there must be some "textbook" answer to this, but I am unsure what that would be. Perhaps a development branch of the code and a "release-ready" or production branch of the code?

    Read the article

  • Test case as a function or test case as a class

    - by GodMan
    I am having a design problem in test automation: Requirements - need to test different servers (using a Unix console, not a GUI) through an automation framework. Tests which I'm going to run - unit, system, integration. Question: while designing a test case, I am thinking that a test case should be part of a test suite (where the test suite is a class), just as we have in Python's PyUnit framework. But should we keep test cases as functions for a scalable automation framework, or should we keep test cases as separate classes (each having their own setup, run and teardown methods)? From an automation perspective, is a test case as a class more scalable and maintainable than a test case as a function?
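
    A minimal sketch of the "test case as a class" option in JUnit terms, where each case owns its setup and teardown; the fake server console is an invented stand-in so the example is self-contained:

        import static org.junit.Assert.assertTrue;
        import org.junit.After;
        import org.junit.Before;
        import org.junit.Test;

        public class RestartServiceTestCase {

            private FakeServerConsole console;

            @Before
            public void setUp() {
                // Per-case setup: open the console session this case needs.
                console = new FakeServerConsole();
                console.connect("test-server-01");
            }

            @Test
            public void run() {
                console.execute("service restart");
                assertTrue(console.lastCommandSucceeded());
            }

            @After
            public void tearDown() {
                // Per-case cleanup, independent of any other test case.
                console.disconnect();
            }

            // Tiny stand-in so the sketch compiles on its own.
            static class FakeServerConsole {
                private boolean connected;
                private boolean ok;

                void connect(String host) { connected = true; }
                void execute(String command) { ok = connected; }
                boolean lastCommandSucceeded() { return ok; }
                void disconnect() { connected = false; }
            }
        }

    The trade-off is boilerplate: when many cases share the same environment, plain functions inside one suite class stay cheaper to write and maintain.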

    Read the article

  • Is it useful to unit test methods where the only logic is guards?

    - by Vaccano
    Say I have a method like this: public void OrderNewWidget(Widget widget) { if ((widget.PartNumber > 0) && (widget.PartAvailable)) { WidgetOrderingService.OrderNewWidgetAsync(widget.PartNumber); } } I have several such methods in my code (the front half of an async Web Service call). I am debating whether it is useful to get them covered with unit tests. Yes, there is logic here, but it is only guard logic. (Meaning I make sure I have the stuff I need before I allow the web service call to happen.) Part of me says "sure you can unit test them, but it is not worth the time" (I am on a project that is already behind schedule). But the other side of me says, if you don't unit test them, and someone changes the guards, then there could be problems. But the first part of me says back, if someone changes the guards, then you are just making more work for them (because now they have to change the guards and the unit tests for the guards). For example, if my service assumes responsibility for checking Widget availability then I may not want that guard any more. If it is under unit test, I have to change two places now. I see pros and cons both ways. So I thought I would ask what others have done.

    Read the article

  • Is it OK to have multiple asserts in a single unit test?

    - by Restuta
    I think that there are some cases when multiple assertions are needed (e.g. a Guard Assertion), but in general I try to avoid this. What is your opinion? Please provide real-world examples where multiple asserts are really needed. Thanks! Edit: In a comment to this great post, Roy Osherove pointed to the OAPT project, which is designed to run each assert as a single test. This is written on the project's home page: Proper unit tests should fail for exactly one reason; that's why you should be using one assert per unit test. And Roy also wrote in the comments: My guideline is usually that you test one logical CONCEPT per test. You can have multiple asserts on the same object; they will usually be the same concept being tested.
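
    A minimal, invented example of the "one logical concept per test" reading: several asserts, but all of them describing the same parsed result, so the test still fails for a single logical reason:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class FullNameParserTest {

            static final class FullName {
                final String first;
                final String last;

                FullName(String first, String last) {
                    this.first = first;
                    this.last = last;
                }
            }

            // The code under test, kept inline so the example is self-contained.
            static FullName parse(String raw) {
                String[] parts = raw.trim().split("\\s+");
                return new FullName(parts[0], parts[parts.length - 1]);
            }

            @Test
            public void parsesFirstAndLastNameFromASingleLine() {
                FullName name = parse("  Ada   Lovelace ");

                // Two asserts, one concept: the same full name was parsed correctly.
                assertEquals("Ada", name.first);
                assertEquals("Lovelace", name.last);
            }
        }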

    Read the article

  • How can I improve my error checking and handling?

    - by Google
    Lately I have been struggling to understand what the right amount of checking is and what the proper methods are. I have a few questions regarding this: What is the proper way to check for errors (bad input, bad states, etc.)? Is it better to explicitly check for errors, or to use functions like asserts which can be optimized out of your final code? I feel like explicit checking clutters a program with a lot of extra code which shouldn't be executed in most situations anyway -- not to mention that most errors end up in an abort/exit failure. Why clutter a function with explicit checks just to abort? I have looked into asserts versus explicit checking of errors and found little that truly explains when to do either. Most say 'use asserts to check for logic errors and use explicit checks for other failures.' This doesn't seem to get us very far though. Would we say this is feasible: malloc returning null - check explicitly; an API user passing odd input to functions - use asserts. Would this make me any better at error checking? What else can I do? I really want to improve and write better, 'professional' code.
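
    A minimal sketch of the quoted rule of thumb, written in Java terms (explicit checks, always on, for input you don't control; asserts, which can be disabled, for internal invariants that only a programming error could break); the RingBuffer class is invented for illustration:

        final class RingBuffer {
            private final int[] slots;
            private int size;

            RingBuffer(int capacity) {
                // External input: validate explicitly and fail with a clear message,
                // because callers outside our control can pass anything.
                if (capacity <= 0) {
                    throw new IllegalArgumentException("capacity must be positive: " + capacity);
                }
                this.slots = new int[capacity];
            }

            void add(int value) {
                if (size == slots.length) {
                    throw new IllegalStateException("buffer is full");
                }
                slots[size++] = value;
                // Internal invariant: only a bug in this class could break it,
                // so an assert (enabled with java -ea) is enough.
                assert size <= slots.length : "size exceeded capacity";
            }
        }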

    Read the article
