Search Results

Search found 4783 results on 192 pages for 'tests'.

Page 4 of 192

  • Executing NUnit Tests using the Visual Studio 2012 Test Runner

    - by David Paquette
    At a recent Visual Studio 2012 event at the Calgary .NET User Group, I was told that I could run my NUnit tests directly in Visual Studio 2012 without any special plugins. Naturally, I was very excited and I immediately tried running my NUnit tests. I was somewhat disappointed to see that the Test Runner did not discover any of my NUnit tests. Apparently, you do still need to install an extension that supports NUnit. Microsoft has completely re-written the Test Runner in Visual Studio 2012 and opened it up for anyone to write Test Adapters for any unit test framework (not just MSTest). Once the correct test adapters are installed, everything works great. Luckily, there are a good number of adapters already written. Here are some Test Adapters that you might find useful:
    - NUnit Test Adapter – this one is still in beta, but it does work with the official Visual Studio 2012 release
    - xUnit.net Test Adapter
    - Silverlight Unit Test Adapter
    - Chutzpah Test Adapter
    Overall, I still prefer the unit test runner in ReSharper, but this is a great new feature for those who might not have a ReSharper license.
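
    To illustrate what the runner picks up once an adapter is installed, here is a minimal NUnit fixture; nothing in the test itself is specific to the Visual Studio 2012 runner, and the namespace and class names are invented for the example.

        using NUnit.Framework;

        namespace Calculator.Tests   // hypothetical namespace for the example
        {
            [TestFixture]
            public class AdditionTests
            {
                [Test]
                public void Add_TwoPositiveNumbers_ReturnsSum()
                {
                    // discovery and execution are handled entirely by the
                    // installed NUnit test adapter, not by anything VS-specific
                    Assert.AreEqual(4, 2 + 2);
                }
            }
        }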

    Read the article

  • Automated Acceptance tests under specific constraints

    - by HH_
    This is a follow-up to my previous question, which was a bit general, so I'll be asking about a more precise situation. I want to automate acceptance testing on a web application. Briefly, this application allows the user to create contracts for subscribers, with two constraints:
    - You cannot create more than one contract for a subscriber.
    - Once a contract is created, it cannot be deleted (from the UI).
    Let's say TestCreate is a test case with tests for the normal creation of a contract. The constraints have introduced complexities to the testing process, mainly dependencies between test cases and test executions. Before we run TestCreate we need to make sure that the application is in a suitable state (the subscriber has no contract). If we run TestCreate twice, the second run will fail since the state of the application will have changed, so we need to revert back to the initial state (i.e. delete the contract), which is impossible to do from the UI. More generally, after each test case we should guarantee that the state is reverted, and since, in this case, it is impossible to do from the UI, how do you handle this? Possible solution: I thought about making a backup of the database in the state that I desire and, after each test case, running a script which drops the db and restores the backup. However, I find that too heavy to do for every single test case. In addition, what if some information is stored in files, or in multiple or inaccessible databases? My question: in this situation, what would an experienced tester do to write automated and maintainable tests? Thank you. More info: I'm trying to integrate the tests into a BDD framework, which I find to be a neat solution for test documentation and communication, but it does not solve this particular problem (it even makes it harder).
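
    For what it's worth, the backup/restore idea becomes much cheaper if the restore happens once per fixture rather than once per test. Below is a minimal sketch of that approach, assuming SQL Server, NUnit 2.x and an invented ShopDb database and backup path; it only illustrates the mechanics, not whether this trade-off is right for the application in question.

        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class TestCreate
        {
            // hypothetical connection string and backup location
            private const string MasterDb =
                @"Server=.\SQLEXPRESS;Database=master;Integrated Security=true";
            private const string BackupFile = @"C:\backups\ShopDb_clean.bak";

            [TestFixtureSetUp]   // restore once per fixture to keep the cost down
            public void RestoreCleanDatabase()
            {
                using (var connection = new SqlConnection(MasterDb))
                {
                    connection.Open();
                    string sql =
                        "ALTER DATABASE [ShopDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; " +
                        "RESTORE DATABASE [ShopDb] FROM DISK = '" + BackupFile + "' WITH REPLACE; " +
                        "ALTER DATABASE [ShopDb] SET MULTI_USER;";
                    using (var command = new SqlCommand(sql, connection))
                    {
                        command.ExecuteNonQuery();
                    }
                }
            }

            [Test]
            public void CreateContract_ForSubscriberWithoutContract_Succeeds()
            {
                // drive the UI here (e.g. via Selenium) against the known clean state
            }
        }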

    Read the article

  • How do I run all my PHPUnit tests?

    - by JJ
    I have a script called Script.php and tests for it in Tests/Script.php, but when I run phpunit Tests it does not execute any tests in my test file. How do I run all my tests with phpunit? PHPUnit 3.3.17, PHP 5.2.6-3ubuntu4.2, latest Ubuntu. Output:

        $ phpunit Tests
        PHPUnit 3.3.17 by Sebastian Bergmann.

        Time: 0 seconds

        OK (0 tests, 0 assertions)

    And here are my script and test files:

    Script.php

        <?php
        function returnsTrue() {
            return TRUE;
        }
        ?>

    Tests/Script.php

        <?php
        require_once 'PHPUnit/Framework.php';
        require_once 'Script.php';

        class TestingOne extends PHPUnit_Framework_TestCase
        {
            public function testTrue()
            {
                $this->assertEquals(TRUE, returnsTrue());
            }

            public function testFalse()
            {
                $this->assertEquals(FALSE, returnsTrue());
            }
        }

        class TestingTwo extends PHPUnit_Framework_TestCase
        {
            public function testTrue()
            {
                $this->assertEquals(TRUE, returnsTrue());
            }

            public function testFalse()
            {
                $this->assertEquals(FALSE, returnsTrue());
            }
        }
        ?>

    Read the article

  • Any way to separate unit tests from integration tests in VS2008?

    - by AngryHacker
    I have a project full of tests, unit and integration alike. Integration tests require that a pretty large database be present, so it's difficult to make it a part of the build process simply because of the time that it takes to re-initialize the database. Is there a way to somehow separate unit tests from integration tests and have the build server just run the unit tests? I see that there is an Ordered Unit test in VS2008, which allows you to pick and choose tests, but I can't make it just execute alone, without all the others. Is there a trick that I am missing? Or perhaps I could adorn the unit tests with an attribute? What are some of the approaches people are using? P.S. I know I could use mocking for integration tests (just to make them go faster) but then it wouldn't be a true integration test.
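
    One low-tech workaround, sketched below, is to give integration tests a base class that bails out unless the build server opts in. This assumes MSTest (VS2008's MSTest predates the TestCategory attribute) and an environment variable name invented for the example; it is only one possible approach, not the official mechanism.

        using System;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public abstract class IntegrationTestBase
        {
            [TestInitialize]
            public void SkipUnlessEnabled()
            {
                // RUN_INTEGRATION_TESTS is a made-up variable name for this sketch;
                // leave it unset on the build server and only unit tests do real work
                if (Environment.GetEnvironmentVariable("RUN_INTEGRATION_TESTS") == null)
                {
                    Assert.Inconclusive("Integration tests are disabled on this machine.");
                }
            }
        }

        [TestClass]
        public class ContractRepositoryIntegrationTests : IntegrationTestBase
        {
            [TestMethod]
            public void LoadsContractsFromRealDatabase()
            {
                // hits the large database only when explicitly enabled
            }
        }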

    Read the article

  • Unit tests - The benefit from unit tests with contract changes?

    - by Stefan Hendriks
    Recently I had an interesting discussion with a colleague about unit tests. We were discussing when maintaining unit tests becomes less productive, when your contracts change. Perhaps someone can enlighten me on how to approach this problem. Let me elaborate: let's say there is a class which does some nifty calculations. The contract says that it should calculate a number, or return -1 when it fails for some reason. I have contract tests that test that. And in all my other tests I stub this nifty calculator thingy. So now I change the contract: whenever it cannot calculate, it will throw a CannotCalculateException. My contract tests will fail, and I will fix them accordingly. But all my mocked/stubbed objects will still use the old contract rules. These tests will succeed, while they should not! The question that arises is: with this faith in unit testing, how much faith can be placed in such changes... The unit tests succeed, but bugs will occur when testing the application. The tests using this calculator will need to be fixed, which costs time, and it may even be stubbed/mocked in a lot of places... How do you think about this case? I never thought about it thoroughly. In my opinion, these changes to unit tests would be acceptable. If I did not use unit tests, I would also see such bugs arise within the test phase (caught by testers). Yet I am not confident enough to point out which will cost more time (or less). Any thoughts?
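
    To make the failure mode concrete, here is a small illustrative sketch (the ICalculator and ReportService names, the Moq stubbing library and the CannotCalculateException are all stand-ins invented for the example): the contract tests get updated for the new exception, but a consumer test still stubs the old -1 behaviour and keeps passing even though the production code path now throws.

        using System;
        using Moq;
        using NUnit.Framework;

        public class CannotCalculateException : Exception { }

        public interface ICalculator
        {
            // old contract: returns -1 on failure
            // new contract: throws CannotCalculateException on failure
            int Calculate(string input);
        }

        public class ReportService
        {
            private readonly ICalculator calculator;

            public ReportService(ICalculator calculator) { this.calculator = calculator; }

            public string Render(string input)
            {
                int value = calculator.Calculate(input);   // now throws in production
                return value == -1 ? "n/a" : value.ToString();
            }
        }

        [TestFixture]
        public class ReportServiceTests
        {
            [Test]
            public void Render_ShowsNotAvailable_WhenCalculationFails()
            {
                var calculator = new Mock<ICalculator>();
                // stale stub: still encodes the OLD contract, so this test keeps
                // passing although the real collaborator no longer returns -1
                calculator.Setup(c => c.Calculate("bad input")).Returns(-1);

                string report = new ReportService(calculator.Object).Render("bad input");

                Assert.AreEqual("n/a", report);
            }
        }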

    Read the article

  • Measuring Usability with Common Industry Format (CIF) Usability Tests

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience

    A User-centered Research and Design Process

    The Oracle Fusion Applications user experience was five years in the making. The development of this suite included an extensive and comprehensive user experience design process: ethnographic research, low-fidelity workflow prototyping, high-fidelity user interface (UI) prototyping, iterative formative usability testing, development feedback and iteration, and sales and customer evaluation throughout the design cycle. However, this process does not stop when our products are released. We conduct summative usability testing using the ISO 25062 Common Industry Format (CIF) for usability test reports as an organizational framework. CIF tests allow us to measure the overall usability of our released products. These studies provide benchmarks that allow for comparisons of a specific product release against previous versions of our product and against other products in the marketplace.

    What Is a CIF Usability Test?

    CIF refers to the internationally standardized method for reporting usability test findings used by the software industry. The CIF is based on a formal, lab-based test that is used to benchmark the usability of a product in terms of human performance and subjective data. The CIF was developed and is endorsed by more than 375 software customer and vendor organizations led by the National Institute of Standards and Technology (NIST), a US government entity. NIST sponsored the CIF through the American National Standards Institute (ANSI) and International Organization for Standardization (ISO) standards-making processes. Oracle played a key role in developing the CIF. The CIF report format and metrics are consistent with the ISO 9241-11 definition of usability: “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.” Our goal in conducting CIF tests is to measure performance and satisfaction of a representative sample of users on a set of core tasks and to help predict how usable a product will be with the larger population of customers.

    Why Do We Perform CIF Testing?

    The overarching purpose of the CIF for usability test reports is to promote incorporation of usability as part of the procurement decision-making process for interactive products. CIF provides a common format for vendors to report the methods and results of usability tests to customer organizations, and enables customers to compare the usability of our software to that of other suppliers. CIF also enables us to compare our current software with previous versions of our software.

    CIF Testing for Fusion Applications

    Oracle Fusion Applications comprises more than 100 modules in seven different product families. These modules encompass more than 400 task flows and 400 user roles. Due to resource constraints, we cannot perform comprehensive CIF testing across the entire product suite. Therefore, we had to develop meaningful inclusion criteria and work with other stakeholders across the applications development organization to prioritize product areas for testing. Ultimately, we want to test the product areas for which customers might be most interested in seeing CIF data. We also want to build credibility with customers; we need to be able to make the case to current and prospective customers that the product areas tested are representative of the product suite as a whole.

    Our goal is to test the top use cases for each product. The primary activity in the scoping process was to work with the individual product teams to identify the key products and business process task flows in each product to test. We prioritized these products and flows through a series of negotiations among the user experience managers, product strategy, and product management directors for each of the primary product families within the Oracle Fusion Applications suite (Human Capital Management, Supply Chain Management, Customer Relationship Management, Financials, Projects, and Procurement). The end result of the scoping exercise was a list of 47 proposed CIF tests for the Fusion Applications product suite.

    Figure 1. A participant completes tasks during a usability test in Oracle’s Usability Labs

    Fusion Supplier Portal CIF Test

    The first Fusion CIF test was completed on the Supplier Portal application in July of 2011. Fusion Supplier Portal is part of an integrated suite of Procurement applications that helps supplier companies manage orders, schedules, shipments, invoices, negotiations and payments. The user roles targeted for the usability study were Supplier Account Receivables Specialists and Supplier Sales Representatives, including both experienced and inexperienced users across a wide demographic range. The test specifically focused on the following functionality and features:
    - Manage payments – view payments
    - Manage invoices – view invoice status and create invoices
    - Manage account information – create new contact, review bank account information
    - Manage agreements – find and view agreement, upload agreement lines, confirm status of agreement lines upload
    - Manage purchase orders (PO) – view history of PO, request change to PO, find orders
    - Manage negotiations – respond to request for a quote, check the status of a negotiation response
    These product areas were selected to represent the most important subset of features and functionality of the flow, in terms of frequency and criticality of use by customers. A total of 20 users participated in the usability study. The results of the Supplier Portal evaluation were favorable and exceeded our expectations.

    Figure 2. Fusion Supplier Portal

    Next Studies

    We plan to conduct two Fusion CIF usability studies per product family over the next nine months. The next product to be tested will be Self-service Procurement. End users are currently being recruited to participate in this usability study, and the test sessions are scheduled to begin during the last week of November.

    Read the article

  • Adding unit tests to a legacy, plain C project

    - by Groo
    The title says it all. My company is reusing a legacy firmware project for a microcontroller device, written completely in plain C. There are parts which are obviously wrong and need changing, and coming from a C#/TDD background I don't like the idea of randomly refactoring stuff with no tests to assure us that functionality remains unchanged. Also, I've seen hard-to-find bugs introduced on many occasions through the slightest of changes (something I believe would be caught if regression testing were used). A lot of care needs to be taken to avoid these mistakes: it's hard to track a bunch of globals around the code. To summarize: How do you add unit tests to existing tightly coupled code before refactoring? What tools do you recommend? (Less important, but still nice to know.) I am not directly involved in writing this code (my responsibility is an app which will interact with the device in various ways), but it would be a shame if good programming principles were left behind when there was a chance they could be used.

    Read the article

  • RSpec + Selenium tests for .NET on Windows

    - by John
    I'm a Rails developer doing TDD on a Mac with RSpec, Capybara and Selenium WebDriver. Now I have been asked by my company to use this approach for a .NET on Windows environment. What is the best way of doing this? I could just install Ruby and use RSpec, Capybara and Selenium WebDriver for integration testing. But what about unit tests? I also looked at NSpec, but I'm not sure if I can combine that with Capybara or Selenium for integration tests. What would be a good approach here?

    Read the article

  • Run Tests in Folder

    - by Tomas Mysik
    Hi all, today we would like to show you another minor improvement we have prepared for NetBeans 7.2. Let's talk a little bit about testing. This minor improvement will be useful especially for users who have a lot of unit tests (that means all of us, right? ;) - just right-click on any folder underneath the Test Files node and you will notice the new action. The result is as expected - all the tests from the given folder are run. That's all for today; as always, please test it and report all the issues or enhancements you find in NetBeans BugZilla (component php, subcomponent PHPUnit).

    Read the article

  • Visual Studio Load Tests Virtual Users Simulation

    - by Eldar
    Hello, I'm currently working on writing a load testing application that takes advantage of Load Test in Visual Studio 2010. The load test will simulate 20 users on the same machine, and I need some data to be shared in-memory between all simulated users. I was surprised I couldn't find documentation answering the following question: What separates each virtual user's running context from the others? Does each virtual user run the tests in its own process? Maybe in its own app domain? Or just on its own thread? I need to know because if each user is running tests in its own process then the in-memory cache isn't shared and is created once per user instead of once for all of them, which is bad for me.
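
    One way to settle this empirically, whatever the documentation says, is to log the execution context from inside a test and poke at a static cache: if virtual users share a process/AppDomain, the static dictionary below is created once and its entry count grows across users; otherwise each context gets its own copy. This is only a sketch with invented names, assuming an MSTest unit test driven by the load test.

        using System;
        using System.Collections.Concurrent;
        using System.Diagnostics;
        using System.Threading;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class VirtualUserContextTests
        {
            // Shared only if virtual users share a process/AppDomain.
            private static readonly ConcurrentDictionary<string, string> SharedCache =
                new ConcurrentDictionary<string, string>();

            [TestMethod]
            public void LogExecutionContext()
            {
                SharedCache.TryAdd(Guid.NewGuid().ToString(), "some cached value");

                // Inspect these values in the test/trace output across virtual users.
                Trace.WriteLine(string.Format(
                    "Process={0} AppDomain={1} Thread={2} CacheEntries={3}",
                    Process.GetCurrentProcess().Id,
                    AppDomain.CurrentDomain.Id,
                    Thread.CurrentThread.ManagedThreadId,
                    SharedCache.Count));
            }
        }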

    Read the article

  • From 20,663 issues to 1 issue – style-copping C5.Tests

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/05/28/from-20663-issues-to-1-issuendashstyle-copping-c5.tests.aspx
    I recently became interested in the potential of the C5 Collections solution from http://www.itu.dk/research/c5/, however I was dismayed at the state of the code in the unit test project, so I set about fixing the 20,663 issues detected by StyleCop. The tools I used were the latest versions of:
    - My 64-bit development PC running Windows 8 Update with 8Gb RAM
    - Visual Studio 2013 Ultimate with SP2
    - ReSharper
    - GhostDoc Pro
    My first attempt had to be abandoned due to a collision of class names which broke one of the unit tests. So, being aware of this duplication of class names, I started again and planned to prepend the class names with the namespace name. In some cases I additionally prepended the item of the C5 collection that was being tested. So what was the condition of the code at the start? Besides the sprawl of C# code not written to StyleCop standard, there was:
    1) Placing of many classes within one physical file.
    2) Namespaces within namespaces that did not follow the project structure.
    3) As already mentioned, duplication of class names across namespaces.
    4) A copyright notice that sprawled but had to be preserved.
    5) Project sub-folders were all lower case instead of initial letter capitalised.
    The first step was to add a StyleCop heading, plus the original heading contained within a region, to every file. The next step was to run GhostDoc Pro using its “Document File” option on every file but not letting it replace the headers I had added. This brought the number of issues down to 18,192. I then went through each file collapsing each class and prepending names as appropriate. At each step, I saved the changes to my local Git. The next step was to move each class to its own file and to style-cop each file. ReSharper provides a very useful feature for doing this which also fixes missing “this.” and moves using statements inside the namespace. Some classes required minimal work whereas others required extensive work to reach the StyleCop standard. The unit tests were run at each split and when each class was completed. When all was done, one issue remained which I will need to submit to the StyleCop team for their advice (and possibly a fix to StyleCop). The updated solution has been made available at https://c5stylecopped.codeplex.com/releases/view/122785.

    Read the article

  • Run unittest in a Class

    - by chrissygormley
    Hello, I have a test suite to perform smoke tests. I have all my scripts stored in various classes, but when I try and run the test suite I can't seem to get it working if it is in a class. The code is below (a class to call the tests):

        from alltests import SmokeTests

        class CallTests(SmokeTests):
            def integration(self):
                self.suite()

        if __name__ == '__main__':
            run = CallTests()
            run.integration()

    And the test suite:

        class SmokeTests():
            def suite(self):
                # Function stores all the modules to be tested
                modules_to_test = ('external_sanity', 'internal_sanity')
                alltests = unittest.TestSuite()
                for module in map(__import__, modules_to_test):
                    alltests.addTest(unittest.findTestCases(module))
                return alltests

        if __name__ == '__main__':
            unittest.main(defaultTest='suite')

    So I can see how to call a normal function I have defined, but I'm finding it difficult calling in the suite. In one of the tests the suite is set up like so:

        class InternalSanityTestSuite(unittest.TestSuite):
            # Tests to be tested by test suite
            def makeInternalSanityTestSuite():
                suite = unittest.TestSuite()
                suite.addTest(TestInternalSanity("BasicInternalSanity"))
                suite.addTest(TestInternalSanity("VerifyInternalSanityTestFail"))
                return suite

        def suite():
            return unittest.makeSuite(TestInternalSanity)

    If I have def suite() inside the class SmokeTests the script executes but the tests don't run, but if I remove the class the tests run. I run this as a script and pass variables into the tests. I do not want to have to run the tests by os.system('python tests.py'). I was hoping to call the tests through the class I have like any other function. This needs to be called from a class, as the script that I'm calling it from is object oriented. If anyone can get the code to run using CallTests I would appreciate it a lot. Thanks for any help in advance.

    Read the article

  • How exactly do MbUnit's [Parallelizable] and DegreeOfParallelism work?

    - by BenA
    I thought I understood how MbUnit's parallel test execution worked, but the behaviour I'm seeing differs sufficiently from my expectations that I suspect I'm missing something! I have a set of UI tests that I wish to run concurrently. All of the tests are in the same assembly, split across three different namespaces. All of the tests are completely independent of one another, so I'd like all of them to be eligible for parallel execution. To that end, I put the following in the AssemblyInfo.cs:

        [assembly: DegreeOfParallelism(8)]
        [assembly: Parallelizable(TestScope.All)]

    My understanding was that this combination of assembly attributes should cause all of the tests to be considered [Parallelizable], and that the test runner should use 8 threads during execution. My individual tests are marked with the [Test] attribute, and nothing else. None of them are data-driven. However, what I actually see is at most 5-6 threads being used, meaning that my test runs are taking longer than they should. Am I missing something? Do I need to do anything else to ensure that all of my 8 threads are being used by the runner? N.B. The behaviour is the same irrespective of which runner I use. The GUI, command line and TD.Net runners all behave the same as described above, again leading me to think I've missed something. EDIT: As pointed out in the comments, I'm running v3.1 of MbUnit (update 2 build 397). The documentation suggests that the assembly-level [Parallelizable] attribute is available, but it does also seem to reference v3.2 of the framework despite that not yet being available. EDIT 2: To further clarify, the structure of my assembly is as follows:

        assembly
          namespace
            fixture
              tests (each carrying only the [Test] attribute)
            fixture
              tests (each carrying only the [Test] attribute)
          namespace
            fixture
              tests (each carrying only the [Test] attribute)
            fixture
              tests (each carrying only the [Test] attribute)
          namespace
            fixture
              tests (each carrying only the [Test] attribute)
            fixture
              tests (each carrying only the [Test] attribute)
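
    One experiment that might be worth a try (purely a sketch, and assuming your MbUnit build honours fixture- and method-level attributes even if full assembly-level support only arrives in v3.2) is to opt the fixtures in explicitly instead of relying on the assembly attribute:

        using MbUnit.Framework;

        [TestFixture]
        [Parallelizable]        // opt this fixture in explicitly
        public class LoginPageTests
        {
            [Test]
            [Parallelizable]    // and, if necessary, each test as well
            public void UserCanLogIn()
            {
                // UI test body
            }
        }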

    Read the article

  • Run Your Tests With Any NUnit Version

    - by Alois Kraus
    I always thought that the NUnit test runners and the test assemblies needed to reference the same NUnit.Framework version. I wanted to be able to run my test assemblies with the newest GUI runner (currently 2.5.3). Ok, so all I need to do is to reference both NUnit versions: the newest one and the official one for the current project. There is a nice article from Kent Bogart online on how to reference the same assembly multiple times with different versions. The magic works by referencing one NUnit assembly with an alias which prefixes all types inside it. Then I could decorate my tests with the TestFixture and Test attributes from both NUnit versions and everything worked fine, except that this was ugly. After playing around a little bit to make it simpler I found that I did not need to reference both NUnit.Framework assemblies. The test runners do not require the TestFixture and Test attributes in their specific version. That is really neat: since the test runners are instructed by attributes what to do in a declarative way, there is really no need to tie the runners to a specific version. At its core NUnit has this little method hidden to find matching TestFixtures and Tests:

        public bool CanBuildFrom(Type type)
        {
            if (!(!type.IsAbstract || type.IsSealed))
            {
                return false;
            }

            return Reflect.HasAttribute(type, "NUnit.Framework.TestFixtureAttribute", true) ||
                   Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestAttribute", true) ||
                   Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TestCaseAttribute", true) ||
                   Reflect.HasMethodWithAttribute(type, "NUnit.Framework.TheoryAttribute", true);
        }

    That is versioning and backwards compatibility at its best. I tell NUnit what to do by decorating my test classes with NUnit attributes and the runner executes my intent without the need to bind me to a specific version. The contract between NUnit versions is actually a bit more complex (think of AssertExceptions), but this is also handled nicely: instead of using the concrete type, the runner simply checks the caught exception type by string. What can we learn from this? Versioning can be easy if the contract is small and the users of your library use it in a declarative way (attributes). Everything beyond that will force you to reference several versions of the same assembly, with all its consequences. Type equality is lost between versions so none of your casts will work. That means that you cannot simply use IBigInterface in two versions. You will need a wrapper to call the correct versioned one. To get out of this mess you can use one (and only one) version-agnostic driver to encapsulate your business logic from the concrete versions. This is of course more work but, as NUnit shows, it can be easy. Simplicity is therefore not only a nice thing to have but also requirement number one if you intend to make things more complex in version two and want to support any version (older and newer). Any interaction model above easy will not be maintainable. There are different approaches to versioning. Below are my own personal observations of how versioning works within the .NET Framework and NUnit.

    Versioning Models

    1. Bug Fixing and New Isolated Features

    When you only need to fix bugs there is no need to break anything. This is especially true when you have a big API surface. Microsoft did this with the .NET Framework 3.0, which left the CLR as is but delivered new assemblies for the features WPF, WCF and Windows Workflow Foundation. Their basic model was that the .NET 2.0 assemblies were declared as red assemblies which must not change (well, mostly - each change was carefully reviewed to minimize the risk of breaking changes as much as possible), whereas the new green assemblies of .NET 3.0/3.5 did not have such obligations, since they implemented new, unrelated features which did not have any impact on the red assemblies. This is a versioning strategy aimed at maximum compatibility and the delivery of new unrelated features. If you have a big API surface you should strive hard to do the same or you will break your customers' code with every release.

    2. New Breaking Features

    There are times when really new things need to be added to an existing product. The .NET Framework 4.0 did change the CLR in many ways, which caused subtly different behavior although the APIs remained largely unchanged. Sometimes it is possible to simply recompile an application to make it work (e.g. a changed method signature void Func() –> bool Func()) but behavioral changes need much more thought and cannot be automated. To minimize the impact, .NET 2.0/3.0/3.5 applications will not automatically use the .NET 4.0 runtime when installed but will keep using the “old” one. What is interesting is that a side-by-side execution model of both CLR versions (2 and 4) within one process is possible. Key to success was total isolation. You will have 2 GCs, 2 JIT compilers, 2 finalizer threads within one process. The two .NET runtimes cannot talk to each other (except via the usual IPC mechanisms). Both runtimes share nothing and run independently within the same process. This enables Explorer plugins written for the CLR 2.0 to work even when a CLR 4 plugin is already running inside the Explorer process. The price for isolation is an increased memory footprint because everything is loaded and running two times.

    3. New Non-Breaking Features

    It really depends where you break things. NUnit has evolved and many different Assert, Expect… methods have been added. These changes are all localized in the NUnit.Framework assembly, which can be easily extended. As long as the test execution contract (TestFixture, Test, AssertException) remains stable it is possible to write test executors which can run tests written for NUnit 1.0, because the execution contract has not changed. It is possible to write software which executes other components in a version-independent way, but this is only feasible if the interaction model is relatively simple.

    Versioning software is hard and it looks like it will remain hard, since you suddenly work in a severely constrained environment when you try to innovate and keep everything backwards compatible at the same time. These are contradictory goals and do not play well together. The easiest way out of this is to carefully watch what your customers are doing with your software. Minimizing the impact is much easier when you do not need to guess how many people will be broken when this or that is removed.
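
    As a side note, the "check for attributes by string" trick that CanBuildFrom relies on can be reproduced with plain reflection. The sketch below is not part of NUnit or any other framework (Reflect in the snippet above is NUnit's own internal helper); it just shows the general shape of a version-agnostic attribute probe.

        using System.Linq;
        using System.Reflection;

        static class AttributeProbe
        {
            // Detects an attribute by its full type name instead of via a Type
            // reference, so the probing code never binds to a specific
            // framework version at compile time.
            public static bool HasAttribute(MemberInfo member, string fullTypeName)
            {
                return member.GetCustomAttributes(true)
                             .Any(a => a.GetType().FullName == fullTypeName);
            }
        }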

    Read the article

  • How can I decide what to test manually, and what to trust to automated tests?

    - by bhazzard
    We have a ton of developers and only a few QA folks. The developers have been getting more involved in QA throughout the development process by writing automated tests, but our QA practices are mostly manual. What I'd love is if our development practices were BDD and TDD and we grew a robust test suite. The question is: while building such a testing suite, how can we decide what we can trust to the tests, and what we should continue testing manually?

    Read the article

  • Rescue overdue offshore projects and convince management to use automated tests

    - by oazabir
    I have published two articles on CodeProject recently. One is a story about an offshore project that was two months overdue: my friend who runs it was paying the team from his own pocket and drowning in an ever increasing number of change requests, and the article describes how we brainstormed together to come out of that situation. Tips and Tricks to rescue overdue projects. The next one is about convincing management to go for automated tests and give developers extra time per sprint, at the cost of reduced productivity for a couple of sprints. It's hard to negotiate this even with dev leads, let alone managers. Whenever you tell them there are going to be fewer features/bug fixes delivered for the next 3 or 4 sprints because we want to automate the tests and reduce manual QA effort, everyone gets furious and kicks you out of the meeting. Especially in a startup, where every sprint is jam-packed with new features and priority bug fixes to satisfy various stakeholders, including the VCs, it's very hard to communicate the benefits of automated tests across the board. Let me tell you a story from one of my startups where I had the pleasure of arguing this and came out victorious. How to convince developers and management to use automated test instead of manual test. If you like these, please vote for me!

    Read the article

  • smartctl not actually running self tests?

    - by canzar
    I want to run the smartctl self tests to check the health of the drives in my RAID array (PERC 5/i). The array is on sda and comprises six drives. I can check the status using:

        sudo smartctl /dev/sda -d megaraid,0 -a

    And I see that SMART is available and enabled on all the drives. I have tried to run self tests using:

        sudo smartctl /dev/sda -d megaraid,0 -t short

    and

        sudo smartctl /dev/sda -d megaraid,0 -t long

    I have also tried it on all of the drives 0-5. No matter what I try, when I run:

        sudo smartctl /dev/sda -d megaraid,0 -l selftest

    I always get the same result, which seems to always report that I have never run a self test:

        /dev/sda [megaraid_disk_00] [SAT]: Device open changed type from 'megaraid' to 'sat'
        === START OF READ SMART DATA SECTION ===
        SMART Self-test log structure revision number 1
        No self-tests have been logged. [To run self-tests, use: smartctl -t]

    From what I read, I should have no problem running the short and long self tests on the array while it is mounted. Does anyone else have experience running these tests on a PERC 5/i RAID array who could lend some insight into what is causing the problem? (smartmontools release 5.40 dated 2009-12-09 at 21:00:32 UTC)

    Read the article

  • Do unit tests sometimes break encapsulation?

    - by user1288851
    I very often hear the following: "If you want to test private methods, you'd better put that in another class and expose it." While sometimes that's the case and we have a hiding concept inside our class, other times you end up with classes that have the same attributes (or, worse, every attribute of one class becomes an argument on a method in the other class) and that expose functionality which is, in fact, implementation detail. Especially with TDD, when you refactor a class with public methods out of a previously tested class, that class is now part of your interface, but has no tests of its own (since you refactored it, and it is an implementation detail). Now, I may be missing an obvious better answer, but if my answer is the "correct" one, that means that sometimes writing unit tests can break encapsulation and divide the same responsibility into different classes. A simple example would be testing a setter method when a getter is not actually needed for anything in the real code. Please, when answering, don't provide simple answers to specific cases I may have written. Rather, try to explain more of the generic case and theoretical approach. And this is not language specific. Thanks in advance. EDIT: The answer given by Matthew Flynn was really insightful, but it didn't quite answer the question. Although he made the fair point that you either don't test private methods or extract them because they really are another concern and responsibility (or at least that was what I could understand from his answer), I think there are situations where unit testing private methods is useful. My primary example is when you have a class that has one responsibility but the output (or input) it produces (or takes) is just too complex. For example, a hashing function. There's no good way to break a hashing function apart and maintain cohesion and encapsulation. However, testing a hashing function can be really tough, since you would need to calculate the hash by hand (you can't use code calculation to test code calculation!) and test multiple cases where the hash changes. In that way (and this may be a question worth a topic of its own) I think private method testing is the best way to handle it. Now, I'm not sure if I should ask another question, or ask it here, but is there any better way to test such complex output (or input)? OBS: Please, if you think I should ask another question on that topic, leave a comment. :)
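
    As an aside on the hashing example, the usual way around "calculating by hand" is a known-answer test against published test vectors. Below is a minimal sketch, assuming NUnit and using the built-in SHA-256 implementation purely as a stand-in for the hashing function in question.

        using System;
        using System.Security.Cryptography;
        using System.Text;
        using NUnit.Framework;

        [TestFixture]
        public class HashingKnownAnswerTests
        {
            [Test]
            public void Sha256_OfAbc_MatchesPublishedTestVector()
            {
                // Expected value taken from the published NIST test vector for "abc".
                const string expected =
                    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad";

                using (var sha = SHA256.Create())
                {
                    byte[] digest = sha.ComputeHash(Encoding.ASCII.GetBytes("abc"));
                    string actual = BitConverter.ToString(digest)
                                                .Replace("-", string.Empty)
                                                .ToLowerInvariant();

                    Assert.AreEqual(expected, actual);
                }
            }
        }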

    Read the article

  • Writing the tests for FluentPath

    - by Latest Microsoft Blogs
    Writing the tests for FluentPath is a challenge. The library is a wrapper around a legacy API (System.IO) that wasn’t designed to be easily testable. If it were more testable, the sensible testing methodology would be to tell System.IO to act against… (read more)

    Read the article

  • New OBI 11g on-line Sales & Pre-sales Partner Assessment Tests

    - by Mike.Hallett(at)Oracle-BI&EPM
    Our OBI partners can now update their specialisation certification to the latest product version 11g for OBI: until recently, the accreditation had examined skills for OBI 10g.
    New OPN on-line Sales & Pre-sales Assessment Tests Available:
    - Oracle Business Intelligence Foundation Suite 11g Sales Specialist
    - Oracle Business Intelligence Foundation Suite 11g PreSales Specialist
    - Oracle Business Intelligence Foundation Suite 11g Support Specialist

    Read the article

  • New OBI 11G Online Sales & Pre-Sales Partner Assessment Tests

    - by Cinzia Mascanzoni
    OBI partners can now update their specialization certification to the latest product version 11g for OBI: until recently, the accreditation had examined skills for OBI 10g.
    New OPN on-line Sales & Pre-sales Assessment Tests Available:
    - Oracle Business Intelligence Foundation Suite 11g Sales Specialist
    - Oracle Business Intelligence Foundation Suite 11g PreSales Specialist
    - Oracle Business Intelligence Foundation Suite 11g Support Specialist
    Read more on Specialization.

    Read the article

  • Converting NUnit tests to MSUnit.

    - by TATWORTH
    I created the MSTest project by creating a new class library project and copying the test classes to it. I then followed the instructions in the following posts:
    http://social.msdn.microsoft.com/Forums/en/vststest/thread/eeb42224-bc1f-476d-98b4-93d0daf44aad
    http://dangerz.blogspot.co.uk/2012/01/converting-nunit-to-mstest.html
    However, I did not need to add the GUID fix as I used ReSharper to run both sets of tests.
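
    For anyone attempting the same conversion, most of the mechanical work is usually the attribute renaming. The sketch below shows the typical before/after for a simple fixture, as two separate files; the OrderTotalTests and Order names are invented for the example, and it assumes only the standard attribute sets are in play (nothing NUnit-specific such as [TestCase]).

        // Before: NUnit
        using NUnit.Framework;

        [TestFixture]
        public class OrderTotalTests
        {
            [SetUp]
            public void Init() { /* per-test setup */ }

            [Test]
            public void Total_IsZero_ForEmptyOrder()
            {
                Assert.AreEqual(0m, new Order().Total);
            }
        }

        // After: MSTest
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class OrderTotalTests
        {
            [TestInitialize]
            public void Init() { /* per-test setup */ }

            [TestMethod]
            public void Total_IsZero_ForEmptyOrder()
            {
                Assert.AreEqual(0m, new Order().Total);
            }
        }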

    Read the article

  • Should selenium tests be written in imperative style?

    - by Amogh Talpallikar
    Is an automation tester supposed to know OOP concepts and design patterns in order to write tests in a way where changes and code re-use are possible? For example, I pick up Java to write Cucumber step definitions that instruct a Selenium WebDriver. Should I be using a lot of inheritance, interfaces, delegation etc. to make life easier, or would that be overly complicated for something that should just be line-by-line instructions?
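
    For what it's worth, the usual middle ground is the Page Object pattern: step definitions stay close to line-by-line instructions, while element lookups and interactions live in one reusable class. Below is a minimal sketch written in C# against the Selenium WebDriver bindings purely for illustration (the page, URL and element IDs are invented); the same shape applies to Java step definitions.

        using OpenQA.Selenium;

        // Page object: the only place that knows how the login page is built.
        public class LoginPage
        {
            private readonly IWebDriver driver;

            public LoginPage(IWebDriver driver)
            {
                this.driver = driver;
            }

            public void Open()
            {
                driver.Navigate().GoToUrl("https://example.test/login");
            }

            public void LogInAs(string user, string password)
            {
                driver.FindElement(By.Id("username")).SendKeys(user);
                driver.FindElement(By.Id("password")).SendKeys(password);
                driver.FindElement(By.Id("login-button")).Click();
            }
        }

        // A step definition (or test) then reads like plain instructions:
        //     var page = new LoginPage(driver);
        //     page.Open();
        //     page.LogInAs("alice", "secret");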

    Read the article

  • Introducing a (new) test method to a team

    - by Jon List
    A couple of months ago I was hired in a new job (I'm fresh out of my Masters in software engineering). The company mainly consists of ERP consultants, but I was hired into their fairly small web department (6 developers); our main task is ERP/e-commerce integration (ERP-integrated web shops). The department is growing, and recently my manager asked me to start thinking about introducing tests to the team. I love a challenge, but frankly I'm a bit scared (I'm the least experienced member of the team). Currently the method of testing is clicking around in the web shop and asking the customer if the products are there, if they look okay, and if orders are posted correctly to the ERP. We are getting a lot of support cases on previous projects, where a customer or a customer's customer has run into errors, which - I suppose - is why my manager wants more structured testing. Off the top of my head, I thought of some (obvious?) improvements, like looking at the requirement specification, having an issue tracker, enabling team members to register their time on a "tests" line on the budget, and circulating tasks amongst members of the team. But as I see it we have three main challenges:
    1. General website testing (JavaScript, C#, ASP.NET and CMS integration tests)
    2. (Live) ERP integration testing (customers rarely want to pay for test environments)
    3. Adopting a method in the team
    I like the responsibility, but I am afraid that I'm in a little over my head. I expect that my manager expects me to set up some kind of workshop for the team where I present some techniques and ideas and where we (the team) can find some solutions together. What I learned in school was mostly unit testing and program verification, not so much testing across multiple systems and applications. What I'm looking for here is references/advice/pointers/anecdotes; anything that might help me get smarter and improve the current method of my team. Thanks!! (TL;DR: read the bold parts)

    Read the article

  • JUnit: splitting integration tests and unit tests

    - by jeff porter
    Hello all, I've inherited a load of JUnit tests, but these tests (apart from most not working) are a mixture of actual unit tests and integration tests (requiring external systems, a DB, etc.). So I'm trying to think of a way to actually separate them out, so that I can run the unit tests nice and quickly and the integration tests after that. The options are:
    1: Split them into separate directories.
    2: Move to JUnit 4 and annotate the classes to separate them.
    3: Use a file naming convention to tell what a class is, i.e. AdapterATest and AdapterAIntegrationTest.
    Option 3 has the issue that Eclipse has the option to "Run all tests in the selected project/package or folder", so it would make it very hard to just run the integration tests. Option 2 runs the risk that developers might start writing integration tests in unit test classes and it just gets messy. Option 1 seems like the neatest solution, but my gut says there must be a better solution out there. So that is my question: how do you lot break apart integration tests and proper unit tests?

    Read the article
