Search Results

Search found 53818 results on 2153 pages for 'system testing'.

  • Testing of visualization projects

    - by paxRoman
    We develop small to large visualization projects for different tasks and industries, and sometimes, while rewriting them a couple of times in the process, we hit walls because we discover that we need to add a lot of code to support new requirements. We have now established a design process that seems to work well (at least we have reduced the development time for each new project quite a bit), but we're still left scratching our heads over this question: what exactly should we test when testing visualizations? Whether everything we want to explore is on the screen (bounded visualizations)? Whether the data is valid (that's one of the nice things about visualizations: you can spot errors in your datasets)? Usability? User interaction? Code quality? I can tell you for sure that a simple check of the code quality is certainly not enough! Is there a classic paper or book about how to test visualizations? Also, do you happen to know of classic design patterns for visualizations (apart from the obvious ones like Pub-Sub)?
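
    As a minimal sketch of one testable property (using hypothetical Chart, Mark and TestData types, not an API from the question): in a bounded visualization, every mark derived from the dataset should be rendered, and rendered inside the viewport.

        using System.Linq;
        using NUnit.Framework;

        [TestFixture]
        public class BoundedVisualizationTests
        {
            [Test]
            public void Render_KeepsEveryDataPointInsideTheViewport()
            {
                var data = TestData.Load("smoke-dataset.csv");  // hypothetical fixture loader
                var chart = new Chart(width: 800, height: 600); // hypothetical chart type
                chart.Render(data);

                // No point silently dropped...
                Assert.AreEqual(data.Count, chart.Marks.Count);
                // ...and nothing drawn outside the bounded area.
                Assert.IsTrue(chart.Marks.All(m => chart.Viewport.Contains(m.Position)));
            }
        }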

  • Do unit tests sometimes break encapsulation?

    - by user1288851
    I very often hear the following: "If you want to test private methods, you'd better put them in another class and expose them." While sometimes that's the case and we have a hidden concept inside our class, at other times you end up with classes that have the same attributes (or, worse, every attribute of one class becomes an argument on a method of the other class) and that expose functionality which is, in fact, an implementation detail. Especially in TDD, when you refactor a class with public methods out of a previously tested class, that new class is now part of your interface but has no tests of its own (since you refactored it, and it is an implementation detail).

    Now, I may not be seeing an obvious better answer, but if my answer is "correct", that means that sometimes writing unit tests can break encapsulation and divide the same responsibility into different classes. A simple example would be testing a setter method when a getter is not actually needed for anything in the real code. Please, when answering, don't provide simple answers to the specific cases I may have written; rather, try to explain the generic case and the theoretical approach. And this is not language-specific. Thanks in advance.

    EDIT: The answer given by Matthew Flynn was really insightful, but didn't quite answer the question. Although he made the fair point that you either don't test private methods or extract them because they really are a separate concern and responsibility (or at least that is what I could understand from his answer), I think there are situations where unit testing private methods is useful. My primary example is when you have a class that has one responsibility, but the output (or input) it produces (takes) is just too complex. For example, a hashing function: there's no good way to break a hashing function apart while maintaining cohesion and encapsulation, yet testing one can be really tough, since you would need to calculate the hash by hand (you can't use code calculation to test code calculation!) and test multiple cases where the hash changes. In such cases (and this may be a question worth a topic of its own) I think private-method testing is the best way to handle it. Now, I'm not sure if I should ask another question or ask it here, but is there any better way to test such complex output (input)?

    OBS: Please, if you think I should ask another question on that topic, leave a comment. :)
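
    A hedged sketch of one C#-specific middle ground (not something from the question itself): keep the helper internal rather than public, grant only the test assembly access via InternalsVisibleTo, and use published test vectors as the oracle instead of hand calculation. The assembly name and the FNV-1a helper below are illustrative.

        // In the production assembly's AssemblyInfo.cs: only the named test
        // assembly (a placeholder name here) may see internal members.
        using System.Runtime.CompilerServices;
        [assembly: InternalsVisibleTo("MyApp.Tests")]

        // The hash stays internal instead of being promoted to public API.
        internal static class Hashing
        {
            internal static uint Fnv1a(byte[] data)
            {
                uint hash = 2166136261;          // FNV-1a 32-bit offset basis
                foreach (byte b in data)
                {
                    hash ^= b;
                    hash *= 16777619;            // FNV-1a 32-bit prime
                }
                return hash;
            }
        }

        // In MyApp.Tests: published reference vectors replace hand calculation.
        [TestFixture]
        public class HashingTests
        {
            [Test]
            public void Fnv1a_MatchesPublishedVector()
            {
                // 32-bit FNV-1a of "a", per the published reference vectors.
                Assert.AreEqual(0xE40C292Cu,
                    Hashing.Fnv1a(System.Text.Encoding.ASCII.GetBytes("a")));
            }
        }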

  • Should adapters or wrappers be unit tested?

    - by m3th0dman
    Suppose that I have a class that implements some logic:

        public class MyLogicImpl implements MyLogic {
            public void myLogicMethod() {
                // my logic here
            }
        }

    and somewhere else a test class:

        public class MyLogicImplTest {
            @Test
            public void testMyLogicMethod() {
                // test my logic
            }
        }

    I also have:

        @WebService
        public class MyWebServices {
            @Inject
            private MyLogic myLogic;

            @WebMethod
            public void myLogicWebMethod() {
                myLogic.myLogicMethod();
            }
        }

    Should there be a unit test for myLogicWebMethod, or should the testing for it be handled in integration testing?

  • Unit Testing with NUnit and Moles Redux

    - by João Angelo
    Almost two years ago, when Moles was still being packaged alongside Pex, I wrote a post on how to run NUnit tests supporting moled types. A lot has changed since then: Moles is now distributed independently of Pex, but it maintains support for integration with NUnit and other testing frameworks. For NUnit, the support is provided by an addin class library (Microsoft.Moles.NUnit.dll) that you need to reference in your test project so that you can decorate your tests with the MoledAttribute. The addin DLL must also be placed in the addins folder inside the NUnit installation directory.

    There is, however, a downside: since Moles and NUnit follow different release cycles and the addin DLL must be built against a specific NUnit version, you may find that the release included with the latest version of Moles does not work with your version of NUnit. Fortunately, the code for building the NUnit addin is supplied in an archive (moles.samples.zip) that can be found in the Documentation folder inside the Moles installation directory. By rebuilding the addin against your specific version of NUnit you are able to support any version.

    Note also that in Moles 0.94.51023.0 the addin code did not support the use of TestCaseAttribute in your moled tests. If you need this support, you only have to make a couple of changes.

    Change the ITestDecorator.Decorate method in the MolesAddin class:

        Test ITestDecorator.Decorate(Test test, MemberInfo member)
        {
            SafeDebug.AssumeNotNull(test, "test");
            SafeDebug.AssumeNotNull(member, "member");

            bool isTestFixture = true;
            isTestFixture &= test.IsSuite;
            isTestFixture &= test.FixtureType != null;

            bool hasMoledAttribute = true;
            hasMoledAttribute &= !SafeArray.IsNullOrEmpty(
                member.GetCustomAttributes(typeof(MoledAttribute), false));

            if (!isTestFixture && hasMoledAttribute)
            {
                return new MoledTest(test);
            }

            return test;
        }

    Change the Tests property in the MoledTest class:

        public override System.Collections.IList Tests
        {
            get
            {
                if (this.test.Tests == null)
                {
                    return null;
                }

                var moled = new List<Test>(this.test.Tests.Count);

                foreach (var test in this.test.Tests)
                {
                    moled.Add(new MoledTest((Test)test));
                }

                return moled;
            }
        }

    Disclaimer: I only tested this implementation against NUnit version 2.5.10.11092.

    Finally, you just need to run the NUnit console runner through the Moles runner. A quick example follows:

        moles.runner.exe [Tests.dll] /r:nunit-console.exe /x86 /args:[NUnitArgument1] /args:[NUnitArgument2]

  • Large sparse (stiff) ODE system needed for testing

    - by macydanim
    I hope this is the right place for this question. I have been working on a sparse implicit solver for stiff ODEs and have finished the code so far. I have tested the solver with the Van der Pol equation and with another stiff problem, which is of dimension 4, but to perform better tests I am searching for a bigger system. I'm thinking of something of order N = 100...1000, if possible stiff and sparse. Does anybody have an example I could use? I really don't know where to search.
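
    A common choice for this kind of benchmarking (a sketch, not an answer taken from this thread) is the method-of-lines discretization of the 1-D heat equation u_t = u_xx: the dimension N is a free parameter, the Jacobian is tridiagonal (so sparse), and the stiffness grows like N^2. A minimal right-hand side, written here in C# for concreteness:

        // RHS of y_i' = (y_{i-1} - 2*y_i + y_{i+1}) / h^2 on a uniform grid
        // with zero Dirichlet boundaries; N = y.Length is arbitrary.
        static double[] HeatEquationRhs(double[] y)
        {
            int n = y.Length;
            double h = 1.0 / (n + 1);                       // grid spacing
            var dydt = new double[n];
            for (int i = 0; i < n; i++)
            {
                double left  = i > 0     ? y[i - 1] : 0.0;  // boundary u(0) = 0
                double right = i < n - 1 ? y[i + 1] : 0.0;  // boundary u(1) = 0
                dydt[i] = (left - 2.0 * y[i] + right) / (h * h);
            }
            return dydt;
        }

    Reaction-diffusion variants of the same construction (adding a nonlinear source term per grid point) give non-trivial dynamics while keeping the sparsity pattern.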

  • SceneManagers as systems in entity system or as a core class used by a system?

    - by Hatoru Hansou
    It seems entity systems are really popular here. Links posted by other users convinced me of the power of such systems, and I decided to try one (well, that and my original code getting messy). In my project I originally had a SceneManager class that maintained the logic and structures needed to organize the scene (a QuadTree; it's a 2D game). Before rendering, I call selectRect(), pass the x,y of the camera and the width and height of the screen, and obtain a minimized list containing only visible entities, ordered from back to front.

    Now with systems, in my first attempt my Render system required all the entities it should handle to be added to it. This may sound like the correct approach, but I realized it was not efficient. Trying to optimize it, I reused the SceneManager class internally in the Renderer system, but then I realized I needed methods such as selectRect() in other systems too (AI principally), and made the SceneManager accessible globally again. Currently I have converted SceneManager into a system, and ended up with the following interface (only relevant methods):

        /// Base system interface
        class System {
        public:
            virtual void tick (double delta_time) = 0;
            // (methods to add and remove entities)
        };

        typedef std::vector<Entity*> EntitiesVector;

        /// Specialized system interface to allow querying the scene
        class SceneManager: public System {
        public:
            virtual EntitiesVector& cull () = 0;
            /// Sets the entity to be used as the camera and replaces previous ones.
            virtual void setCamera (Entity* entity) = 0;
        };

        class SceneRenderer // Not a system
        {
            virtual void render (EntitiesVector& entities) = 0;
        };

    Also, I could not work out how to convert renderers to systems. My game separates logic updates from screen updates; my main class has a tick() method and a render() method that may not be called the same number of times. In my first attempt renderers were systems, but they were saved in a separate manager, updated only in render() and not in tick() like all the other systems. I realized that was silly, simply created a SceneRenderer interface, and gave up on converting them to systems, but that may be for another question.

    Then... something does not feel right, does it? If I understood correctly, a system should not depend on another system, or even count on another system exposing a specific interface. Each system should care only about its entities, or nodes (as an optimization, so they have direct references to the relevant components without having to constantly call the entity's component() or getComponent() method).

  • Web Form Testing [closed]

    - by Frank G.
    I created an application for a client that is along the lines of a ticket-tracking system, and I want to know if anyone knows of software that could beta test the web forms. Specifically, I am looking for something that can automatically populate/fill whatever forms are on the web page with generic data. The purpose is to randomly populate data and see whether I get any errors on the page when it is submitted, and also to see how the form's validation behaves. Does anyone know of anything that could do this?
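
    If no off-the-shelf tool turns up, a short script along these lines can fill every text input with random data and submit the form. This is a hedged sketch using the Selenium WebDriver C# bindings; the URL is a placeholder.

        using System;
        using OpenQA.Selenium;
        using OpenQA.Selenium.Firefox;

        class FormFuzzer
        {
            static void Main()
            {
                var rng = new Random();
                using (IWebDriver driver = new FirefoxDriver())
                {
                    driver.Navigate().GoToUrl("http://localhost/tickets/new"); // placeholder URL

                    // Fill every text box and textarea with random-ish data.
                    foreach (var input in driver.FindElements(
                        By.CssSelector("input[type=text], textarea")))
                    {
                        input.SendKeys("fuzz-" + rng.Next(100000));
                    }

                    driver.FindElement(By.CssSelector("form")).Submit();
                    Console.WriteLine(driver.Title);   // eyeball the result page
                }
            }
        }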

  • FormatException with IsolatedStorageSettings

    - by Jurgen Camilleri
    I have a problem when serializing a Dictionary<string, Person> to IsolatedStorageSettings. I'm doing the following:

        public Dictionary<string, Person> Names = new Dictionary<string, Person>();

        if (!IsolatedStorageSettings.ApplicationSettings.Contains("Names"))
        {
            // Add to dictionary
            Names.Add("key", new Person(false,
                new System.Device.Location.GeoCoordinate(0, 0),
                new List<GeoCoordinate>()
                {
                    new GeoCoordinate(35.8974, 14.5099),
                    new GeoCoordinate(35.8974, 14.5099),
                    new GeoCoordinate(35.8973, 14.5100),
                    new GeoCoordinate(35.8973, 14.5099)
                }));

            // Serialize dictionary to IsolatedStorage
            IsolatedStorageSettings.ApplicationSettings.Add("Names", Names);
            IsolatedStorageSettings.ApplicationSettings.Save();
        }

    Here is my Person class:

        [DataContract]
        public class Person
        {
            [DataMember] public bool Unlocked { get; set; }
            [DataMember] public GeoCoordinate Location { get; set; }
            [DataMember] public List<GeoCoordinate> Bounds { get; set; }

            public Person(bool unlocked, GeoCoordinate location, List<GeoCoordinate> bounds)
            {
                this.Unlocked = unlocked;
                this.Location = location;
                this.Bounds = bounds;
            }
        }

    The code works the first time; however, on the second run I get a System.FormatException at the if condition. Any help would be highly appreciated, thanks.

    P.S.: I tried an IsolatedStorageSettings.ApplicationSettings.Clear(), but the call to Clear also gives a FormatException.

    I have found something new: the exception occurs twenty-five times, or at least that's how many times it shows up in the Output window. However, after that the data is deserialized perfectly. Should I be worried about the exceptions if they do not stop the execution of the program?

    EDIT: Here's the call stack when the exception occurs (the settings reload walks down through DataContract deserialization into double.Parse, which is what throws; argument lists and byte offsets are trimmed for readability):

        mscorlib.dll!double.Parse(string s, NumberStyles style, IFormatProvider provider)
        System.Xml.dll!System.Xml.XmlConvert.ToDouble(string s)
        System.Xml.dll!System.Xml.XmlReader.ReadContentAsDouble()
        System.Runtime.Serialization.dll!System.Xml.XmlDictionaryReader.XmlWrappedReader.ReadContentAsDouble()
        System.Runtime.Serialization.dll!System.Xml.XmlDictionaryReader.ReadElementContentAsDouble()
        System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlReaderDelegator.ReadElementContentAsDouble()
        mscorlib.dll!System.Reflection.RuntimeMethodInfo.InternalInvoke(...)
        mscorlib.dll!System.Reflection.RuntimeMethodInfo.InternalInvoke(...)
        mscorlib.dll!System.Reflection.MethodBase.Invoke(object obj, object[] parameters)
        System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlFormatReader.ReadValue(...)
        System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlFormatReader.ReadMemberAtMemberIndex(...)
        System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlFormatReader.ReadClass(...)
        System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlFormatReader.Deserialize(...)
        System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlFormatReader.InitializeCallStack(...)
        System.Runtime.Serialization.dll!System.Runtime.Serialization.CollectionDataContract.ReadXmlValue(...)
        System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlObjectSerializerReadContext.ReadDataContractValue(...)
        System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlObjectSerializerReadContext.InternalDeserialize(...)
        System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlObjectSerializerReadContext.InternalDeserialize(...)
        System.Runtime.Serialization.dll!System.Runtime.Serialization.DataContractSerializer.InternalReadObject(...)
        System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlObjectSerializer.ReadObjectHandleExceptions(...)
        System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlObjectSerializer.ReadObject(XmlDictionaryReader reader)
        System.Runtime.Serialization.dll!System.Runtime.Serialization.XmlObjectSerializer.ReadObject(System.IO.Stream stream)
        System.Windows.dll!System.IO.IsolatedStorage.IsolatedStorageSettings.Reload()
        System.Windows.dll!System.IO.IsolatedStorage.IsolatedStorageSettings.IsolatedStorageSettings(bool useSiteSettings)
        System.Windows.dll!System.IO.IsolatedStorage.IsolatedStorageSettings.ApplicationSettings.get()
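
    The stack shows the failure happening inside double parsing while the settings store is being reloaded. Purely as a hedged workaround sketch, not a confirmed diagnosis: if round-tripping GeoCoordinate through the settings serializer is the trigger, persisting a primitives-only DTO sidesteps it. The type and method names below are illustrative.

        // Hypothetical DTO: the settings serializer only ever sees two doubles.
        [DataContract]
        public class StoredCoordinate
        {
            [DataMember] public double Latitude { get; set; }
            [DataMember] public double Longitude { get; set; }

            public static StoredCoordinate From(GeoCoordinate c)
            {
                return new StoredCoordinate { Latitude = c.Latitude, Longitude = c.Longitude };
            }

            public GeoCoordinate ToGeoCoordinate()
            {
                return new GeoCoordinate(Latitude, Longitude);
            }
        }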

  • Stress testing OpenCL/ATI GPUs on Ubuntu 12.04

    - by lurscher
    What do people normally use to stress test their GPU on Ubuntu 12.04? I tried installing the Phoronix Test Suite Ubuntu .deb, but it installs freeglut3-dev and at the same time complains about it:

        $ phoronix-test-suite benchmark pts/opencl
        The following dependencies are needed and will be installed: freeglut3-dev
        This process may take several minutes.
        Reading package lists...
        Building dependency tree...
        Reading state information...
        freeglut3-dev is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 90 not upgraded.
        There are dependencies still missing from the system:
        - OpenGL Utility Kit / GLUT
        1: Ignore missing dependencies and proceed with installation.
        2: Skip installing the tests with missing dependencies.
        3: Re-attempt to install the missing dependencies.
        4: Quit the current Phoronix Test Suite process.
        Missing dependencies action: 4

    So even though it is installing freeglut3-dev, it still complains about missing GLUT. You can't win against GLUT, it seems. So, what do people use for this? I mean, really, because I have tried googling for an hour and it is not paying off. Thanks!

  • Should developers be involved in testing phases?

    - by LudoMC
    Hi, we are using a classical V-shaped development process: requirements, architecture, design, implementation, integration tests, system tests and acceptance. Testers prepare test cases during the first phases of the project. The issue is that, due to resource issues (*), test phases are too long and are often shortened due to time constraints (you know project managers... ;)). So my question is simple: should developers be involved in the test phases, and isn't that too 'dangerous'? I'm afraid it will give the project managers a false feeling of better quality just because the work has been done, but would the added man-days be of any value? I'm not really confident in developers doing tests (no offense here, but we all know it's quite hard to break in a few clicks what you have built over several days). Thanks for sharing your thoughts.

    (*) For obscure reasons, increasing the number of testers is not an option as of today. (Just upfront: this is not a duplicate of "Should programmers help testers in designing tests?", which is about test preparation, not test execution, where developer involvement is already avoided.)

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer? [closed]

    - by Superbest
    I don't have much formal training in programming and have learned most things by looking up solutions on the internet to practical problems I've had. There are some areas which I think would be valuable to learn but which ended up being both difficult to learn and easy to avoid learning as a self-taught programmer. Unit testing is one of them.

    Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles, so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium-sized programs (development time of weeks or months, mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources).

    I have found this tutorial, which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects I often get confused by questions I can't answer and don't know how to answer, such as:

    - What parts of my application, and what sorts of things, aren't necessarily worth testing?
    - How fine-grained should my tests be? Should they test every method and property separately, or work with a larger scope?
    - What is a good naming convention for test methods? (Apparently the name of the method is the only way I will be able to tell, from a glance at the test results table, what works in my program and what doesn't.)
    - Is it bad to have many asserts in one test method? Apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why a test is failing. (A sketch of the usual rule of thumb follows below.)
    - If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly-trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder?
    - If my program works with a large body of data and execution takes minutes or more, should my tests use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset, or how do I simulate?
    - Closely related to the previous point: if a class's main operation happens in a state the program only reaches after some involved operations (say, a class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably testing all the other code that brings it to that state along with it?
    - In general, what kind of approach should I use for test initialization (hopefully that is the correct term; I mean preparing classes for testing by filling them in with appropriate data)?
    - How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are these even good ideas?
    - Is there anything like design patterns concerning testing specifically?

    I guess the main themes in what I'd like to learn more about are: (1) what are the overarching principles that should be followed (or at least considered) in every testing effort, and (2) what are popular rules of thumb for writing tests? For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored; not a universally correct rule, but it has been quite helpful, since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly, I've found ReSharper's suggestions on member naming style and other things quite helpful in keeping my codebases sane.

    I see many resources, both online and in print, that talk about testing in the context of large applications (years of work, tens of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real-world problems. Speaking of software development in general, advice given under the assumption of large projects isn't always straightforward to apply to my own, smaller endeavors.

    Summary: so my question is, what are some resources to learn about unit testing for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory and then apply systematically, by repeatedly asking myself "is this test following the bible of testing closely enough?" and amending discrepancies if it doesn't.
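
    On the many-asserts question above, the usual rule of thumb is one logical behaviour per test, with the behaviour spelled out in the test name so a red test identifies the break by itself. A small illustration (the Parser type is hypothetical; the attributes are the Microsoft.VisualStudio.TestTools ones the question mentions):

        [TestClass]
        public class ParserTests
        {
            [TestMethod]
            public void Parse_EmptyInput_ReturnsNoItems()
            {
                var result = Parser.Parse("");              // hypothetical class under test
                Assert.AreEqual(0, result.Items.Count);
            }

            [TestMethod]
            public void Parse_SingleAssignment_YieldsOneItem()
            {
                var result = Parser.Parse("a=1");
                Assert.AreEqual(1, result.Items.Count);     // a failure here names the broken case
            }
        }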

  • Testing sample code in python modules

    - by Andrew Walker
    I'm in the process of writing a Python module that includes some samples. These samples aren't unit tests, and they are too long and complex to be doctests. I'm interested in best practices for automatically checking that these samples run. My current project layout is pretty standard, except that there is an extra top-level makefile with build, install, unittest, coverage and profile targets, which delegate responsibility to setup.py and nose as required.

        projectname/
            Makefile
            README
            setup.py
            samples/
                foo-sample
                foobar-sample
            projectname/
                __init__.py
                foo.py
                bar.py
            tests/
                test-foo.py
                test-bar.py

    I've considered adding a sampletest module, or adding nose.tools.istest decorators to the entry-point functions of the samples, but for a small number of samples these solutions sound a bit ugly. This question is similar to http://stackoverflow.com/questions/301365/automatically-unit-test-example-code, but I assume Python best practices will differ from C#'s.

  • What are some good books on software testing/quality?

    - by mjh2007
    I'm looking for a good book on software quality. It would be helpful if the book covered:

    - the software development process (requirements, design, coding, testing, maintenance)
    - testing roles (who performs each step in the process)
    - testing methods (white box and black box)
    - testing levels (unit testing, integration testing, etc.)
    - testing processes (agile, waterfall, spiral)
    - testing tools (simulators, fixtures, and reporting software)
    - testing of embedded systems

    The goal here is to find an easy-to-read book that summarizes the best practices for ensuring software quality in an embedded system. Most texts seem to cover the testing of application software, where it is simpler to generate automated test cases or run a debugger. A book that provided solutions for improving quality in a system where the tests must be performed manually, and therefore kept to a minimum, would be ideal.

  • Learning about tests for junior programmers

    - by RHaguiuda
    I'm not sure if it's okay to ask this on Stack Overflow. I've been reading a lot about tests, unit tests, test frameworks, mocks and so on, but as a junior programmer I don't know anything about tests, not even where to start! Can anyone explain to young programmers about tests: how they're run, where and what to test, and what unit testing, integration testing and automated tests are? How much should be tested? And, more importantly, how much testing is enough? I believe this would be very helpful. If possible, indicate a few books on these subjects too. Thanks

  • Testing complex entities

    - by Carlos
    I've got a C# form with various controls on it. The form controls an ongoing process, and there are many, many aspects that need to be right for the program to run correctly. Each part can be unit tested (for instance, loading some coefficients, drawing some diagnostics), but I often run into problems that are best described with an example: "If I click here, then here, then change this, then re-open the form, then click here, it crashes or produces an error." I've tried my best to use common code-organization ideas (inheritance, DRY, separation of concerns), but there never seems to be a way to test every single path, and inevitably a form with several controls will have a huge number of ways to execute. What can I read (preferably online) that addresses this kind of issue, and is there a (non-generic) term for it? This isn't a specific problem I'm having, but one that creeps up on me, especially with WinForms.

  • How to get started with testing (jMock)

    - by London
    Hello, I'm trying to learn how to write tests. I'm also learning Java, and I was told I should learn/use/practice jMock. I've found some articles online that help to a certain extent, such as:

        http://www.theserverside.com/news/1365050/Using-JMock-in-Test-Driven-Development
        http://jeantessier.com/SoftwareEngineering/Mocking.html#jMock

    Most articles I found were about test-driven development: write tests first, then write code to make the tests pass. I'm not looking for that at the moment; I'm trying to write tests for already existing code with jMock. The official documentation is vague, to say the least, and just too hard for me. Does anybody have a better way to learn this? Good books/links/tutorials would help me a lot. Thank you.

    EDIT (a more concrete question): following the second article above, I tried to mock this simple class:

        import java.util.Map;

        public class Cache {
            private Map<Integer, String> underlyingStorage;

            public Cache(Map<Integer, String> underlyingStorage) {
                this.underlyingStorage = underlyingStorage;
            }

            public String get(int key) {
                return underlyingStorage.get(key);
            }

            public void add(int key, String value) {
                underlyingStorage.put(key, value);
            }

            public void remove(int key) {
                underlyingStorage.remove(key);
            }

            public int size() {
                return underlyingStorage.size();
            }

            public void clear() {
                underlyingStorage.clear();
            }
        }

    Here is how I tried to create a test/mock:

        public class CacheTest extends TestCase {
            private Mockery context;
            private Map mockMap;
            private Cache cache;

            @Override
            @Before
            public void setUp() {
                context = new Mockery() {
                    {
                        setImposteriser(ClassImposteriser.INSTANCE);
                    }
                };
                mockMap = context.mock(Map.class);
                cache = new Cache(mockMap);
            }

            public void testCache() {
                context.checking(new Expectations() {{
                    atLeast(1).of(mockMap).size();
                    will(returnValue(int.class));
                }});
            }
        }

    It passes the test and basically does nothing. What I wanted was to create a map and check its size, and, you know, work through some variations to get a grip on this; I understand better through examples. What else could I test here? Any other exercises would help me a lot. Thanks.

  • Simulating Ajax failures for QA testing

    - by womp
    Our first ASP.NET MVC/jQuery product is about to go to QA, and we're looking for a way for our QA guys to easily simulate bad Ajax requests (without modifying the application code). A typical integration/UI test plan might be:

    1. Load page, click button "DoStuff"
    2. "DoStuff" fails
    3. Attempt button "DoStuff" again
    4. "DoStuff" succeeds
    5. Verify application state

    This is a simple test case; there will be cases with multiple failures and successes interspersed. Aside from "unplug your network cable", I'm looking for an easy way for our guys to simulate intermittent bad server responses. I'm open to any ideas, so I won't go into too many details about our application setup or dependencies. How have you handled this?
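
    One approach that leaves the action code untouched (a hedged sketch, not a tested recipe: HttpStatusCodeResult needs MVC 3 or later, and the names here are illustrative) is a globally registered action filter that fails a configurable fraction of Ajax requests:

        using System;
        using System.Web.Mvc;

        public class FaultInjectionAttribute : ActionFilterAttribute
        {
            // Not thread-safe; good enough for QA fault injection.
            private static readonly Random Rng = new Random();

            // Flip this from a QA-only configuration switch; 0.0 disables injection.
            public static double FailureRate = 0.3;

            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                if (filterContext.HttpContext.Request.IsAjaxRequest()
                    && Rng.NextDouble() < FailureRate)
                {
                    // Short-circuit the action with a server error that the
                    // jQuery error handlers see as an intermittent failure.
                    filterContext.Result = new HttpStatusCodeResult(500);
                }
            }
        }

    Registering it in the global filter collection behind a QA-only compile symbol keeps production builds clean.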

  • Tools for website/web application load testing?

    - by zarko.susnjar
    Before going into production, our client demands actual numbers for how many users our web application can handle. We have all kinds of features implemented, including asset management (file uploads/downloads), document import/export, various statistics, web services, etc. Which tool (or set of tools) could do this? Application details:

    - XHTML/jQuery
    - ColdFusion 8
    - SQL Server 2008
    - Windows Server 2008

  • EBS 12.1.1 Test Starter Kit now Available for Oracle Application Testing Suite

    - by Steven Chan
    We've discussed automated testing tools for the E-Business Suite several times on this blog, since testing is such a key part of everyone's implementation lifecycle. An important part of our testing arsenal in E-Business Suite Development is the Oracle Application Testing Suite (OATS), built on the foundation of the e-TEST suite of products acquired from Empirix in 2008. The testing suite comprises:

    1. Oracle Load Testing, for scalability, performance, and load testing
    2. Oracle Functional Testing, for automated functional and regression testing
    3. Oracle Test Manager, for test process management, test execution, and defect tracking

    Oracle Application Testing Suite 9.0 has been supported for use with the E-Business Suite since 2009. I'm very pleased to let you know that our E-Business Suite Release 12.1.1 Test Starter Kit is now available for Oracle Application Testing Suite 9.1. You can download it here: Oracle Application Testing Suite Downloads

  • Is there a name for a testing method where you compare a set of very different designs?

    - by DVK
    "A/B testing" is defined as "a method of marketing testing by which a baseline control sample is compared to a variety of single-variable test samples in order to improve response rates". The point, of course, is to learn which small single-variable changes perform better, with the goal of finding a local optimum. However, one can also envision a somewhat related but different scenario for testing the response rate of major redesigns: take a baseline control design, take one or more completely different designs, and run test samples on those redesigns to compare response rates.

    As a practical but contrived example, imagine testing a set of designs for the same website: a minimalist "googly" design, a cluttered "Amazony" design, and an artsy "designy" design (e.g. maximum use of design elements, unlike Google, but minimal simultaneously presented information, like Google but unlike Amazon).

    Is there an official name for such testing? It's definitely not A/B testing, since the main component of A/B testing (finding a local optimum by testing small single-variable changes whose effect on response can be attributed) is not present. This is more about comparing a set of local optima to see which one works best as a global optimum. It's not multivariate, A/B/N, or any other such testing, since you don't really have specific variables that can be attributed, just different designs.

  • Unit test SHA256 wrapper queries

    - by Sam Leach
    I am just beginning to write unit tests, so please bear with me. I have the following SHA256 wrapper:

        public static string SHA256(string plainText)
        {
            StringBuilder sb = new StringBuilder();
            SHA256CryptoServiceProvider provider = new SHA256CryptoServiceProvider();
            var hashedBytes = provider.ComputeHash(Encoding.UTF8.GetBytes(plainText));

            for (int i = 0; i < hashedBytes.Length; i++)
            {
                sb.Append(hashedBytes[i].ToString("x2").ToLower());
            }

            return sb.ToString();
        }

    Do I want to be testing it? If so, what do you recommend? My thought process is as follows:

    - What logic is there here? The answer is my for loop and ToString("x2"), so from my understanding that is the part I want to be testing.
    - I can assume Encoding.UTF8.GetBytes(plainText) works. Correct assumption?
    - I can assume SHA256CryptoServiceProvider.ComputeHash() works. Correct assumption?
    - I want to be testing only my logic, which in this case is limited to the printing of the hex-encoded hash. Correct?

    Thanks.
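
    One way to test it without computing hashes by hand (a hedged sketch: the wrapper is assumed to live on a class called HashUtil, and the attributes are NUnit's) is a known-answer test against a published SHA-256 vector, such as SHA-256("abc"):

        [TestFixture]
        public class HashWrapperTests
        {
            [Test]
            public void SHA256_OfAbc_MatchesPublishedVector()
            {
                // Published SHA-256 test vector for the input "abc".
                Assert.AreEqual(
                    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
                    HashUtil.SHA256("abc"));
            }
        }

    A vector like this exercises the UTF-8 step, the loop, and the "x2" formatting in one pass; an uppercase digest or a truncated loop would both fail it.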

  • How to unit test models in MVC / MVR app?

    - by BBnyc
    I'm building a Node.js web app and am trying, for the first time, to do so in a test-driven fashion. I'm using nodeunit for testing, which I find lets me write tests quickly and painlessly. In this particular app, the heavy lifting primarily involves translating SQL data into complex JavaScript objects and serving them to the front-end as JSON. Likewise, the app spends a great deal of code validating and translating complex, multidimensional JavaScript objects it receives from the front-end into SQL rows. Hence I have used a fat-model design for the app: most of the real code resides in the models, where the data translation happens.

    What's the best approach to unit testing such models? I mean in particular the methods that create JavaScript objects from SQL rows and serve them to the front-end. Right now what I'm doing is making particular requests of my models in the unit tests and checking the returned data for all of the fields that should be there. However, I suspect this is not the most robust kind of testing I could be doing. My current test design also means I have to package my app code with some dummy data, so that my tests can anticipate the kind of data the app should be returning when they run.

  • Integrating Nagios with a ticketing/incident management system

    - by sektor
    Is there a free ticketing/incident management system which will help me achieve the following?

    1. If a service goes down, Nagios alerts the on-duty staff and pushes the status to some backend or DB as a ticket; say the initial status is "New".
    2. The on-duty staff log in through a frontend and acknowledge the new ticket by marking it as "In progress", so the status of the ticket changes from "New" to "In progress".
    3. If, even after n minutes, nobody from the on-duty staff has changed the ticket status to "In progress", Nagios alerts the next level of contacts. If the on-duty staff has acknowledged the ticket, there is no need to alert the next level.
    4. When the service comes back up, Nagios closes the ticket by marking it "Closed".

    I already have Nagios monitoring set up, and currently it alerts by sending text messages and mails. What I'm looking for is some framework which only escalates the issue (alerts the second level) if the first level (the on-duty staff) fails to respond to the initial alert. By "responding to the alert" I mean the on-duty staff can log in via some frontend and change the status to something like "Acknowledged" or "In progress".
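
    The escalation half of this is something Nagios can already express on its own: acknowledging a problem in the Nagios UI suppresses further notifications, and a serviceescalation object re-routes alerts that keep firing unacknowledged. A hedged sketch with illustrative host and contact-group names (the ticket synchronisation itself would still need an event handler or an external tool):

        define serviceescalation {
            host_name               web01           ; illustrative host
            service_description     HTTP
            first_notification      3               ; from the 3rd alert onwards...
            last_notification      0               ; ...keep notifying until recovery
            notification_interval   10
            contact_groups          second-level    ; illustrative contact group
        }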

  • SQLite assembly not copied to output folder for unit testing

    - by Groo
    Problem: the SQLite assembly referenced in my DAL assembly does not get copied to the output folder when running unit tests, even though Copy Local is set to true.

    I am working on a .NET 3.5 app in VS2008, with NHibernate and SQLite in my DAL. Data access is exposed through the IRepository interface (repository factory) to other layers, so there is no need to reference NHibernate or the System.Data.SQLite assemblies in other layers. For unit testing, there is a public factory method (also in my DAL) which creates an in-memory SQLite session and a new IRepository implementation. This is also done to avoid a shared SQLite in-memory configuration for all assemblies which need it, and to avoid referencing those DAL-internal assemblies.

    The problem is that when I run unit tests, which reside in a separate project, and I don't add System.Data.SQLite as a reference to the unit test project, it doesn't get copied to the TestResults...\Out folder (although the test project references my DAL project, which references System.Data.SQLite with its Copy Local property set to true), so the tests fail while NHibernate is being configured. If I add the reference to my testing project, it does get copied and the unit tests work. What am I doing wrong?
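
    One commonly suggested MSTest-side workaround (a hedged sketch, not a verdict on the root cause) is to declare the assembly as a deployment item on the test class, so the test runner copies it to the Out folder regardless of reference analysis; the relative path below is a placeholder:

        [TestClass]
        [DeploymentItem(@"..\..\..\lib\System.Data.SQLite.dll")]  // placeholder path
        public class RepositoryTests
        {
            [TestMethod]
            public void CanCreateInMemoryRepository()
            {
                // Arrange/act against the DAL factory method described above.
            }
        }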

  • What's the best practice to setup testing for ASP.Net MVC? What to use/process/etc?

    - by melaos
    Hi there, I'm trying to learn how to properly set up testing for an ASP.NET MVC project. From what I've been reading here and there so far, the definition of legacy code kind of piques my interest: it's said that legacy code is any code without unit tests. I did my project in a hurry, not having the time to properly set up unit tests for the app, and I'm still learning how to properly do TDD and unit testing at the same time. Then I came upon Selenium IDE/RC and was using it to test on the browser end. It was during that time, too, that I came upon the concept of integration testing. So, from my understanding, unit testing should be done to define the test and the basic assumptions of each function, and if the function depends on something else, that something else needs to be mocked, so that the tests stay singular and can run fast.

    Questions: am I right to say that the project should have started with unit tests and proper mocks, using something like Rhino Mocks, and that anything which requires third-party DLLs, database data access, etc. should then be covered by integration testing using Selenium? I have a function which calls a third-party DLL, and I'm not sure whether to write a unit test in NUnit that just instantiates the object and passes it some dummy data (which breaks the mocking part), or to cover that part in my Selenium integration tests when I submit my forms and the DLL gets called. And for user acceptance tests, is it safe to say we can just use Selenium again?

    Am I missing something, or is there a better way/framework? I'm trying to put in more tests for regression testing and to ensure that nothing breaks when we put in new features. I also like the idea of TDD because it helps to better define each function, sort of like meta-documentation. Thanks! I hope this question isn't too subjective, because I need it for my case.
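
    The unit-test half of that split might look like the following (a hedged sketch with NUnit and Rhino Mocks, as mentioned in the question; the controller, repository and Order types are illustrative):

        [TestFixture]
        public class OrdersControllerTests
        {
            [Test]
            public void Index_ReturnsOrdersFromMockedRepository()
            {
                // The third-party/database dependency is replaced by a stub.
                var repository = MockRepository.GenerateStub<IOrderRepository>();
                repository.Stub(r => r.GetAll()).Return(new List<Order> { new Order() });

                var controller = new OrdersController(repository);
                var result = (ViewResult)controller.Index();

                // Only the controller's own logic is under test here.
                Assert.AreEqual(1, ((IList<Order>)result.ViewData.Model).Count);
            }
        }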
