Search Results

Search found 10170 results on 407 pages for 'regression testing'.

Page 34 of 407

  • How to break into the QA industry after layoffs, next steps...

    - by Erik
    Briefly, my background is in manual black-box testing of websites and applications in Agile and waterfall contexts. Over the past four years I was a member of two web development firms' small QA teams dedicated to testing the deployment of websites for national/international nonprofits, governmental organizations, and for-profit businesses. To name a few:
    - Brookings Institution
    - Senate
    - Tyco Electronics
    - Blue Cross/Blue Shield
    - National Geographic
    - Discovery Channel
    I have a very strong understanding of:
    - the SDLC
    - the STLC of bugs and website deployment/development
    - use case and test case development
    In March of this year my last firm downsized and I lost my job as a QA tester. I have been networking and doing a very detailed job search, but have had a very difficult time getting my next job within the QA industry, even with my background as a manual black-box QA tester in the website development context. My direct question to all of you: what are some ways I can be more competitive and get hired? Options that could make me competitive:
    - Should I go back to school and learn some more 'hard' skills in website development and client-side technologies, e.g. HTML, CSS, JavaScript?
    - Should I learn programming: PHP, C#, Ruby, SQL, Python, Perl, ...?
    - Should I get certified as a QA tester? There are countless programs to become a certified tester.
    - Most, if not all, jobs being advertised now require automated testing experience in QTP, LoadRunner, Selenium, etc. Should I learn automated testing skills via a paid course, or teach myself? Should I learn scripting languages to understand the automated testing process better?
    - Should I become a certified Project Management Professional (PMP) to prove to hiring managers that I 'get' the project development life cycle?
    At the end of the day I need to be competitive and get hired as a QA tester, and I want to build on my skills within the QA web development field. How should I do this without reinventing the wheel? Any help in this regard would be fabulous. Thanks! .erik

    Read the article

  • How can I effectively test a scripting engine?

    - by ChaosPandion
    I have been working on an ECMAScript implementation and I am currently polishing up the project. As part of this, I have been writing tests like the following:

        [TestMethod]
        public void ArrayReduceTest()
        {
            var engine = new Engine();
            var request = new ExecScriptRequest(@"
                var a = [1, 2, 3, 4, 5];
                a.reduce(function(p, c, i, o) {
                    return p + c;
                });
            ");
            var response = (ExecScriptResponse)engine.PostWithReply(request);
            Assert.AreEqual((double)response.Data, 15D);
        }

    The problem is that there are so many points of failure in this test, and in similar tests, that it almost doesn't seem worth it. It almost seems like my effort would be better spent reducing coupling between modules. To write a true unit test I would have to do something like this:

        [TestMethod]
        public void CommentTest()
        {
            const string toParse = "/*First Line\r\nSecond Line*/";
            var analyzer = new LexicalAnalyzer(toParse);
            Assert.IsInstanceOfType(analyzer.Next(), typeof(MultiLineComment));
            Assert.AreEqual(analyzer.Current.Value, "First Line\r\nSecond Line");
        }

    Doing this would require me to write thousands of tests, which once again does not seem worth it.
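    A table-driven test is one way to keep end-to-end coverage without hand-writing a method per script. A minimal sketch against the Engine/ExecScriptRequest API quoted above (the extra script/expected pairs are made-up examples):

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class EngineConformanceTests
        {
            // Each entry pairs a script with the value its final expression should produce.
            private static readonly (string Script, double Expected)[] Cases =
            {
                ("[1, 2, 3, 4, 5].reduce(function(p, c) { return p + c; });", 15D),
                ("1 + 2 * 3;", 7D),
                ("Math.max(2, 9);", 9D),
            };

            [TestMethod]
            public void ScriptsEvaluateToExpectedValues()
            {
                var engine = new Engine();
                foreach (var (script, expected) in Cases)
                {
                    var response = (ExecScriptResponse)engine.PostWithReply(new ExecScriptRequest(script));
                    Assert.AreEqual(expected, (double)response.Data, "Failed script: " + script);
                }
            }
        }

    One engine, one method, and each new conformance case is a single line; when a case fails, the assertion message names the offending script.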

    Read the article

  • How do you do HTML form testing without simulating real user input?

    - by justjoe
    This question is like this one, except it's about testing PHP via the browser. It's about testing your form input. Right now I have a form on a single page with 12 input boxes. Every time I test the form, I have to fill in those 12 input boxes in my browser. I know it's not a specific coding question; it's more about how to test your form directly. So, how do you do this kind of repeated testing without consuming too much of your time?
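    If the goal is just to stop retyping the 12 fields, any HTTP client can post the form directly, regardless of the server-side language. A minimal sketch in C# (the URL and field names below are placeholders for the real form):

        using System;
        using System.Collections.Generic;
        using System.Net.Http;
        using System.Threading.Tasks;

        class FormSmokeTest
        {
            static async Task Main()
            {
                using var client = new HttpClient();

                // Placeholder names: substitute the form's actual 12 inputs.
                var fields = new Dictionary<string, string>
                {
                    ["name"]  = "Test User",
                    ["email"] = "test@example.com",
                    ["city"]  = "Springfield",
                    // ...and so on for the remaining fields...
                };

                // Posts exactly what the browser would send for this form.
                var response = await client.PostAsync(
                    "http://localhost/form.php",
                    new FormUrlEncodedContent(fields));

                Console.WriteLine(response.StatusCode);
            }
        }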

    Read the article

  • How do I give each test its own TestResults folder?

    - by izb
    I have a set of unit tests, each with a bunch of methods, each of which produces output in the TestResults folder. At the moment all the test files are jumbled up in this folder, but I'd like to bring some order to the chaos. Ideally, I'd like to have a folder for each test method. I know I could go round adding code to each test to make it produce output in a subfolder instead, but I was wondering whether there is a way to control the output folder location with the Visual Studio unit test framework. Perhaps an initialization method on each test class could do it, so that any new tests added automatically get their own output folder without copy/pasted boilerplate code?
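    A sketch of one way to do this with the Visual Studio framework's TestContext; put it in a base class and every derived test class inherits the behavior. (TestResultsDirectory is the property name in later MSTest versions; older ones expose TestDir instead.)

        using System.IO;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public abstract class PerFolderTestBase
        {
            // Populated by the test framework before each test runs.
            public TestContext TestContext { get; set; }

            protected string OutputFolder { get; private set; }

            [TestInitialize]
            public void CreatePerTestFolder()
            {
                // TestName is the currently executing test method, so each
                // test gets TestResults\<TestName>\ as its own scratch area.
                OutputFolder = Path.Combine(TestContext.TestResultsDirectory, TestContext.TestName);
                Directory.CreateDirectory(OutputFolder);
            }
        }

        [TestClass]
        public class ReportTests : PerFolderTestBase
        {
            [TestMethod]
            public void ProducesReport()
            {
                File.WriteAllText(Path.Combine(OutputFolder, "report.txt"), "output goes here");
            }
        }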

    Read the article

  • loading fixtures for django tests

    - by alexarsh
    Hi, I want to use some fixtures in my tests. I have a cms_sample app with a fixtures folder inside containing cms_sample_data.xml. I use the following in my test.py:

        class Funtionality(TestCase):
            fixtures = ['cms_sample_data']

    I use TestCase from django.test, not unittest. But the fixtures are not loaded. What am I missing? Thanks, Arshavski Alexander.

    Read the article

  • How to extend the WPF hit testing zone for a Path object

    - by user275587
    WPF hit testing is pretty good, but the only method I have found to extend the hit zone is to put a transparent padding area around your object. I can't find any way to add a transparent area around a Path object. The path is very thin and I would like hit testing to succeed when the user clicks near the path, as in the image below (not shown in this excerpt). I tried using a partially transparent stroke brush, but I ran into the problem described here: http://stackoverflow.com/questions/1412833/how-can-i-draw-a-soft-line-in-wpf-presumably-using-a-lineargradientbrush I also tried to put an adorner over my line, but because of WPF anti-aliasing the position is way off when I zoom in on my canvas, and it interferes with other objects' hit testing in a bad way. Any suggestion to extend the hit testing zone would be highly appreciated. Thanks, Kumar
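    One common workaround, sketched below on the assumption that the thin Path sits in a Canvas named canvas: overlay a second Path that shares the same Geometry but has a wide, fully transparent stroke. Brushes.Transparent still participates in hit testing, while a null stroke does not, so the proxy widens the click zone without drawing anything.

        // Inside your setup code; needs System.Windows.Media and System.Windows.Shapes.
        // visiblePath is the thin Path the user sees.
        var hitProxy = new Path
        {
            Data = visiblePath.Data,        // same geometry as the visible line
            Stroke = Brushes.Transparent,   // invisible, but still hit-testable
            StrokeThickness = 12            // the enlarged click zone, in pixels
        };

        canvas.Children.Add(visiblePath);
        canvas.Children.Add(hitProxy);      // on top, so it receives the clicks
        hitProxy.MouseLeftButtonDown += (s, e) =>
        {
            // Treat this as a click on the thin line.
        };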

    Read the article

  • Splitting a test to a set of smaller tests

    - by mkorpela
    I want to be able to split a big test to smaller tests so that when the smaller tests pass they imply that the big test would also pass (so there is no reason to run the original big test). I want to do this because smaller tests usually take less time, less effort and are less fragile. I would like to know if there are test design patterns or verification tools that can help me to achieve this test splitting in a robust way. I fear that the connection between the smaller tests and the original test is lost when someone changes something in the set of smaller tests. Another fear is that the set of smaller tests doesn't really cover the big test. An example of what I am aiming at:

        //Class under test
        class A {
            public void setB(B b){ this.b = b; }
            public Output process(Input i){
                return b.process(doMyProcessing(i));
            }
            private InputFromA doMyProcessing(Input i){ .. }
            ..
        }

        //Another class under test
        class B {
            public Output process(InputFromA i){ .. }
            ..
        }

        //The Big Test
        @Test
        public void theBigTest(){
            A systemUnderTest = createSystemUnderTest(); // <-- expect that this is expensive
            Input i = createInput();
            Output o = systemUnderTest.process(i); // <-- .. or expect that this is expensive
            assertEquals(o, expectedOutput());
        }

        //The splitted tests
        @PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
        @Test
        public void smallerTest1(){ // this method is a bit too long but its just an example..
            Input i = createInput();
            InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
            Output expected = expectedOutput();  // this should be the same in both tests and it should be ensured somehow
            B b = mock(B.class);
            when(b.process(x)).thenReturn(expected);
            A classUnderTest = createInstanceOfClassA();
            classUnderTest.setB(b);
            Output o = classUnderTest.process(i);
            assertEquals(o, expected);
            verify(b).process(x);
            verifyNoMoreInteractions(b);
        }

        @PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
        @Test
        public void smallerTest2(){
            InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
            Output expected = expectedOutput();  // this should be the same in both tests and it should be ensured somehow
            B classUnderTest = createInstanceOfClassB();
            Output o = classUnderTest.process(x);
            assertEquals(o, expected);
        }

    Read the article

  • Resources for Test Driven Development in Web Applications?

    - by HorusKol
    I would like to try to implement some TDD in our web applications to reduce regressions and improve release quality, but I'm not convinced of how well automated testing can perform with something as fluffy as web applications. I've read about and tried TDD and unit testing, but the examples are 'solid' and rather simple functionalities like currency converters, and so on. Are there any resources that can help with unit testing content management and publication systems? How about unit testing a shopping cart/store (physical and online products)? AJAX? Googling for "Web Test Driven Development" just gets me old articles from several years ago, either covering the same calculator-like examples or discussing why TDD is better than anything else (without any examples).

    Read the article

  • Testing with Profiler Custom Events and Database Snapshots

    We've all had them: one of those stored procedures that is huge and contains complex business logic which may or may not be executed. These procedures are an absolute nightmare when it comes to debugging problems, because they're so complex and have so many logic offshoots that it's very easy to get lost when you're trying to determine the path the procedure code took when it ran. Fortunately, Profiler lets you define custom events that you can raise in your code and capture in a trace, giving you a better window into the sub-events occurring in your code. I found it very useful to use custom events and a database snapshot to debug some code recently, and we'll explore both in this article. I find raising these events and running Profiler to be very useful for testing my stored procedures on my own, as well as when my code is going through official testing and user acceptance. It's a simple approach and a great way to catch performance problems or logic errors.
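    For reference, the usual T-SQL mechanism behind this technique is sp_trace_generateevent, which raises one of the ten user-configurable Profiler events (IDs 82 through 91, shown as UserConfigurable:0-9 in a trace; raising them requires ALTER TRACE permission). A sketch of firing one from C# test code; the connection string and message are placeholders:

        using System.Data;
        using System.Data.SqlClient;

        static void RaiseTraceEvent(string connectionString, string message)
        {
            using var conn = new SqlConnection(connectionString);
            conn.Open();

            using var cmd = new SqlCommand("sp_trace_generateevent", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@eventid", 82);       // UserConfigurable:0
            cmd.Parameters.AddWithValue("@userinfo", message); // text shown in the trace row
            cmd.ExecuteNonQuery();
        }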

    Read the article

  • Django's self.client.login(...) does not work in unit tests

    - by thebossman
    I have created users for my unit tests in two ways:

    1) Create a fixture for "auth.user" that looks roughly like this:

        {
            "pk": 1,
            "model": "auth.user",
            "fields": {
                "username": "homer",
                "is_active": 1,
                "password": "sha1$72cd3$4935449e2cd7efb8b3723fb9958fe3bb100a30f2",
                ...
            }
        }

    I've left out the seemingly unimportant parts.

    2) Use 'create_user' in the setUp function (although I'd rather keep everything in my fixtures class):

        def setUp(self):
            User.objects.create_user('homer', '[email protected]', 'simpson')

    Note that the password is simpson in both cases. I've verified that this info is correctly being loaded into the test database time and time again. I can grab the User object using User.objects.get. I can verify the password is correct using 'check_password'. The user is active. Yet, invariably, self.client.login(username='homer', password='simpson') FAILS. I'm baffled as to why. I think I've read every single Internet discussion pertaining to this. Can anybody help? The login code in my unit test looks like this:

        login = self.client.login(username='homer', password='simpson')
        self.assertTrue(login)

    Thanks.

    Read the article

  • SOA Suite 11g Dynamic Payload Testing with soapUI Free Edition

    - by Greg Mally
    Overview

    Many web service developers use soapUI for tests such as smoke tests, unit tests, and load testing, because the free edition is fairly robust. However, if you need to venture into more complex testing that requires a dynamic payload, the free edition doesn't necessarily make it easy. The feature does exist in soapUI, but for obvious reasons it is in the Pro version. In this blog I will show you how to use soapUI free edition for dynamic payloads in a simplified example; hopefully this will open the doors for you to expand into more complex scenarios. The following assumes that you have a working knowledge of soapUI and will not go into concepts like setting up a project. For the basics, please review the documentation for soapUI: http://www.soapui.org/Getting-Started/. Additionally, we will be using asynchronous web services; you can review the setup for this in my blog: SOA Suite 11g Asynchronous Testing with soapUI.

    Features in soapUI Free Edition Relating to this Topic

    The soapUI test tool provides a very feature-rich environment that can do many things, provided you are willing to go beyond point and click. For this example, we will be leveraging just a couple of features for our dynamic payload example:

    - Test Case Properties
    - Scripting with Groovy

    Basically, we will use a property as a global variable and manipulate that property with a Groovy script.

    Setting Up Our Property

    Properties are available throughout soapUI, and here is a snippet from the soapUI website defining the scopes:

    - Projects: for handling Project scope values, for example a subscription ID
    - TestSuite: for handling TestSuite scoped values; can be seen as "arguments" to a TestSuite
    - TestCases: for handling TestCase scoped values; can be seen as "arguments" to a TestCase
    - Properties TestStep: for providing local values/state within a TestCase
    - Local TestStep properties: several TestStep types maintain their own list of properties specific to their functionality: DataSource, DataSink, Run TestCase
    - MockServices: for handling MockService scoped values/arguments
    - MockResponses: for handling MockResponse scoped values
    - Global Properties: for handling Global properties, optionally from an external source

    For our example, we will define a custom property in a TestCase called SimpleAsyncPayload. The property can be created either in the Custom Properties tab located at the bottom of the Navigator panel when the TestCase is selected in the Navigator, or via the Properties label in the TestCase editor:

    [Screenshot: Navigator Panel]
    [Screenshot: TestCase Editor]

    You will notice that I set a value of "0" for the custom property. For this simplified example, we will need to retrieve that value and manipulate it prior to making the web service request invocation. In order to accomplish this, we will need to get Groovy ;)

    Let's Get Groovy

    We will now add a new Groovy Script step called Manipulate Payload to the TestCase (TestCase Editor > Append Step > Groovy Script). Once we have added the Groovy Script step, we can open the Groovy Script editor and add code to:

    1. Get the current value of the property we created, SimpleAsyncPayload.
    2. Convert the value of the property to an integer.
    3. Increment the value.
    4. Store the incremented value back into the TestCase property SimpleAsyncPayload.

    The script should look something like the following:

    [Screenshot: Groovy Script Editor – Manipulate Payload]

    At this point we can check whether the script works by simply running the TestCase (left-click the green triangle in the upper left-hand corner of the TestCase editor). To verify that it ran correctly, we can look at the value of the SimpleAsyncPayload property, which should now be 1:

    [Screenshot: TestCase Editor – Run Results]

    All that is left to complete the TestCase is to append a step of type Test Request. The information required to append the request is a name and an operation to invoke. In this example we will use the default name and select SimpleAsyncBPELProcessBinding -> process as the operation (for any other information being requested, simply use the defaults, unless you are calling an asynchronous operation, in which case do not add any assertions). We are now on familiar ground with the Test Request editor. Depending on the type of operation you are invoking (synchronous or asynchronous), update the request with the necessary information (e.g., callback information for asynchronous operations). We will now tweak the Test Request payload to retrieve the value of the SimpleAsyncPayload property. The soapUI editor makes this very simple: right-click in the payload and navigate to the property (e.g., right-click > Get Data.. > TestCase: [Groovy TestCase] > Property [SimpleAsyncPayload]):

    [Screenshot: Test Request Editor – Insert Property Value]

    Your payload should now look something like the following:

    [Screenshot: Test Request Editor – Inserted Property Value]

    Just like before, we are now ready to run the TestCase. If everything goes as expected, we should see a response like the following:

    [Screenshot: Message Viewer – Results of TestCase Run]

    We are now set up to run a stress test where the payload changes for each request. This simple example can be expanded to include multiple payload values, complex calculations in the scripts, or whatever else can be done via soapUI scripting. Hopefully you have found this useful, and happy testing to you :)

    Read the article

  • How to set up a split test?

    - by John Isaacks
    I want to create a way to test different layouts on a page to see which gets more conversions. For example, if I have two versions of a page, I send 50% of visitors to page A and 50% to page B and see which one converts more sales. So I am thinking maybe I can use .htaccess to rewrite half the requests to page A and the other half to page B. But can .htaccess do that? Or do I need to use PHP instead? Also, if there is a better way to do this, or any cautions I should be aware of, please let me know.
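    Whichever mechanism serves the pages, one caution: make the 50/50 assignment sticky, or returning visitors will flip between versions and muddy the conversion numbers. A sketch of cookie-based bucketing in C#/ASP.NET terms (the asker's stack is PHP, so read this as pseudocode for the logic; the cookie and page names are made up):

        // Inside an ASP.NET page or handler (System.Web).
        protected void Page_Load(object sender, EventArgs e)
        {
            var cookie = Request.Cookies["ab_variant"];
            string variant = cookie != null ? cookie.Value : null;

            if (variant == null)
            {
                // First visit: flip a coin once, then remember the choice.
                variant = new Random().Next(2) == 0 ? "A" : "B";
                Response.Cookies.Add(new HttpCookie("ab_variant", variant)
                {
                    Expires = DateTime.Now.AddDays(30)
                });
            }

            Server.Transfer(variant == "A" ? "PageA.aspx" : "PageB.aspx");
        }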

    Read the article

  • How do I get developers to treat test code as "real" code?

    - by womp
    In the last two companies I've been at, there is an overriding mentality among developers that it's okay to write unit tests in a throw-away style. Code that they would never write in the actual product suddenly becomes OK in the unit tests. I'm talking:

    - rampant copying and pasting between tests
    - code styling rules not followed
    - hard-coded magic strings across tests
    - no object-oriented thought or design for integration tests, mocks or helper objects (250-line single-function tests!)

    ...and so on. I'm highly dissatisfied with the quality of the test code. Generally we do not do code reviews on our test assemblies, and we also do not enforce style or code analysis of them on our build server. Is that the only way to overcome this inertia about test quality? I'm looking for ideas to take to our developers, without having to go to higher management saying that we need to use resources for enforcement of test quality (although I will if I have to). Any thoughts or similar experiences?

    Read the article

  • Platform for DS/Gameboy Dev - Managed Memory, Tools, and Unit Testing

    - by ashes999
    I'm interested in dabbling in Nintendo DS, 3DS, or GBA development. I would like to know what my (legal) options for development tools and IDEs are. In particular, I would not consider moving in this direction unless I can find:

    - a programming language that has managed memory (garbage collection)
    - a unit testing tool akin to JUnit, NUnit, etc. for unit tests

    I would also prefer it if other tools exist, like code coverage, etc. for that platform. But the main thing is managed memory and unit testing. What options are out there?

    Read the article

  • Has anyone used the Network Emulator API exposed in VS 2010?

    - by Pritam
    Hi, I have seen that VS2010 exposes a Network Emulator API. I have installed it and am trying to use this API, but I am not able to detect whether it is really running with this code or not. Sometimes I give a wrong profile name, but it does not throw any error. Please find my piece of code below. If someone has used it, please help me.

        IntPtr m_emulatorHandle = IntPtr.Zero;
        NetworkEmulationApi.LoadProfile(m_emulatorHandle, "300KB_WithLatency.xml");
        NetworkEmulationApi.StartEmulation(m_emulatorHandle);

    Thanks, Pritam

    Read the article

  • uninitialized constant Test::Unit::TestResult::TestResultFailureSupport

    - by Vitaly Kushner
    I get the error in the subject when I'm trying to run specs or generators in a fresh Rails project. This happens when I add shoulda to the mix. I added the following in config/environment.rb:

        config.gem 'rspec', :version => '1.2.6', :lib => false
        config.gem 'rspec-rails', :version => '1.2.6', :lib => false
        config.gem "thoughtbot-shoulda", :version => "2.10.2", :lib => 'shoulda', :source => "http://gems.github.com"

    I'm on OS X:

    - ruby 1.8.6 (2008-08-11 patchlevel 287)
    - gems 1.3.5
    - rails 2.3.4
    - rspec 1.2.6
    - shoulda 2.10.2
    - test-unit 2.0.3

    I'm aware of this, and adding

        config.gem 'test-unit', :lib => 'test/unit'

    indeed solves the generator problem, as it no longer throws an exception, but it prints

        0 tests, 0 assertions, 0 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications

    at the end of the run, so I suppose it tries to run tests, which is unexpected and undesired. Also, the specs stop running entirely; it seems rspec is not running at all. When running rake spec I get the test-unit output again (with 0 tests, as there are only specs, no tests defined).

    Read the article

  • Creating method templates in Eclipse

    - by stevebot
    Is there any way to do the following in Eclipse? Have Eclipse template a method like the following:

        public void test(){
            // CREATE MOCKS

            // CREATE EXPECTATIONS

            // REPLAY MOCKS

            // VERIFY MOCKS
        }

    so that I could presumably just use IntelliSense and select an option like "createtest" to have it stub out a method with comments similar to the above. My problem is that often I and other developers I know forget all the steps we need to follow to do what we dub a valid unit test for our application. If I could template our test methods to stub out the comments above, it would be a big help.

    Read the article

  • Any way to separate unit tests from integration tests in VS2008?

    - by AngryHacker
    I have a project full of tests, unit and integration alike. Integration tests require that a pretty large database be present, so it's difficult to make them part of the build process simply because of the time it takes to re-initialize the database. Is there a way to somehow separate unit tests from integration tests and have the build server run just the unit tests? I see that there is an Ordered Test in VS2008, which allows you to pick and choose tests, but I can't make it execute alone, without all the others. Is there a trick that I am missing? Or perhaps I could adorn the unit tests with an attribute? What are some of the approaches people are using? P.S. I know I could use mocking for integration tests (just to make them go faster), but then it wouldn't be a true integration test.
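    One approach, sketched below: tag the slow tests and let the build server filter on the tag. The [TestCategory] attribute arrived in VS2010's MSTest, so on VS2008 itself the closest equivalents are test lists (.vsmdi) or a custom attribute, but the shape of the idea is the same (Customer here is a stand-in class):

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class CustomerRepositoryTests
        {
            [TestMethod]
            public void MapsRowToCustomer()
            {
                // Pure in-memory unit test: runs on every build.
                Assert.AreEqual("Homer", new Customer("Homer").Name);
            }

            [TestMethod, TestCategory("Integration")]
            public void LoadsCustomerGraphFromDatabase()
            {
                // Needs the large database; excluded from the CI run.
            }
        }

        // Build server (VS2010 and later): run everything except the tagged tests.
        //   mstest /testcontainer:Tests.dll /category:"!Integration"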

    Read the article

  • how to test or describe endless possibilities?

    - by koen
    Example class in pseudocode:

        class SumCalculator
            method calculate(int1, int2) returns int

    What is a good way to test this? In other words, how should I describe the behavior I need?

        test1: canDetermineSumOfTwoIntegers
        test2: returnsSumOfTwoIntegers
        test3: knowsFivePlusThreeIsEight

    test1 and test2 seem vague, and each would still need to check a specific calculation, so the names don't really describe what is being tested. Yet test3 is very limited. What is a good way to test such classes?
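    A common middle ground is to combine a small table of representative cases with a property that must hold for any inputs. A sketch in C#, using MSTest v2 attribute names and making the pseudocode concrete:

        using System;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public class SumCalculator
        {
            public int Calculate(int a, int b) => a + b;
        }

        [TestClass]
        public class SumCalculatorTests
        {
            private readonly SumCalculator calc = new SumCalculator();

            // Representative cases: each row is a readable, named example.
            [DataTestMethod]
            [DataRow(5, 3, 8)]
            [DataRow(0, 0, 0)]
            [DataRow(-2, 7, 5)]
            public void ReturnsSumOfTwoIntegers(int a, int b, int expected)
            {
                Assert.AreEqual(expected, calc.Calculate(a, b));
            }

            // Property-style check: holds for any pair, not one hand-picked case.
            [TestMethod]
            public void SumIsCommutative()
            {
                var rng = new Random(42); // fixed seed keeps the test deterministic
                for (int i = 0; i < 100; i++)
                {
                    int a = rng.Next(-1000, 1000), b = rng.Next(-1000, 1000);
                    Assert.AreEqual(calc.Calculate(a, b), calc.Calculate(b, a));
                }
            }
        }

    The DataRow cases play the role of knowsFivePlusThreeIsEight without freezing the suite to a single example, and the property test documents behavior that no single example can.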

    Read the article

  • Which unit test framework for c++ based games?

    - by jmp97
    Which combination of testing tools do you feel is best? Given the framework/library of your choice, you might consider:

    - suitability for TDD
    - ease of use / productivity
    - dealing with mock objects
    - setup with continuous integration
    - error reporting

    Note: While this is potentially a generic question like the one on SO, I would argue that game development is usually bound to a specific workflow which influences the choice for testing. For a higher-level perspective, see the question Automated testing of games.

    Read the article

  • API sanity autotest help needed

    - by rmk
    I am trying to auto-generate unit tests for my C code using API sanity autotest. The problem is that it is somewhat complex to use, and some tutorials, howtos, or other resources on how to use it would be really helpful. Have you had any luck with API sanity autotest? Do you think there's a better tool for auto-generating unit tests for C code?

    Read the article

  • Recording object method calls to generate automated tests

    - by Constantin
    I have an object with a large interface and I would like to record all of its method calls made during a user session. Ideally this sequence of calls would be available as source code:

        myobj.MethodA(42);
        myobj.MethodB("spam", false);
        ...

    I would then convert this code to a test case to have a kind of automated smoke/load test. WCF Load Test can do this for WCF services, and the Coded UI test recorder can do this for UIs. What are my options for a POCO class? I am in a position to edit application code and replace the object in question with some recording/forwarding proxy.
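    One route, assuming the object can be exposed through an interface: wrap it in a dynamic proxy whose interceptor prints each call as source before forwarding it. A sketch using Castle DynamicProxy (IMyService/MyService are stand-ins for the real types):

        using System;
        using System.Linq;
        using Castle.DynamicProxy;

        public class RecordingInterceptor : IInterceptor
        {
            public void Intercept(IInvocation invocation)
            {
                // Render the call as compilable source, e.g. myobj.MethodA(42);
                var args = string.Join(", ", invocation.Arguments.Select(Format));
                Console.WriteLine("myobj.{0}({1});", invocation.Method.Name, args);

                invocation.Proceed(); // forward to the real object
            }

            private static string Format(object a)
            {
                if (a == null) return "null";
                if (a is string s) return "\"" + s + "\"";
                if (a is bool b) return b ? "true" : "false";
                return a.ToString();
            }
        }

        // Usage: hand the proxy out in place of the real instance.
        // IMyService real = new MyService();
        // IMyService proxy = new ProxyGenerator()
        //     .CreateInterfaceProxyWithTarget<IMyService>(real, new RecordingInterceptor());

    Redirect the console output to a file during a session and you have the myobj.MethodA(42); transcript ready to paste into a test case.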

    Read the article
