Search Results

Search found 10170 results on 407 pages for 'regression testing'.

Page 18/407 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • Rails 3 functional optionally testing caching

    - by Stephan
    Generally, I want my functional tests not to perform action caching. Rails seems to be on my side, defaulting to config.action_controller.perform_caching = false in config/environments/test.rb. This means normal functional tests never exercise the caching. So how do I test caching in Rails 3? The solutions proposed in this thread seem rather hacky or tailored towards Rails 2: How to enable page caching in a functional test in rails? I want to do something like:

        test "caching of index method" do
          with_caching do
            get :index
            assert_template 'index'
            get :index
            assert_template ''
          end
        end

    Maybe there is also a better way of testing that the cache was hit?

    Read the article

  • Robust unit-testing of HTML in PHP

    - by asbja
    I'm adding unit tests to an older PHP codebase at work. I will be testing and then rewriting a lot of HTML generation code, and currently I'm just testing whether the generated strings are identical to the expected string, like so (using PHPUnit):

        public function testConntype_select()
        {
            $this->assertEquals(
                '<select><option value="blabla">Some text</option></select>',
                conntype_select(1) // A value from the test dataset.
            );
        }

    This approach has the downside that attribute ordering, whitespace and a lot of other irrelevant details are tested as well. I'm wondering if there are any better ways to do this, for example whether there are any good and easy ways to compare the generated DOM trees. I found very similar questions for Ruby, but couldn't find anything for PHP.
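
    For illustration, a minimal sketch of the DOM-comparison idea (shown in Python for brevity; in PHP, DOMDocument could play the same role, and conntype_select above is the question's own function): parse both fragments and compare a canonical form, so attribute order and insignificant whitespace stop mattering.

        import xml.etree.ElementTree as ET

        def canonical(fragment):
            """Reduce a well-formed HTML/XML fragment to a normalized structure."""
            def walk(elem):
                return (
                    elem.tag,
                    sorted(elem.attrib.items()),   # attribute order becomes irrelevant
                    (elem.text or "").strip(),     # surrounding whitespace is ignored
                    [walk(child) for child in elem],
                )
            return walk(ET.fromstring(fragment))

        assert canonical('<select><option value="blabla">Some text</option></select>') == \
               canonical('<select>\n  <option  value="blabla" >Some text</option>\n</select>')

    This only works on well-formed markup; for messier HTML a lenient parser would be needed before the comparison.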

    Read the article

  • Browser for cross-site-script testing (for testing Mozilla Add-On)

    - by Anthony
    I am working on a Firefox extension that will involve AJAX calls to domains that would normally fail due to the same-origin policy enforced by Firefox (and most modern browsers). I was wondering if there is a way to either turn off the same-origin restriction (in about:config, perhaps) or if there is a standard lite browser that developers turn to for this. I really would like to avoid using any blackhat tools if possible. Not because I'm against them, I just don't want to add another learning curve to the process. I can use curl in PHP to confirm that the requests work, but I want to get started on writing the JavaScript that the add-on will actually use, so I need a client that will execute it. I also tried SpiderMonkey, but since I'm doing the AJAX with jQuery, it threw a fit at all of the browser-provided default variables. So, short version: is there a reliable browser/client for cross-site scripting that isn't primarily a hacker app? Or can I just turn off the same-origin policy in Firefox?

    Read the article

  • Free online tools for testing MySQL queries - having features like phpMyAdmin

    - by Sandeepan Nath
    Can anybody tell me about any "free online hosted database servers" (I don't know if that is the correct term) where we can do query testing and all the other things we can do in tools like phpMyAdmin? Does something like that exist? Like we have http://jsfiddle.net/ for testing JS code. I checked http://sqlzoo.net/ but it is very limited: there are some predefined tables in the given examples on which we can alter things, and that's all. It does not allow me to do lots of other things, like creating tables. A nice resource is phpMyAdmin's demo site. Use the latest stable version there. You have full control over the MySQL server; however, you should not change the root, debian-sys-maint or pma user passwords or limit their permissions. If you do, the demo cannot be accessed until the privileges are restored, so you just break things for yourself and other users.

    Read the article

  • Unit testing "hybrid" WPF/Silverlight controls

    - by Alan Mendelevich
    I'm starting a new WPF/Silverlight custom control project and want to do unit testing on this one. However, I'm a little confused about how to approach it. The control would be based on the same codebase for both WPF and Silverlight, with minor forking using #ifs and partial classes to tame the differences. I guess I could write unit tests for the WPF part with NUnit, MSTest, xUnit, etc. and for the Silverlight part with the Silverlight Unit Test Framework, but this doesn't sound very elegant to me. I'd have to either skip testing identical code on one of the platforms and test only the differing parts (which is not very trustworthy) or rewrite the tests for two frameworks (which is annoying). Is this the right way to go? I'm wondering if there is any guidance, or any articles or tutorials, on how to approach this task. Any pointers?

    Read the article

  • Looking for *small*, open source, c# project with extensive Unit Testing

    - by Gern Blandston
    (I asked this question but did not receive much of a response. It was recommended that I ask the same question with regard to C#.) I am a VB.NET developer with little C# experience (yes, I know I need to write more in C#), looking for small open source projects that demonstrate high unit test coverage, from which to learn. I'm looking for small projects because I don't want to have to wade through a ton of code to get a better understanding of how to apply unit testing in my own situation, in which I write mostly IT business apps used internally by my company. UPDATE: The original question that got me asking about this is here.

    Read the article

  • strerror_r returns trash when I manually set errno during testing

    - by Robert S. Barnes
    During testing I have a mock object which sets errno = ETIMEDOUT; the object I'm testing sees the error and calls strerror_r to get back an error string:

        if (ret) {
            if (ret == EAI_SYSTEM) {
                char err[128];
                strerror_r(errno, err, 128);
                err_string.assign(err);
            } else {
                err_string.assign(gai_strerror(ret));
            }
            return ret;
        }

    I don't understand why strerror_r is returning trash. I even tried calling strerror_r(ETIMEDOUT, err, 128) directly and still got trash. I must be missing something. It seems I'm getting the GNU version of the function rather than the POSIX one, but that shouldn't make any difference in this case.

    Read the article

  • How to plan for whitebox testing

    - by Draco
    I'm relatively new to the world of white-box testing and need help designing a test plan for one of the projects I'm currently working on. At the moment I'm just scouting around looking for testable pieces of code and then writing some unit tests for them. I somehow feel that is far from the way it should be done. Please could you give me advice on how best to prepare myself for testing this project? Any tools or test plan templates that I could use? The language being used is C++, if it makes a difference.

    Read the article

  • Which testing method to go with? [Rails]

    - by yuval
    I am starting a new project for a client today. I have done some Rails projects before but never bothered writing tests for them. I'd like to change that, starting with this new project. I am aware there are several testing tools, but am a bit confused as to which I should be using. I have heard of RSpec, Mocha, Webrat, and Cucumber. Please keep in mind I have never really written any tests before, so my knowledge of testing in general is quite limited. How would you suggest I get started? Thanks!

    Read the article

  • Ideas on simulating webservices for local automated testing.

    - by novice123
    I am testing an app which talks to different web services over the internet. For my automated testing, I don't want to go over the network. To achieve this, I need to simulate the web services on my machine using another app. My initial thought is to record all the requests and responses between the client and the web service, and then write a simulation app which replays these responses. The disadvantage here is that every time the web service protocol changes a bit, I have to modify all my recorded responses, so I am looking for a more elegant solution. Has anyone solved a similar problem? Any thoughts or suggestions are appreciated.
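
    For illustration, one common shape for this kind of stand-in, as a hedged Python sketch (the /users/42 path and the canned payload are invented for the example; real ones would come from recorded request/response pairs): run a tiny local HTTP server that replays canned responses, and point the app at localhost during tests.

        import json
        import threading
        from http.server import BaseHTTPRequestHandler, HTTPServer

        # Canned responses keyed by request path; keeping these in data files
        # rather than code makes a protocol change a data update, not a rewrite.
        CANNED_RESPONSES = {
            "/users/42": {"id": 42, "name": "Test User"},
        }

        class FakeServiceHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                body = CANNED_RESPONSES.get(self.path)
                self.send_response(200 if body is not None else 404)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

            def log_message(self, *args):  # keep test output quiet
                pass

        def start_fake_service(port=8081):
            server = HTTPServer(("127.0.0.1", port), FakeServiceHandler)
            threading.Thread(target=server.serve_forever, daemon=True).start()
            return server  # call server.shutdown() in the test teardown

    Loading the canned responses from recorded traffic files, rather than hard-coding them, keeps maintenance cheap when the real service's responses change.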

    Read the article

  • Free Testing / Code Coverage systems for C++

    - by Billy ONeal
    I'd like to start using a Test Driven Development system for a private project, since I saw my employer using one and realized it was very useful. My employer's project was in C#, but mine are in C and C++. I looked around and saw that several packages exist for both Java and .NET (for example: NCover, NUnit, ...). Unfortunately, I found it difficult to find good C++ testing frameworks. Do you know of any unit testing frameworks that satisfy the following requirements? IMPORTANT: must provide code coverage statistics, as I'd like to have some idea of how well my tests cover my code base; must be free; and must be usable with C++ projects. EDIT: To be clear, I know of many existing unit test frameworks. The code coverage piece is what's most important.

    Read the article

  • Unit testing a SQL code generator

    - by Tom H.
    The team I'm on is currently writing code in TSQL to generate TSQL code that will be saved as scripts and later run. We're having a little difficulty in separating our unit tests between testing the code generator parts and testing the actual code that they generate. I've read through another similar question, but I was hoping to get some specific examples of what kind of unit test cases we might have. As an example, let's say that I have a bit of code that simply generates a DROP statement for a view, given the view schema and name. Do I just test that the generated code matches some expected outcome using string comparisons and then in a later integration or system test make sure that the drop actually drops the view if it exists, does nothing if the view doesn't exist, or raises an error if the view is one that we are marking as not allowing a drop? Thanks for any advice!

    Read the article

  • WatiN Testing and Connection Strings Fiasco

    - by azamsharp
    I have a separate project which performs WatiN tests. The project is a class library project. When I run a test it launches the browser and then uses the Web.config of the web application project which I am testing. The Web.config of the web application project has the dev connection string, which should not be used for testing. What approaches can I take to tell my WatiN tests to use the App.config that is inside the WatiN project rather than the one in the web application project? Here are a couple of options I have: 1) replace the connection string at runtime; 2) replace the connection string in a pre-build event or something similar.

    Read the article

  • Stress test a server for simultaneous connection

    - by weston smith
    I am trying to figure out a practical way to stress test a server for 300 to 600 simultaneous connections. Any advice? Thank you everyone for the help. To be more specific (sorry I wasn't before), this is a Flash Media Server on AWS that will be streaming live video. I've been having problems with the video freezing/buffering for everyone, and I need to verify whether the problem is on the user end, the upload end, or the server end. I mainly need help with stress testing the server with 300-600 simultaneous requests before going live.
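
    As a starting point only, here is a hedged Python sketch that opens a few hundred concurrent TCP connections and holds them open. It exercises raw connection handling but does not speak RTMP, so a protocol-aware load tool is still needed to simulate real viewers; HOST and PORT are placeholders for the server under test.

        import asyncio

        HOST, PORT = "media-server.example.com", 1935   # 1935 is the usual RTMP port
        CONNECTIONS = 300
        HOLD_SECONDS = 30

        async def one_connection():
            try:
                reader, writer = await asyncio.open_connection(HOST, PORT)
                await asyncio.sleep(HOLD_SECONDS)       # hold the connection open
                writer.close()
                await writer.wait_closed()
                return True
            except OSError:
                return False

        async def main():
            results = await asyncio.gather(*(one_connection() for _ in range(CONNECTIONS)))
            print(f"{sum(results)}/{CONNECTIONS} connections succeeded")

        asyncio.run(main())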

    Read the article

  • Function testing on Netbeans 6.8

    - by ron
    While not a torrent, some articles can be found on the net about functional testing (particularly http://blogs.sun.com/geertjan/entry/gui_testing_on_the_netbeans). However, the tools they mention do not seem to be maintained, or don't have a plugin working with the most recent version of NetBeans (6.8). Do you have any functional test setup for GUIs? What is your level of integration with the development process (IDE integration, Ant, etc.)? An additional wrinkle is that NetBeans is not only the IDE: the GUI app is also developed for the NetBeans 6.8 Platform (so I'm mainly interested in GUI testing NetBeans Platform apps, but tips for Swing apps in general would be a help too).

    Read the article

  • Unit testing the app.config file with NUnit

    - by Dana
    When you're unit testing an application that relies on values from an app.config file, how do you test that those values are read in correctly, and how your program reacts to incorrect values entered into the config file? It would be ridiculous to have to modify the config file for the NUnit app, but I can't read in the values from the app.config I want to test. Edit: I think I should clarify. I'm not worried about the ConfigurationManager failing to read the values, but I am concerned with testing how my program reacts to the values read in.
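
    One common way to make that reaction testable, sketched below in Python purely for illustration (the same pattern applies in C# with NUnit, and the "retries" setting is a made-up example): keep the code that reacts to settings separate from the code that reads app.config, so tests can feed it values directly.

        def build_retry_policy(settings):
            """React to configuration values; reject nonsense input."""
            retries = int(settings["retries"])
            if retries < 0:
                raise ValueError("retries must be non-negative")
            return {"retries": retries}

        # Tests exercise the reaction with hand-made values, no config file needed:
        assert build_retry_policy({"retries": "3"}) == {"retries": 3}
        try:
            build_retry_policy({"retries": "-1"})
        except ValueError:
            pass  # bad config is rejected, as expected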

    Read the article

  • python unit testing os.remove fails file system

    - by hwjp
    I'm doing a bit of unit testing on a function which attempts to open a new file, but should fail if the file already exists. When the function runs successfully, the new file is created, so I want to delete it after every test run, but that doesn't seem to be working:

        class MyObject_Initialisation(unittest.TestCase):

            def setUp(self):
                if os.path.exists(TEMPORARY_FILE_NAME):
                    try:
                        os.remove(TEMPORARY_FILE_NAME)
                    except WindowsError:
                        # TODO: can't figure out how to fix this...
                        # time.sleep(3)
                        # self.setUp()  # this just loops forever
                        pass

            def tearDown(self):
                self.setUp()

    Any thoughts? The WindowsError thrown seems to suggest the file is in use... could it be that the tests are run in parallel threads? I've read elsewhere that it's 'bad practice' to use the filesystem in unit testing, but really? Surely there's a way around this that doesn't involve dummying the filesystem?
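
    For what it's worth, a hedged sketch of the usual fix (create_new_file and the file name below are stand-ins for the real code): on Windows, os.remove raises while any handle to the file is still open, so making sure the function under test closes its handle, and registering cleanup with addCleanup, normally makes the WindowsError go away.

        import os
        import unittest

        TEMPORARY_FILE_NAME = "myobject.tmp"

        def create_new_file(path):
            """Stand-in for the function under test: fails if the file already exists."""
            with open(path, "x") as f:   # "x" mode raises FileExistsError on a clash
                f.write("header\n")      # the with-block guarantees the handle is closed

        class MyObjectInitialisation(unittest.TestCase):
            def test_creates_file(self):
                create_new_file(TEMPORARY_FILE_NAME)
                # Runs even if later assertions fail; by now the handle is closed,
                # so os.remove no longer trips over an in-use file.
                self.addCleanup(os.remove, TEMPORARY_FILE_NAME)
                self.assertTrue(os.path.exists(TEMPORARY_FILE_NAME))

            def test_refuses_to_overwrite(self):
                create_new_file(TEMPORARY_FILE_NAME)
                self.addCleanup(os.remove, TEMPORARY_FILE_NAME)
                with self.assertRaises(FileExistsError):
                    create_new_file(TEMPORARY_FILE_NAME)

        if __name__ == "__main__":
            unittest.main()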

    Read the article

  • Creating TCP network errors for unit testing

    - by Robert S. Barnes
    I'd like to create various network errors during testing. I'm using the Berkeley sockets API directly in C++ on Linux, and I'm running a mock server in another thread from within Boost.Test which listens on localhost. For instance, I'd like to create a timeout during connect. So far I've tried not calling accept in my mock server and setting the backlog to 1, then making multiple connections, but all of them seem to connect successfully. I would think that if there wasn't room in the backlog queue I would at least get a connection-refused error, if not a timeout. I'd like to do this all programmatically if possible. I'd consider using something external like ipchains to intentionally drop certain packets to certain ports during testing, but I'd need to automate creating and removing the rules so I could do it from within my Boost.Test unit tests. I suppose I could mock the various system calls involved, but I'd rather go through a real TCP stack if possible. Ideas?
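
    A side note on why the backlog trick tends to fail: on Linux, a backlog of 1 still lets the kernel complete a couple of handshakes into the accept queue, and once that queue really is full it usually drops further SYNs (so clients silently retry) instead of refusing them. Two more reliable ways to provoke errors without firewall rules are sketched below in Python for brevity; the same techniques apply with the C sockets API (e.g. non-blocking connect plus a select timeout), and the blackhole address 10.255.255.1 is only an assumption about the local network.

        import socket

        # 1) Connect timeout: a non-routable address blackholes the SYN,
        #    so connect() blocks until the timeout fires.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(2)
        try:
            s.connect(("10.255.255.1", 80))
        except socket.timeout:
            print("got the connect timeout we wanted")
        finally:
            s.close()

        # 2) Connection refused: connect to a local port nothing is listening on.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect(("127.0.0.1", 1))   # port 1 is almost never in use
        except ConnectionRefusedError:
            print("got connection refused")
        finally:
            s.close()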

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010 - Part 2

    - by Tarun Arora
Welcome back! In part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing your application is important, the test tools available in Visual Studio Ultimate 2010, and various test rig topologies. In this blog post I'll get into the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application.

Tools => Options => Test Tools

Have you visited the treasures of the Visual Studio menu bar Tools => Options => Test Tools lately? The options to enable or disable prompts on creating, editing, deleting or running manual/automated tests can be controlled from here. The default test project language and the default test types created on new test project creation can be selected or unselected from here. Ever wondered how you can change the default limit of 25 test results? This, again, can be changed from here. If you record a lot of web tests and wish for the web test recorder to start with "that" URL populated, this too can be specified from here. If you haven't so far, I would urge you to spend two minutes in the Test Tools options.

Test Menu => Ready Steady Test Action!

The test tools are under the Test menu in Visual Studio. Apart from being able to create a new test and test list, you can also load an existing vsmdi file, and you can manage your test controllers from here. A solution can have one or more test settings files, but there can only be one active test settings file at any time; again, this selection is made from here. You can open the various test windows from the Windows option under the Test menu. If you open the Test View window you will see that you have the option to group the tests by work items, project, test type, etc. You can set these properties by right-clicking a test in the test list and choosing Properties from the context menu.

So, what is a vsmdi file? vsmdi stands for Visual Studio Test Metadata File. Placed under the Solution Items, this file keeps track of the list of unit tests in your solution. If you open the vsmdi file as an XML file you will see a series of Test Links nested within the Test List tags, along with the Run Configuration tag. When you run tests in Visual Studio, the IDE looks at the vsmdi file to see which tests need to be run. You also have the option of using the vsmdi file in your team builds to specify which tests should run as part of the build. Refer here for a walkthrough from a fellow blogger on how to use the vsmdi file in team builds.

Web Performance Test – The Truth!

In Visual Studio 2010, "Web Tests" have been renamed "Web Performance Tests". Apart from the rename, there have been several improvements to this test type in Visual Studio 2010. I am very active on the MSDN Visual Studio And Load Testing forum, and a frequent question from users is "Do Web Tests support pages that run JavaScript?" I will start with a little bit of background before answering this question. Web Performance Tests operate at the HTTP layer, but why? To enable you to generate high loads with a relatively low amount of hardware, web performance tests are driven at the protocol layer rather than by instantiating a browser. The most common source of confusion is that users do not realize Web Performance Tests work at the HTTP layer, and the tool adds to that misconception. After all, you record in IE; when running a web test you can select which browser to use; and the result viewer shows the results in a browser window. So that means the tests run through the browser, right? No! The web test engine works at the HTTP layer and does not instantiate a browser. What does that mean? When the engine is sending and receiving requests, no browser is running.

Does that mean I can't test pages that use JavaScript? The best example of JavaScript generating HTTP traffic is AJAX calls, and the most common examples of browser plugins are Silverlight and Flash. The web test recorder will record HTTP traffic from AJAX calls and from most (but not all) browser plugins. This means you will still be able to web-performance-test pages that use JavaScript or a plugin and play back the results, but the playback engine will not show the JavaScript or plugin results in the "browser control". If you want to test the page behaviour that results from the JavaScript or plugin, consider using Coded UI Tests. A played-back page can look like it failed when in fact it succeeded; looking closely at the response, and at subsequent requests, it is clear the operation succeeded. As stated above, the reason the browser control shows this is that JavaScript has been disabled in the control. So, to reiterate, the web performance test engine sends and receives data at the HTTP layer; it does NOT run a browser, does NOT run JavaScript, and does NOT host ActiveX controls or plugins. There is a great series of blog posts from Ed Glas; I would highly recommend his blog to anyone doing load or performance testing with Visual Studio.

Demo – Web Performance Test

[Demo] - Visual Studio Ultimate 2010: Test Settings and Configuration
[Demo] - Visual Studio Ultimate 2010: Web Performance Test

In this short video I try to answer the following questions: Why is performance testing important? How does Visual Studio help you performance-test your applications? How do I record a web performance test? How do I make a web performance test data driven, transaction driven and loop driven, convert it to code, and add validations? What are the best practices for recording web performance tests?

I have a web performance test, what next?

Creating the web performance test was the first step towards load testing your application. Now that we have the base test, we can test the page behaviour when N users access the page. Have you ever had the head of business call to mention that the marketing team has done a fantastic job and is expecting increased traffic on the web site, and to ask whether the website can survive the weekend with that additional load? This is the perfect opportunity to capacity-test your application and see how it holds up under various levels of load; you can then work the results backwards to see how much hardware you may need to scale up the application to survive the weekend. Apart from that, it is always a good idea to have some benchmarks around how the application performs under light load for a short duration and under heavy load for a long duration, and to soak-test the application by running a constant load for a week or two to record the effects of sustained load over really long durations. That is also a great way of seeing how your application handles the default IIS application pool recycle, which is configured out of the box to happen roughly every 29 hours. These benchmarks act as the perfect yardstick for measuring performance gains when you start making improvements.

BUT there are some best practices! => Goal Based Load Testing Approach

Since the subject is vast and there are a lot of things to measure and analyse, it is very easy to get distracted from the real goal! You can optimize your application once you know where the pain points are, but there is no point performing a load test of 5,000 users if your intranet application will only ever have 100 simultaneous users; it is important to stay focused on the real goals of the project. So the idea is to have a user story around your load testing scenarios and to test realistically. It is an iterative process: refine your objectives, identify the key scenarios, establish the expected workload and the key metrics you want to report, record the web performance tests, simulate load, and analyse the results.

Is your application already deployed in production? This is great! You can analyse the IIS logs to understand real user behaviour. But what are IIS logs? The IIS logs allow you to record events for each application and web site on the web server, and you can create separate logs for each of your applications and web sites. Logging information in IIS goes beyond the scope of the event logging or performance monitoring features provided by Windows. The IIS logs can include information such as who has visited your site, what the visitor viewed, and when the information was last viewed, and you can use them to identify attempts to gain unauthorized access to your web server.

How do you configure and analyse IIS logs? For those ninjas who already have IIS logging configured (it is on by default) and need a way to analyse the logs, there is the Windows utility Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data such as IIS logs, Event Viewer entries, XML files, CSV files, the file system and others, and it allows you to export the results of queries to many output formats such as CSV, XML, SQL Server and charts. It works well with IIS 5, 6, 7 and 7.5. See the frequently used Log Parser queries.

Demo – Load Test

[Demo] - Visual Studio Ultimate 2010: Load Testing

In this short video I try to answer the following questions: What are the types of performance testing? How do you perform goal-driven load testing, analyse the test run results and generate a report?

Recap

Thank you for taking the time to read this blog post. In part III of this series I'll get into the details of test result analysis, test result drill-through, test report generation, test run comparison, and the ASP.NET profiler. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions, feedback, suggestions, etc. - please leave a comment. See you in Part III.

    Read the article

  • Is mocking for unit testing appropriate in this scenario?

    - by Vinoth Kumar
    I have written around 20 methods in Java and all of them call some web services. None of these web services are available yet. To carry on with the server-side coding, I hard-coded the results that the web service is expected to give. Can we unit test these methods? As far as I know, unit testing is mocking the input values and seeing how the program responds. Is mocking both the input and output values meaningful? Edit: The answers here suggest I should be writing unit test cases. Now, how can I write them without modifying the existing code? Consider the following sample (hypothetical) code:

        public int getAge() {
            Service s = locate("ageservice"); // line 1
            int age = s.execute(empId);       // line 2
            return age;                       // line 3
        }

    Now how do we mock the output? Right now, I am commenting out line 1 and replacing line 2 with int age = 50. Is this right? Can anyone point me to the right way of doing it?
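
    For illustration, here is a hedged sketch of the usual alternative to editing the method body: route the lookup through an overridable seam and substitute a fake in the test. It is shown in Python only to keep it short and self-contained; in Java the same effect is typically achieved by injecting the Service (or by subclass-and-override), often with the help of a mocking library such as Mockito. The class and method names below are invented for the example.

        class EmployeeInfo:
            def __init__(self, emp_id):
                self.emp_id = emp_id

            def get_age(self):
                service = self._locate("ageservice")   # the seam: an overridable lookup
                return service.execute(self.emp_id)

            def _locate(self, name):
                raise NotImplementedError("real service locator goes here")

        # In the test, override the seam instead of touching get_age():
        class FakeAgeService:
            def execute(self, emp_id):
                return 50                               # the hard-coded expected result

        class EmployeeInfoForTest(EmployeeInfo):
            def _locate(self, name):
                return FakeAgeService()

        assert EmployeeInfoForTest(emp_id=7).get_age() == 50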

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >