Search Results

Search found 10078 results on 404 pages for 'smoke testing'.

Page 45/404

  • Mod_Rewrite: Testing URL got indexed in Google - How do I create a proper 301 redirect?

    - by Jonathan Wold
    I worked on a website for which I had a "development URL" that looked something like this: www.domainname.com.php5-9.dfw1-2.websitetestlink.com/ Now, several weeks after the website launch, there is at least one page of content indexed in Google under that URL. Question: how do I redirect all requests from that test URL to the actual domain? So, for instance, I would want www.domainname.com.php5-9.dfw1-2.websitetestlink.com/page-name to go to www.domainname.com/page-name. The website is powered by WordPress and hosted on a PHP server. I've experimented with .htaccess without much success.

    Read the article

  • How do I add a VSTO project as a reference to a unit testing project?

    - by Mathias
    In order not to pollute my projects with unit tests, I like to create a separate project for my unit tests; I add a reference to the project under test in the unit test project. However, this isn't working that well with my VSTO Excel add-in projects: when I create a separate unit test project and go to Add Reference > Projects, there is no project to pick. What I have done so far is Add Reference > Browse and pick the add-in DLL from the debug folder. I have also run into issues with this from time to time, with the reference suddenly not working, requiring me to remove and re-add the DLL reference. Can anybody explain why a VSTO project doesn't show up as a regular project? And is there a better way to go about it than what I am doing presently?

    Read the article

  • BDD-testing using a UI driver (e.g. Selenium for a web-application)

    - by jonathanconway
    Can BDD (Behavior-Driven Development) tests be implemented using a UI driver? For example, given a web application, instead of writing tests for the back-end and then more tests in JavaScript for the front-end, should I write the tests as Selenium macros, which simulate mouse clicks, etc. in the actual browser? The advantages I see in doing it this way are: the tests are written in one language rather than several; they're focused on the UI, which gets developers thinking outside-in; and they run in the real execution environment (the browser), which lets us test different browsers, test different servers, and get insight into real-world performance. Thoughts?
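
    A minimal sketch of what such a UI-driven test could look like, written here in Python with Selenium WebDriver (the URL, element IDs, and expected behaviour are hypothetical):

        # behaviour-style test driven entirely through a real browser
        from selenium import webdriver
        from selenium.webdriver.common.by import By

        def test_visitor_can_search_for_a_product():
            driver = webdriver.Firefox()  # real execution environment
            try:
                # Given a visitor on the home page (placeholder URL)
                driver.get("http://localhost:8080/")
                # When they search for "widget"
                driver.find_element(By.ID, "search-box").send_keys("widget")
                driver.find_element(By.ID, "search-button").click()
                # Then the results page lists at least one match
                results = driver.find_elements(By.CSS_SELECTOR, ".result-row")
                assert len(results) > 0
            finally:
                driver.quit()

    The same script can be pointed at a different browser or a different server simply by swapping the driver or the base URL, which is part of the appeal described above.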

    Read the article

  • Grails unit testing domain classes with Set properties - is this safe?

    - by Ali G
    I've created a domain class in Grails like this:

        class MyObject {
            static hasMany = [tags: String]
            // Have to declare this here, as nullable constraint does not seem to be honoured
            Set tags = new HashSet()
            static constraints = {
                tags(nullable: false)
            }
        }

    Writing unit tests to check the size and content of the MyObject.tags property, I found I had to do the following:

        assertLength(x, myObject.tags as Object[])
        assertEquals(new HashSet([...]), myObject.tags)

    To make the syntax nicer for writing the tests, I implemented the following methods:

        void assertEquals(List expected, Set actual) {
            assertEquals(new HashSet(expected), actual)
        }

        void assertLength(int expected, Set set) {
            assertLength(expected, set as Object[])
        }

    I can now call the assertLength() and assertEquals() methods directly on an instance of Set, e.g.

        assertLength(x, myObject.tags)
        assertEquals([...], myObject.tags)

    I'm new to Groovy and Grails, so I'm not sure how dangerous method overloading like this is. Is it safe? If so, I'm slightly* surprised that these methods (or similar) aren't already available - please let me know if they are.

    * I can see how these methods could also introduce ambiguity if people weren't expecting them. E.g. assertLength(1, set) always passes, no matter what set contains.

    Read the article

  • Xcode Unit Testing - Accessing Resources from the application's bundle?

    - by Ben Scheirman
    I'm running into an issue and I wanted to confirm that I'm doing things the correct way. I can test simple things with my SenTestingKit tests, and that works okay. I've set up a unit test bundle and set it as a dependency of the main application target. It successfully runs all tests whenever I press Cmd+B. Here's where I'm running into issues. I have some XML files that I need to load from the resources folder as part of the application. Being a good unit tester, I want to write unit tests around this to make sure that they are loading properly. So I have some code that looks like this:

        NSString *filePath = [[NSBundle mainBundle] pathForResource:@"foo" ofType:@"xml"];

    This works when the application runs, but during a unit test mainBundle points to the wrong bundle, so this line of code returns nil. So I changed it to use a known class, like this:

        NSString *filePath = [[NSBundle bundleForClass:[Config class]] pathForResource:@"foo" ofType:@"xml"];

    This doesn't work either, because in order for the test to even compile code like this, Config needs to be part of the unit test target. If I add that, then the bundle for that class becomes the unit test bundle. (Ugh!) Am I approaching this the wrong way?

    Read the article

  • Best practices for QA / testing in an Agile (Scrum+XP) team?

    - by Srirangan
    Hey guys, we're getting a QA on our project for the first time, and we're not sure how best to use him. We work in an Agile environment: pair programming, user stories, short sprints (two weeks), daily stand-ups, retrospectives, planning meetings, quick releases, etc. One obvious way to use a tester is to verify bug fixes and user stories every sprint. Are there any better ways for an Agile team to utilize a tester? Thanks, Sri

    Read the article

  • What is the best way of testing XML responses?

    - by user303396
    I'm using Selenium IDE to test my web pages, but unfortunately I cannot use it to test the pages that return an XML response. Some people use Selenium Remote Control; others use modules like WWW::Mechanize and Test::XML or Test::XPath. What is the best way to test the XML responses?
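
    One option is to skip the browser entirely and assert against the parsed response. A sketch in Python rather than the Perl modules mentioned above (the endpoint and the expected elements are hypothetical):

        # fetch an XML response over HTTP and assert on its structure
        import unittest
        import urllib.request
        import xml.etree.ElementTree as ET

        class OrderFeedTest(unittest.TestCase):
            def test_feed_lists_orders(self):
                # placeholder endpoint that returns XML
                with urllib.request.urlopen("http://localhost:8080/orders.xml") as resp:
                    self.assertEqual(resp.status, 200)
                    root = ET.fromstring(resp.read())

                # structural assertions using ElementTree's limited XPath support
                self.assertEqual(root.tag, "orders")
                self.assertGreater(len(root.findall("./order")), 0)
                self.assertIsNotNone(root.find("./order/id"))

        if __name__ == "__main__":
            unittest.main()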

    Read the article

  • Why do you write tests, what is a unit test, and how does it differ from other types of testing?

    - by dfafa
    I'm curious to know why tests are written. Why bother writing them? Why not just compile and run the code, or view it in your browser, click around, and try things out? I mean, I can understand a crawler that checks your web app's functions... but why are tests written, maintained, and treated as being as important as the main feature code? Is it crucial to always write and use tests?
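
    For illustration, here is what a tiny automated test looks like, sketched in Python (discount_price is a hypothetical function standing in for "the main feature code"). Unlike clicking around by hand, this check is repeatable, runs in milliseconds, and can be re-run on every change, so a regression is caught the moment it is introduced:

        # a minimal unit test: exercises one small piece of code in isolation
        import unittest

        def discount_price(price, percent):
            """Hypothetical production function under test."""
            return round(price * (1 - percent / 100), 2)

        class DiscountPriceTest(unittest.TestCase):
            def test_ten_percent_off(self):
                self.assertEqual(discount_price(100.0, 10), 90.0)

            def test_zero_discount_leaves_price_unchanged(self):
                self.assertEqual(discount_price(49.99, 0), 49.99)

        if __name__ == "__main__":
            unittest.main()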

    Read the article

  • Code generation tool, to create C# adapter classes for unit testing?

    - by RyBolt
    I know I wouldn't need this with Typemock; however, with something like Moq, I need to use the adapter pattern to enable the creation of mocks via interfaces for code I don't control. For example, TcpClient is a .NET class, so I use the adapter pattern to enable mocking of this object, because I need an interface for that class. I then produce an interface ITcpClient, which can be implemented by a TcpClientAdapter class, a plain vanilla adapter pattern implementation. I am looking for a tool to do this automatically (creation of the interface and adapter); I would think there is one out there somewhere? (Or is everyone just hand-coding these?)
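
    For readers unfamiliar with the hand-written version of this pattern, here is a rough sketch of the same idea in Python, wrapping the standard socket class instead of .NET's TcpClient (the interface and adapter names are illustrative, not from any tool):

        # adapter pattern: hide an un-mockable class behind an interface we own
        import socket
        from abc import ABC, abstractmethod

        class ITcpClient(ABC):
            """The interface production code depends on (and tests can mock)."""
            @abstractmethod
            def connect(self, host: str, port: int) -> None: ...
            @abstractmethod
            def send(self, data: bytes) -> None: ...
            @abstractmethod
            def close(self) -> None: ...

        class TcpClientAdapter(ITcpClient):
            """Plain pass-through adapter around the real network class."""
            def __init__(self) -> None:
                self._sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

            def connect(self, host: str, port: int) -> None:
                self._sock.connect((host, port))

            def send(self, data: bytes) -> None:
                self._sock.sendall(data)

            def close(self) -> None:
                self._sock.close()

    Production code depends only on ITcpClient; the application passes in a TcpClientAdapter, while a unit test passes in a mock. The question above is whether this boilerplate can be generated rather than typed out by hand.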

    Read the article

  • C# Unit Testing: How do I set a Lazy<T>.IsValueCreated to false?

    - by michael paul
    Basically, I have a unit test that gets a singleton instance of a class. Some of my tests require me to mock this singleton, so that when I call Foo.Instance I get a different type of instance. The problem is that my checks pass individually but fail when run together, because one test interferes with another. I tried a TestCleanup where I set Foo_Accessor._instance = null, but that didn't work. What I really need is Foo_Accessor._instance.IsValueCreated = false (_instance is a Lazy<Foo>). Is there any way to reset the Lazy object that I haven't thought of?

    Read the article

  • Can I use breakpoints (as when debugging) while unit testing?

    - by Richard77
    Hello, I'm walking through the FrontStore series tutorial on TDD in MVC (Part 3, by Rob Conery/ASP.NET). The test I'm concerned with is CatalogRepository_Each_Category_Contains_5_Products(). Until I got to that test, everything was working fine. Now, I've gone through every line that makes up this test (including the test itself, the TestCatalogRepository, ...), and I've also compared my code to Rob's, but the test keeps failing. I also checked the source code from CodePlex; that test was not there. So I wonder: can I put a breakpoint somewhere to check the local values as the test is being executed? If not, is there something similar? Thanks for helping.

    Read the article

  • Unit Testing Example on a class with required fields?

    - by Mastro
    I'm new to writing unit tests and I can't find an example of how to test an existing class I have. The class has a Save method which does an insert or update in the database when the user clicks Save in the UI. The Save method has required fields that need to be populated, and other fields that do not. So how can I test this properly? I was trying to write it out as something like:

        Given a user
        When the user saves the object
        Then Field1 is required
        Then Field2 is required
        Then Field3 is required

        WhenUserSavesObject()
            object = new object
            object.field1 IsNot Nothing

    Something like that, right? And what about the other fields that are optional? How would I test the Save method to make sure it takes all those values properly? I was trying to use BDD, but I'm not sure whether I should. I can't find any examples of testing classes with many properties that are needed when calling a test method.
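
    As a rough illustration of one way to exercise required fields, here is a sketch in Python (the Document class, its field names, and the save behaviour are hypothetical stand-ins for the class described above): one test covers the happy path with all fields populated, and a second drops each required field in turn and expects the save to be rejected.

        # sketch: one happy-path test, plus one test per missing required field
        import unittest

        class MissingFieldError(Exception):
            pass

        class Document:
            """Hypothetical class under test, with required and optional fields."""
            REQUIRED = ("title", "author", "body")

            def __init__(self, **fields):
                self.fields = fields

            def save(self):
                # reject the save if any required field is missing or empty
                for name in self.REQUIRED:
                    if not self.fields.get(name):
                        raise MissingFieldError(name)
                return True  # pretend the insert/update succeeded

        class DocumentSaveTest(unittest.TestCase):
            def valid_fields(self):
                return {"title": "T", "author": "A", "body": "B", "notes": None}

            def test_save_succeeds_with_all_required_fields(self):
                self.assertTrue(Document(**self.valid_fields()).save())

            def test_save_fails_when_any_required_field_is_missing(self):
                for name in Document.REQUIRED:
                    fields = self.valid_fields()
                    fields[name] = None  # drop one required field at a time
                    with self.assertRaises(MissingFieldError):
                        Document(**fields).save()

        if __name__ == "__main__":
            unittest.main()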

    Read the article

  • Tool to automate basic connectivity testing

    - by feicipet
    After our vendors have set up a certain test environment, we need to go in and perform connectivity testing between PCs and servers, and also between servers. The problem is that we run a range of tests, telnetting between two nodes on several ports, and this is a manual and rather tedious process. Does anyone know of a small tool or script that takes as input the range of ports to be tested and runs an automated series of checks against those ports? All I need to do is validate whether a TCP connection can be established from the source PC or server to the target server IP address and port. Thanks, Wong
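
    A script along these lines is straightforward to sketch, for example in Python; the hosts and ports below are placeholders, and only a plain TCP connect is attempted (no banner or protocol check):

        # attempt a TCP connection to each target host/port and report the result
        import socket

        TARGETS = {
            "10.0.0.21": [22, 80, 443, 1521],   # placeholder hosts and ports
            "10.0.0.22": [8080, 8443],
        }
        TIMEOUT_SECONDS = 3

        def can_connect(host, port):
            try:
                with socket.create_connection((host, port), timeout=TIMEOUT_SECONDS):
                    return True
            except OSError:
                return False

        if __name__ == "__main__":
            for host, ports in TARGETS.items():
                for port in ports:
                    status = "OPEN" if can_connect(host, port) else "FAILED"
                    print(f"{host}:{port}\t{status}")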

    Read the article

  • Automated browser testing: How to test JavaScript in web pages?

    - by Dave
    I am trying to write an application that will test a series of web pages programmatically. The web pages being tested have JavaScript embedded within them which alters the structure of the HTML when it completes execution. The goal is then to take the final HTML (after execution of the embedded JavaScript) and compare it against a known output. Essentially, the input/output for the test application is: URL ---[retrieve HTML]--- HTML ---[execute JS, then compare]--- PASS/FAIL. Here is the challenge: I have been unable to find a solution that can take the HTML I retrieve from the URL, process the JavaScript as a browser would, and generate the final HTML that the browser would actually render. It would be very surprising if this sort of problem hasn't been tackled before, so I'm hoping someone out there knows of a fitting solution. If at all possible, I'm hoping for a solution that integrates with .NET (I've tried using the WebBrowser control, with no luck). However, if there is an existing third-party application that can do exactly this, that would be quite acceptable. Thanks in advance for the suggestions! Dave
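
    The usual approach is to let a real browser engine render the page and then read the resulting DOM back out. A sketch of that idea using Selenium WebDriver from Python (the URL and the expected-output file are placeholders; Selenium also ships .NET bindings that follow the same model):

        # render a page in a headless browser, then compare the post-JS DOM to a known-good copy
        from selenium import webdriver

        def rendered_html(url):
            options = webdriver.ChromeOptions()
            options.add_argument("--headless")   # no visible browser window
            driver = webdriver.Chrome(options=options)
            try:
                driver.get(url)                  # the browser executes the embedded JavaScript
                # an explicit wait may be needed here if scripts run asynchronously
                return driver.page_source        # DOM serialized after the scripts have run
            finally:
                driver.quit()

        if __name__ == "__main__":
            actual = rendered_html("http://localhost:8080/page-under-test")  # placeholder URL
            with open("expected_page.html", encoding="utf-8") as f:          # known-good output
                expected = f.read()
            print("PASS" if actual.strip() == expected.strip() else "FAIL")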

    Read the article

  • Unit testing of a static library that involves NSDocumentDirectory and other iOS app-specific calls

    - by Shiun
    Hi, I'm attempting to run unit tests for a static library that attempts to create, write, and read a file in the documents directory. Since this is a static library and not an iOS application, referencing NSDocumentDirectory returns a directory of the form "/Users/<username>/Library/Application Support/iPhone Simulator/Documents". This directory does not exist. When accessing the directory from an actual application, NSDocumentDirectory returns something of the form "/Users/<username>/Library/Application Support/iPhone Simulator/4.2/FEDBEF5F-1326-4383-A087-CDA1B865E61A/Documents" (please note the simulator version as well as the application ID as part of the path). How can I overcome this shortcoming of the unit test framework for static libraries whose tests require iOS app-specific calls? Thanks in advance.

    Read the article

  • JUnit testing with mock objects

    - by Codenotguru
    We are planning to bring JUnit testing into our project, and we've just realised that to do JUnit testing we need to create a lot of mock objects. Can anyone suggest a good tool or framework that can create these mock objects for the classes I will be unit testing?

    Read the article

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick turnaround chunks of time such as "Sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are done first.

    My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from, because if I was trying to write software that targeted some basically mysterious interface I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements, when it comes to automation, seems to be too unique to a feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things end up turning up later as having been missing from the specification.

    One thing we considered was having the testers write test "scripts" that are more like a set of steps that must be performed, as described from a use-case perspective, so that they can be "automated" by a human being. These can then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity, they automate the "script", mainly for regression purposes. This didn't end up catching on in the team, though.

    The testing part of the team is actually falling behind us by quite a margin. This is one reason why the apparently extra time of developing a "script" for a human being to perform just did not happen; they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not really their fault: they're a bottleneck, but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten around to checking. It's an ugly truth that I'd like to do something about.

    So what do other teams do to solve this fail cascade? How can we get testers ahead of us, and how can we make it so that there's actually time for them to write tests for the features we do in a sprint, without making us sit and twiddle our thumbs in the meantime? As it's currently going, getting a feature "done", using Agile definitions, would mean having developers work for one week, then testers work the second week, with developers hopefully able to fix all the bugs they come up with in the last couple of days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...

    Read the article

  • Can I ensure all tests contain an assertion in test/unit?

    - by Andrew Grimm
    With test/unit and minitest, is it possible to fail any test that doesn't contain an assertion, or would monkey-patching be required (for example, checking whether the assertion count increased after each test was executed)? Background: I shouldn't write unit tests without assertions - at a minimum, I should use assert_nothing_raised if I'm smoke testing, to indicate that I'm smoke testing. Usually I write tests that fail first, but I'm writing some regression tests. Alternatively, I could supply an incorrect expected value to see if the test is comparing the expected and actual values.

    Read the article

  • The 2010 JavaOne Java EE 6 Panel: Where We Are and Where We're Going

    - by janice.heiss(at)oracle.com
    An informative article, based on a 2010 JavaOne (San Francisco, California) panel session, surveys a variety of expert perspectives on Java EE 6. The panel, moderated by Oracle's Alexis Moussine-Pouchkine, consisted of:

    * Adam Bien, Consultant, Author/Speaker, adam-bien.com
    * Emmanuel Bernard, Principal Software Engineer, JBoss by Red Hat
    * David Blevins, Senior Software Engineer, co-founder of the OpenEJB project and a founder of Apache Geronimo
    * Roberto Chinnici, Consulting Member of Technical Staff, Oracle
    * Jim Knutson, Java EE Architect, IBM
    * Reza Rahman, Lead Engineer, Caucho Technology, Inc.
    * Krasimir Semerdzhiev, Development Architect, SAP Labs Bulgaria

    The panel addressed such topics as Platform and API Adoption, Contexts and Dependency Injection (CDI), Java EE vs. Spring, the impact of Java EE 6 on tooling and testing, and Java EE.next, along with a variety of audience questions. Read the entire article for the whole picture.

    Read the article

  • How to install google-mock on Ubuntu 12.10

    - by user1459339
    I am having a hard time trying to install the Google C++ Mocking Framework. I have successfully run sudo apt-get install google-mock. Then I tried to compile this sample file:

        #include "gmock/gmock.h"

        int main(int argc, char** argv) {
            ::testing::InitGoogleMock(&argc, argv);
            return RUN_ALL_TESTS();
        }

    with

        g++ -lgmock main.cpp

    and these errors were shown:

        main.cpp:(.text+0x1e): undefined reference to `testing::InitGoogleMock(int*, char**)'
        main.cpp:(.text+0x23): undefined reference to `testing::UnitTest::GetInstance()'
        main.cpp:(.text+0x2b): undefined reference to `testing::UnitTest::Run()'
        collect2: error: ld returned 1 exit status

    I guess the linker cannot find the library files. Does anybody know how to fix this?

    Read the article

  • Can I use a genetic algorithm for balancing character builds?

    - by Renan Malke Stigliani
    I'm starting to build an online PVP (duel-like, one-on-one) game, where there is leveling, skill points, special attacks and all the common stuff. Since I have never done anything like this, I'm still thinking about the math behind the levels/skills/specials balance. So I thought a good way of testing the best builds/combos would be to implement a genetic algorithm. It'd be like this:

    1. Generate a big group of random characters.
    2. Make them fight, and level them up according to their victories (more XP) and losses (less XP).
    3. Mate the winners, crossing their builds, to try and make even better characters.
    4. Add some more random characters, emulating new players.
    5. Repeat the process for some time, or until I find some characters who can beat everyone's butt.

    I could then play with the math and try to find better balances to make sure that the top x% of characters would be a mix of various build types. So, is it a good idea, or is there some other, easier method to do the balancing?
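
    A rough sketch of that loop in Python (the stat names, the fight model, and the GA parameters are all made up for illustration; the real fitness function would be the game's own duel simulation):

        # toy genetic algorithm for searching out strong character builds
        import random

        STATS = ("strength", "agility", "vitality", "magic")   # hypothetical build axes
        POINTS = 20            # skill points to distribute per character
        POP_SIZE = 100
        GENERATIONS = 200
        MUTATION_RATE = 0.1

        def random_build():
            # distribute the skill points at random across the stats
            build = {s: 0 for s in STATS}
            for _ in range(POINTS):
                build[random.choice(STATS)] += 1
            return build

        def fight(a, b):
            # stand-in combat model: swap in the game's real duel simulation here
            score_a = a["strength"] * 2 + a["agility"] + a["vitality"] * 0.5 + random.random() * a["magic"]
            score_b = b["strength"] * 2 + b["agility"] + b["vitality"] * 0.5 + random.random() * b["magic"]
            return a if score_a >= score_b else b

        def fitness(build, population, n_fights=20):
            # XP-style score: wins against randomly chosen opponents
            return sum(fight(build, random.choice(population)) is build for _ in range(n_fights))

        def crossover(mom, dad):
            child = {s: random.choice((mom[s], dad[s])) for s in STATS}
            if random.random() < MUTATION_RATE:     # occasional random tweak, like a respec
                child[random.choice(STATS)] += 1
            return child

        population = [random_build() for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            ranked = sorted(population, key=lambda b: fitness(b, population), reverse=True)
            winners = ranked[: POP_SIZE // 4]                            # keep the top quarter
            children = [crossover(random.choice(winners), random.choice(winners))
                        for _ in range(POP_SIZE // 2)]
            newcomers = [random_build() for _ in range(POP_SIZE // 4)]   # "new players" joining
            population = winners + children + newcomers

        print(sorted(population, key=lambda b: fitness(b, population), reverse=True)[0])

    After enough generations, the surviving builds hint at where the balance is off: if every top character has maxed the same stat, the underlying numbers for that stat probably need tuning.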

    Read the article

  • genetic algorithm for leveling/build test

    - by Renan Malke Stigliani
    I'm starting to build an online PVP (duel-like, one-to-one) game, where there is leveling, skill points, special attacks and all the common stuff. Since I never did anything like that, I'm still thinking about the math behind the level/skill/special balances. So I thought a good way of testing the best builds/combos would be to implement a genetic algorithm. It'd be like this: generate a big batch of random characters; make them fight, and level them up according to their victories (more XP) and losses (less XP); mate the winners, crossing their builds, to try to make even better characters; add some more random characters, emulating new players; and repeat the process for some time, or until some characters turn up who can beat everyone's butt. Then I could play with the math and try to find the balance where the top x% of characters would be a mix of various build types. So, is it a good idea, or are there other, easier methods to do the balancing? PS: I also like this approach because it sounds fun.

    Read the article
