Search Results

Search found 10033 results on 402 pages for 'fuzz testing'.

Page 6/402 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • The importance of Unit Testing in BI

    - by Davide Mauri
    One of the main steps in the process we internally use to develop a BI solution is the implementation of unit tests of your BI data. As you may already know, I've created a simple (for now) tool that leverages NUnit to allow us to quickly create unit tests without having to resort to Visual Studio Database Professional: http://queryunit.codeplex.com/

    Once you have a tool like this one, you can also start to make sure that your BI solution (DWH and cube) is not only structurally sound (I mean, the cube or the report gets processed correctly), but that the logical integrity of your business rules is enforced as well. For example, let's say the customer tells you that they will never create an invoice for a specific product line in 2010, since that product line has been discontinued and will never be sold again. OK, we know that in theory this is true, but much of this business rule's effectiveness depends on people not making mistakes while inserting new orders/invoices and on the ERP implementing a check for this business logic. Unfortunately, these last two hypotheses are not always true, so you may find yourself with invoices for a product line that doesn't exist anymore. Maybe in the future this kind of situation will be solved using Master Data Management but, meanwhile, how can you give your customers an idea of the data quality? How can you check that the logical integrity of the analytical data you produce is exactly what you expect?

    Well, unit testing of a DWH or a cube can be a solution. Once you have defined your test suite, by writing SQL and MDX queries that check that your data is what you expect it to be, and if you use NUnit (as QueryUnit does), you can then use a tool like NUnit2Report to create a nice HTML report that can be shipped via email to give information about data quality. In addition, since NUnit produces an XML file as a result, you can also import it into a SQL Server database and then monitor the quality of the data over time.

    I'll be speaking about this approach (and more generally about how to "engineer" a BI solution) at the next European SQL PASS: Adaptive BI Best Practices http://www.sqlpass.org/summit/eu2010/Agenda/ProgramSessions/AdaptiveBIBestPratices.aspx I'll enjoy discussing all of this with you, so see you there! And remember: "if it ain't tested, it's broken!" (Sorry, I don't remember who said that in the first place.)
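    The kind of data-quality check described above can be sketched as a plain NUnit test that runs a SQL query against the warehouse and asserts on the result. This is not QueryUnit's own syntax, just a minimal hand-rolled NUnit + ADO.NET illustration; the connection string, table and column names are hypothetical:

        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class ProductLineDataQualityTests
        {
            // Hypothetical connection string; point it at your DWH.
            private const string ConnectionString =
                "Data Source=.;Initial Catalog=MyDwh;Integrated Security=SSPI";

            [Test]
            public void DiscontinuedProductLine_HasNoInvoicesIn2010()
            {
                const string sql = @"
                    SELECT COUNT(*)
                    FROM FactInvoice f
                    JOIN DimProduct p ON p.ProductKey = f.ProductKey
                    WHERE p.ProductLine = 'Touring'  -- hypothetical discontinued line
                      AND f.InvoiceYear = 2010";

                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    var offendingRows = (int)cmd.ExecuteScalar();
                    // The business rule says no 2010 invoices may exist for this line.
                    Assert.AreEqual(0, offendingRows, "Found invoices for a discontinued product line.");
                }
            }
        }

    The same idea works with an MDX query through ADOMD.NET when the check has to run against the cube rather than the relational DWH.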

    Read the article

  • Testing Spring MVC Output

    - by Sammy
    Hello. I have a Spring MVC project that exposes my services in JSON format. What would be the ideal method of (unit?) testing whether or not my methods return the correct content (in terms of JSON syntax as well)? Thank you, Sammy

    Read the article

  • Unit Testing TSQL

    - by Grant Fritchey
    I went through a period of time where I spent a lot of effort figuring out how to set up unit tests for TSQL. It wasn't easy. There are a few tools out there that help, but mostly it involves lots of programming. Well, not as much as before. Thanks to the latest Down Tools Week at Red Gate, a new utility has been built and released into the wild: SQL Test. Like a lot of the new tools coming out of Red Gate these days, this one is directly integrated into SSMS, which means you're working where you're comfortable and where you already have lots of tools at your disposal.

    After the install, when you launch SSMS and get connected, you're prompted to install the tSQLt example database. Go for it. It's a quick way to see how the tool works, and I'd suggest using it. It gives you a quick leg up. The concepts are pretty straightforward. There is a series of CLR commands that you use to configure a test and the test assertions. In between, you're calling TSQL, either calls to your structures, queries, or stored procedures. They already have the one thing that I always found wanting in database tests: a way to compare tables of results. I also like the ability to create a dummy copy of tables for the tests. It lets you control structures and behaviors so that the tests are more focused. One of the issues I always ran into with the other testing tools is that setting up the tests might require potentially destructive changes to the structure of the database (dropping FKs, etc.), which added lots of time and effort to setting up the tests, making testing more difficult and, therefore, less useful.

    Functionally, this is pretty similar to the Visual Studio tests and TSQLUnit tests that I used to use. The primary improvement over the Visual Studio tests is that I'm working in SSMS instead of Visual Studio. The primary improvement over TSQLUnit is the SQL Test interface itself. A lot of the functionality is the same, but having a sweet little tool to manage and run the tests from makes a huge difference. Oh, and don't worry: you can still run these tests directly from TSQL too, so automation has not gone away. I'm still thinking about how I'd use this in a dev environment where I also had source control to fret about. That might be another blog post right there. I'm just getting started with SQL Test, so this is the first of several blog posts and videos. Watch this space. Try the tool.

    Read the article

  • Automating GUI testing using C#

    - by ladar
    I am working on a project to build automated GUI testing for a graphical application in .NET. I will use C#, but I am reading around to get some ideas. I don't have any idea how to record user actions and replay them back. Can you give me your ideas?
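    For the replay side, one option worth a look is the UI Automation API that ships with the .NET Framework (System.Windows.Automation). It does not record anything by itself, but it can locate controls in a running application and drive them programmatically; a minimal sketch, where the window title and automation id are hypothetical:

        using System.Windows.Automation;

        class UiAutomationSketch
        {
            static void ClickOkButton()
            {
                // Find a top-level window by its title (hypothetical name).
                var window = AutomationElement.RootElement.FindFirst(
                    TreeScope.Children,
                    new PropertyCondition(AutomationElement.NameProperty, "My Application"));

                // Find a button inside that window by its AutomationId (hypothetical).
                var button = window.FindFirst(
                    TreeScope.Descendants,
                    new PropertyCondition(AutomationElement.AutomationIdProperty, "okButton"));

                // Invoke the button, as a user click would.
                var invoke = (InvokePattern)button.GetCurrentPattern(InvokePattern.Pattern);
                invoke.Invoke();
            }
        }

    Recording is the harder half: it usually means hooking low-level mouse/keyboard input (or using an existing recorder such as the Coded UI test recorder in some Visual Studio editions) and translating the captured events into calls like the ones above.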

    Read the article

  • Testing site performance with multiple browsers and versions

    - by jasongullickson
    We're trying to document the performance difference of our site using different browsers. We use LoadRunner for load testing, but I don't see a way to specify the "browser engine" it uses to run its tests (perhaps it uses its own?). In any event, I'm not sure that LoadRunner is the right tool for this job, but we own it, so if we can use it, great. If not, is there another tool out there that I can use to record a script and run it automatically against a site using several different browsers?
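    If LoadRunner turns out not to fit, Selenium WebDriver can run the same script against several real browsers. A rough sketch in C# that times a page load in Firefox and Chrome, assuming the Selenium .NET bindings and the matching browser drivers are installed (the URL is hypothetical, and a Stopwatch around a page load is only a coarse measurement):

        using System;
        using System.Diagnostics;
        using OpenQA.Selenium;
        using OpenQA.Selenium.Chrome;
        using OpenQA.Selenium.Firefox;

        class BrowserTimingSketch
        {
            static void Main()
            {
                // One factory per browser engine to compare.
                var browsers = new Func<IWebDriver>[]
                {
                    () => new FirefoxDriver(),
                    () => new ChromeDriver()
                };

                foreach (var create in browsers)
                {
                    using (IWebDriver driver = create())
                    {
                        var watch = Stopwatch.StartNew();
                        driver.Navigate().GoToUrl("http://staging.example.com/"); // hypothetical URL
                        watch.Stop();
                        Console.WriteLine("{0}: {1} ms", driver.GetType().Name, watch.ElapsedMilliseconds);
                    }
                }
            }
        }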

    Read the article

  • What's your approach to testing iPhone / iPad apps?

    - by R0MANARMY
    When developing for iPhone/iPad:
    - Do you do unit/integration/etc. testing?
    - What framework(s) do you use?
    - What other framework(s) have you tried (and if you decided not to use them, why not)?
    NOTE: This is based on a question asked a few days ago (that has since been heavily edited). The question generated some interesting responses that may be useful to aggregate in one place.

    Read the article

  • At what point would you drop some of your principles of software development for the sake of more money?

    - by MeshMan
    I'd like to throw this question out there to see where the middle ground is. I'll admit that in the last 12 months I picked up TDD and a lot of the Agile values in software development. I was so overwhelmed with how much better my development of software became that I thought I would never drop them out of principle. Until... I was offered a contracting role that doubled my take-home pay for the year. The company I joined didn't follow any specific methodology, the team hadn't heard of anything like code smells or SOLID, and I certainly wasn't going to get away with spending time doing TDD if the team had never even seen unit testing in practice.

    Am I a sell-out? No, not completely. Code will always be written "cleanly" (as per Uncle Bob's teachings) and the principles of SOLID will always be applied to the code that I write as they are needed. Testing was dropped for me, though; the company couldn't afford to have such an unknown handed to a team who, quite frankly, even if I did create test frameworks, would never use or maintain them correctly.

    Using that as an example, at what point would you say a developer should never drop his craftsmanship principles for the sake of money or other personal benefits? I understand that this can be a very personal opinion, depending on how concerned one is with one's own needs, the business's needs, the sake of craftsmanship, and so on. But consider, for example, that testing might be dropped if the company decided they would rather have a test team than understand unit testing in programming; would that be something you could forgive yourself for, like I did? So, given that there is something you would drop, there usually should be a corresponding trade-off in the business that makes up for what you drop - hopefully, unless of course you are pretty much out to line your own pockets rather than collaborate ;). Double your money and go back to RAD? Or walk on, look for someone doing Agile, and never look back?

    Read the article

  • VisualAssert Testing in C++, Loading a test fixture.

    - by C_Bevan
    Good day. I am learning testing in Visual Studio C++ and I have several tutorials which I have followed. I am trying to load a test fixture, and I have tried putting the test .cpp file in many different places, but it still will not be picked up when I click on "Run Tests" or "Run Tests without debugging". In the tutorials I found, the fixtures seemed to load into the Test Explorer automatically, but in mine there is an icon with an X next to (PROJECTNAME).EXE, and when I hover over it I get "the process exited without registering with the agent... this is due to the model not containing any test fixtures...". How can I load my tests into the Test Explorer, or register them with my project? I've tried right-clicking and choosing "Add Fixture...", but that just starts a new test file and I have the same problem. Does anybody know how I can solve this issue?

    Read the article

  • Best practice Unit testing abstract classes?

    - by Paul Whelan
    Hello, I was wondering what the best practice is for unit testing abstract classes and classes that extend abstract classes. Should I test the abstract class by extending it, stubbing out the abstract methods, and then testing all the concrete methods, and then, in the unit tests for objects that extend my abstract class, only test the methods I override and the abstract methods? Or should I have an abstract test case that can be used to test the methods of the abstract class, and extend this class in my test case for objects that extend the abstract class? EDIT: My abstract class has some concrete methods. I would be interested to see what people are using. Thanks, Paul
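    The second option (an abstract test case that concrete fixtures extend) can be sketched in NUnit roughly as follows; the production classes here are hypothetical stand-ins, and the runner executes inherited [Test] methods on each concrete fixture:

        using NUnit.Framework;

        // Hypothetical production types: a concrete method built on an abstract one.
        public abstract class PriceCalculator
        {
            public decimal Total(decimal net) { return net + Tax(net); } // concrete
            protected abstract decimal Tax(decimal net);                 // varies per subclass
        }

        public class FlatTaxCalculator : PriceCalculator
        {
            protected override decimal Tax(decimal net) { return net * 0.2m; }
        }

        // Abstract fixture: exercises the concrete behaviour once for every subclass.
        public abstract class PriceCalculatorContractTests
        {
            protected abstract PriceCalculator CreateCalculator();

            [Test]
            public void Total_IsNeverLessThanTheNetAmount()
            {
                Assert.GreaterOrEqual(CreateCalculator().Total(100m), 100m);
            }
        }

        // One concrete fixture per subclass; subclass-specific tests live here too.
        [TestFixture]
        public class FlatTaxCalculatorTests : PriceCalculatorContractTests
        {
            protected override PriceCalculator CreateCalculator() { return new FlatTaxCalculator(); }

            [Test]
            public void Tax_IsTwentyPercent()
            {
                Assert.AreEqual(120m, CreateCalculator().Total(100m));
            }
        }

    The first option (a throwaway subclass that stubs the abstract methods) tends to work better when the abstract class has no meaningful subclasses of its own against which to reuse the tests.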

    Read the article

  • SSIS Script Component Testing Strategy

    - by Paul Kohler
    This question is about the Script Component specifically; I am aware of ssisUnit etc. With simple SSIS Script Components it's sufficient to let basic testing flesh out issues, but I am working with a script that has grown in complexity over time. To make the functionality easier to test, I am considering abstracting the script logic into a DLL that gets deployed with the package and then calling that custom assembly from the script. The advantage is that the logic becomes more testable, but it's one more deployment artefact that needs to be managed. My question is: does anyone know of a better way to test such an SSIS script in a more isolated manner than running the whole package and examining the output?
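    A rough sketch of the extract-and-test approach described above, with hypothetical names: the row-level logic lives in a plain class library that both the Script Component and an NUnit test project reference, so the tests never need to run the package.

        using NUnit.Framework;

        // Plain class library, referenced by the SSIS Script Component and by the tests.
        public static class CustomerNameCleaner
        {
            // The row-level transformation the Script Component would otherwise do inline.
            public static string Clean(string rawName)
            {
                if (string.IsNullOrEmpty(rawName)) return string.Empty;
                return rawName.Trim().ToUpperInvariant();
            }
        }

        // Unit tests exercise the logic in isolation.
        [TestFixture]
        public class CustomerNameCleanerTests
        {
            [Test]
            public void Clean_TrimsAndUppercases()
            {
                Assert.AreEqual("ACME LTD", CustomerNameCleaner.Clean("  acme ltd "));
            }

            [Test]
            public void Clean_TreatsNullAsEmpty()
            {
                Assert.AreEqual(string.Empty, CustomerNameCleaner.Clean(null));
            }
        }

    Inside the Script Component, the ProcessInputRow override then shrinks to a one-line call into the library, which keeps the untestable surface as small as possible.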

    Read the article

  • Integration Testing an Entire *Existing* Application (w/ automatic execution of test suite)

    - by Ev
    Hi there, I have just joined a team working on an existing Java web app. I have been tasked with creating an automated integration test suite that should run when developers commit to our continuous integration server (TeamCity), which automatically deploys to our staging server - so the tests will really be run against our staging web app server. I have read a lot about automated integration testing with frameworks like Watir, Selenium and RWebSpec. I have created tests in all of these and, while I prefer Watir, I am open to anything. The thing that hasn't become clear to me is how to create an entire test suite for an application and how to have that suite execute in its entirety upon execution of some script. I can happily create individual tests of varying complexity, but there is a gap in my knowledge about how to tie everything together into something useful. Does anyone have any advice on how to create a full test suite and have it execute automatically? Thanks!

    Read the article

  • Unit Testing functions within repository interfaces - ASP.net MVC3 & Moq

    - by RawryLions
    I'm getting into writing unit tests and have implemented a nice repository pattern plus Moq to allow me to test my functions without using "real" data. So far so good. However, in my repository interface for posts, IPostRepository, I have a function:

        Post getPostByID(int id);

    I want to be able to test this from my test class but cannot work out how. So far I am using this pattern for my tests:

        [SetUp]
        public void Setup()
        {
            mock = new Mock<IPostRepository>();
        }

        [Test]
        public void someTest()
        {
            populate(10); // This populates the mock with 10 fake entries
            // do test here
        }

    In my function "someTest" I want to be able to call/test the function getPostByID. I can find the function with mock.Object.getPostByID, but the "object" is null. Any help would be appreciated :)

    IPostRepository:

        public interface IPostRepository
        {
            IQueryable<Post> Posts { get; }
            void SavePost(Post post);
            Post getPostByID(int id);
        }
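    For what it's worth, if the symptom is that getPostByID returns null, that is Moq's default: a mock returns the default value for any call that has not been set up. A minimal sketch of configuring that behaviour (the Post initialization is hypothetical):

        [Test]
        public void getPostByID_ReturnsThePostConfiguredOnTheMock()
        {
            var mock = new Mock<IPostRepository>();

            // Teach the mock how to answer; without a Setup, reference types come back null.
            var expected = new Post(); // set whatever properties the test needs
            mock.Setup(r => r.getPostByID(5)).Returns(expected);

            var result = mock.Object.getPostByID(5);

            Assert.AreSame(expected, result);
            mock.Verify(r => r.getPostByID(5), Times.Once());
        }

    In practice the mock is only half the story: the stubbed repository gets injected into the controller or service under test, and the assertions are made against that component's behaviour rather than against the mock itself.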

    Read the article

  • How to do concurrent modification testing for grails application

    - by werner5471
    I'd like to run tests that simulate users modifying certain data at the same time for a Grails application. Are there any plug-ins, tools or mechanisms I can use to do this efficiently? They don't have to be Grails specific. It should be possible to fire multiple actions in parallel. I'd prefer to run the tests at the functional level (so far I'm using Selenium for other tests) to see the results from the user's perspective. Of course this can be done in addition to integration testing, if you'd recommend running concurrent modification tests at the integration level as well.
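    Since the tooling doesn't have to be Grails specific, one low-tech option is a functional test that fires the same modification from several clients at once and asserts on the outcome. A rough sketch using .NET's HttpClient and NUnit against a hypothetical staging endpoint (the URL, form field and expected behaviour are all placeholders):

        using System.Collections.Generic;
        using System.Linq;
        using System.Net.Http;
        using System.Threading.Tasks;
        using NUnit.Framework;

        [TestFixture]
        public class ConcurrentEditTests
        {
            [Test]
            public async Task ParallelEditsOfTheSameRecord_BehaveAsSpecified()
            {
                const string url = "http://staging.example.com/book/update/42"; // hypothetical

                using (var client = new HttpClient())
                {
                    // Fire several conflicting updates at (almost) the same time.
                    var updates = Enumerable.Range(0, 5).Select(i =>
                        client.PostAsync(url, new FormUrlEncodedContent(new[]
                        {
                            new KeyValuePair<string, string>("title", "Edit " + i)
                        }))).ToList();

                    var responses = await Task.WhenAll(updates);
                    var accepted = responses.Count(r => r.IsSuccessStatusCode);

                    // The assertion depends on the intended behaviour: with optimistic
                    // locking you might expect exactly one winner; with last-write-wins,
                    // all five updates should succeed.
                    Assert.AreEqual(5, accepted, "Some concurrent updates were rejected.");
                }
            }
        }

    The same shape works from Groovy or a tool like JMeter; the important parts are that the parallel requests hit one shared record and that the assertion encodes the concurrency behaviour you actually want.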

    Read the article

  • Are there any language agnostic unit testing frameworks?

    - by Bringer128
    I have always been skeptical of rewriting working code - porting code is no exception to this. However, with the advent of TDD and automated testing it is much more reasonable to rewrite and refactor code. Does anyone know if there is a TDD tool that can be used for porting old code? Ideally you could do the following:
    1. Write up language-agnostic unit tests for the old code that pass (or fail if you find bugs!).
    2. Run the unit tests on the other code base, where they fail.
    3. Write code in your new language that passes the tests without looking at the old code.
    The alternative would be to split step 1 into "Write up unit tests in language 1" and "Port unit tests to language 2", which significantly increases the effort required and is difficult to justify if the old code base is going to stop being maintained after the port (that is, you don't get the benefit of continuous integration on this code base). EDIT: It's worth noting this question on StackOverflow.
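    One way to keep the tests genuinely language agnostic is to exercise both implementations only through an external boundary (a process, an HTTP endpoint, a file format) and compare the observable output. A small sketch of that idea in C#, with entirely hypothetical executable names, runs the legacy binary and the port on the same input and expects identical results:

        using System.Diagnostics;
        using NUnit.Framework;

        [TestFixture]
        public class PortingCharacterisationTests
        {
            // Hypothetical executables: the legacy implementation and its port.
            private const string OldExe = "legacy-tool.exe";
            private const string NewExe = "ported-tool.exe";

            private static string Run(string exe, string args)
            {
                var psi = new ProcessStartInfo(exe, args)
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };
                using (var process = Process.Start(psi))
                {
                    var output = process.StandardOutput.ReadToEnd();
                    process.WaitForExit();
                    return output;
                }
            }

            [TestCase("--sum 1 2 3")]
            [TestCase("--sum -5 5")]
            public void PortProducesTheSameOutputAsTheLegacyTool(string args)
            {
                Assert.AreEqual(Run(OldExe, args), Run(NewExe, args));
            }
        }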

    Read the article

  • How to remove the “AMD Testing use only” watermark from Ubuntu 12.10

    - by Lucio
    I've installed the latest Catalyst driver (beta) following the steps in this guide for Ubuntu Quantal Quetzal. My system is 64-bit and my graphics card is an ATI Radeon HD 6670; this card is officially supported (Catalyst & open source), as you can confirm in this AMD Linux Community thread. I don't have any problems except the "AMD Testing use only" watermark, which I see everywhere in the OS (logged in, logged out, etc.) except in the terminals. I found different versions of how to remove this image, but the procedure changes according to the system, so I want an answer from this popular (trusted) site. How do I solve this issue in Ubuntu 12.10 32-bit? Is the procedure different on a 64-bit system?

    Read the article

  • Networking Programs Suitable For Symbolic Testing

    - by Milen
    Symbolic execution has been successfully used to test programs and automatically generate test cases. I've been working on my master's thesis, which allows the testing of arbitrary networked programs (i.e., those communicating via sockets). Now that we have a working symbolic execution engine that has support for sockets, we're looking for real-world pieces of software to test. Our engine has an important restriction (at the moment): it cannot execute multi-threaded programs. So, we're looking for programs that satisfy the criteria outlined below:
    - Written in C
    - Communicates via sockets (TCP / UDP are supported)
    - Does not rely on the filesystem to get the "job" done
    - Runs on Linux
    - Does not use multi-threading
    - Source is available (so that we can compile them to LLVM bytecode)
    Most programs that fall under these criteria would probably be implementations of distributed protocols solving a particular problem (e.g., consensus). Any suggestions are greatly appreciated.

    Read the article

  • Is unit testing or test-driven development worthwhile?

    - by Owen Johnson
    My team at work is moving to Scrum and other teams are starting to do test-driven development using unit tests and user acceptance tests. I like the UATs, but I'm not sold on unit testing for test-driven development or test-driven development in general. It seems like writing tests is extra work, gives people a crutch when they write the real code, and might not be effective very often. I understand how unit tests work and how to write them, but can anyone make the case that it's really a good idea and worth the effort and time? Also, is there anything that makes TDD especially good for Scrum?

    Read the article

  • A testing feedback/report tool?

    - by Mert
    I'm thinking of developing a pluggable test and assessment module. This tool would be used especially on desktop application projects to report and log errors, bugs, missing features and suggestions from testers. The tool would be plugged into the application by adding a small icon to the application itself; when the icon is pressed, the tool becomes visible and the tester can create entries about the application. Is there already a tool like that? I am not speaking about UI testing, by the way. For example, this tool might have a form consisting of:
    - Page name
    - Environment information
    - Entry type (can be bug, feature request, suggestion)
    - Message
    - User info (name, contact etc.)
    - Date
    I think such a tool can greatly help testers prepare reports. Developers can understand the issue better and track all the reports.
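    A minimal sketch of the entry such a form might capture, with hypothetical names, just to make the shape of the data concrete:

        using System;

        // Hypothetical report entry captured by the in-app feedback form.
        public enum EntryType { Bug, FeatureRequest, Suggestion }

        public class FeedbackEntry
        {
            public string PageName { get; set; }        // where in the app the tester was
            public string EnvironmentInfo { get; set; } // OS, build number, screen size, ...
            public EntryType Type { get; set; }
            public string Message { get; set; }
            public string UserName { get; set; }
            public string UserContact { get; set; }
            public DateTime CreatedAt { get; set; }
        }

    Serializing a list of these to a file or posting them to a small web service is usually enough for developers to triage and track the reports.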

    Read the article

  • Unit testing time-bound code

    - by maasg
    I'm currently working on an application that does a lot of time-bound operations. That is, based on long now = System.currentTimeMillis(); and combined with a scheduler, it calculates periods of time that parametrize the execution of some operations. For example:

        public void execute(...) {
            // executed by a scheduler every x minutes
            final int now = (int) TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis());
            final int alignedTime = now - now % getFrequency();
            final int startTime = alignedTime - 2 * getFrequency();
            final int endTimeSecs = alignedTime - getFrequency();
            uploadData(target, startTime, endTimeSecs);
        }

    Most parts of the application are unit-tested independently of time (in this case, uploadData has a natural unit test), but I was wondering about best practices for testing the time-bound parts that rely on System.currentTimeMillis().
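    The usual answer is to stop calling the system clock directly and inject the time source, so tests can pin "now" to a fixed value (in Java that can be a one-method interface or, in later versions, java.time.Clock). A sketch of the pattern, written here in C# with entirely hypothetical names:

        using System;
        using NUnit.Framework;

        // The scheduler-driven class asks an injected clock for the time.
        public class UploadWindowCalculator
        {
            private readonly Func<long> nowSeconds; // the injected time source
            private readonly int frequency;

            public UploadWindowCalculator(Func<long> nowSeconds, int frequency)
            {
                this.nowSeconds = nowSeconds;
                this.frequency = frequency;
            }

            // Returns the (start, end) window that would be uploaded.
            public Tuple<long, long> NextWindow()
            {
                var now = nowSeconds();
                var aligned = now - now % frequency;
                return Tuple.Create(aligned - 2 * frequency, aligned - frequency);
            }
        }

        [TestFixture]
        public class UploadWindowCalculatorTests
        {
            [Test]
            public void WindowIsAlignedToTheFrequency()
            {
                // A fixed, fake "now" makes the calculation deterministic.
                var calculator = new UploadWindowCalculator(() => 1000123L, frequency: 600);

                var window = calculator.NextWindow();

                Assert.AreEqual(998400L, window.Item1); // aligned (999600) - 2 * 600
                Assert.AreEqual(999000L, window.Item2); // aligned (999600) - 600
            }
        }

    In production the calculator is constructed with the real clock (for example () => DateTimeOffset.UtcNow.ToUnixTimeSeconds(), or System.currentTimeMillis() / 1000 in the Java original), and only the tests substitute a canned value.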

    Read the article

  • How to improve testing your own code

    - by Peter
    Hi guys, today I checked in a change to some code which turned out not to work at all, due to something rather stupid yet very crucial. I feel really bad about it and I hope I finally learn something from it. The stupid thing is, I've done these things before and I always tell myself, next time I won't be so stupid... then it happens again and I feel even worse about it. I know you should keep your chin up and learn from your mistakes, but here's the thing: I try to improve myself, I just don't see how I can prevent these things from happening. So, now I'm asking you guys: do you have certain ground rules for testing your own code?

    Read the article

  • How come verification does not include actual testing?

    - by user970696
    Having read a lot about this topic, I still don't get it. Verification should prove that you are building the product right, while with validation you prove that you are building the right product. But only static techniques are mentioned as verification methods (code reviews, requirements checks...). How can you say whether something is implemented correctly if you do not test it? It is said that verification checks, e.g., the code for its correctness. Verification - ensure that the product meets the specified requirements. Again, if a function is specified to work in a certain way, only by testing can I say that it does. Could anyone explain this to me please?
    EDIT: As Wikipedia says:
    - Verification: preparing of the test cases (based on the analysis of the requirements)
    - Validation: running of the test cases

    Read the article

  • Scrum got specific ways for testing software?

    - by joker13
    When reading the Scrum Guide, the official text for Scrum, I find that there is no specific solution for software testing in Scrum (the only hint is on page 15). I'm a little vague on whether Scrum is considered a software development methodology or not. If it is not, then how come some of its practices oppose Extreme Programming? (I know that in the Scrum Guide the author notes that Scrum is a framework, not a methodology, but I'm still not entirely clear on that.) What's more, I'm not sure whether there are any other important texts about Scrum that I'm missing so far. I need them to be official or widely accepted.

    Read the article

  • Testing To Prevent Cascading Bugs

    - by jfrankcarr
    Yesterday, Twitter was hit with a "Cascading Bug" as described in this blog post: A “cascading bug” is a bug with an effect that isn’t confined to a particular software element, but rather its effect “cascades” into other elements as well. I've seen this kind of bug, on a smaller scale of course, on some projects I've worked on. They can be difficult to identify in dev/test environments, even within a test driven development environment. My questions are... What are some strategies you use, beyond the basic TDD and standard regression testing, to identify and prevent the potential trouble points that might only occur in the production environment? Does the presence of such problems indicate a breakdown in the software development process or simply a by-product of complex software systems?

    Read the article

  • Testing web applications written in java

    - by Vinoth Kumar
    How do you test web applications (both server-side and client-side code)? The testing method has to work irrespective of the framework used (Struts, Spring Web MVC, etc.). I am using Java for the server-side code, and JavaScript and HTML for the client side. This is a sample test case of the kind of thing I am talking about:
    1. When you click on a link, a pop-up opens.
    2. Change some value in the pop-up (say, a drop-down value) and it gets saved in the DB.
    3. Click the pop-up again and you get the changed values.
    Can we simulate this kind of thing using unit test cases? Is JUnit enough for this?

    Read the article

  • Dog-1 testing as a synonym for integration / system test

    - by SkeetJon
    In the company I'm working for, the phrase "Dog-1" is used for the software testing scenario where the first piece of data passes through the complete system. Has anyone heard this phrase in a similar context? There's nothing on Google about it in a software development context, so I'm guessing it's an idiosyncrasy of my current environment. Is there a recognized phrase in the industry for this kind of test? I don't think it's simply a 'system test' / 'integration test', as 'Dog-1' refers specifically to the first item/unit through the system.

    Read the article
