Search Results

Search found 11675 results on 467 pages for 'parallel testing'.

  • The importance of Unit Testing in BI

    - by Davide Mauri
    One of the main steps in the process we use internally to develop a BI solution is the implementation of unit tests of your BI data. As you may already know, I’ve created a simple (for now) tool that leverages NUnit to let us quickly create unit tests without having to resort to Visual Studio Database Professional: http://queryunit.codeplex.com/

    Once you have a tool like this, you can not only make sure that your BI solution (DWH and cube) is structurally sound (I mean, the cube or the report gets processed correctly), but also check that the logical integrity of your business rules is enforced. For example, let’s say the customer tells you that they will never create an invoice for a specific product line in 2010, since that product line has been discontinued and will never be sold again. We know that in theory this is true, but in practice the rule holds only if people make no mistakes while entering new orders/invoices and if the ERP implements a check for this business logic. Unfortunately, these last two hypotheses are not always true, so you may really find yourself with invoices for a product line that no longer exists. Maybe this kind of situation will be solved in the future using Master Data Management but, meanwhile, how can you give your customers an idea of their data quality? How can you check that the logical integrity of the analytical data you produce is exactly what you expect? Well, unit testing a DWH or a cube can be a solution.

    Once you have defined your test suite by writing SQL and MDX queries that check that your data is what you expect it to be, if you use NUnit (and QueryUnit does), you can then use a tool like NUnit2Report to create a nice HTML report that can be shipped via email to report on data quality. In addition, since NUnit produces an XML file as a result, you can also import the results into a SQL Server database and then monitor the quality of your data over time.

    I’ll be speaking about this approach (and more generally about how to “engineer” a BI solution) at the next European SQL PASS: Adaptive BI Best Practices http://www.sqlpass.org/summit/eu2010/Agenda/ProgramSessions/AdaptiveBIBestPratices.aspx I’ll enjoy discussing all of this with you, so see you there! And remember: “if it ain't tested, it's broken!” (Sorry, I don’t remember who said that first :-))
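
    To make the idea concrete, here is a minimal sketch of what such a data-quality check could look like as a plain NUnit fixture (table, column, and connection names are hypothetical; this is the hand-rolled equivalent of the kind of check a tool like QueryUnit automates):

        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class DataQualityTests
        {
            // Hypothetical connection string; point it at your DWH.
            private const string Dwh =
                "Server=localhost;Database=DWH;Integrated Security=true";

            [Test]
            public void DiscontinuedProductLine_HasNoInvoicesIn2010()
            {
                using (var conn = new SqlConnection(Dwh))
                using (var cmd = new SqlCommand(
                    @"SELECT COUNT(*)
                      FROM FactInvoice f
                      JOIN DimProduct p ON p.ProductKey = f.ProductKey
                      WHERE p.ProductLine = 'Touring'   -- the discontinued line
                        AND f.InvoiceYear = 2010", conn))
                {
                    conn.Open();
                    int offendingRows = (int)cmd.ExecuteScalar();
                    Assert.AreEqual(0, offendingRows,
                        "Invoices exist for a discontinued product line");
                }
            }
        }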

    Read the article

  • Unit testing code paths

    - by Michael
    When unit testing using expectations, you define a set of method calls and corresponding results for those calls. These define the path through the method that you want to test. I have read that unit tests should not duplicate the code. But when you define these expectations, isn't that duplicating the code, or at least the process? How do you know when you're duplicating functionality under test?
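
    For illustration, here is a minimal sketch of the situation being described (C# with Moq; the service and repository are invented). The expectations in the test mirror the method's implementation almost line for line, which is exactly what raises the duplication question:

        using Moq;
        using NUnit.Framework;

        public interface IAccountRepository
        {
            decimal GetBalance(int accountId);
            void SetBalance(int accountId, decimal balance);
        }

        public class AccountService
        {
            private readonly IAccountRepository _repo;
            public AccountService(IAccountRepository repo) { _repo = repo; }

            public void Deposit(int accountId, decimal amount)
            {
                var balance = _repo.GetBalance(accountId);
                _repo.SetBalance(accountId, balance + amount);
            }
        }

        [TestFixture]
        public class AccountServiceTests
        {
            [Test]
            public void Deposit_ReadsThenWritesBalance()
            {
                var repo = new Mock<IAccountRepository>();
                repo.Setup(r => r.GetBalance(1)).Returns(100m);

                new AccountService(repo.Object).Deposit(1, 50m);

                // These expectations restate the implementation step by step.
                repo.Verify(r => r.GetBalance(1), Times.Once());
                repo.Verify(r => r.SetBalance(1, 150m), Times.Once());
            }
        }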

    Read the article

  • Testing Spring MVC Output

    - by Sammy
    Hello, So I have an MVC project that exposes my services in JSON format. What would be the ideal method of (unit?) testing whether or not my methods return correct content (in terms of JSON syntax as well)? Thank you, Sammy

    Read the article

  • Unit Testing TSQL

    - by Grant Fritchey
    I went through a period of time where I spent a lot of effort figuring out how to set up unit tests for TSQL. It wasn't easy. There are a few tools out there that help, but mostly it involves lots of programming. Well, not as much as before. Thanks to the latest Down Tools Week at Red Gate, a new utility has been built and released into the wild: SQL Test. Like a lot of the new tools coming out of Red Gate these days, this one is directly integrated into SSMS, which means you're working where you're comfortable and where you already have lots of tools at your disposal.

    After the install, when you launch SSMS and get connected, you're prompted to install the tSQLt example database. Go for it. It's a quick way to see how the tool works, so I'd suggest using it. It gives you a quick leg up. The concepts are pretty straightforward: there is a series of CLR commands that you use to configure a test and the test assertions, and in between you're calling TSQL, whether that's calls to your structures, queries, or stored procedures. They already have the one thing that I always found wanting in database tests: a way to compare tables of results. I also like the ability to create a dummy copy of tables for the tests. It lets you control structures and behaviors so that the tests are more focused. One of the issues I always ran into with the other testing tools is that setting up the tests might require potentially destructive changes to the structure of the database (dropping FKs, etc.), which added lots of time and effort to setting up the tests, making testing more difficult and, therefore, less useful.

    Functionally, this is pretty similar to the Visual Studio tests and TSQLUnit tests that I used to use. The primary improvement over the Visual Studio tests is that I'm working in SSMS instead of Visual Studio. The primary improvement over TSQLUnit is the SQL Test interface itself. A lot of the functionality is the same, but having a sweet little tool to manage and run the tests from makes a huge difference. Oh, and don't worry: you can still run these tests directly from TSQL too, so automation has not gone away. I'm still thinking about how I'd use this in a dev environment where I also had source control to fret about. That might be another blog post right there. I'm just getting started with SQL Test, so this is the first of several blog posts & videos. Watch this space. Try the tool.

    Read the article

  • Automating GUI testing using C#

    - by ladar
    I am working on a project to build automated GUI testing for a graphical application in .NET. I will be using C#, and I have been reading around to get some ideas, but I don't have any idea how to record interactions and play them back. Can you give me your ideas?
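
    For the playback side, one possible starting point (a sketch only; the window title and button label are hypothetical, and it assumes the application is already running) is the UI Automation API in System.Windows.Automation, which can locate a control by name and invoke it programmatically:

        // Requires references to UIAutomationClient and UIAutomationTypes.
        using System.Windows.Automation;

        class ReplayDemo
        {
            static void Main()
            {
                // Find a top-level window by its title (assumed to be "MyApp").
                AutomationElement window = AutomationElement.RootElement.FindFirst(
                    TreeScope.Children,
                    new PropertyCondition(AutomationElement.NameProperty, "MyApp"));

                // Find a button labelled "OK" inside that window.
                AutomationElement button = window.FindFirst(
                    TreeScope.Descendants,
                    new PropertyCondition(AutomationElement.NameProperty, "OK"));

                // "Replay" a click through the Invoke pattern.
                var invoke = (InvokePattern)button.GetCurrentPattern(InvokePattern.Pattern);
                invoke.Invoke();
            }
        }

    Recording could be approached the same way in reverse: subscribe to UI Automation events (Automation.AddAutomationEventHandler) and log which elements the user interacted with, then replay the log with code like the above.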

    Read the article

  • Testing site performance with multiple browsers and versions

    - by jasongullickson
    We're trying to document the performance difference of our site using different browsers. We use LoadRunner for load testing, but I don't see a way to specify the "browser engine" it uses to run its tests (perhaps it uses its own?). In any event I'm not sure that LoadRunner is the right tool for this job, but we own it, so if we can use it, great. If not, is there another tool out there that I can use to record a script and run it automatically against a site using several different browsers?

    Read the article

  • How to do concurrent modification testing for grails application

    - by werner5471
    I'd like to run tests that simulate users modifying certain data at the same time in a Grails application. Are there any plug-ins/tools/mechanisms I can use to do this efficiently? They don't have to be Grails-specific, but it should be possible to fire multiple actions in parallel. I'd prefer to run the tests at the functional level (so far I'm using Selenium for other tests) to see the results from the user's perspective. Of course this can be done in addition to integration testing, if you'd recommend running concurrent modification tests at the integration level as well.
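
    One language-agnostic way to simulate this (a sketch; the URL and payload are invented, and while the app under test here happens to be Grails, the harness need not be) is to fire the same mutating request from several tasks at once and then assert on the resulting state:

        using System.Net.Http;
        using System.Threading.Tasks;

        class ConcurrentEditTest
        {
            static async Task Main()
            {
                using var client = new HttpClient();

                // Ten simulated users all editing record 42 at the same time.
                var edits = new Task<HttpResponseMessage>[10];
                for (int i = 0; i < edits.Length; i++)
                {
                    var body = new StringContent($"{{\"name\":\"user{i}\"}}",
                        System.Text.Encoding.UTF8, "application/json");
                    edits[i] = client.PostAsync("http://localhost:8080/record/update/42", body);
                }

                var responses = await Task.WhenAll(edits);

                // Inspect the responses (and afterwards the record itself)
                // for lost updates or optimistic-locking failures.
                foreach (var r in responses)
                    System.Console.WriteLine((int)r.StatusCode);
            }
        }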

    Read the article

  • What's your approach to testing iPhone / iPad apps?

    - by R0MANARMY
    When developing for iPhone/iPad:

    - Do you do unit/integration/etc. testing?
    - What framework(s) do you use?
    - What other framework(s) have you tried (and if you decided not to use them, why not)?

    NOTE: This is based on a question asked a few days ago (that has since been heavily edited). The question generated some interesting responses that may be useful to aggregate in one place.

    Read the article

  • Scalable / Parallel Large Graph Analysis Library?

    - by Joel Hoff
    I am looking for good recommendations for scalable and/or parallel large graph analysis libraries in various languages. The problems I am working on involve significant computational analysis of graphs/networks with 1-100 million nodes and 10 million to 1+ billion edges. The largest SMP computer I am using has 256 GB memory, but I also have access to an HPC cluster with 1000 cores, 2 TB aggregate memory, and MPI for communication. I am primarily looking for scalable, high-performance graph libraries that could be used in either single or multi-threaded scenarios, but parallel analysis libraries based on MPI or a similar protocol for communication and/or distributed memory are also of interest for high-end problems. Target programming languages include C++, C, Java, and Python. My research to date has come up with the following possible solutions for these languages:

    - C++: The most viable solutions appear to be the Boost Graph Library and Parallel Boost Graph Library. I have looked briefly at MTGL, but it is currently slanted more toward massively multithreaded hardware architectures like the Cray XMT.
    - C: igraph and SNAP (Small-world Network Analysis and Partitioning); the latter uses OpenMP for parallelism on SMP systems.
    - Java: I have found no parallel libraries here yet, but JGraphT and perhaps JUNG are leading contenders in the non-parallel space.
    - Python: igraph and NetworkX look like the most solid options, though neither is parallel. There used to be Python bindings for BGL, but these are now unsupported; the last release, in 2005, looks stale now.

    Other topics here on SO that I've looked at have discussed graph libraries in C++, Java, Python, and other languages. However, none of these topics focused significantly on scalability. Does anyone have recommendations they can offer based on experience with any of the above or other library packages when applied to large graph analysis problems? Performance, scalability, and code stability/maturity are my primary concerns. Most of the specialized algorithms will be developed by my team, with the exception of any graph-oriented parallel communication or distributed memory frameworks (where the graph state is distributed across a cluster).

    Read the article

  • SQLAuthority News – Download Whitepaper – Understanding and Controlling Parallel Query Processing in SQL Server

    - by pinaldave
    My recent article SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database has received many good comments regarding MAXDOP 1 and MAXDOP 0. I really enjoyed reading the comments, as they came from industry leaders and gurus. I was researching the subject further and ended up at the following white paper written by Microsoft: Understanding and Controlling Parallel Query Processing in SQL Server.

    "Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it presents how data warehouse developers can detect these issues, and how they can work around them or mitigate them."

    To review the document, please download the Understanding and Controlling Parallel Query Processing in SQL Server Word document. Note: the above abstract has been taken from here. The real question is: have parallel queries made the life of the DBA much simpler, or are they seen as a potential source of performance degradation?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Isolating test data in acceptance tests

    - by Matt Phillips
    I'm looking for guidance on how to keep my acceptance tests isolated. Right now the issue preventing me from running the tests in parallel is the database records that are manipulated in the tests. I've written helpers that take care of doing inserts and deletes before tests are executed, to make sure the state is correct. But now I can't run the tests in parallel against the same database without uniquely generating the test data fields for each test. For example:

    - Testing creating a row: I'll delete everything where column A = foo and column B = bar, then navigate through the UI in the test and create a record with column A = foo and column B = bar.
    - Testing that a duplicate row is not allowed to be created: I'll insert a row with column A = foo and column B = bar, then use the UI to try to do the exact same thing. This displays an error message in the UI as expected.

    These tests work perfectly when run separately and serially, but I can't run them at the same time for fear that one will create or delete a record the other is expecting. Any tips on how to structure them better so they can be run in parallel?
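
    One way out (a sketch of the unique-data idea mentioned above; all names are illustrative) is to have each test generate its own unique field values, so parallel tests can never collide on the same rows and no cross-test cleanup is needed:

        using System;
        using NUnit.Framework;

        [TestFixture]
        public class CreateRowTests
        {
            // Values no other concurrently running test can share.
            private static string Unique(string prefix) =>
                $"{prefix}-{Guid.NewGuid():N}";

            [Test]
            public void CreatingARow_Succeeds()
            {
                var columnA = Unique("foo");
                var columnB = Unique("bar");

                // Drive the UI to create a record with (columnA, columnB),
                // then assert it exists. No other test uses these values,
                // so no pre-test delete is needed.
            }

            [Test]
            public void CreatingADuplicateRow_ShowsAnError()
            {
                var columnA = Unique("foo");
                var columnB = Unique("bar");

                // Insert (columnA, columnB) directly, then drive the UI to
                // create the same record and assert the error message appears.
            }
        }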

    Read the article

  • At what point would you drop some of your principles of software development for the sake of more money?

    - by MeshMan
    I'd like to throw this question out there to see where the middle ground is. I'll admit that in the last 12 months I picked up TDD and a lot of the Agile values in software development. I was so overwhelmed by how much better my development of software became that I thought I would never drop them out of principle. Until... I was offered a contracting role that doubled my take-home pay for the year. The company I joined didn't follow any specific methodology; the team hadn't heard of anything like code smells, SOLID, etc., and I certainly wasn't going to get away with spending time doing TDD if the team had never even seen unit testing in practice.

    Am I a sell-out? No, not completely. Code will always be written "cleanly" (as per Uncle Bob's teachings) and the principles of SOLID will always be applied to the code that I write as they are needed. Testing was dropped for me, though; the company couldn't afford to have such an unknown handed to a team who, quite frankly, even if I did create test frameworks, would never use or maintain them correctly.

    Using that as an example, at what point would you say a developer should never drop his craftsmanship principles for the sake of money or other personal benefits? I understand that this can be a very personal opinion depending on how concerned one is with one's own needs, business needs, the sake of craftsmanship, and so on. But consider, for example, that testing might be dropped if the company decides they would rather have a test team than understand unit testing in programming; would that be something you could forgive yourself for, like I did? So given that there is something you would drop, there usually should be an equal cost in the business that makes up for what you drop (hopefully), unless of course you are pretty much out to line your own pockets rather than collaborate with the community ;). Double your money and go back to RAD? Or walk on, look for someone doing Agile, and never look back...

    Read the article

  • VisualAssert Testing in C++, Loading a test fixture.

    - by C_Bevan
    Good day, I am learning testing in Visual Studio C++ and I have followed several tutorials. I am trying to load a test fixture. I have tried putting the test .cpp file in many different places, but it still will not be picked up when I click on "Run Tests" or "Run Tests without debugging". In the tutorials I found, the fixtures seemed to load into the Test Explorer automatically, but in mine there is an icon with an X next to (PROJECTNAME).EXE, and when I hover over it I get "the process exited without registering with the agent... this is due to the module not containing any test fixtures". How can I load my tests into the Test Explorer, or register them with my project? I've tried right-clicking and "Add Fixture...", but that just starts a new test file and I have the same problem. Does anybody know how I can solve this issue?

    Read the article

  • Best practice Unit testing abstract classes?

    - by Paul Whelan
    Hello, I was wondering what the best practice is for unit testing abstract classes and classes that extend abstract classes. Should I test the abstract class by extending it, stubbing out the abstract methods, and testing all the concrete methods, and then in the unit tests for objects that extend my abstract class test only the methods I override plus the abstract methods? Or should I have an abstract test case that can be used to test the methods of the abstract class, and extend this class in my test case for each object that extends the abstract class? EDIT: My abstract class has some concrete methods. I would be interested to see what people are using. Thanks, Paul
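
    The second approach (an abstract test case) might look like this minimal sketch (C# with NUnit; the class names are invented). The abstract fixture tests the concrete members of the abstract class once, against whatever subclass each derived fixture supplies:

        using NUnit.Framework;

        public abstract class Shape
        {
            public abstract double Area();
            // Concrete logic shared by all subclasses.
            public string Describe() => $"area = {Area()}";
        }

        public class Square : Shape
        {
            private readonly double _side;
            public Square(double side) { _side = side; }
            public override double Area() => _side * _side;
        }

        // Abstract fixture: NUnit runs its inherited tests in every
        // concrete fixture that derives from it.
        public abstract class ShapeTests
        {
            protected abstract Shape CreateShape();   // factory for the subclass under test
            protected abstract double ExpectedArea { get; }

            [Test]
            public void Describe_ReportsArea() =>
                Assert.AreEqual($"area = {ExpectedArea}", CreateShape().Describe());
        }

        [TestFixture]
        public class SquareTests : ShapeTests
        {
            protected override Shape CreateShape() => new Square(3);
            protected override double ExpectedArea => 9;

            // Subclass-specific behavior is tested here.
            [Test]
            public void Area_IsSideSquared() =>
                Assert.AreEqual(9, new Square(3).Area());
        }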

    Read the article

  • Integration Testing an Entire *Existing* Application (w/ automatic execution of test suite)

    - by Ev
    Hi there, I have just joined a team working on an existing Java web app. I have been tasked with creating an automated integration test suite that should run when developers commit to our continuous integration server (TeamCity), which automatically deploys to our staging server; so really the tests will be run against our staging web app server. I have read a lot about automated integration testing with frameworks like Watir, Selenium, and RWebSpec. I have created tests in all of these, and while I prefer Watir, I am open to anything. The thing that hasn't become clear to me is how to create an entire test suite for an application, and how to have that suite execute in its entirety upon execution of some script. I can happily create individual tests of varying complexity, but there is a gap in my knowledge about how to tie everything together into something useful. Does anyone have any advice on how to create a full test suite and have it execute automatically? Thanks!
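
    One common shape for the "suite" part (a sketch, not a prescription; Selenium is one of the frameworks mentioned above, used here through its C# bindings, and the URL and element IDs are invented): keep every test in one NUnit test assembly pointed at the staging server, so running the whole suite is just running the assembly, which a CI server like TeamCity can do after each deploy.

        using NUnit.Framework;
        using OpenQA.Selenium;
        using OpenQA.Selenium.Firefox;

        [TestFixture]
        public class LoginTests
        {
            private IWebDriver _driver;

            [SetUp]
            public void StartBrowser() => _driver = new FirefoxDriver();

            [TearDown]
            public void StopBrowser() => _driver.Quit();

            [Test]
            public void ValidUser_SeesDashboard()
            {
                _driver.Navigate().GoToUrl("http://staging.example.com/login");
                _driver.FindElement(By.Id("username")).SendKeys("testuser");
                _driver.FindElement(By.Id("password")).SendKeys("secret");
                _driver.FindElement(By.Id("submit")).Click();
                Assert.That(_driver.Title, Does.Contain("Dashboard"));
            }
        }

    Every fixture added to this assembly becomes part of the suite automatically; the CI step is then simply the NUnit console runner (or TeamCity's built-in NUnit runner) pointed at the compiled test DLL, triggered after the deploy step.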

    Read the article

  • SSIS Script Component Testing Strategy

    - by Paul Kohler
    This question is about the Script Component specifically; I am aware of ssisUnit etc. With simple SSIS Script Components, it's sufficient to let basic testing flesh out issues; however, I am working with a script that has grown in complexity over time. To better test the functionality, I am considering abstracting the script logic into a DLL that gets deployed with the package, and then calling that custom assembly from the script. The advantage is that the logic will be more testable, but it's one more deployment artefact that needs to be managed. My question is: does anyone know of a better way to test such an SSIS script in a more isolated manner than running the whole package and examining the output?
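
    The abstraction being described might look like this (a sketch; the class and the cleansing rule are invented): keep the transformation logic in a plain class with no SSIS dependencies, unit test it directly, and let the Script Component become a thin shim that calls it per row.

        // Deployed as its own DLL alongside the package.
        public static class PhoneNormalizer
        {
            // Pure logic: easy to unit test in isolation.
            public static string Normalize(string raw)
            {
                if (string.IsNullOrWhiteSpace(raw)) return null;
                var digits = new System.Text.StringBuilder();
                foreach (var c in raw)
                    if (char.IsDigit(c)) digits.Append(c);
                return digits.Length == 10 ? digits.ToString() : null;
            }
        }

        [NUnit.Framework.TestFixture]
        public class PhoneNormalizerTests
        {
            [NUnit.Framework.Test]
            public void StripsFormattingFromValidNumbers() =>
                NUnit.Framework.Assert.AreEqual("5551234567",
                    PhoneNormalizer.Normalize("(555) 123-4567"));

            [NUnit.Framework.Test]
            public void RejectsShortNumbers() =>
                NUnit.Framework.Assert.IsNull(PhoneNormalizer.Normalize("123"));
        }

    Inside the Script Component, ProcessInputRow would then just call PhoneNormalizer.Normalize(Row.Phone), so the only untested surface left is a one-line shim.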

    Read the article

  • Unit Testing functions within repository interfaces - ASP.net MVC3 & Moq

    - by RawryLions
    I'm getting into writing unit tests and have implemented a nice repository pattern/Moq setup to allow me to test my functions without using "real" data. So far so good... However, in my repository interface for "Posts", IPostRepository, I have a function:

        Post getPostByID(int id);

    I want to be able to test this from my test class but cannot work out how. So far I am using this pattern for my tests:

        [SetUp]
        public void Setup()
        {
            mock = new Mock<IPostRepository>();
        }

        [Test]
        public void someTest()
        {
            populate(10); // This populates the mock with 10 fake entries
            // do test here
        }

    In my function "someTest" I want to be able to call/test the function getPostByID. I can find the function with mock.Object.getPostByID, but the "object" is null. Any help would be appreciated :)

    IPostRepository:

        public interface IPostRepository
        {
            IQueryable<Post> Posts { get; }
            void SavePost(Post post);
            Post getPostByID(int id);
        }
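
    (For context, a hedged sketch of how a return value for getPostByID is typically stubbed with Moq; the Post initializer is invented. A loose mock's mock.Object is never itself null, but any method without a matching Setup returns null for reference types, which fits the symptom described.)

        // Arrange: teach the mock what to return for a specific id.
        mock.Setup(r => r.getPostByID(5))
            .Returns(new Post { /* hypothetical properties */ });

        // Act: the call now returns the stubbed Post instead of null.
        Post result = mock.Object.getPostByID(5);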

    Read the article

  • Parallel port blocking

    - by asalamon74
    I have a legacy Java program which drives a special card printer by sending binary data to the LPT1 port (no printer driver is involved; the Java program creates the binary stream). The program was working correctly with the client's old computer: the Java program sent all the bytes to the printer, and after sending the last byte the program was not blocked. It took another minute to finish printing the card, but the user was able to continue working with the program. After changing the client's computer (but not the printer or the Java program), the program is blocked until the card is ready, right up to the last second. It seems to me that LPT1 behaves differently now than it did before. Is it possible to change this in Windows? I've checked the BIOS for parallel port settings: the parallel port is set to EPP+ECP (but I have also tried the other two options, Bidirectional and Output only). Maybe some kind of parallel port buffer is too small? How can I increase it?

    Read the article

  • Read non-blocking from multiple fifos in parallel

    - by Ole Tange
    I sometimes sit with a bunch of output fifos from programs that run in parallel, and I would like to merge these fifos. The naïve solution is:

        cat fifo* > output

    But this requires the first fifo to complete before reading the first byte from the second fifo, and this will block the parallel running programs. Another way is:

        (cat fifo1 & cat fifo2 & ... ) > output

    But this may mix the output, leaving half-lines in the output. When reading from multiple fifos, there must be some rules for merging the files. Typically doing it on a line-by-line basis is enough for me, so I am looking for something that does:

        parallel_non_blocking_cat fifo* > output

    which will read from all fifos in parallel and merge the output one full line at a time. I can see it is not hard to write that program. All you need to do is:

    1. open all fifos
    2. do a blocking select on all of them
    3. read nonblocking from the fifo which has data into the buffer for that fifo
    4. if the buffer contains a full line (or record) then print out the line
    5. if all fifos are closed/eof: exit
    6. goto 2

    So my question is not: can it be done? My question is: is it done already, and can I just install a tool that does this?
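
    (For reference, a minimal sketch of one way to build such a tool, using a reader task per fifo instead of the select loop described above; it is written in C# purely for illustration. Since ReadLine only returns complete lines and the writes are serialized under a lock, the output is merged one full line at a time, and each fifo blocks only its own task.)

        using System;
        using System.IO;
        using System.Linq;
        using System.Threading.Tasks;

        class ParallelLineCat
        {
            static void Main(string[] args)
            {
                var gate = new object();

                // One reader task per fifo (on Linux a fifo opens like a file).
                var readers = args.Select(path => Task.Run(() =>
                {
                    using var reader = new StreamReader(path);
                    string line;
                    while ((line = reader.ReadLine()) != null)
                        lock (gate)                 // serialize whole lines
                            Console.WriteLine(line);
                })).ToArray();

                Task.WaitAll(readers);              // exit when every fifo hits EOF
            }
        }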

    Read the article

  • Are there any language agnostic unit testing frameworks?

    - by Bringer128
    I have always been skeptical of rewriting working code - porting code is no exception to this. However, with the advent of TDD and automated testing it is much more reasonable to rewrite and refactor code. Does anyone know if there is a TDD tool that can be used for porting old code? Ideally you could do the following:

    1. Write up language-agnostic unit tests for the old code that pass (or fail if you find bugs!).
    2. Run unit tests on your other code base that fail.
    3. Write code in your new language that passes the tests without looking at the old code.

    The alternative would be to split step 1 into "Write up unit tests in language 1" and "Port unit tests to language 2", which significantly increases the effort required and is difficult to justify if the old code base is going to stop being maintained after the port (that is, you don't get the benefit of continuous integration on this code base). EDIT: It's worth noting this question on StackOverflow.

    Read the article

  • How to remove the “AMD Testing use only” watermark from Ubuntu 12.10

    - by Lucio
    I've installed the latest Catalyst driver (beta) following the steps in this guide for Ubuntu Quantal Quetzal. My system is 64-bit and my graphics card is an ATI Radeon HD 6670; this card is officially supported (Catalyst & open source), as you can confirm from this AMD Linux Community thread. I don't have any problem except the "AMD Testing use only" watermark, which I see at every stage of the OS (logged in, logged out, etc.) except in the terminals. I found different versions of how to remove this image, but the procedure changes according to the system, so I want an answer from this popular (trusted) site. How can I solve this issue in Ubuntu 12.10 (32-bit)? Is the procedure different on a 64-bit system?

    Read the article

  • Networking Programs Suitable For Symbolic Testing

    - by Milen
    Symbolic execution has been successfully used to test programs and automatically generate test cases. I've been working on my master's thesis, which allows the testing of arbitrary networked programs (i.e., those communicating via sockets). Now that we have a working symbolic execution engine that has support for sockets, we're looking for real-world pieces of software to test. Our engine has an important restriction (at the moment): it cannot execute multi-threaded programs. So, we're looking for programs that satisfy the criteria outlined below:

    - Written in C
    - Communicates via sockets (TCP / UDP are supported)
    - Does not rely on the filesystem to get the "job" done
    - Runs on Linux
    - Does not use multi-threading
    - Source is available (so that we can compile them to LLVM bytecode)

    Most programs that would fall under the criteria would probably be implementations of distributed protocols solving a particular problem (e.g., consensus). Any suggestions are greatly appreciated.

    Read the article

  • Is unit testing or test-driven development worthwhile?

    - by Owen Johnson
    My team at work is moving to Scrum and other teams are starting to do test-driven development using unit tests and user acceptance tests. I like the UATs, but I'm not sold on unit testing for test-driven development or test-driven development in general. It seems like writing tests is extra work, gives people a crutch when they write the real code, and might not be effective very often. I understand how unit tests work and how to write them, but can anyone make the case that it's really a good idea and worth the effort and time? Also, is there anything that makes TDD especially good for Scrum?

    Read the article
