Search Results

Search found 10178 results on 408 pages for 'testing metaprogramming'.


  • Testing and Validation – You Really Do Have The Time

    - by BuckWoody
    One of the great advantages in my role as a Technical Specialist here at Microsoft is that I get to work with so many great clients. I get to see their environments and how they use them, and the way they work with SQL Server. I've been a data professional myself for many years. Over that time I've worked with many database platforms, lots of client applications, and written a lot of code in many industries. For a while I was also a consultant, so I got to see how other shops did things as well. But because I now focus on a "set" base of clients (over 500 professionals in over 150 companies) I get to see them over a longer period of time. Many of them help me understand how they use the product in their projects, and I even attend some regular DBA meetings. I see the way the product succeeds, and I see when it fails. Something that has really impacted my way of thinking is the level of importance any given shop is able to place on testing and validation. I've always been a big proponent of setting up a test system and following a very disciplined regimen to make sure it will work in production for any new projects, and then taking the lessons learned into production as standards. I know, I know – there's never enough time to do things right like this. Yet the shops I see that do it produce the same amount of work as the shops that don't. They just make the time to do the testing and validation and create a standard that they will follow in production. And what I've found (surprise, surprise) is that they have fewer production problems. OK, that might seem obvious – but I've actually tracked it, and the places that do the testing and follow best practices really do save stress, time and trouble through that effort. We all think that's a good idea, but we just "don't have time". OK – but from what I'm seeing, you can gain time if you spend a little up front. You may find that you're actually already spending the same amount of time that you would spend in doing the testing, you're just doing it later, at night, under the gun. Food for thought.

    Read the article

  • OSB unit testing, part 1 by Qualogy

    - by JuergenKress
    In my current project, I inherited a lot of OSB components that have been developed by (former) team members, but they all lack unit tests. This is a situation I really dislike, since it makes it much harder to refactor or bug-fix the existing code base. So, for all newly created components (and components I have to bug-fix) I strive to add unit tests. Of course, the unit tests will be created using my favourite testing tool: soapUI! Unit of test The unit test should be created for the service composition, which in OSB terms is the proxy service in combination with its business service. Now, since you do not want to rely on any other services, you should provide mock services for all services invoked from your Component-Under-Test. In a previous article, I wrote about mocking your services in soapUI. While that approach would also be valid here, creating a mock service (and certainly deploying it on a separate WebServer) does violate one of the core principles of unit testing: to make your unit tests as self-contained as possible, i.e. not depending on any external components. In this article, I will show you how to achieve this by simply providing a mock response inside your unit test. Scenario The scenario I implement for testing is a simple currency converter; the external request consists of a from and a to currency, and an amount (in currency from). The service will perform an exchange rate lookup using the WebServiceX CurrencyConverter and return a response to the caller consisting of both the source and target currencies and amounts. For the purpose of unit testing, I will implement a mock response for the exchange rate lookup. Read the complete article here.

    Read the article

  • System testing hangs inexplicably

    - by Jamess
    I read that I can upload System Testing reports to the Ubuntu site and was excited about it. But in my last three attempts the 'System Testing' process appears to hang, for about an hour each time. How can I find out what is happening, and whether it has indeed hung? https://launchpad.net/+login says I am already logged in, but I do not see any progress (and I am unable to close the window either).

    Read the article

  • Strangling the life out of Software Testing

    - by MarkPearl
    I recently did a course at the local university on Software Engineering. At the beginning of the course I looked over the outline of the subject and there seemed to be some really good content. It covered traditional & agile project methodologies, some general communication and modelling chapters, and finished off with testing. I was particularly excited to see the section on testing as this was something I learnt on my own and see great value in. The course has now just ended and I am very disappointed. I now know one of the reasons why so few people – at least in my region – do Test Driven Development, or apply even basic testing methodologies. The topic was too academic! Yes, you might be able to list 4 different types of black box test approaches vs. white box test approaches and describe the characteristics of Smoke Tests, but never during the course did we see an example of an actual test or how it might be implemented! In fact, if I did not have personal experience of applying testing in actual projects, I wouldn't even know what a unit test looked like. Now, what worries me is the following… It took us 6 months to cover the course material; other students more than likely came out of that course with little appreciation of the subject – in fact they now have a very complex view of what a test is – so complex that I think most of them will never attempt it again on their own. Secondly, imagine studying to be a dentist without ever actually seeing a tooth. Yes, you might be able to describe a tooth, and know what it is made out of – but nobody would want a dentist who has never seen a tooth to operate on them. Yet somehow we expect people studying software engineering to do the same? This is not right. Now, before I finish my rant let me say that I know this is not the same everywhere in the world, and that there needs to be a balance between practical implementation and academic understanding – I am just disappointed that this does not seem to be happening at the institution that I am currently studying at ;-( Please, if you happen to be a lecturer or teacher reading this post – a combination of theory and practicals goes a long way. We need to up the quality of software being produced, and that starts at learner level!
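
    For readers who, like the students described above, have never actually seen one: a unit test can be very small indeed. Below is a minimal, hypothetical sketch using Python's standard unittest module; the add function exists only for this illustration.

        import unittest

        def add(a, b):
            # Trivial function under test, invented purely for this example
            return a + b

        class AddTests(unittest.TestCase):
            def test_adds_two_positive_numbers(self):
                self.assertEqual(add(2, 3), 5)

            def test_adds_a_negative_number(self):
                self.assertEqual(add(5, -3), 2)

        if __name__ == "__main__":
            unittest.main()

    Something this small is already enough to show a learner what a test is, how it is named, and how it fails the moment the code underneath it changes.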

    Read the article

  • On The Question Of Automated Website Testing

    Almost all webmasters (or at least "quite a lot of webmasters") have heard about the significance of testing a website before it goes into production. Having developed a website or a web application, most authors want to publish it immediately and see how people like it. But if they skip prior website testing, the project may prove unprepared for real Internet traffic and perform poorly.

    Read the article

  • Unit Testing Myths and Practices

    We all understand the value of Unit Testing, but how come so few organisations maintain unit tests for their in-house applications? We can no longer pretend that unit testing is a universal panacea for ensuring less-buggy applications. Instead, we should be prepared to actively justify the use of unit tests, and be more savvy about where in the development cycle the unit test resources should be most effectively used.

    Read the article

  • Oracle Application Testing Suite 12.1

    - by user773457
    Announcement of Oracle Application Testing Suite 12.1 (the Japanese-language text of this item did not survive the character encoding): http://japanmediacentre.oracle.com/content/detail.aspx?ReleaseID=1692 The fragments that survive mention JD Edwards EnterpriseOne, IE9 and Linux, Oracle Enterprise Manager, Oracle Cloud, SQL SELECT, and ATS-Tech.

    Read the article

  • Unit test complex classes with many private methods

    - by Simon G
    Hi, I've got a class with one public method and many private methods which are run depending on what parameters are passed to the public method, so my code looks something like: public class SomeComplexClass { IRepository _repository; public SomeComplexClass() : this(new Repository()) { } public SomeComplexClass(IRepository repository) { _repository = repository; } public List<int> SomeComplexCalculation(int option) { var list = new List<int>(); if (option == 1) list = CalculateOptionOne(); else if (option == 2) list = CalculateOptionTwo(); else if (option == 3) list = CalculateOptionThree(); else if (option == 4) list = CalculateOptionFour(); else if (option == 5) list = CalculateOptionFive(); return list; } private List<int> CalculateOptionOne() { // Some calculation } private List<int> CalculateOptionTwo() { // Some calculation } private List<int> CalculateOptionThree() { // Some calculation } private List<int> CalculateOptionFour() { // Some calculation } private List<int> CalculateOptionFive() { // Some calculation } } I've thought of a few ways to test this class but all of them seem overly complex or expose the methods more than I would like. The options so far are: Set all the private methods to internal and use [assembly: InternalsVisibleTo()] Separate out all the private methods into a separate class and create an interface. Make all the methods virtual and in my tests create a new class that inherits from this class and override the methods. Are there any other options for testing the above class that would be better than what I've listed? If you would pick one of the ones I've listed, can you explain why? Thanks
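
    One way to picture the second option above (separate the calculations out behind their own seam and test them directly) is sketched below. It is written in Python purely to illustrate the shape of the refactoring, and every class and method name in it is invented rather than taken from the question.

        import unittest

        class OptionOneCalculator(object):
            # Formerly the private CalculateOptionOne(); now a small collaborator
            # that can be unit tested on its own.
            def calculate(self):
                return [1, 2, 3]

        class SomeComplexCalculation(object):
            def __init__(self, calculators):
                # calculators maps an option number to a calculator object,
                # so the public method only does the dispatching.
                self._calculators = calculators

            def calculate(self, option):
                calculator = self._calculators.get(option)
                return calculator.calculate() if calculator is not None else []

        class OptionOneCalculatorTests(unittest.TestCase):
            def test_returns_the_expected_values(self):
                self.assertEqual(OptionOneCalculator().calculate(), [1, 2, 3])

        class SomeComplexCalculationTests(unittest.TestCase):
            def test_dispatches_to_the_calculator_registered_for_the_option(self):
                fake = type("FakeCalculator", (), {"calculate": lambda self: [42]})()
                subject = SomeComplexCalculation({1: fake})
                self.assertEqual(subject.calculate(1), [42])

        if __name__ == "__main__":
            unittest.main()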

    Read the article

  • Writing good tests for Django applications

    - by Ludwik Trammer
    I've never written any tests in my life, but I'd like to start writing tests for my Django projects. I've read some articles about tests and decided to try to write some tests for an extremely simple Django app or a start. The app has two views (a list view, and a detail view) and a model with four fields: class News(models.Model): title = models.CharField(max_length=250) content = models.TextField() pub_date = models.DateTimeField(default=datetime.datetime.now) slug = models.SlugField(unique=True) I would like to show you my tests.py file and ask: Does it make sense? Am I even testing for the right things? Are there best practices I'm not following, and you could point me to? my tests.py (it contains 11 tests): # -*- coding: utf-8 -*- from django.test import TestCase from django.test.client import Client from django.core.urlresolvers import reverse import datetime from someproject.myapp.models import News class viewTest(TestCase): def setUp(self): self.test_title = u'Test title: bareksc' self.test_content = u'This is a content 156' self.test_slug = u'test-title-bareksc' self.test_pub_date = datetime.datetime.today() self.test_item = News.objects.create( title=self.test_title, content=self.test_content, slug=self.test_slug, pub_date=self.test_pub_date, ) client = Client() self.response_detail = client.get(self.test_item.get_absolute_url()) self.response_index = client.get(reverse('the-list-view')) def test_detail_status_code(self): """ HTTP status code for the detail view """ self.failUnlessEqual(self.response_detail.status_code, 200) def test_list_status_code(self): """ HTTP status code for the list view """ self.failUnlessEqual(self.response_index.status_code, 200) def test_list_numer_of_items(self): self.failUnlessEqual(len(self.response_index.context['object_list']), 1) def test_detail_title(self): self.failUnlessEqual(self.response_detail.context['object'].title, self.test_title) def test_list_title(self): self.failUnlessEqual(self.response_index.context['object_list'][0].title, self.test_title) def test_detail_content(self): self.failUnlessEqual(self.response_detail.context['object'].content, self.test_content) def test_list_content(self): self.failUnlessEqual(self.response_index.context['object_list'][0].content, self.test_content) def test_detail_slug(self): self.failUnlessEqual(self.response_detail.context['object'].slug, self.test_slug) def test_list_slug(self): self.failUnlessEqual(self.response_index.context['object_list'][0].slug, self.test_slug) def test_detail_template(self): self.assertContains(self.response_detail, self.test_title) self.assertContains(self.response_detail, self.test_content) def test_list_template(self): self.assertContains(self.response_index, self.test_title)
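
    A couple of small observations on the tests above: failUnlessEqual is simply an older alias of assertEqual (the alias was deprecated with Python 2.7's unittest), and django.test.TestCase already provides the test client as self.client, so a separate Client() instance is not needed. A minimal sketch of the same checks using those idioms, reusing the model, field values and URL name from the question:

        # -*- coding: utf-8 -*-
        import datetime

        from django.core.urlresolvers import reverse
        from django.test import TestCase

        from someproject.myapp.models import News


        class NewsViewTests(TestCase):
            def setUp(self):
                self.item = News.objects.create(
                    title=u'Test title: bareksc',
                    content=u'This is a content 156',
                    slug=u'test-title-bareksc',
                    pub_date=datetime.datetime.today(),
                )

            def test_detail_view(self):
                # self.client is provided by django.test.TestCase
                response = self.client.get(self.item.get_absolute_url())
                self.assertEqual(response.status_code, 200)
                self.assertEqual(response.context['object'].title, self.item.title)
                self.assertContains(response, self.item.title)

            def test_list_view(self):
                response = self.client.get(reverse('the-list-view'))
                self.assertEqual(response.status_code, 200)
                self.assertEqual(len(response.context['object_list']), 1)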

    Read the article

  • How to change the date/time in Python for all modules?

    - by Felix Schwarz
    When I write business logic, my code often depends on the current time. For example, the algorithm which looks at each unfinished order and checks whether an invoice should be sent (which depends on the number of days since the job was ended). In these cases creating an invoice is not triggered by an explicit user action but by a background job. Now this creates a problem for me when it comes to testing: I can test invoice creation itself easily. However, it is hard to create an order in a test and check that the background job identifies the correct orders at the correct time. So far I found two solutions: In the test setup, calculate the job dates relative to the current date. Downside: the code becomes quite complicated as there are no explicit dates written anymore. Sometimes the business logic is pretty complex for edge cases, so it becomes hard to debug due to all these relative dates. I have my own date/time accessor functions which I use throughout my code. In the test I just set a current date and all modules get this date. So I can simulate an order creation in February and check that the invoice is created in April easily. Downside: 3rd party modules do not use this mechanism, so it's really hard to integrate and test these. The second approach was way more successful for me after all. Therefore I'm looking for a way to set the time Python's datetime and time modules return. Setting the date is usually enough; I don't need to set the current hour or second (even though this would be nice). Is there such a utility? Is there an (internal) Python API that I can use?
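
    The accessor-function idea described above (the second approach) stays very small in practice, and the standard mock library (unittest.mock on Python 3.3+, the standalone mock package before that) can then pin the "current" date inside a test. The sketch below is hypothetical; the module and function names are invented for illustration, and third-party modules that call datetime directly would still need to be patched at their own import site.

        # clock.py - the single place the application asks for "today"
        import datetime

        def today():
            return datetime.date.today()


        # billing.py - hypothetical business logic built on top of the accessor
        import clock

        def invoice_due(order_end_date, grace_days=30):
            return (clock.today() - order_end_date).days >= grace_days


        # test_billing.py
        import datetime
        import unittest

        try:
            from unittest import mock      # Python 3.3+
        except ImportError:
            import mock                    # standalone 'mock' package on older Pythons

        import billing

        class InvoiceDueTests(unittest.TestCase):
            def test_order_finished_long_ago_is_due(self):
                with mock.patch('billing.clock') as fake_clock:
                    fake_clock.today.return_value = datetime.date(2010, 4, 15)
                    self.assertTrue(billing.invoice_due(datetime.date(2010, 2, 1)))

            def test_recently_finished_order_is_not_due(self):
                with mock.patch('billing.clock') as fake_clock:
                    fake_clock.today.return_value = datetime.date(2010, 2, 10)
                    self.assertFalse(billing.invoice_due(datetime.date(2010, 2, 1)))

        if __name__ == '__main__':
            unittest.main()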

    Read the article

  • What's the use of writing tests matching configuration-like code line by line?

    - by Pascal Van Hecke
    Hi, I have been wondering about the usefulness of writing tests that match code one-to-one. Just an example: in Rails, you can define 7 RESTful routes in one line in routes.rb using: resources :products BDD/TDD prescribes that you test first and then write code. In order to test the full effect of this line, devs come up with macros, e.g. for shoulda: http://kconrails.com/2010/01/27/route-testing-with-shoulda-in-ruby-on-rails/ class RoutingTest < ActionController::TestCase # simple should_map_resources :products end I'm not trying to pick on the guy that wrote the macros, this is just an example of a pattern that I see all over Rails. I'm just wondering what the use of it is... in the end you're just duplicating code and the only thing you test is that Rails works. You could as well write a tool that transforms your test macros into actual code... When I ask around, people answer me that: "the tests should document your code, so yes it makes sense to write them, even if it's just one line corresponding to one line" What are your thoughts?

    Read the article

  • How can I split abstract testcases in JUnit?

    - by Willi Schönborn
    I have an abstract testcase "AbstractATest" for an interface "A". It has several test methods (@Test) and one abstract method: protected abstract A unit(); which provides the unit under testing. Now I have multiple implementations of "A", e.g. "DefaultA", "ConcurrentA", etc. My problem: the testcase is huge (~1500 LOC) and it's growing. So I wanted to split it into multiple testcases. How can I organize/structure this in JUnit 4 without the need to have a concrete testcase for every combination of implementation and abstract testcase? I want e.g. "AInitializeTest", "AExecuteTest" and "AStopTest", each being abstract and containing multiple tests. But for my concrete "ConcurrentA", I only want to have one concrete testcase "ConcurrentATest". I hope my "problem" is clear. EDIT Looks like my description was not that clear. Is it possible to pass a reference to a test? I know parameterized tests, but these require static methods, which is not applicable to my setup. Subclasses of an abstract testcase decide about the parameter.

    Read the article

  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion the biggest adoption barrier in performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. Only if a reusable solution was possible and infrastructure management wasn’t as expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation. It is possible to run your test rig in the cloud without getting tangled in SCVMM or Lab Management. All you need is an active Azure subscription, Windows Azure endpoint enabled developer workstation running visual studio ultimate on premise, windows azure endpoint enabled worker roles on azure compute instances set up to run as test controllers and test agents. My test rig is running SQL server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable, you can open the azure project, change the subscription and certificate, click publish and *BOOM* in less than 15 minutes you could have your own test rig running in the cloud. In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract the administration cost of infrastructure management and lower the total cost of Load & Performance Testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 agents as well as VS 2012 agents. Introduction The slide show below should help you under the high level details of what we are trying to achive... Leveraging Azure for Performance Testing View more PowerPoint from Avanade Scenario 1 – Running a Test Rig in Windows Azure To start off with the basics, in the first scenario I plan to discuss how to, - Automate deployment & configuration of Windows Azure Worker Roles for Test Controller and Test Agent - Automate deployment & configuration of SQL database on Test Controller on the Test Controller Worker Role - Scaling Test Agents on demand - Creating a Web Performance Test and a simple Load Test - Managing Test Controllers right from Visual Studio on Premise Developer Workstation - Viewing results of the Load Test - Cleaning up - Have the above work in the shape of a reusable solution for both VS2010 and VS2012 Test Rig Scenario 2 – The scaled out Test Rig and sharing data using SQL Azure A scaled out version of this implementation would involve running multiple test rigs running in the cloud, in this scenario I will show you how to sync the load test database from these distributed test rigs into one SQL Azure database using Azure sync. The selling point for this scenario is being able to collate the load test efforts from across the organization into one data store. - Deploy multiple test rigs using the reusable solution from scenario 1 - Set up and configure Windows Azure Sync - Test SQL Azure Load Test result database created as a result of Windows Azure Sync - Cleaning up - Have the above work in the shape of a reusable solution for both VS2010 and VS2012 Test Rig The Ingredients Though with an active MSDN ultimate subscription you would already have access to everything and more, you will essentially need the below to try out the scenarios, 1. Windows Azure Subscription 2. Windows Azure Storage – Blob Storage 3. Windows Azure Compute – Worker Role 4. SQL Azure Database 5. SQL Data Sync 6. Windows Azure Connect – End points 7. SQL 2012 Express or SQL 2008 R2 Express 8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010 9. 
A developer workstation set up with Visual Studio 2012 – Ultimate or Visual Studio 2010 – Ultimate 10. Visual Studio Load Test Unlimited Virtual User Pack. Walkthrough To set up the test rig in the cloud, the test controller, test agent and SQL express installers need to be available when the worker role set up starts, the easiest and most efficient way is to pre upload the required software into Windows Azure Blob storage. SQL express, test controller and test agent expose various switches which we can take advantage of including the quiet install switch. Once all the 3 have been installed the test controller needs to be registered with the test agents and the SQL database needs to be associated to the test controller. By enabling Windows Azure connect on the machines in the cloud and the developer workstation on premise we successfully create a virtual network amongst the machines enabling 2 way communication. All of the above can be done programmatically, let’s see step by step how… Scenario 1 Video Walkthrough–Leveraging Windows Azure for performance Testing Scenario 2 Work in progress, watch this space for more… Solution If you are still reading and are interested in the solution, drop me an email with your windows live id. I’ll add you to my TFS preview project which has a re-usable solution for both VS 2010 and VS 2012 test rigs as well as guidance and demo performance tests.   Conclusion Other posts and resources available here. Possibilities…. Endless!

    Read the article

  • Basic WCF Unit Testing

    - by Brian
    Coming from someone who loves the KISS method, I was surprised to find that I was making something entirely too complicated. I know, shocker right? Now I'm no unit testing ninja, and not really a WCF ninja either, but had a desire to test service calls without a) going to a database, or b) making sure that the entire WCF infrastructure was tip top. Who does? It's not the environment I want to test, just the logic I’ve written to ensure there aren't any side effects. So, for the K.I.S.S. method: Assuming that you're using a WCF service library (you are using service libraries correct?), it's really as easy as referencing the service library, then building out some stubs for bunking up data. The service contract We’ll use a very basic service contract, just for getting and updating an entity. I’ve used the default “CompositeType” that is in the template, handy only for examples like this. I’ve added an Id property and overridden ToString and Equals. [ServiceContract] public interface IMyService { [OperationContract] CompositeType GetCompositeType(int id); [OperationContract] CompositeType SaveCompositeType(CompositeType item); [OperationContract] CompositeTypeCollection GetAllCompositeTypes(); } The implementation When I implement the service, I want to be able to send known data into it so I don’t have to fuss around with database access or the like. To do this, I first have to create an interface for my data access: public interface IMyServiceDataManager { CompositeType GetCompositeType(int id); CompositeType SaveCompositeType(CompositeType item); CompositeTypeCollection GetAllCompositeTypes(); } For the purposes of this we can ignore our implementation of the IMyServiceDataManager interface inside of the service. Pretend it uses LINQ to Entities to map its data, or maybe it goes old school and uses EntLib to talk to SQL. Maybe it talks to a tape spool on a mainframe on the third floor. It really doesn’t matter. That’s the point. So here’s what our service looks like in its most basic form: public CompositeType GetCompositeType(int id) { //sanity checks if (id == 0) throw new ArgumentException("id cannot be zero."); return _dataManager.GetCompositeType(id); } public CompositeType SaveCompositeType(CompositeType item) { return _dataManager.SaveCompositeType(item); } public CompositeTypeCollection GetAllCompositeTypes() { return _dataManager.GetAllCompositeTypes(); } But what about the datamanager? The constructor takes care of that. I don’t want to expose any testing ability in release (or the ability for someone to swap out my datamanager) so this is what we get: IMyServiceDataManager _dataManager; public MyService() { _dataManager = new MyServiceDataManager(); } #if DEBUG public MyService(IMyServiceDataManager dataManager) { _dataManager = dataManager; } #endif The Stub Now it’s time for the rubber to meet the road… Like most guys that ever talk about unit testing here’s a sample that is painting in *very* broad strokes. The important part however is that within the test project, I’ve created a bunk (unit testing purists would say stub I believe) object that implements my IMyServiceDataManager so that I can deal with known data. 
Here it is: internal class FakeMyServiceDataManager : IMyServiceDataManager { internal FakeMyServiceDataManager() { Collection = new CompositeTypeCollection(); Collection.AddRange(new CompositeTypeCollection { new CompositeType { Id = 1, BoolValue = true, StringValue = "foo 1", }, new CompositeType { Id = 2, BoolValue = false, StringValue = "foo 2", }, new CompositeType { Id = 3, BoolValue = true, StringValue = "foo 3", }, }); } CompositeTypeCollection Collection { get; set; } #region IMyServiceDataManager Members public CompositeType GetCompositeType(int id) { if (id <= 0) return null; return Collection.SingleOrDefault(m => m.Id == id); } public CompositeType SaveCompositeType(CompositeType item) { var existing = Collection.SingleOrDefault(m => m.Id == item.Id); if (null != existing) { Collection.Remove(existing); } if (item.Id == 0) { item.Id = Collection.Count > 0 ? Collection.Max(m => m.Id) + 1 : 1; } Collection.Add(item); return item; } public CompositeTypeCollection GetAllCompositeTypes() { return Collection; } #endregion } So it’s tough to see in this example why any of this is necessary, but in a real world application you would/should/could be applying much more logic within your service implementation. This all serves to ensure that between refactorings etc, that it doesn’t send sparking cogs all about or let the blue smoke out. Here’s a simple test that brings it all home, remember, broad strokes: [TestMethod] public void MyService_GetCompositeType_ExpectedValues() { FakeMyServiceDataManager fake = new FakeMyServiceDataManager(); MyService service = new MyService(fake); CompositeType expected = fake.GetCompositeType(1); CompositeType actual = service.GetCompositeType(2); Assert.AreEqual<CompositeType>(expected, actual, "Objects are not equal. Expected: {0}; Actual: {1};", expected, actual); } Summary That’s really all there is to it. You could use software x or framework y to do the exact same thing, but in my case I just didn’t really feel like it. This speaks volumes to my not yet ninja unit testing prowess.

    Read the article

  • How do you unit test the real world?

    - by Kim Sun-wu
    I'm primarily a C++ coder, and thus far I have managed without really writing tests for all of my code. I've decided this is a Bad Idea(tm), after adding new features that subtly broke old features, or, depending on how you wish to look at it, introduced some new "features" of their own. But unit testing seems to be an extremely brittle mechanism. You can test for something in "perfect" conditions, but you don't get to see how your code performs when stuff breaks. An example is a crawler; let's say it crawls a few specific sites, for data X. Do you simply save sample pages, test against those, and hope that the sites never change? This would work fine as regression tests, but what sort of tests would you write to constantly check those sites live and let you know when the application isn't doing its job because the site changed something that now causes your application to crash? Wouldn't you want your test suite to monitor the intent of the code? The above example is a bit contrived, and something I haven't run into (in case you haven't guessed). Let me pick something I have, though. How do you test that an application will do its job in the face of a degraded network stack? That is, say you have a moderate amount of packet loss, for one reason or another, and you have a function DoSomethingOverTheNetwork() which is supposed to degrade gracefully when the stack isn't performing as it's supposed to; but does it? The developer tests it personally by purposely setting up a gateway that drops packets to simulate a bad network when he first writes it. A few months later, someone checks in some code that modifies something subtly, so the degradation isn't detected in time, or the application doesn't even recognize the degradation. This is never caught, because you can't run real-world tests like this using unit tests, can you? Further, how about file corruption? Let's say you're storing a list of servers in a file, and the checksum looks okay, but the data isn't really. You want the code to handle that, so you write some code that you think does that. How do you test that it does exactly that for the life of the application? Can you? Hence, brittleness. Unit tests seem to test the code only in perfect conditions (and this is promoted, with mock objects and such), not what they'll face in the wild. Don't get me wrong, I think unit tests are great, but a test suite composed only of them seems to be a smart way to introduce subtle bugs in your code while feeling overconfident about its reliability. How do I address the above situations? If unit tests aren't the answer, what is? Thanks!
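
    One way to make the degraded-network scenario above less brittle is to hand the code its transport instead of letting it reach for the real network, so a test can dial in whatever packet-loss rate it likes and assert on the graceful-degradation behaviour. The sketch below is a hypothetical Python illustration; the transport class, the retry function and the loss rates are all invented for the example rather than taken from the question.

        import random
        import unittest


        class FlakyTransport(object):
            """Test double that drops a fixed fraction of sends, simulating packet loss."""

            def __init__(self, loss_rate, seed=42):
                self._loss_rate = loss_rate
                self._random = random.Random(seed)   # seeded, so the test is repeatable
                self.delivered = []

            def send(self, payload):
                if self._random.random() < self._loss_rate:
                    return False                     # pretend the packet was lost
                self.delivered.append(payload)
                return True


        def send_with_retries(transport, payload, attempts=5):
            # Hypothetical stand-in for DoSomethingOverTheNetwork(): retry a few
            # times and report failure gracefully instead of crashing.
            for _ in range(attempts):
                if transport.send(payload):
                    return True
            return False


        class DegradedNetworkTests(unittest.TestCase):
            def test_moderate_packet_loss_is_tolerated(self):
                transport = FlakyTransport(loss_rate=0.3)
                self.assertTrue(send_with_retries(transport, b"hello"))

            def test_total_outage_fails_gracefully(self):
                transport = FlakyTransport(loss_rate=1.0)
                self.assertFalse(send_with_retries(transport, b"hello"))


        if __name__ == "__main__":
            unittest.main()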

    Read the article

  • What hardware factors may be considered bottlenecks on a Hyper-V virtual server during load testing?

    - by sean
    Our organization is load testing our application on virtual servers via Hyper-V to see what user load it can handle with reasonable hardware in a single-box setup. The developer group questioned the validity of the tests given the normal use of the box by the other virtual machines. The IT admins answered that it is an acceptable platform to load test on because it has its own CPUs, memory and disks allocated. Is their answer mostly correct? What hardware factors may be considered bottlenecks, given the other virtual machines, when testing our application? For example, would bus speed be a concern, or network I/O? The application consists of a Windows service written using the .NET Framework 4.0 and SQL Server 2008 R2.

    Read the article

  • maven test cannot load cross-module resources/properties ?

    - by smallufo
    I have a maven mantained project with some modules . One module contains one XML file and one parsing class. Second module depends on the first module. There is a class that calls the parsing class in the first module , but maven seems cannot test the class in the second module. Maven test reports : java.lang.NullPointerException at java.util.Properties.loadFromXML(Properties.java:851) at foo.firstModule.Parser.<init>(Parser.java:92) at foo.secondModule.Program.<init>(Program.java:84) In "Parser.java" (in the first module) , it uses Properties and InputStream to read/parse an XML file : InputStream xmlStream = getClass().getResourceAsStream("Data.xml"); Properties properties = new Properties(); properties.loadFromXML(xmlStream); The "data.xml" is located in first module's resources/foo/firstModule directory , and it tests OK in the first module. It seems when testing the second module , maven cannot correctly load the Data.xml in the first module . I thought I can solve the problem by using maven-dependency-plugin:unpack to solve it . In the second module's POM file , I add these snippets : <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-dependency-plugin</artifactId> <version>2.1</version> <executions> <execution> <id>data-copying</id> <phase>test-compile</phase> <goals> <goal>unpack</goal> </goals> <configuration> <artifactItems> <artifactItem> <groupId>foo</groupId> <artifactId>firstModule</artifactId> <type>jar</type> <includes>foo/firstModule/Data.xml</includes> <outputDirectory>${project.build.directory}/classes</outputDirectory> </artifactItem> </artifactItems> </configuration> </execution> </executions> </plugin> In this POM file , I try to unpack the first module , and copy the Data.xml to classes/foo/firstModule/ directory , and then run tests. And indeed , it is copied to the right directory , I can find the "Data.xml" file in "target/classes/foo/firstModule" directory. But maven test still complains it cannot read the File (Properties.loadFromXML() throws NPE). I don't know how to solve this problem. I tried other output directory , such as ${project.build.directory}/resources , and ${project.build.directory}/test-classes , but all in vain... Any advices now ? Thanks in advanced. Environments : Maven 2.2.1 , eclipse , m2eclipse

    Read the article

  • Oracle University Begins Beta Testing For New "Oracle Application Express Developer Certified Expert

    - by Paul Sorensen
    Oracle University has begun beta testing for the new Oracle Application Express Developer Certified Expert certification, which requires passing one exam: "Oracle Application Express 3.2: Developing Web Applications" (#1Z1-450). In this video, Marcie Young of Oracle Server Technologies gives you a quick preview of what is on the exam, how to prepare, and what to expect: The "Oracle Application Express: Developing Web Applications" training course teaches many of the key concepts that are tested in the exam. This course is not a requirement for taking the exam; however, it is highly recommended. Additionally, Marcie refers to several helpful resources that are highly recommended while preparing, including the Oracle Application Express hosted instance at apex.oracle.com and the Oracle Application Express product page on OTN. You can take the "Oracle Application Express 3.2: Developing Web Applications" exam now for only $50 USD while it is in beta. Beta exams are an excellent way to provide your input directly into the final version of the certification exam, as well as to be one of the very first certified in the track. Furthermore, passing the beta counts for full final exam credit. Note that beta testing is offered for a limited time only. Register now at pearsonvue.com/oracle to take the exam at a Pearson VUE testing center nearest you. QUICK LINKS Register For Exam: Pearson VUE About Certification Track: Oracle Application Express Developer Certified Expert About Certification Exam: Oracle Application Express 3.2: Developing Web Applications (1Z1-450) Introductory Training (Recommended): "Oracle Application Express: Developing Web Applications" Advanced Training (Suggested): "Oracle Application Express: Advanced Workshop" Oracle Application Express Hosted Instance: apex.oracle.com Oracle Application Express Product Page: on OTN Learn More: Oracle Certification Beta Exams

    Read the article

  • ALMing in Hinglish 2 – Windows 8 - Manual Testing Metro Style Apps using MTM11

    - by Tarun Arora
    What is ALMing in Hinglish => Introduction (the Hindi-language introduction did not survive the character encoding) ALMing in Hinglish – Windows 8 Metro Style App manual testing using MTM11 In this second video in the series I bring to you Shubhra Maji, who is a Program Manager on the Visual Studio dev tools team in Hyderabad, along with the very seasoned Aditya Agarwal & Srishti Sridhar, who have been working on the Visual Studio team for the past several releases. The team wonderfully walks us through manually testing Metro Style Apps in Windows 8 using Microsoft Test Manager 2011. A great thank you for watching; if you have any questions/feedback/suggestions please contact us. Stay tuned for more… Namaste! You might also like - ALMing in Hinglish 1 - Exploratory Testing in VS11 with Nivedita Bawa

    Read the article

  • BizTalk Testing Series - The xpath Function

    - by Michael Stephenson
    Background While the xpath function in a BizTalk orchestration is a very powerful feature I have often come across the situation where someone has hard coded an xpath expression in an orchestration. If you have read some of my previous posts about testing I've tried to get across the general theme like test-driven or test-assisted development approaches where the underlying principle is that your building up your solution of small well tested units that are put together and the resulting solution is usually quite robust. You will be finding more bugs within your unit tests and fewer outside of your team. The thing I don't like about the xpath functions usual usage is when you come across an orchestration which has something like the below snippet in an expression or assign shape: string result = xpath(myMessage,"string(//Order/OrderItem/ProductName)"); My main issue with this is that the xpath statement is hard coded in the orchestration and you don't really know it works until you are running the orchestration. Some of the problems I think you end up with are: You waste time with lengthy debugging of the orchestration when your statement isn't working You might not know the function isn't working quite as expected because the testable unit around it is big You are much more open to regression issues if your schema changes     Approach to Testing The technique I usually follow is to hold the xpath statement as a constant in a helper class or to format a constant with a helper function to get the actual xpath statement. It is then used by the orchestration like follows. string result = xpath(myMessage, MyHelperClass.ProductNameXPathStatement); This means that because the xpath statement is available outside of the orchestration it now becomes testable in its own right. This means: I can test it in its own right I'm less likely to waste time tracking down problems caused by an error in the statement I can reduce the risk or regression issuess I'm now able to implement some testing around my xpath statements which usually are something like the following:    The test will use a sample xml file The sample will be validated against the schema The test will execute the xpath statement and then check the results are as expected     Walk-through BizTalk uses the XPathNavigator internally behind the xpath function to implement the queries you will usually use using the navigators select or evaluate functions. In the sample (link at bottom) I have a small solution which contains a schema from which I have generated a sample instance. I will then use this instance as the basis for my tests.     In the below diagram you can see the helper class which I've encapsulated my xpath expressions in, and some helper functions which will format the expression in the case of a repeating node which would want to inject an index into the xpath query.             I have then created a test class which has some functions to execute some queries against my sample xml file. An example of this is below.         In the test class I have a couple of helper functions which will execute the xpath expressions in a similar way to BizTalk. You could have a proper helper class to do this if you wanted.         You can see now in the BizTalk expression editor I can use these functions alongside the xpath function.         
Conclusion I hope you can see that with very little effort you can make your life much easier by testing xpath statements outside of an orchestration rather than using them hard coded directly in the orchestration. This can also save you a lot of pain longer term, because your build should break if your schema changes unexpectedly, causing these xpath tests to fail, whereas failures in tests around the orchestration will be more difficult to troubleshoot and their root cause harder to work out. Sample Link The sample is available from the following link: http://code.msdn.microsoft.com/testbtsxpathfunction Other Tools On the subject of using the xpath function, if you don't already use it, the below tool is very useful for creating your xpath statements (thanks BizBert): http://www.bizbert.com/bizbert/2007/11/30/XPath+The+Hidden+Language+Of+BizTalk.aspx
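
    The pattern itself (keep each xpath expression as a named constant and exercise it against a known sample document) carries over to other stacks as well. Purely as an illustration, here is a hedged Python sketch of the same idea; it assumes the third-party lxml package is available, and the Order document mirrors the hard-coded example from the start of this post.

        import unittest
        from lxml import etree   # assumes lxml is installed

        # The expression lives in one place so it can be tested on its own,
        # mirroring the MyHelperClass.ProductNameXPathStatement idea above.
        PRODUCT_NAME_XPATH = "string(//Order/OrderItem/ProductName)"

        SAMPLE_ORDER = b"""
        <Order>
          <OrderItem>
            <ProductName>Widget</ProductName>
          </OrderItem>
        </Order>
        """

        class ProductNameXPathTests(unittest.TestCase):
            def test_product_name_is_extracted_from_a_sample_order(self):
                document = etree.fromstring(SAMPLE_ORDER)
                self.assertEqual(document.xpath(PRODUCT_NAME_XPATH), "Widget")

        if __name__ == "__main__":
            unittest.main()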

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again, in Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why Performance Testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies, in Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests as well as why it’s important to follow a goal based pattern while performance testing your application. In part 3 I’ll be discussing Test Result Analysis, Test Result Drill through, Test Report Generation, Test Run Comparison, Asp.net Profiler and some closing thoughts. Test Results – I see some creepy worms! In Part 2 we put together a web performance test and a load test, lets run the test to see load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways: Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.         After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view, this provides key results in a format that is compact and easy to read. You can also print the load test summary, this is generated after the test has completed or been stopped.         Analyse the load test results of a previously run load test – We’ll see this in the section where i discuss comparison between two test runs. The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, drill down to the user activity chart where you can hover over to see more details of the test run.   Generate Report => Test Run Comparisons The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create excel reports and conduct side by side analysis of two test results or to track trend analysis. The tools also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.   Compare results This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation, I would highly recommend exploring the wealth of knowledge available on MSDN. 
Leaving Thoughts While load testing the application with an excessive load for a longer duration of time, i managed to bring the IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that the IIS had run out of threads as all the threads were busy processing existing request, one easy way of fixing this is by increasing the default number of allocated threads, but this might escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem. When ever the garbage collection runs it stops processing any pages so all requests that come in during that period are queued up, but realistically the garbage collection completes in fraction of a a second. To understand this better lets look at the .net heap, it is divided into large heap and small heap, anything greater than 85kB in size will be allocated to the Large object heap, the Large object heap is non compacting and remember large objects are expensive to move around, so if you are allocating something in the large object heap, make sure that you really need it! The small object heap on the other hand is divided into generations, so all objects that are supposed to be short-lived are suppose to live in Gen-0 and the long living objects eventually move to Gen-2 as garbage collection goes through.  As you can see in the picture below all < 85 KB size objects are first assigned to Gen-0, when Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started, the process checks for all the dead objects and assigns them as the valid candidate for deletion to free up memory and promotes all the remaining objects in Gen-0 to Gen-1. So in the future when ever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen – 0 again, all of Gen – 1 dead objects are drenched and rest are moved to Gen-2 and Gen-0 objects are moved to Gen-1 to free up Gen-0, but by this time your Garbage collection process has started to take much more time than it usually takes. Now as I mentioned earlier when garbage collection is being run all page requests that come in during that period are queued up. Does this explain why possibly page requests are getting queued up, apart from this it could also be the case that you are waiting for a long running database process to complete.      Lets explore the heap a bit more… What is really a case of crisis is when the objects are living long enough to make it to Gen-2 and then dying, this is definitely a high cost operation. But sometimes you need objects in memory, for example when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server then write a simple application that chucks in a lot of data in cache, run a load test over it for about 10-15 minutes, forcing a lot of data in memory causing the heap to run out of memory. If you get to such a state where you start running out of memory the IIS as a mode of recovery restarts the worker process. It is great way to free up all your memory in the heap but this would clear the cache. The problem with this is if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user basket will now be empty forcing them either to get frustrated and go to a competitor website or if the customer is really patient, give it another try! 
How can you address this, well two ways of addressing this; 1. Workaround – A x86 bit processor only allows a maximum of 4GB of RAM, this means the machine effectively has around 3.4 GB of RAM available, the OS needs about 1.5 GB of RAM to run efficiently, the IIS and .net framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because Team builds by default build your application in ‘Compile as any mode’ it means the application is build such that it will run in x86 bit mode if run on a x86 bit processor and run in a x64 bit mode if run on a x64 but processor. The problem with this is not all applications are really x64 bit compatible specially if you are using com objects or external libraries. So, as a quick win if you compiled your application in x86 bit mode by changing the compile as any selection to compile as x86 in the team build, you will be able to run your application on a x64 bit machine in x86 bit mode (WOW – By running Windows on Windows) and what that means is, you could use 8GB+ worth of RAM, if you take away everything else your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either build a software for NASA or there is something fundamentally wrong in your application. 2. Solution – Now that you have put a workaround in place the IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs a question “How do I Identify possible memory leaks in my application?” Well i won’t say that there is one single tool that can tell you where the memory leak is, but trust me, ‘Performance Profiling’ is a great start point, it definitely gets you started in the right direction, let’s have a look at how. Performance Wizard - Start the Performance Wizard and select Instrumentation, this lets you measure function call counts and timings. Before running the performance session right click the performance session settings and chose properties from the context menu to bring up the Performance session properties page and as shown in the screen shot below, check the check boxes in the group ‘.NET memory profiling collection’ namely ‘Collect .NET object allocation information’ and ‘Also collect the .NET Object lifetime information’.    Now if you fire off the profiling session on your pages you will notice that the results allows you to view ‘Object Lifetime’ which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, Large heap, etc. Another great feature about the profile is that if your application has > 5% cases where objects die right after making to the Gen-2 storage a threshold alert is generated to alert you. Since you have the option to also view the most expensive methods and by capturing the IntelliTrace data you can drill in to narrow down to the line of code that is the root cause of the problem. Well now that we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best of breed tools in the product. Caching One of the main ways to improve performance is Caching. Which basically means you tell the web server that instead of going to the database for each request you keep the data in the webserver and when the user asks for it you serve it from the webserver itself. BUT that can have consequences! 
Let’s look at some code, trust me caching code is not very intuitive, I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine, first time i get the data from the database and second time data is served from the cache, significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding the lock as you can see in the snippet below. So, as long as a user comes in and finds that the cache is empty, the user locks and starts to get the cache no more concurrency issues. But lets say you are processing 10 requests per second, by the time i have locked the operation to get the results from the database, 9 other users came in and found that the cache key is null so after i have come out and populated the cache they will still go in to get the results again. The application will still be faster because the next set of 10 users and so on would continue to get data from the cache. BUT if we added another null check after locking to build the cache and before actual call to the db then the 9 users who follow me would not make the extra trip to the database at all and that would really increase the performance, but didn’t i say that the code won’t be very intuitive, may be you should leave a comment you don’t want another developer to come in and think what a fresher why is he checking for the cache key null twice !!! The downside of caching is, you are storing the data outside of the database and the data could be wrong because the updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well if you only had one way of updating the data lets say only one entry point to the data update you can write some logic to say that every time new data is entered set the cache object to null. But this approach will not work as soon as you have several ways of feeding data to the system or your system is scaled out across a farm of web servers. The perfect solution to this is Micro Caching which means you cache the query for a set time duration and invalidate the cache after that set duration. The advantage is every time the user queries for that data with in the time span for which you have cached the results there are no calls made to the database and the data is served right from the server which makes the response immensely quick. Now figuring out the appropriate time span for which you micro cache the query results really depends on the application. Lets say your website gets 10 requests per second, if you retain the cache results for even 1 minute you will have immense performance gains. You would reduce 90% hits to the database for searching. Ever wondered why when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting and you get an error message telling you that the fare is not valid any more. Yes, exactly => That is a cache failure! These travel sites or price compare engines are not going to hit the database every time you hit the compare button instead the results will be served from the cache, because the query results are micro cached, its a perfect trade-off, by micro caching the results the site gains 100% performance benefits but every once in a while annoys a customer because the fare has expired. 
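
    As an illustration of the double-checked pattern described above (lock, then check the cache key for null a second time before querying the database), here is a minimal hypothetical sketch in Python; the cache, the lock and the query function are invented stand-ins rather than the post's ASP.NET code.

        import threading

        _cache = {}                        # hypothetical application-wide cache
        _cache_lock = threading.Lock()

        def get_search_results(cache_key, run_query):
            """Return cached results, building them at most once per key."""
            results = _cache.get(cache_key)
            if results is None:
                with _cache_lock:
                    # Second check: another request may have filled the cache while
                    # we were waiting for the lock, so do not hit the database again.
                    # This is the "check twice" that looks redundant but saves the
                    # extra round trips described above.
                    results = _cache.get(cache_key)
                    if results is None:
                        results = run_query()
                        _cache[cache_key] = results
            return results

        # Usage: results = get_search_results("search:widgets",
        #                                     lambda: expensive_database_search("widgets"))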
But the trade off works in the favour of these sites as they are still able to process up to 30+ page requests per second which means cater to the site traffic by may be losing 1 customer every once in a while to a competitor who is also using a similar caching technique what are the odds that the user will not come back to their site sooner or later? Recap   Resources Below are some Key resource you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug ins available on Codeplex might also be of interest to you. The Road Ahead Thank you for taking the time out and reading this blog post, you may also want to read Part I and Part II if you haven’t so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/Feedback/Suggestions, etc please leave a comment. Next ‘Load Testing in the cloud’, I’ll be working on exploring the possibilities of running Test controller/Agents in the Cloud. See you on the other side! Thank You!   Share this post : CodeProject

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
    This is the third of a three part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the here discussed EF/MSSQL combination. Lately, I wondered how you would ‘mock’ the data layer of a legacy application, when this data layer is made up of an MS Entity Framework (EF) model in combination with a MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do it – but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post, this post now will present the NDbUnit approach... NDbUnit is an Apache 2.0-licensed open-source project, and like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (DbUnit namely). In short, it helps you in flexibly managing the state of a database in that it lets you easily perform basic operations (like e.g. Insert, Delete, Refresh, DeleteAll)  against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework. Preparing the test data Compared to Typemock, using NDbUnit implies a totally different approach to meet our testing needs.  So the here described testing scenario requires an instance of an SQL Server database in operation, and it also means that the Entity Framework model that sits on top of this database is completely unaffected. First things first: For its interactions with the database, NDbUnit relies on a .NET Dataset xsd file. See Step 1 of their Quick Start Guide for a description of how to create one. 
With this prerequisite in place, the test fixture's setup code could look something like this:

[TestFixture, TestsOn(typeof(PersonRepository))]
[Metadata("NDbUnit Quickstart URL",
          "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")]
[Description("Uses the NDbUnit library to provide test data to a local database.")]
public class PersonRepositoryFixture
{
    #region Constants

    private const string XmlSchema = @"..\..\TestData\School.xsd";

    #endregion // Constants

    #region Fields

    private SchoolEntities _schoolContext;
    private PersonRepository _personRepository;
    private INDbUnitTest _database;

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
        var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;
        _database = new SqlDbUnitTest(connectionString);
        _database.ReadXmlSchema(XmlSchema);

        var entityConnectionStringBuilder = new EntityConnectionStringBuilder
        {
            Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",
            Provider = "System.Data.SqlClient",
            ProviderConnectionString = connectionString
        };

        _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);
        _personRepository = new PersonRepository(this._schoolContext);
    }

    [FixtureTearDown]
    public void FixtureTearDown()
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
        _schoolContext.Dispose();
    }

    ...

As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assembly's App.config, then initialize an NDbUnit helper object with this connection string along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INDbUnitTest interface) will be our single access point to the underlying database: we use it to perform all the required operations against the data store.

To have a flexible mechanism to easily insert data into the database, we can write a helper method like this:

private void InsertTestData(params string[] dataFileNames)
{
    _database.PerformDbOperation(DbOperationFlag.DeleteAll);

    if (dataFileNames == null)
    {
        return;
    }

    try
    {
        foreach (string fileName in dataFileNames)
        {
            if (!File.Exists(fileName))
            {
                throw new FileNotFoundException(Path.GetFullPath(fileName));
            }

            _database.ReadXml(fileName);
            _database.PerformDbOperation(DbOperationFlag.InsertIdentity);
        }
    }
    catch
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
        throw;
    }
}

This lets us easily insert test data from xml files, in any number and in a controlled order (which is important because we eventually must fulfill referential constraints, or must account for some other constraint that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. Unfortunately, there isn't much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course), which is a not so uncommon problem with smaller open source projects.
Last but not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example:

<?xml version="1.0" encoding="utf-8" ?>
<School xmlns="http://tempuri.org/School.xsd">
  <Person>
    <PersonID>1</PersonID>
    <LastName>Abercrombie</LastName>
    <FirstName>Kim</FirstName>
    <HireDate>1995-03-11T00:00:00</HireDate>
  </Person>
  <Person>
    <PersonID>2</PersonID>
    <LastName>Barzdukas</LastName>
    <FirstName>Gytis</FirstName>
    <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>
  </Person>
  <Person>
    ...

You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable.

Executing some basic tests

Here are some of the possible tests that can be written with the above preparations in place:

private const string People = @"..\..\TestData\School.People.xml";
...

[Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames()
{
    InsertTestData(People);

    List<string> names =
        _personRepository.GetNameList(NameOrdering.List);

    Assert.Count(34, names);
    Assert.AreEqual("Abercrombie, Kim", names.First());
    Assert.AreEqual("Zheng, Roger", names.Last());
}

[Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
[DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")]
public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames()
{
    InsertTestData(People);

    List<string> names =
        _personRepository.GetNameList(NameOrdering.Normal);

    Assert.Count(34, names);
    Assert.AreEqual("Alexandra Walker", names.First());
    Assert.AreEqual("Yan Li", names.Last());
}

[Test, TestsOn("PersonRepository.AddPerson")]
public void AddPerson_CalledOnce_IncreasesCountByOne()
{
    InsertTestData(People);

    int count = _personRepository.Count;
    _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });

    Assert.AreEqual(count + 1, _personRepository.Count);
}

[Test, TestsOn("PersonRepository.RemovePerson")]
public void RemovePerson_CalledOnce_DecreasesCountByOne()
{
    InsertTestData(People);

    int count = _personRepository.Count;
    _personRepository.RemovePerson(new Person { PersonID = 33 });

    Assert.AreEqual(count - 1, _personRepository.Count);
}

Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparatory work (and also it was harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases:

Ok, and what if things become somewhat more complex?

Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimic a scenario which represents a more complex or exceptional case.
The following test, for example, deals with the case that there is some sort of invalid input from the caller:

[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
[Row(null, typeof(ArgumentNullException))]
[Row("", typeof(ArgumentException))]
[Row("NotExistingCourse", typeof(ArgumentException))]
public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType)
{
    var exception = Assert.Throws<RepositoryException>(() =>
                            _personRepository.GetCourseMembers(courseTitle));

    Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException);
}

Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't necessarily mean that there is less complexity, but only that the complexity lives in a different part of your test resources (namely in the xml files, where you sometimes have to spend a lot of effort carefully preparing the required test data).

Another example, which relies on an underlying 1-n relationship, might be this:

[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents()
{
    InsertTestData(People, Course, Department, StudentGrade);

    List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");

    Assert.Count(4, persons);
    Assert.ForAll(
        persons,
        @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),
        "Person has none of the expected IDs.");
}

If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the xml files with the test data. Also note that you might have to provide additional data which are not even directly relevant to your test, but are required only to fulfill some integrity needs of the underlying database.

Conclusion

The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: of course, NDbUnit is much slower than Typemock. Technically, it doesn't even make sense to compare the two tools. But practically, it may well play a role and may or may not be an issue, depending on how many tests you have of this kind, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel.
My personal experience is – as already said in the first part – that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD style, you're often executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or User-Acceptance tests. But in any case, opening Entity Framework-based applications up for testing requires a fair amount of resources, planning, and preparatory work – it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment...

The sample solution

A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here. (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs; consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive, via the 'get source' button on the very right.) The solution contains some more tests against the PersonRepository class which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30-day trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects' App.config files accordingly.

    Read the article

  • not using partial mocking? do they also mean in web-app?

    - by 01
    I'm learning Mockito, and in chapter 16 they say you should not use partial mocking in a new system. I disagree: for example, in one of my actions I use partial mocking for static framework methods, SQL calls, etc. I extracted that stuff into methods and then mock those methods in the tests. Most of those methods are specific to this action and won't be called from other actions, so it's not worth extracting them into separate components. I agree that you shouldn't use partial mocking within frameworks, but that doesn't apply to hard-to-mock actions. What are the downsides of using partial mocking in a web app?
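
To make this concrete, here is roughly the kind of partial mock I mean (a minimal sketch only: the ReportAction class, its loadActiveUserNamesFromDb() helper and the test values are invented for illustration, and it assumes Mockito's spy/doReturn API with JUnit 4):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class ReportActionTest {

    // Production-style action: the awkward SQL/framework call is extracted into its own method.
    static class ReportAction {
        public int countActiveUsers() {
            return loadActiveUserNamesFromDb().size();
        }

        // Extracted wrapper around the hard-to-mock SQL call.
        protected List<String> loadActiveUserNamesFromDb() {
            throw new UnsupportedOperationException("would hit the real database");
        }
    }

    @Test
    public void countActiveUsers_countsTheRowsReturnedByTheDb() {
        ReportAction action = spy(new ReportAction());

        // Partial mock: stub only the extracted DB call, keep the rest of the action real.
        doReturn(Arrays.asList("alice", "bob")).when(action).loadActiveUserNamesFromDb();

        assertEquals(2, action.countActiveUsers());
    }
}

The minus people usually point out is that the test now knows about the action's internal structure (the extracted method), so refactoring that structure breaks the test; for one-off framework or SQL calls that no other action uses, that trade-off seems acceptable to me.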

    Read the article

  • How to penetrate the QA industry after layoffs, next steps...

    - by Erik
    Briefly, my background is in manual black box testing of websites and applications within the Agile/waterfall context. Over the past four years I was a member of two web development firms' small QA teams dedicated to testing the deployment of websites for national/international non-profits, governmental organizations, and for-profit businesses, to name a few:

-Brookings Institution
-Senate
-Tyco Electronics
-Blue Cross/Blue Shield
-National Geographic
-Discovery Channel

I have a very strong understanding of the:

-SDLC
-STLC of bugs and website deployment/development
-Use Case & Test Case development

In March of this year, my last firm downsized and I lost my job as a QA tester. I have been networking and doing a very detailed job search, but have had a very difficult time getting my next job within the QA industry, even with my background as a manual black box QA tester in the website development context. My direct question to all of you: what are some ways I can be more competitive and get hired?

Options that could make me competitive:

Should I go back to school and learn some more 'hard' skills in website development and client-side technologies, e.g.:
-HTML
-CSS
-JavaScript

Learn programming:
-PHP
-C#
-Ruby
-SQL
-Python
-Perl
-??

Get certified as a QA tester: there are countless programs to become a Certified Tester.

Most, if not all, jobs being advertised now require automated testing experience, in:
-QTP
-Loadrunner
-Selenium
-ETC.

Should I learn automated testing skills via a paid course, or teach myself? Learn scripting languages to understand the automated testing process better?

Become a Certified "Project Management Professional" (PMP) to prove to hiring managers that I 'get' the project development life cycle?

At the end of the day I need to be competitive and get hired as a QA tester, and I want to build upon my skills within the QA web development field. How should I do this, without reinventing the wheel? Any help in this regard would be fabulous. Thanks! .erik

    Read the article

  • How can I effectively test a scripting engine?

    - by ChaosPandion
    I have been working on an ECMAScript implementation and I am currently working on polishing up the project. As a part of this, I have been writing tests like the following:

[TestMethod]
public void ArrayReduceTest()
{
    var engine = new Engine();
    var request = new ExecScriptRequest(@"
        var a = [1, 2, 3, 4, 5];
        a.reduce(function(p, c, i, o) {
            return p + c;
        });
    ");
    var response = (ExecScriptResponse)engine.PostWithReply(request);
    Assert.AreEqual((double)response.Data, 15D);
}

The problem is that there are so many points of failure in this test and similar tests that it almost doesn't seem worth it. It almost seems like my effort would be better spent reducing coupling between modules. To write a true unit test I would have to assume something like this:

[TestMethod]
public void CommentTest()
{
    const string toParse = "/*First Line\r\nSecond Line*/";
    var analyzer = new LexicalAnalyzer(toParse);

    Assert.IsInstanceOfType(analyzer.Next(), typeof(MultiLineComment));
    Assert.AreEqual(analyzer.Current.Value, "First Line\r\nSecond Line");
}

Doing this would require me to write thousands of tests, which once again does not seem worth it.

    Read the article

< Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >