Search Results

Search found 11400 results on 456 pages for 'automated testing'.

Page 36/456 | < Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >

  • Automated BizTalk documentation

    - by Kevin Shyr
    Yay — this should help us get through an old legacy app with no documentation; at least it's some help. http://biztalkdocumenter.codeplex.com/

    Read the article

  • Collision detection doesn't work for automated elements in XNA 4.0

    - by NDraskovic
    I have a really weird problem. I made a 3D simulator of an "assembly line" as part of a college project. Among other things, it needs to detect when a box object passes in front of a sensor. I tried to solve this by making a model of a laser and checking whether the box collides with it. I had some problems with the BoundingSpheres of the models' meshes, so I simply create a BoundingSphere and place it in the same position as the model. I organize them into a list of BoundingSpheres called "spheres", one per model. All models except the box are static, so the box object has its own BoundingSphere (not a member of the "spheres" list). I also implemented a picking algorithm that I use to start the movement. This is the code that checks for collision:

        if (spheres.Count != 0)
        {
            for (int i = 1; i < spheres.Count; i++)
            {
                if (spheres[i].Intersects(PickingRay) != null
                    && Microsoft.Xna.Framework.Input.ButtonState.Pressed == Mouse.GetState().LeftButton)
                {
                    start = true;
                    break;
                }
                if (BoxSphere.Intersects(spheres[i]) && start)
                {
                    // MoveBox receives the direction (0) and a bool value that dictates
                    // whether the box should move or not (false means stop)
                    MoveBox(0, false);
                    start = false;
                    break;
                }
                if (start /*&& Microsoft.Xna.Framework.Input.ButtonState.Pressed == Mouse.GetState().LeftButton*/
                    && !BoxSphere.Intersects(spheres[i]))
                {
                    MoveBox(0, true);
                    break;
                }
            }
        }

    The problem is this: when I use the mouse to move the box (the commented part in the third if condition), the collision works fine (I removed another part of the code to simplify my question — it calculates the "address" of the box, and by that number I know the collision is correct). But when I comment it out (as in this example), the box just passes through the lasers and the collision is never detected (the idea is that the box stops at each laser and the user passes it forward by clicking the appropriate "switch"). Can you see the problem? Please help, and if you need more information I will try to provide it. Thanks
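
    (A hedged guess at the cause, sketched below: every branch of the loop ends in a break, so the frame's decision is taken from the first sphere examined — in particular, the "move" branch fires on the first sphere the box is not touching, and a laser further down the list is never tested. While the mouse condition was still in the third branch, the box only moved while the button was held, which masked this. Here is a restructured loop that examines every sphere before deciding; the names are the ones from the question, but the restructuring itself is only an assumption, not the asker's code:)

        // Sketch: gather facts from all spheres first, then act once per frame.
        bool hitLaser = false;

        for (int i = 1; i < spheres.Count; i++)
        {
            // Picking: clicking a laser's "switch" re-starts the box.
            if (spheres[i].Intersects(PickingRay) != null
                && Microsoft.Xna.Framework.Input.ButtonState.Pressed == Mouse.GetState().LeftButton)
            {
                start = true;
            }

            // Collision: remember whether the box touches *any* laser this frame.
            if (BoxSphere.Intersects(spheres[i]))
            {
                hitLaser = true;
            }
        }

        if (start && hitLaser)
        {
            MoveBox(0, false); // stop at the laser
            start = false;
        }
        else if (start)
        {
            MoveBox(0, true);  // nothing in the way, keep moving
        }

        // Note: after re-starting, the box is still touching the laser it stopped at,
        // so real code would need to ignore the laser currently in contact (omitted here).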

    Read the article

  • Testing and Validation – You Really Do Have The Time

    - by BuckWoody
    One of the great advantages in my role as a Technical Specialist here at Microsoft is that I get to work with so many great clients. I get to see their environments, how they use them, and the way they work with SQL Server. I’ve been a data professional myself for many years. Over that time I’ve worked with many database platforms and client applications, and written a lot of code in many industries. For a while I was also a consultant, so I got to see how other shops did things as well. But because I now focus on a “set” base of clients (over 500 professionals in over 150 companies), I get to see them over a longer period of time. Many of them help me understand how they use the product in their projects, and I even attend some regular DBA meetings. I see the ways the product succeeds, and I see when it fails.

    Something that has really impacted my way of thinking is the level of importance any given shop is able to place on testing and validation. I’ve always been a big proponent of setting up a test system and following a very disciplined regimen to make sure any new project will work in production, and then taking the lessons learned into production as standards. I know, I know – there’s never enough time to do things right like this. Yet the shops I see that do it produce the same volume of work as the shops that don’t. They just make the time to do the testing and validation, and create a standard that they will follow in production. And what I’ve found (surprise, surprise) is that they have fewer production problems. OK, that might seem obvious – but I’ve actually tracked it, and the places that do the testing and follow best practices really do save stress, time and trouble for that effort.

    We all think that’s a good idea, but we just “don’t have time”. OK – but from what I’m seeing, you can gain time if you spend a little up front. You may find that you’re actually already spending the same amount of time that you would spend in doing the testing – you’re just doing it later, at night, under the gun. Food for thought.

    Read the article

  • OSB unit testing, part 1 by Qualogy

    - by JuergenKress
    In my current project, I inherited a lot of OSB components that were developed by (former) team members, but they all lack unit tests. This is a situation I really dislike, since it makes it much harder to refactor or bug-fix the existing code base. So, for all newly created components (and components I have to bug-fix) I strive to add unit tests. Of course, the unit tests will be created using my favourite testing tool: soapUI!

    Unit of test: the unit test should be created for the service composition, which in OSB terms is the proxy service in combination with its business service. Since you do not want to rely on any other services, you should provide mock services for all services invoked from your component-under-test. In a previous article, I wrote about mocking your services in soapUI. While that approach would also be valid here, creating a mock service (and certainly deploying it on a separate web server) violates one of the core principles of unit testing: to make your unit tests as self-contained as possible, i.e. not depending on any external components. In this article, I will show you how to achieve this by simply providing a mock response inside your unit test.

    Scenario: the scenario I implement for testing is a simple currency converter; the external request consists of a from currency, a to currency, and an amount (in currency from). The service performs an exchange-rate lookup using the WebServiceX CurrencyConverter and returns a response to the caller consisting of both the source and target currencies and amounts. For the purpose of unit testing, I will implement a mock response for the exchange-rate lookup. Read the complete article here.

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • How to get started with Automated Installer in Oracle Solaris 11

    - by unixman
    Hey all, I am pleased to make this year's Oracle OpenWorld hands-on lab exercises available for your review and use. These steps demonstrate Oracle Solaris 11 deployment technologies and are written with the intention of being usable and applicable well beyond the one-hour laptop time slot offered at OpenWorld. If you're at OpenWorld and would like to join the session in person, it is at 3:30 in the Yerba Buena 14 conference room at the Marriott Marquis. Please let me know what you think of these instructions!

    Read the article

  • Coping with build order requirements in automated builds

    - by Derecho
    I have three Scala packages being built as separate sbt projects in separate repos, with a dependency graph like this:

        M ----> D
        ^       ^
        |       |
        +---+---+
            |
            S

    S is a service. M is a set of message classes shared between S and another service. D is a DAL used by S and the other service, and some of its model appears in the shared messages. If I make a breaking change to all three and push them up to my Git repo, a build of S will be kicked off in Jenkins. The build will only be successful if, when S is pushed, M and D have already been pushed. Otherwise, Jenkins will find it doesn't have the right dependent package versions available. Even pushing them simultaneously wouldn't be enough – the dependencies would have to be built and published before the dependent job was even started. Making the jobs dependent in Jenkins isn't enough, because that would just cause the previous version to be built, resulting in an artifact that doesn't have the needed version. Is there a way to set things up so that I don't have to remember to push things in the right order? The only way I can see it working is if a build could go into a pending state when its dependencies aren't available yet. I feel like there's a simple solution I'm missing. Surely people deal with this a lot?

    Read the article

  • System testing hangs inexplicably

    - by Jamess
    I read that I can upload system testing reports to the Ubuntu site and was excited about it. But in my last three attempts the 'system testing' process appears to hang, for about an hour each time. How can I find out what is happening, and whether it has indeed hung? https://launchpad.net/+login says I am already logged in, but I do not see any progress (and I am even unable to close the window). I am attaching a screenshot as well:

    Read the article

  • Strangling the life out of Software Testing

    - by MarkPearl
    I recently did a course at the local university on software engineering. At the beginning of the course I looked over the outline of the subject and there seemed to be some really good content. It covered traditional & agile project methodologies, some general communication and modelling chapters, and finished off with testing. I was particularly excited to see the section on testing, as this was something I learnt on my own and see great value in.

    The course has now ended and I am very disappointed. I now know one of the reasons why so few people (at least in my region) do test-driven development, or apply even basic testing methodologies: the topic was too academic! Yes, you might be able to list four different black-box test approaches versus white-box test approaches and describe the characteristics of smoke tests, but never during the course did we see an example of an actual test or how it might be implemented! In fact, if I did not have personal experience of applying testing in actual projects, I wouldn’t even know what a unit test looked like.

    Now, what worries me is the following… It took us six months to cover the course material, and other students more than likely came out of that course with little appreciation of the subject – in fact they now have such a complex view of what a test is that I think most of them will never attempt it again on their own. Secondly, imagine studying to be a dentist without ever actually seeing a tooth. Yes, you might be able to describe a tooth and know what it is made of – but nobody would want a dentist who has never seen a tooth to operate on them. Yet somehow we expect people studying software engineering to do exactly that? This is not right.

    Before I finish my rant, let me say that I know this is not the same everywhere in the world, and that there needs to be a balance between practical implementation and academic understanding – I am just disappointed that this does not seem to be happening at the institution I am currently studying at ;-( Please, if you happen to be a lecturer or teacher reading this post – a combination of theory and practicals goes a long way. We need to up the quality of software being produced, and that starts at learner level!
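
    Purely as an illustration of the kind of example the course never showed: a unit test really can be this small. This is an NUnit-style test in C#; the Calculator class under test is hypothetical.

        using NUnit.Framework;

        [TestFixture]
        public class CalculatorTests
        {
            [Test]
            public void Add_TwoPlusTwo_ReturnsFour()
            {
                // Arrange: create the unit under test (a hypothetical Calculator class)
                var calculator = new Calculator();

                // Act: exercise the single behaviour being tested
                int result = calculator.Add(2, 2);

                // Assert: verify the expected outcome
                Assert.AreEqual(4, result);
            }
        }

    Arrange, act, assert – that is the whole pattern, and it is this shape, not the taxonomy of test types, that belongs in front of learners first.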

    Read the article

  • iptables firewall to protect against automated entries

    - by Kenyana
    I am getting an unusually large number of calls to my app. I have implemented a CSRF check over Ajax and it's working, but I am still getting many calls. My guess is that someone has a script that is 'logged in' and making all these calls. Could someone please share a good iptables script for blocking IPs that make 10 calls to /controler/action in a second? I am using:

        /sbin/iptables -A INPUT -p tcp --syn --dport $port -m connlimit --connlimit-above N -j REJECT --reject-with tcp-reset
        # save the changes (see the iptables-save man page);
        # the following command is specific to Red Hat and friends
        service iptables save

    That is from cyberciti.

    Read the article

  • Unit Testing Myths and Practices

    We all understand the value of unit testing, but how come so few organisations maintain unit tests for their in-house applications? We can no longer pretend that unit testing is a panacea that guarantees less-buggy applications. Instead, we should be prepared to actively justify the use of unit tests, and be more savvy about where in the development cycle unit-test resources are most effectively used.

    Read the article

  • Oracle Application Testing Suite 12.1 announcement

    - by user773457
    The Japanese press release announcing Oracle Application Testing Suite 12.1 is available at http://japanmediacentre.oracle.com/content/detail.aspx?ReleaseID=1692 Among the items mentioned: support for JD Edwards EnterpriseOne, new platform support (IE9, Linux), integration with Oracle Enterprise Manager and Oracle Cloud, and handling of SQL SELECT statements. Contact: ATS-Tech.

    Read the article

  • Unit test complex classes with many private methods

    - by Simon G
    Hi, I've got a class with one public method and many private methods, which are run depending on what parameter is passed to the public method, so my code looks something like this:

        public class SomeComplexClass
        {
            IRepository _repository;

            public SomeComplexClass()
                : this(new Repository())
            {
            }

            public SomeComplexClass(IRepository repository)
            {
                _repository = repository;
            }

            public List<int> SomeComplexCalculation(int option)
            {
                var list = new List<int>();
                if (option == 1)
                    list = CalculateOptionOne();
                else if (option == 2)
                    list = CalculateOptionTwo();
                else if (option == 3)
                    list = CalculateOptionThree();
                else if (option == 4)
                    list = CalculateOptionFour();
                else if (option == 5)
                    list = CalculateOptionFive();
                return list;
            }

            private List<int> CalculateOptionOne() { /* Some calculation */ }
            private List<int> CalculateOptionTwo() { /* Some calculation */ }
            private List<int> CalculateOptionThree() { /* Some calculation */ }
            private List<int> CalculateOptionFour() { /* Some calculation */ }
            private List<int> CalculateOptionFive() { /* Some calculation */ }
        }

    I've thought of a few ways to test this class, but all of them seem overly complex or expose the methods more than I would like. The options so far are:

    1. Set all the private methods to internal and use [assembly: InternalsVisibleTo()].
    2. Separate out all the private methods into a separate class and create an interface.
    3. Make all the methods virtual and, in my tests, create a new class that inherits from this class and overrides the methods.

    Are there any other options for testing the above class that would be better than what I've listed? If you would pick one of the ones I've listed, can you explain why? Thanks
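
    (For what it's worth, here is a minimal sketch of what option 2 could look like – each calculation extracted behind an interface so it can be tested directly, with the dispatch reduced to a lookup. All names below are illustrative assumptions, not from the original class.)

        using System.Collections.Generic;
        using System.Linq;

        // Illustrative only: each former private method becomes its own testable strategy.
        public interface IOptionCalculator
        {
            int Option { get; }
            List<int> Calculate();
        }

        public class OptionOneCalculator : IOptionCalculator
        {
            public int Option { get { return 1; } }

            public List<int> Calculate()
            {
                // the logic that used to live in CalculateOptionOne()
                return new List<int> { 1, 2, 3 };
            }
        }

        public class SomeComplexClass
        {
            private readonly Dictionary<int, IOptionCalculator> _calculators;

            public SomeComplexClass(IEnumerable<IOptionCalculator> calculators)
            {
                _calculators = calculators.ToDictionary(c => c.Option);
            }

            public List<int> SomeComplexCalculation(int option)
            {
                IOptionCalculator calculator;
                return _calculators.TryGetValue(option, out calculator)
                    ? calculator.Calculate()
                    : new List<int>();
            }
        }

    Each calculator is now public, tiny and testable in isolation, and SomeComplexClass itself only needs a trivial dispatch test.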

    Read the article

  • Writing good tests for Django applications

    - by Ludwik Trammer
    I've never written any tests in my life, but I'd like to start writing tests for my Django projects. I've read some articles about tests and decided to try to write some tests for an extremely simple Django app, for a start. The app has two views (a list view and a detail view) and a model with four fields:

        class News(models.Model):
            title = models.CharField(max_length=250)
            content = models.TextField()
            pub_date = models.DateTimeField(default=datetime.datetime.now)
            slug = models.SlugField(unique=True)

    I would like to show you my tests.py file and ask: Does it make sense? Am I even testing for the right things? Are there best practices I'm not following that you could point me to? My tests.py (it contains 11 tests):

        # -*- coding: utf-8 -*-
        from django.test import TestCase
        from django.test.client import Client
        from django.core.urlresolvers import reverse
        import datetime
        from someproject.myapp.models import News

        class viewTest(TestCase):
            def setUp(self):
                self.test_title = u'Test title: bareksc'
                self.test_content = u'This is a content 156'
                self.test_slug = u'test-title-bareksc'
                self.test_pub_date = datetime.datetime.today()
                self.test_item = News.objects.create(
                    title=self.test_title,
                    content=self.test_content,
                    slug=self.test_slug,
                    pub_date=self.test_pub_date,
                )
                client = Client()
                self.response_detail = client.get(self.test_item.get_absolute_url())
                self.response_index = client.get(reverse('the-list-view'))

            def test_detail_status_code(self):
                """HTTP status code for the detail view"""
                self.failUnlessEqual(self.response_detail.status_code, 200)

            def test_list_status_code(self):
                """HTTP status code for the list view"""
                self.failUnlessEqual(self.response_index.status_code, 200)

            def test_list_numer_of_items(self):
                self.failUnlessEqual(len(self.response_index.context['object_list']), 1)

            def test_detail_title(self):
                self.failUnlessEqual(self.response_detail.context['object'].title, self.test_title)

            def test_list_title(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].title, self.test_title)

            def test_detail_content(self):
                self.failUnlessEqual(self.response_detail.context['object'].content, self.test_content)

            def test_list_content(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].content, self.test_content)

            def test_detail_slug(self):
                self.failUnlessEqual(self.response_detail.context['object'].slug, self.test_slug)

            def test_list_slug(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].slug, self.test_slug)

            def test_detail_template(self):
                self.assertContains(self.response_detail, self.test_title)
                self.assertContains(self.response_detail, self.test_content)

            def test_list_template(self):
                self.assertContains(self.response_index, self.test_title)

    Read the article

  • How to change the date/time in Python for all modules?

    - by Felix Schwarz
    When I write business logic, my code often depends on the current time. For example, consider the algorithm which looks at each unfinished order and checks whether an invoice should be sent (which depends on the number of days since the job ended). In these cases creating an invoice is not triggered by an explicit user action but by a background job. Now this creates a problem for me when it comes to testing: I can test invoice creation itself easily, but it is hard to create an order in a test and check that the background job identifies the correct orders at the correct time. So far I have found two solutions:

    1. In the test setup, calculate the job dates relative to the current date. Downside: the code becomes quite complicated, as there are no explicit dates written anymore. Sometimes the business logic is pretty complex for edge cases, so it becomes hard to debug due to all these relative dates.
    2. I have my own date/time accessor functions which I use throughout my code. In a test I just set the current date, and all modules get this date. I can thus easily simulate an order creation in February and check that the invoice is created in April. Downside: third-party modules do not use this mechanism, so it's really hard to integrate and test those.

    The second approach has been much more successful for me. Therefore I'm looking for a way to set the time that Python's datetime and time modules return. Setting the date is usually enough; I don't need to set the current hour or second (even though this would be nice). Is there such a utility? Is there an (internal) Python API that I can use?

    Read the article

  • What's the use of writing tests matching configuration-like code line by line?

    - by Pascal Van Hecke
    Hi, I have been wondering about the usefulness of writing tests that match code one-to-one. Just an example: in Rails, you can define seven RESTful routes in one line of routes.rb using:

        resources :products

    BDD/TDD prescribes that you test first and then write code. In order to test the full effect of this line, devs come up with macros, e.g. for shoulda: http://kconrails.com/2010/01/27/route-testing-with-shoulda-in-ruby-on-rails/

        class RoutingTest < ActionController::TestCase
          # simple
          should_map_resources :products
        end

    I'm not trying to pick on the guy that wrote the macros; this is just an example of a pattern that I see all over Rails. I'm just wondering what the use of it is... in the end you're just duplicating code, and the only thing you test is that Rails works. You might as well write a tool that transforms your test macros into actual code... When I ask around, people answer: "the tests should document your code, so yes, it makes sense to write them, even if it's just one line corresponding to one line". What are your thoughts?

    Read the article

  • How can I split abstract testcases in JUnit?

    - by Willi Schönborn
    I have an abstract testcase "AbstractATest" for an interface "A". It has several test methods (@Test) and one abstract method:

        protected abstract A unit();

    which provides the unit under test. Now I have multiple implementations of "A", e.g. "DefaultA", "ConcurrentA", etc. My problem: the testcase is huge (~1500 LOC) and it's growing, so I want to split it into multiple testcases. How can I organize/structure this in JUnit 4 without needing a concrete testcase for every combination of implementation and abstract testcase? I want e.g. "AInitializeTest", "AExecuteTest" and "AStopTest", each being abstract and containing multiple tests. But for my concrete "ConcurrentA", I only want to have one concrete testcase, "ConcurrentATest". I hope my "problem" is clear.

    EDIT: Looks like my description was not that clear. Is it possible to pass a reference to a test? I know parameterized tests, but these require static methods, which is not applicable to my setup. Subclasses of an abstract testcase decide about the parameter.

    Read the article

  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion the biggest adoption barrier to performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. If a reusable solution were possible and infrastructure management weren't as expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation.

    It is possible to run your test rig in the cloud without getting tangled up in SCVMM or Lab Management. All you need is an active Azure subscription, a Windows Azure endpoint-enabled developer workstation running Visual Studio Ultimate on premise, and Windows Azure endpoint-enabled worker roles on Azure compute instances set up to run as test controllers and test agents. My test rig is running SQL Server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable: you can open the Azure project, change the subscription and certificate, click publish, and *BOOM* – in less than 15 minutes you could have your own test rig running in the cloud.

    In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract away the administration cost of infrastructure management and lower the total cost of load and performance testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 and VS 2012 agents.

    Introduction: the slide deck "Leveraging Azure for Performance Testing" (from Avanade) covers the high-level details of what we are trying to achieve.

    Scenario 1 – Running a Test Rig in Windows Azure. To start off with the basics, in the first scenario I plan to discuss how to:
    - Automate deployment and configuration of Windows Azure worker roles for the test controller and test agent
    - Automate deployment and configuration of the SQL database on the test controller worker role
    - Scale test agents on demand
    - Create a web performance test and a simple load test
    - Manage test controllers right from Visual Studio on the on-premise developer workstation
    - View the results of the load test
    - Clean up
    - Have the above work in the shape of a reusable solution for both VS2010 and VS2012 test rigs

    Scenario 2 – The scaled-out test rig and sharing data using SQL Azure. A scaled-out version of this implementation involves multiple test rigs running in the cloud. In this scenario I will show you how to sync the load test databases from these distributed test rigs into one SQL Azure database using Azure Sync. The selling point for this scenario is being able to collate the load test efforts from across the organization into one data store. It covers how to:
    - Deploy multiple test rigs using the reusable solution from scenario 1
    - Set up and configure Windows Azure Sync
    - Test the SQL Azure load test result database created as a result of Windows Azure Sync
    - Clean up
    - Have the above work in the shape of a reusable solution for both VS2010 and VS2012 test rigs

    The Ingredients. Though with an active MSDN Ultimate subscription you would already have access to everything and more, you will essentially need the following to try out the scenarios:
    1. Windows Azure subscription
    2. Windows Azure Storage – blob storage
    3. Windows Azure Compute – worker role
    4. SQL Azure database
    5. SQL Data Sync
    6. Windows Azure Connect – endpoints
    7. SQL 2012 Express or SQL 2008 R2 Express
    8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010
    9. A developer workstation set up with Visual Studio 2012 Ultimate or Visual Studio 2010 Ultimate
    10. Visual Studio Load Test Unlimited Virtual User Pack

    Walkthrough. To set up the test rig in the cloud, the test controller, test agent and SQL Express installers need to be available when the worker role setup starts; the easiest and most efficient way is to pre-upload the required software into Windows Azure blob storage. SQL Express, the test controller and the test agent expose various switches which we can take advantage of, including the quiet-install switch. Once all three have been installed, the test controller needs to be registered with the test agents, and the SQL database needs to be associated with the test controller. By enabling Windows Azure Connect on the machines in the cloud and the developer workstation on premise, we create a virtual network among the machines, enabling two-way communication. All of the above can be done programmatically; the video walkthrough for scenario 1 ("Leveraging Windows Azure for Performance Testing") shows how, step by step. Scenario 2 is work in progress – watch this space for more.

    Solution. If you are still reading and are interested in the solution, drop me an email with your Windows Live ID. I'll add you to my TFS preview project, which has a reusable solution for both VS 2010 and VS 2012 test rigs, as well as guidance and demo performance tests.

    Conclusion. Other posts and resources are available here. Possibilities… endless!

    Read the article

  • Basic WCF Unit Testing

    - by Brian
    Coming from someone who loves the KISS method, I was surprised to find that I was making something entirely too complicated. I know, shocker, right? Now I'm no unit testing ninja, and not really a WCF ninja either, but I had a desire to test service calls without a) going to a database, or b) making sure that the entire WCF infrastructure was tip-top. Who does? It's not the environment I want to test, just the logic I’ve written, to ensure there aren't any side effects. So, for the K.I.S.S. method: assuming that you're using a WCF service library (you are using service libraries, correct?), it's really as easy as referencing the service library, then building out some stubs for bunking up data.

    The service contract. We’ll use a very basic service contract, just for getting and updating an entity. I’ve used the default “CompositeType” that is in the template, handy only for examples like this. I’ve added an Id property and overridden ToString and Equals.

        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            CompositeType GetCompositeType(int id);

            [OperationContract]
            CompositeType SaveCompositeType(CompositeType item);

            [OperationContract]
            CompositeTypeCollection GetAllCompositeTypes();
        }

    The implementation. When I implement the service, I want to be able to send known data into it so I don’t have to fuss around with database access or the like. To do this, I first have to create an interface for my data access:

        public interface IMyServiceDataManager
        {
            CompositeType GetCompositeType(int id);
            CompositeType SaveCompositeType(CompositeType item);
            CompositeTypeCollection GetAllCompositeTypes();
        }

    For the purposes of this we can ignore our implementation of the IMyServiceDataManager interface inside of the service. Pretend it uses LINQ to Entities to map its data, or maybe it goes old school and uses EntLib to talk to SQL. Maybe it talks to a tape spool on a mainframe on the third floor. It really doesn’t matter. That’s the point. So here’s what our service looks like in its most basic form:

        public CompositeType GetCompositeType(int id)
        {
            // sanity checks
            if (id == 0) throw new ArgumentException("id cannot be zero.");
            return _dataManager.GetCompositeType(id);
        }

        public CompositeType SaveCompositeType(CompositeType item)
        {
            return _dataManager.SaveCompositeType(item);
        }

        public CompositeTypeCollection GetAllCompositeTypes()
        {
            return _dataManager.GetAllCompositeTypes();
        }

    But what about the data manager? The constructor takes care of that. I don’t want to expose any testing ability in release (or the ability for someone to swap out my data manager), so this is what we get:

        IMyServiceDataManager _dataManager;

        public MyService()
        {
            _dataManager = new MyServiceDataManager();
        }

        #if DEBUG
        public MyService(IMyServiceDataManager dataManager)
        {
            _dataManager = dataManager;
        }
        #endif

    The stub. Now it’s time for the rubber to meet the road… Like most guys that ever talk about unit testing, here’s a sample that is painting in *very* broad strokes. The important part, however, is that within the test project I’ve created a bunk (unit testing purists would say stub, I believe) object that implements my IMyServiceDataManager so that I can deal with known data. Here it is:

        internal class FakeMyServiceDataManager : IMyServiceDataManager
        {
            internal FakeMyServiceDataManager()
            {
                Collection = new CompositeTypeCollection();
                Collection.AddRange(new CompositeTypeCollection
                {
                    new CompositeType { Id = 1, BoolValue = true, StringValue = "foo 1", },
                    new CompositeType { Id = 2, BoolValue = false, StringValue = "foo 2", },
                    new CompositeType { Id = 3, BoolValue = true, StringValue = "foo 3", },
                });
            }

            CompositeTypeCollection Collection { get; set; }

            #region IMyServiceDataManager Members

            public CompositeType GetCompositeType(int id)
            {
                if (id <= 0) return null;
                return Collection.SingleOrDefault(m => m.Id == id);
            }

            public CompositeType SaveCompositeType(CompositeType item)
            {
                var existing = Collection.SingleOrDefault(m => m.Id == item.Id);
                if (null != existing)
                {
                    Collection.Remove(existing);
                }
                if (item.Id == 0)
                {
                    item.Id = Collection.Count > 0 ? Collection.Max(m => m.Id) + 1 : 1;
                }
                Collection.Add(item);
                return item;
            }

            public CompositeTypeCollection GetAllCompositeTypes()
            {
                return Collection;
            }

            #endregion
        }

    So it’s tough to see in this example why any of this is necessary, but in a real-world application you would/should/could be applying much more logic within your service implementation. This all serves to ensure that, between refactorings etc., the service doesn’t send sparking cogs all about or let the blue smoke out. Here’s a simple test that brings it all home – remember, broad strokes:

        [TestMethod]
        public void MyService_GetCompositeType_ExpectedValues()
        {
            FakeMyServiceDataManager fake = new FakeMyServiceDataManager();
            MyService service = new MyService(fake);

            CompositeType expected = fake.GetCompositeType(1);
            CompositeType actual = service.GetCompositeType(1); // compare the same item via both paths

            Assert.AreEqual<CompositeType>(expected, actual, "Objects are not equal. Expected: {0}; Actual: {1};", expected, actual);
        }

    Summary: that’s really all there is to it. You could use software x or framework y to do the exact same thing, but in my case I just didn’t really feel like it. This speaks volumes to my not-yet-ninja unit testing prowess.

    Read the article

  • Automated testing of a website for IE7 javascript errors?

    - by Andreas Bonini
    This week I decided to add a new element to a JavaScript array by copying a similar one from a previous line; unfortunately I forgot to remove the trailing comma, so the end result was something like var a = [1, 2, 3,]. The code went live late Friday afternoon, just before everyone left for the week-end, and it completely broke everything in Internet Explorer 7 (and lower, I assume), since it's such a great browser. Since there was no one reading emails (week-end), it went unnoticed for quite a while, and I really don't want something like this to happen again (especially in my code). This is not the first weird IE7 problem; I was wondering if there is a way to automatically test key pages looking for JavaScript or CSS errors, or really anything that IE8 would output in its new developer-tools console. If there isn't, what do you usually do? Do you test the website after every change with all the browsers you support? (That is something I'll do from now on, at least for IE, if there is no way to run automated tests.)
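
    (One hedged idea, a sketch rather than a definitive answer: drive a real IE instance with WatiN and have every page set a JavaScript canary flag as its last script statement, so any earlier runtime error – like the trailing-comma bug – leaves the flag unset. The page list, the window.pageOk flag, and the reliance on WatiN's Eval method are all assumptions of this sketch, not part of the original question.)

        using System;
        using WatiN.Core;

        class IeSmokeTest
        {
            [STAThread] // WatiN requires a single-threaded apartment
            static void Main()
            {
                // Hypothetical list of key pages to smoke-test.
                string[] keyPages =
                {
                    "http://www.example.com/",
                    "http://www.example.com/checkout",
                };

                using (var browser = new IE())
                {
                    foreach (string url in keyPages)
                    {
                        browser.GoTo(url);
                        // Assumes each page ends its scripts with: window.pageOk = true;
                        // If any earlier script threw, the flag never gets set.
                        string ok = browser.Eval("window.pageOk === true ? 'yes' : 'no'");
                        Console.WriteLine("{0}: {1}", url, ok == "yes" ? "OK" : "JS FAILED");
                    }
                }
            }
        }

    Run nightly against an IE7 box, a harness like this would have flagged the broken pages on Friday evening instead of after the week-end.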

    Read the article

  • How do you unit test the real world?

    - by Kim Sun-wu
    I'm primarily a C++ coder, and thus far have managed without really writing tests for all of my code. I've decided this is a Bad Idea(tm) after adding new features that subtly broke old features, or, depending on how you wish to look at it, introduced some new "features" of their own. But unit testing seems to be an extremely brittle mechanism. You can test for something under "perfect" conditions, but you don't get to see how your code performs when stuff breaks.

    Take, for instance, a crawler; let's say it crawls a few specific sites for data X. Do you simply save sample pages, test against those, and hope that the sites never change? This would work fine as regression tests, but what sort of tests would you write to constantly check those sites live and let you know when the application isn't doing its job because a site changed something that now causes your application to crash? Wouldn't you want your test suite to monitor the intent of the code?

    The above example is a bit contrived, and something I haven't run into (in case you haven't guessed). Let me pick something I have, though. How do you test that an application will do its job in the face of a degraded network stack? That is, say you have a moderate amount of packet loss, for one reason or another, and you have a function DoSomethingOverTheNetwork() which is supposed to degrade gracefully when the stack isn't performing as it's supposed to – but does it? The developer tests it personally, purposely setting up a gateway that drops packets to simulate a bad network when he first writes it. A few months later, someone checks in some code that modifies something subtly, so the degradation isn't detected in time – or the application doesn't even recognize the degradation. This is never caught, because you can't run real-world tests like this using unit tests, can you?

    Further, how about file corruption? Let's say you're storing a list of servers in a file, and the checksum looks okay but the data isn't really. You want the code to handle that, and you write some code that you think does. How do you test that it does exactly that for the life of the application? Can you?

    Hence, brittleness. Unit tests seem to test the code only under perfect conditions (and this is promoted, with mock objects and such), not what it will face in the wild. Don't get me wrong – I think unit tests are great, but a test suite composed only of them seems to be a smart way to introduce subtle bugs in your code while feeling overconfident about its reliability. How do I address the above situations? If unit tests aren't the answer, what is? Thanks!
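
    (A partial, hedged answer to the degraded-network case, as a sketch: if the transport sits behind an interface, the hand-built packet-dropping gateway can be replaced by a fault-injecting decorator that runs in every build. The names here – INetworkChannel, FlakyChannel – are hypothetical, not from the question.)

        using System;

        // Hypothetical transport abstraction; DoSomethingOverTheNetwork() would use this.
        public interface INetworkChannel
        {
            byte[] Send(byte[] payload);
        }

        // Decorator that simulates packet loss by failing a fraction of calls.
        public class FlakyChannel : INetworkChannel
        {
            private readonly INetworkChannel _inner;
            private readonly double _lossRate;
            private readonly Random _random = new Random(42); // fixed seed keeps tests reproducible

            public FlakyChannel(INetworkChannel inner, double lossRate)
            {
                _inner = inner;
                _lossRate = lossRate;
            }

            public byte[] Send(byte[] payload)
            {
                // Fail roughly lossRate of all calls to mimic a degraded network.
                if (_random.NextDouble() < _lossRate)
                    throw new TimeoutException("simulated packet loss");
                return _inner.Send(payload);
            }
        }

    A test can then wrap the real (or a stubbed) channel in new FlakyChannel(channel, 0.3) and assert that the code under test retries, degrades gracefully, or fails loudly – so the "bad network" scenario is exercised on every run instead of once at the developer's desk.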

    Read the article

  • What hardware factors may be considered bottlenecks on a Hyper-V virtual server during load testing?

    - by sean
    Our organization is load testing our application on virtual servers via Hyper-V, to see what user load it can sustain on reasonable hardware in a single-box setup. The developer group questioned the validity of the tests, given the normal use of the box by the other virtual machines. The IT admins answered that it is an acceptable platform to load test on because it has its own CPUs, memory and disks allocated. Is their answer mostly correct? What hardware factors may be considered bottlenecks, given the other virtual machines, when testing our application? For example, would bus speed be a concern, or network I/O? The application consists of a Windows service written on the .NET Framework 4.0, plus SQL Server 2008 R2.

    Read the article

  • maven test cannot load cross-module resources/properties ?

    - by smallufo
    I have a Maven-managed project with several modules. One module contains an XML file and a parsing class; a second module depends on the first. A class in the second module calls the parsing class in the first module, but Maven seems unable to test the class in the second module. The Maven test reports:

        java.lang.NullPointerException
            at java.util.Properties.loadFromXML(Properties.java:851)
            at foo.firstModule.Parser.<init>(Parser.java:92)
            at foo.secondModule.Program.<init>(Program.java:84)

    In "Parser.java" (in the first module), it uses Properties and an InputStream to read and parse the XML file:

        InputStream xmlStream = getClass().getResourceAsStream("Data.xml");
        Properties properties = new Properties();
        properties.loadFromXML(xmlStream);

    The "Data.xml" file is located in the first module's resources/foo/firstModule directory, and it tests OK within the first module. It seems that when testing the second module, Maven cannot correctly load the Data.xml in the first module. I thought I could solve the problem by using maven-dependency-plugin:unpack, so in the second module's POM file I added this snippet:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-dependency-plugin</artifactId>
          <version>2.1</version>
          <executions>
            <execution>
              <id>data-copying</id>
              <phase>test-compile</phase>
              <goals>
                <goal>unpack</goal>
              </goals>
              <configuration>
                <artifactItems>
                  <artifactItem>
                    <groupId>foo</groupId>
                    <artifactId>firstModule</artifactId>
                    <type>jar</type>
                    <includes>foo/firstModule/Data.xml</includes>
                    <outputDirectory>${project.build.directory}/classes</outputDirectory>
                  </artifactItem>
                </artifactItems>
              </configuration>
            </execution>
          </executions>
        </plugin>

    Here I try to unpack the first module and copy Data.xml into the classes/foo/firstModule directory before running tests. And indeed it is copied to the right place; I can find "Data.xml" in the "target/classes/foo/firstModule" directory. But the Maven test still complains that it cannot read the file (Properties.loadFromXML() throws an NPE). I don't know how to solve this. I've tried other output directories, such as ${project.build.directory}/resources and ${project.build.directory}/test-classes, but all in vain... Any advice? Thanks in advance.

    Environment: Maven 2.2.1, Eclipse, m2eclipse

    Read the article

  • Oracle University Begins Beta Testing For New "Oracle Application Express Developer Certified Expert"

    - by Paul Sorensen
    Oracle University has begun beta testing for the new Oracle Application Express Developer Certified Expert certification, which requires passing one exam: "Oracle Application Express 3.2: Developing Web Applications" (#1Z1-450). In this video, Marcie Young of Oracle Server Technologies gives you a quick preview of what is on the exam, how to prepare, and what to expect. The "Oracle Application Express: Developing Web Applications" training course teaches many of the key concepts that are tested in the exam. This course is not a requirement for taking the exam; however, it is highly recommended. Additionally, Marcie refers to several helpful resources that are highly recommended while preparing, including the Oracle Application Express hosted instance at apex.oracle.com and the Oracle Application Express product page on OTN. You can take the "Oracle Application Express 3.2: Developing Web Applications" exam now for only $50 USD while it is in beta. Beta exams are an excellent way to provide your input directly into the final version of the certification exam, as well as to be one of the very first certified in the track. Furthermore, passing the beta counts for full final-exam credit. Note that beta testing is offered for a limited time only. Register now at pearsonvue.com/oracle to take the exam at the Pearson VUE testing center nearest you.

    QUICK LINKS
    Register for exam: Pearson VUE
    About certification track: Oracle Application Express Developer Certified Expert
    About certification exam: Oracle Application Express 3.2: Developing Web Applications (1Z1-450)
    Introductory training (recommended): "Oracle Application Express: Developing Web Applications"
    Advanced training (suggested): "Oracle Application Express: Advanced Workshop"
    Oracle Application Express hosted instance: apex.oracle.com
    Oracle Application Express product page: on OTN
    Learn more: Oracle Certification Beta Exams

    Read the article
