Search Results

Search found 10055 results on 403 pages for 'penetration testing'.


  • Grails unit testing and bootstrap

    - by tbruyelle
    I wrote a unit test for a controller. I have a BootStrap file which alters the metaclass of domain classes by adding a method asPublicMap(). I use this method in the controller to return domain classes as JSON, but with only some selected public fields. My unit test failed with a MissingMethodException for asPublicMap(). As I understand it, bootstrap classes are not loaded for unit tests, only for integration tests, which is why I get this error. My question is: is there another place to put metaclass manipulations so that they are taken into account during unit tests?


  • Unit testing a function whose purpose is side effects

    - by David
    How would you unit test do_int_to_string_conversion?

        #include <string>
        #include <iostream>

        void do_int_to_string_conversion(int i, std::string& s) {
            switch (i) {
            case 1:
                s = "1";
                break;
            case 2:
                s = "2";
                break;
            default:
                s = "Nix";
            }
            std::cout << s << "\n";
        }

        int main(int argc, char** argv) {
            std::string little_s;
            do_int_to_string_conversion(1, little_s);
            do_int_to_string_conversion(2, little_s);
            do_int_to_string_conversion(3, little_s);
        }
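    A language-agnostic way to make a routine like this testable is to split the pure conversion from the printing, and hand the side effect an output sink the test can substitute. A minimal sketch of that refactoring, written here in C# rather than the question's C++, with all names invented for illustration:

        using System;

        static class IntToString
        {
            // Pure computation: easy to unit test with plain assertions.
            public static string Convert(int i)
            {
                switch (i)
                {
                    case 1: return "1";
                    case 2: return "2";
                    default: return "Nix";
                }
            }

            // The side effect is isolated behind a writer the test can replace.
            public static void ConvertAndReport(int i, System.IO.TextWriter output)
            {
                output.WriteLine(Convert(i));
            }

            static void Main()
            {
                // In a test, a StringWriter stands in for the console so the
                // side effect itself can be asserted on.
                var captured = new System.IO.StringWriter();
                ConvertAndReport(3, captured);
                Console.WriteLine(captured.ToString().Trim() == "Nix" ? "PASS" : "FAIL");
            }
        }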


  • Testing variable types in Python

    - by Jasper
    Hello, I'm creating an initialising function for the class 'Room', and found that the program wouldn't accept the tests I was doing on the input variables. Why is this?

        def __init__(self, code, name, type, size, description, objects, exits):
            self.code = code
            self.name = name
            self.type = type
            self.size = size
            self.description = description
            self.objects = objects
            self.exits = exits
            #Check for input errors:
            if type(self.code) != type(str()):
                print 'Error found in module rooms.py!'
                print 'Error number: 110'
            elif type(self.name) != type(str()):
                print 'Error found in module rooms.py!'
                print 'Error number: 111'
            elif type(self.type) != type(str()):
                print 'Error found in module rooms.py!'
                print 'Error number: 112'
            elif type(self.size) != type(int()):
                print 'Error found in module rooms.py!'
                print 'Error number: 113'
            elif type(self.description) != type(str()):
                print 'Error found in module rooms.py!'
                print 'Error number: 114'
            elif type(self.objects) != type(list()):
                print 'Error found in module rooms.py!'
                print 'Error number: 115'
            elif type(self.exits) != type(tuple()):
                print 'Error found in module rooms.py!'
                print 'Error number: 116'

    When I run this I get this error:

        Traceback (most recent call last):
          File "/Users/Jasper/Development/Programming/MyProjects/Game Making Challenge/Europa I/rooms.py", line 148, in <module>
            myRoom = Room(101, 'myRoom', 'Basic Room', 5, '<insert description>', myObjects, myExits)
          File "/Users/Jasper/Development/Programming/MyProjects/Game Making Challenge/Europa I/rooms.py", line 29, in __init__
            if type(self.code) != type(str()):
        TypeError: 'str' object is not callable


  • Strategies for testing reactive, asynchronous code

    - by Arne
    I am developing a data-flow oriented domain-specific language. To simplify, let's just look at Operations. Operations have a number of named parameters and can be asked to compute their result using their current state. To decide when an Operation should produce a result, it gets a Decision that is sensitive to which parameter got a value from whom. When this Decision decides that it is fulfilled, it emits a Signal using an Observer. An Accessor listens for this Signal and in turn calls the Result method of the Operation in order to multiplex it to the parameters of other Operations. So far, so good: a nicely decoupled design, composable and reusable and, depending on the specific Observer used, as asynchronous as you want it to be.

    Now here's my problem: I would love to start coding actual tests against this design. But with an asynchronous Observer... how should I know that the whole signal-and-parameters plumbing worked? Do I need to use timeouts while waiting for a Signal in order to say whether it was emitted successfully or not? How can I be, formally, sure that the Signal will not be emitted if I just wait a little longer (halting problem? ;-))? And how can I be sure that the Signal was emitted because it was me who set a parameter, and not another Operation? It might well be that my test comes too early and sees a Signal that was emitted way before my setting a parameter caused a Decision to emit it. Currently, I guess the trivial cases are easy to test, but as soon as I want to test complex many-to-many situations between Operations I must resort to hoping that the design Just Works (tm)...
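    Two of the worries above, bounding the wait and attributing the Signal to the right trigger, can at least be made mechanical: wait on a synchronization primitive with a timeout, and pass a correlation token through so the test can tell its own Signal from an earlier one. A minimal sketch of the idea in C# (all names invented; this is not the questioner's actual API):

        using System;
        using System.Threading;

        // Hypothetical stand-in for an Operation that emits its Signal asynchronously.
        class AsyncOperation
        {
            public event Action<object> SignalEmitted;

            public void SetParameter(object correlationToken)
            {
                ThreadPool.QueueUserWorkItem(delegate
                {
                    Action<object> handler = SignalEmitted;
                    if (handler != null) handler(correlationToken);
                });
            }
        }

        class SignalTest
        {
            static void Main()
            {
                AsyncOperation op = new AsyncOperation();
                AutoResetEvent received = new AutoResetEvent(false);
                object observed = null;

                op.SignalEmitted += delegate(object token) { observed = token; received.Set(); };

                object myToken = new object();
                op.SetParameter(myToken);

                // A timeout turns "the Signal never came" into a test failure
                // instead of a hung test run.
                bool emitted = received.WaitOne(TimeSpan.FromSeconds(5));

                // The token check rules out a Signal emitted by an earlier trigger.
                Console.WriteLine(emitted && ReferenceEquals(observed, myToken)
                    ? "PASS: our Signal arrived"
                    : "FAIL: no Signal, or a Signal from another trigger");
            }
        }

    This cannot prove a Signal will never arrive (that really is the halting problem), but a generous timeout plus a correlation check gives a deterministic pass/fail for the cases a test can decide.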


  • Testing a broken IP.

    - by wreing
    I'm trying to test an application, and I need to make a valid IP stop responding from one of my test servers but not the others. I could do this for an FQDN using /etc/hosts, but I'd like to do it for an IP. Any suggestions?


  • C# InternalsVisibleTo() attribute for VB.NET 2.0 while testing?

    - by Will Marcouiller
    I'm building an Active Directory wrapper in VB.NET 2.0 (I can't use a later .NET) in which I have the following interfaces:

        IUtilisateur
        IGroupe
        IUniteOrganisation

    These interfaces are implemented in internal classes (Friend in VB.NET), so I want to implement a façade in order to instantiate each of the interfaces with its internal class. This gives the architecture better flexibility, etc.

    Now, I want to test these classes (Utilisateur, Groupe, UniteOrganisation) in a different project within the same solution. However, these classes are internal. I would like to be able to instantiate them without going through my façade, but only for these tests, nothing more. Here's a piece of code to illustrate it:

        public static class DirectoryFacade
        {
            public static IGroupe CreerGroupe()
            {
                return new Groupe();
            }
        }

        // Then in code, I would write something alike:
        public partial class MainForm : Form
        {
            public MainForm()
            {
                IGroupe g = DirectoryFacade.CreerGroupe();
                // Doing stuff with instance here...
            }
        }

        // My sample interface:
        public interface IGroupe
        {
            string Domaine { get; set; }
            IList<IUtilisateur> Membres { get; }
        }

        internal class Groupe : IGroupe
        {
            private IList<IUtilisateur> _membres;

            internal Groupe()
            {
                _membres = new List<IUtilisateur>();
            }

            public string Domaine { get; set; }

            public IList<IUtilisateur> Membres
            {
                get { return _membres; }
            }
        }

    I heard of the InternalsVisibleTo() attribute recently. I was wondering whether it is available in VB.NET 2.0/VS2005, so that I could access the assembly's internal classes from my tests? Otherwise, how could I achieve this?
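    For reference, InternalsVisibleTo lives in System.Runtime.CompilerServices and has been part of the .NET Framework since version 2.0; it is applied at the assembly level in the wrapper project, naming the test assembly as a friend. A minimal sketch in C# attribute syntax (the test-assembly name is invented; VB uses the <Assembly: ...> form instead, and it is worth verifying the behaviour against the VS2005 VB compiler specifically):

        // In the wrapper assembly, e.g. in AssemblyInfo.cs (AssemblyInfo.vb for VB):
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("DirectoryWrapper.Tests")]

        // If the assemblies are strong-named, the friend declaration must carry
        // the test assembly's full public key:
        // [assembly: InternalsVisibleTo("DirectoryWrapper.Tests, PublicKey=0024000004800000...")]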


  • Android Service Testing with messages

    - by Sandeep Dhull
    I have a service which does its work (performing network operations) depending upon the type of message (the message.what property of the message). It then returns the response, also as a message, to the requesting component (depending upon the message.replyTo). So I am trying to write the test cases - but how?

    The architecture of my service is like this:
    1) A component (e.g. an Activity) binds to the service.
    2) The component sends a message to the service (using a Messenger).
    3) The service has a nested class that handles the messages, executes the network call, and returns a response as a message to the sender (who initially sent the message, using its replyTo property).

    Now, to test this, I am using JUnit test cases. So, in that:
    1) In setUp() I bind to the service.
    2) In testBusinessLogic() I send the message to the service. The problem now is where to receive the response message.


  • Do You Know How OUM defines the four basic types of business system testing performed on a project? Why not test your knowledge?

    - by user713452
    Testing is perhaps the most important process in the Oracle® Unified Method (OUM). That makes it all the more important for practitioners to have a common understanding of the various types of functional testing referenced in the method, and to use the proper terminology when communicating with each other about testing activities. OUM identifies four basic types of functional testing, which is sometimes referred to as business system testing. The basic functional testing types referenced by OUM are:

    1. Unit Testing
    2. Integration Testing
    3. System Testing
    4. Systems Integration Testing

    See if you can match the following definitions with the appropriate type above.

    A. This type of functional testing is focused on verifying that interfaces/integration between the system being implemented (i.e. the System under Discussion (SuD)) and external systems function as expected.

    B. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the several custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) interact with one another as designed.

    C. This type of functional testing is focused on verifying that the functionality within the system being implemented (i.e. the System under Discussion (SuD)) functions as expected. This includes out-of-the-box functionality delivered with Commercial Off-The-Shelf (COTS) applications, as well as any custom components developed to address gaps in functionality.

    D. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the individual custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) function as designed.

    Check your answers below:

    1. (D)  2. (B)  3. (C)  4. (A)

    If you matched all of the functional testing types to their definitions correctly, then congratulations! If not, you can find more information in the Testing Process Overview and Testing Task Overviews in the OUM Method Pack.


  • ISO 12207: Verification of integration and Unit test validation

    - by user970696
    I have received comments from the supervisor reviewing my thesis. He asked two questions I cannot answer right now:

    1. If ISO 12207 says under "Integration verification" that it "checks that components are correctly and completely integrated into a system", how can this be verified without testing, if all testing is validation? Without testing, how can I know that the system is integrated correctly and fully?

    2. If unit testing is validation, how does it match the ISO definition of validation ("that requirements for intended use were fulfilled") if it is so low-level?


  • How Can I Point My Local Testing Server at My GitHub Repository?

    - by Goober
    Up until a few days ago, I had a particular setup that was as follows. Using SVN, all of the websites that I developed were committed to a source control drop box on a local testing server. Then, using IIS, a new website was set up to point at the last revision of each particular website I developed and display it to the outside world using a specific URL. I have just moved over to using Git and GitHub, meaning all of my source-controlled code is no longer stored on a local testing server. As a result, I am not sure how I can do a similar thing to what I did with the SVN setup; however, I need to be able to essentially have that same setup again, just using Git. So basically, how can I get my local testing server to point at the GitHub repository for that site? Help greatly appreciated.


  • What is the Optimal Server Configuration for Split-Path Testing?

    - by doug
    I am far from an expert on Apache, or any server for that matter, so I apologize if this question is poorly worded, which it likely is. We have always relied on a vendor for split-path testing (aka "A/B testing"). If you're not familiar with that term, it's a form of marketing research in which you slightly modify one of your web pages (usually one nearest the point of conversion), say, for instance, by changing the position of the "Buy Now" button or its color/contrast/texture, then serve one of those two pages to a given user based on random selection. By doing split-path testing ourselves, I suspect we can do it far more cheaply and improve cycle times as well. What is the optimal set-up for these tests? "Optimal" is based on the following criteria: how quickly/easily new tests can be set up and put online, and minimal disruption to overall site performance.
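    Whatever the server layout, the core mechanism is small: assign each visitor to a variant deterministically, so the same visitor sees the same page on every visit, and keep the assignment independent of the pages themselves so new tests can go online without touching the rest of the site. A minimal sketch (all names invented; in production the assignment usually lives in a cookie or a persistent hash, since String.GetHashCode is only stable within one runtime):

        using System;

        static class SplitPathTest
        {
            // Deterministically buckets a visitor id into variant A or B.
            static string ChooseVariant(string visitorId)
            {
                // Mask off the sign bit so the modulo is always non-negative.
                int bucket = (visitorId.GetHashCode() & 0x7fffffff) % 2;
                return bucket == 0 ? "buy-now-variant-a.html" : "buy-now-variant-b.html";
            }

            static void Main()
            {
                // The same visitor always lands on the same variant.
                Console.WriteLine(ChooseVariant("visitor-42"));
                Console.WriteLine(ChooseVariant("visitor-42"));
                Console.WriteLine(ChooseVariant("visitor-17"));
            }
        }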


  • The art of Unit Testing with Examples in .NET

    - by outcoldman
    The first time I became familiar with unit testing was 5 or 6 years ago, at the start of my developing career. I remember that somebody told me about code coverage. At that time I didn't write any unit tests. The guy who was my team lead told me: "Do you see that if operator with three conditions? You should check all of these conditions." So, after I had written some code, I would go to the interface and try to invoke all the code I wrote from the user interface. Nice? These days I know a little more about tests and unit testing. I have not participated in projects designed by Test Driven Development (TDD); the basics of my knowledge come from spying on my colleagues' code, some articles and screencasts. I decided that I should know much more and become a real professional of unit testing, which is why I started to read the book The Art of Unit Testing with Examples in .NET. More than that, at my current job it looks like I'm the only one writing unit tests for my code, so I should show good examples of my tests. Read more: http://outcoldman.ru/en/blog/show/267


  • Can you review my Perl rewrite of Cucumber?

    - by Evgeny
    There is a team in our company working on acceptance testing of an X11 GUI application, and they created a monstrous acceptance-testing framework that drives the GUI as well as running scenarios. The framework is written using Perl 5, and scenario files look more like very complex Perl programs (thousands of lines long, in a procedural style) than acceptance tests. I recently learned Ruby's Cucumber, and generally have been using Ruby for quite a lot of time. But unfortunately I can't just drop in Ruby to replace Perl, because the people who are writing all of this don't know Ruby, and it's quite certain that they won't want "this" kind of interruption. So, to bring Ruby's Cucumber a bit closer to their work, I rewrote it using Perl 5. Unfortunately, I am really not a Perl programmer, and would love to get a code review and to hear suggestions from people who know both Perl and Cucumber. Hi Perl/Cucumber Stack Overflow users - please help me create this "open source" attempt to re-create Cucumber for Perl! I would love to hear your comments and will accept any help. The minimal source code is here: http://github.com/kesor/p5-cucumber. Thank you for your attention. For those not familiar with Cucumber, please take just one small moment to take a look at this one small little page: http://wiki.github.com/aslakhellesoy/cucumber


  • .NET test harness - what should it have?

    - by Conor
    Hi folks, we have a software house developing code for us on a project (a .NET web service built with WCF), and we are also paying for a test harness to be built as a separate billable task on a daily rate. I have just joined the company and am reviewing what we are getting from the software house, and I wanted to know what you guys in industry think about it.

    Basically, what we got was a WinForm that called the web service: an input area (Web Service Request) to drop our XML into, a Submit button, and a response area for the result of the web response - and that's it. Our internal BA has created all the XML request documents, so there was no logic put into the harness around this.

    Looking on the net for a definition of a test harness, I got this: http://en.wikipedia.org/wiki/Test_harness. It states a harness should do these three things:

    - Automate the testing process.
    - Execute test suites of test cases.
    - Generate associated test reports.

    Clearly we have got none of this, apart from a partial "automate the testing process" via a WinForm. From my development background, I would have expected someone to produce a WinForm as a test harness five years ago; really there should be some sort of tooling around this. I explicitly told the software house I expected some sort of tooling (NUnit, MbUnit, SoapUI) so we could create a regression test pack for future use. [Didn't get it, but I asked for this after the requirements were signed off, as I wasn't employed then :)]

    Would someone be able to clarify whether my requirement here is unrealistic? I know if I did this, I would use NUnit and TDD and then reuse the test harness as a regression test pack in future. I am interested to see what the community thinks. Cheers
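    Expecting tooling is not unrealistic; the usual shape of it is a data-driven test suite in which each canned XML request becomes one repeatable, reportable case. A minimal NUnit sketch of that idea (the file paths and client wrapper are invented; the stub stands in for the real WCF call so the sketch compiles on its own):

        using System.IO;
        using NUnit.Framework;

        // Stand-in for the real WCF client wrapper.
        static class ServiceClientStub
        {
            public static string Send(string requestXml)
            {
                return requestXml; // the real version would invoke the service
            }
        }

        [TestFixture]
        public class QuoteServiceRegressionTests
        {
            // Each request/expected-response pair becomes one reported test case.
            [TestCase("Requests/valid-quote.xml", "Expected/valid-quote.xml")]
            [TestCase("Requests/missing-field.xml", "Expected/missing-field-fault.xml")]
            public void ResponseMatchesExpected(string requestFile, string expectedFile)
            {
                string request = File.ReadAllText(requestFile);
                string expected = File.ReadAllText(expectedFile);

                string actual = ServiceClientStub.Send(request);

                Assert.AreEqual(expected, actual);
            }
        }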


  • Where do you take mocking - immediate dependencies, or do you grow the boundaries...?

    - by Peter Mounce
    So, I'm reasonably new to both unit testing and mocking in C# and .NET; I'm using xUnit.net and Rhino Mocks respectively. I'm a convert, and I'm focussing on writing behaviour specifications, I guess, instead of being purely TDD. Bah, semantics; I want an automated safety net to work above, essentially.

    A thought struck me though. I get programming against interfaces, and the benefits as far as breaking apart dependencies goes there. Sold. However, in my behaviour verification suite (aka unit tests ;-) ), I'm asserting behaviour one interface at a time. As in, one implementation of an interface at a time, with all of its dependencies mocked out and expectations set up. The approach seems to be that if we verify that a class behaves as it should against its collaborating dependencies, and in turn relies on each of those collaborating dependencies to have signed that same quality contract, we're golden. Seems reasonable enough.

    Back to the thought, though. Is there any value in semi-integration tests, where a test-fixture is asserting against a unit of concrete implementations that are wired together, and we're testing its internal behaviour against mocked dependencies? I just re-read that and I think I could probably have worded it better. Obviously, there's going to be a certain amount of "well, if it adds value for you, keep doing it", I suppose - but has anyone else thought about doing that, and reaped benefits from it outweighing the costs?
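    For what the question calls semi-integration, the test wires a small cluster of real implementations together and mocks only the boundary of that cluster. A minimal sketch using Rhino Mocks' AAA syntax (all types here are invented for illustration):

        using NUnit.Framework;
        using Rhino.Mocks;

        public interface IPriceFeed { decimal GetPrice(string symbol); }

        // Two concrete classes under test, deliberately left unmocked.
        public class Pricer
        {
            private readonly IPriceFeed _feed;
            public Pricer(IPriceFeed feed) { _feed = feed; }
            public decimal PriceWithMarkup(string symbol) { return _feed.GetPrice(symbol) * 1.1m; }
        }

        public class QuoteService
        {
            private readonly Pricer _pricer; // concrete collaborator, not an interface
            public QuoteService(Pricer pricer) { _pricer = pricer; }
            public decimal Quote(string symbol) { return _pricer.PriceWithMarkup(symbol); }
        }

        [TestFixture]
        public class QuoteServiceSemiIntegrationTests
        {
            [Test]
            public void QuoteRunsThroughRealPricerAgainstMockedFeed()
            {
                // Only the outer boundary is mocked...
                IPriceFeed feed = MockRepository.GenerateStub<IPriceFeed>();
                feed.Stub(f => f.GetPrice("ACME")).Return(100m);

                // ...while QuoteService and Pricer are exercised together for real.
                QuoteService service = new QuoteService(new Pricer(feed));

                Assert.AreEqual(110m, service.Quote("ACME"));
            }
        }

    Whether that outweighs the duplication with the per-class tests is exactly the trade-off the question raises; it tends to pay off where the wiring itself is the risky part.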


  • How do I run NUnit in debug mode from Visual Studio?

    - by Jon Cage
    I've recently been building a test framework for a bit of C# I've been working on. I have NUnit set up and a new project within my workspace to test the component. All works well if I load up my unit tests from NUnit (v2.4), but I've got to the point where it would be really useful to run in debug mode and set some breakpoints.

    I've tried the suggestions from several guides, which all suggest changing the 'Debug' properties of the test project:

        Start external program: C:\Program Files\NUnit 2.4.8\bin\nunit-console.exe
        Command line arguments: /assembly: <full-path-to-solution>\TestDSP\bin\Debug\TestDSP.dll

    I'm using the console version there, but have tried calling the GUI as well. Both give me the same error when I try to start debugging:

        Cannot start test project 'TestDSP' because the project does not contain any tests.

    Is this because I normally load \DSP.nunit into the NUnit GUI, and that's where the tests are held? I'm beginning to think the problem may be that VS wants to run its own test framework, and that's why it's failing to find the NUnit tests.

    [Edit] To those asking about test fixtures, one of my .cs files in the TestDSP project looks roughly like this:

        namespace Some.TestNamespace
        {
            // Testing framework includes
            using NUnit.Framework;

            [TestFixture]
            public class FirFilterTest
            {
                /// <summary>
                /// Tests that a FirFilter can be created
                /// </summary>
                [Test]
                public void Test01_ConstructorTest()
                {
                    ...some tests...
                }
            }
        }

    ...I'm pretty new to C# and the NUnit test framework, so it's entirely possible I've missed some crucial bit of information ;-)

    [FINAL SOLUTION] The big problem was the project type I'd used. If you pick Other Languages->Visual C#->Test->Test Project when choosing the project type, Visual Studio will try to use its own testing framework, as far as I can tell. You should pick a normal C# class library project instead, and then the instructions in my selected answer will work.


  • Best way to unit test a Collection?

    - by limc
    I'm just wondering how folks unit test and assert that the "expected" collection is the same as/similar to the "actual" collection (order is not important). To perform this assertion, I wrote my simple assert API:

        public void assertCollection(Collection<?> expectedCollection, Collection<?> actualCollection) {
            assertNotNull(expectedCollection);
            assertNotNull(actualCollection);
            assertEquals(expectedCollection.size(), actualCollection.size());
            assertTrue(expectedCollection.containsAll(actualCollection));
            assertTrue(actualCollection.containsAll(expectedCollection));
        }

    Well, it works. It's pretty simple if I'm asserting just a bunch of Integers or Strings. It can also be pretty painful if I'm trying to assert a collection of Hibernate domains, say for example. The collection.containsAll(..) relies on equals(..) to perform the check, but I always override equals(..) in my Hibernate domains to check only the business keys (which is the best practice stated on the Hibernate website) and not all the fields of that domain. Sure, it makes sense to check just against the business keys, but there are times I really want to make sure all the fields are correct, not just the business keys (for example, a new data entry record). So, in this case, I can't mess around with domain.equals(..), and it almost seems like I need to implement some comparators just for unit testing purposes instead of relying on collection.containsAll(..). Are there some testing libraries I could leverage here? How do you test your collection? Thanks.
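    For comparison, on the .NET side NUnit ships this assertion out of the box: CollectionAssert.AreEquivalent is order-insensitive and, unlike the containsAll(..)-plus-size check above, also treats duplicate elements correctly. A minimal C# sketch:

        using System.Collections.Generic;
        using NUnit.Framework;

        [TestFixture]
        public class CollectionAssertExamples
        {
            [Test]
            public void SameElementsRegardlessOfOrder()
            {
                List<int> expected = new List<int> { 1, 2, 3 };
                List<int> actual = new List<int> { 3, 1, 2 };

                // Passes: same elements with the same multiplicities, any order.
                CollectionAssert.AreEquivalent(expected, actual);
            }
        }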


  • Agile Testing Days 2012 – Day 1 – The birth of the #unicorn…

    - by Chris George
    Still riding the high from the tutorial day, I arrived at the conference venue eager to get cracking with the day's talks.

    The opening keynote was "Disciplined Agile Delivery: The Foundation for Scaling Agile", presented by Scott Ambler. The general ideas behind the methodology, such as not re-inventing the wheel and being goal-driven rather than prescriptive in how you work, certainly struck a chord with how we are trying to work in my team. Scott made some interesting observations about how Scrum is quite prescriptive - is this really agile? I agreed with quite a few of his points on how what works for one team may not work for another. How a team works should be driven by context and reflection, not process and prescription. However, I was somewhat dubious about some of the statistics he rolled out towards the end.

    Out of this keynote, though, was born something that was to transcend this one presentation. During the talk, Scott mentioned on more than one occasion "in the real world", and at one point made reference to people living in the land of unicorns and rainbows. The challenge was then laid down on Twitter for all speakers to include a unicorn in their presentations… and for the most part this happened! It became an identity for this year's conference, and I'm sure something that any attendee will always associate with Agile Testing Days 2012!

    Following this keynote, I attended "Going agile with Automated GUI Testing – Some personal insights" by Jan Zdunek from codecentric on the vendor track. My speciality is test automation, and in particular GUI testing, so this drew me to this talk more than the others. Thankfully, it was made clear from the very start that this was not peddling any particular product (even though it was on the vendor track), and Jan faithfully stuck to that. Most of the content was not new to me, but it was really comforting to hear someone else with very similar experiences to my own. In particular, things like how GUI testing is hard and is not a silver bullet, and how record & replay is NOT a good thing to do (which drew a somewhat inflammatory tweet from an automation company when I tweeted that!). Something that I have started hearing around the place, and that has certainly been murmured at work, is to push more of the automation coding onto the developers; after all, they are the coding experts. I agree with this to a degree, but I personally enjoy coding and find it very rewarding, therefore I'd be reluctant to give it up. I think there are some better alternatives, such as pairing with a developer. Lastly, Jan mentioned, almost in passing, that we should consider virtualisation for GUI testing to cover configuration combinations. On my project we've been running our win32/.NET GUI tests in cloud virtualisation for a couple of years now… I really should write about that!

    After lunch the second keynote of the day was by Lisa Crispin and Janet Gregory: "Myths about Agile Testing, De-Bunked". It started off well, with the two ladies donning Medusa-style headbands whilst debunking several myths about agile testing! I got the impression that it was perhaps not as slick as they would have liked, but then Janet was suffering with a very sore throat so kept losing her voice. Nevertheless, the presentation was captivating, and they debunked several myths such as "testing is dead", "testers must write code" and "agile teams always deliver faster". I didn't take many notes for this because it was being recorded, but unfortunately the recordings have not been posted yet, so I'll write more about this when they are.

    The TestLab was held during a somewhat free-for-all time during most of the afternoon. It looked intriguing and proved to be one of the surprising experiences of the conference for me. Run by James Lyndsay and Bart Knaack, it consisted of a number of 'stations' that offered different testing problems. I opted for testing a mathematical drawing app called GeoGebra, the task being to pair up and exploratory-test it. After an allotted time, we discussed the issues we'd found and decided whether we wanted to continue 'playing', to which we all agreed! It was fun!

    The last track talk of the day was "Developers Exploratory Testing – Raising the bar" by Sigge Birgisson. One of the teams at Red Gate has tried dev or team exploratory testing a couple of times, and I was really interested to go to the presentation that prompted that. I was not disappointed! Sigge gave a first-class presentation, and not only explained what DET was all about, but also how to go about implementing it. Little tips like calling it a 'workshop' rather than 'testing' I can really see working!

    Monday evening saw the presentation of the award for the Most Influential Agile Testing Professional Person go to a much-deserving Lisa Crispin. The evening was great, with acrobatics, magic and music.

    My Takeaway Triple from Day 1:
    1. Some of the cool stuff suggested in the GUI testing talk we are already doing. I should write about that!
    2. Testing is not dead! Perhaps testing will become more of a skill than a specific role, but it is certainly not dead.
    3. Team/developer exploratory testing… seems like a no-brainer, assuming you have a team who is willing.

    Day 2 – Coming soon…


  • TOMORROW! UPK for Testing Webinar

    - by Karen Rihs
    UPK Webinar: UPK for Testing
    September 13, 2012, 10 a.m. Pacific / 1 p.m. Eastern

    As an implementation and enablement tool, Oracle's User Productivity Kit (UPK) provides value throughout the software lifecycle. Application testing is one area where customers like Northern Illinois University (NIU) are finding huge value in UPK and are using it to validate their systems. Join us for an OAUG-sponsored event on September 13th to hear Beth Renstrom, UPK Product Manager, and Bettylynne Gregg, NIU ERP Coordinator, discuss how the Test It Mode, Test Scripts, and Test Cases of UPK can be used to facilitate application testing.


  • TELERIK LAUNCHES NEW AUTOMATED TESTING TOOLS PRODUCT LINE

    Merger with ArtOfTest repositions Telerik as a major player in the automated testing market.

    Waltham, MA, April 13, 2010 – Telerik, a leading provider of development tools and solutions for the Microsoft .NET platform, today announced the launch of WebUI Test Studio 2010, an innovative and easy-to-use automated web-testing solution. Encompassing essential web technologies such as ASP.NET AJAX, Silverlight, and MVC, Telerik's WebUI Test Studio...


  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
    Azure is fascinating, but even more fascinating is the marriage of Azure and TFS!

    Introduction

    Recently, a client I worked for had two major business-critical applications being delivered, with very little time budgeted for performance testing. We immediately hit a bottleneck when the performance testing phase started: the in-house infrastructure team could not support the hardware requirements at short notice. It was suggested that the performance testing be performed on one of the QA environments, which was a fraction of the production environment. This didn't seem right, so the team decided to turn to the cloud.

    The team took advantage of the elasticity offered by Azure: starting with a single test agent, provisioned and ready for use within 30 minutes, the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved, but the highlight was that the cost of running the 'test rig' proved to be less than if it were hosted on premise by the infrastructure team.

    Thank you for taking the time out to read this blog post. In this series of posts, I'll try to cover, start to end, everything you need to know to use Azure to build your test rig in the cloud.

    But why Azure? I have my own data centre…

    If the environment is provisioned in your own datacentre:
    - No matter what level of service agreement you may have with your infrastructure team, there will be downtime when the environment is patched.
    - How fast can you scale the environments up or down (keeping the enterprise processes in mind)?

    Administration, cost, flexibility and scalability are the areas you would want to think about when deciding between your own data centre and Azure!

    How is Microsoft's public cloud offering different from Amazon's public cloud offering?

    Microsoft's offering of the cloud is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), which distinguishes Microsoft's offering from other providers such as Amazon (Amazon only offers IaaS).

    PaaS – Platform as a Service: Fills the needs of those who want to build and run custom applications as services. A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service providers manage the hardware (patching, upgrades and so forth), as well as application server uptime, the involvement of IT pros is minimized. On-demand scalability combined with hardware and application server management relieves developers from infrastructure concerns and allows them to focus on building applications.

    IaaS – Infrastructure as a Service: Similar to traditional hosting, where a business will use the hosted environment as a logical extension of the on-premises datacentre. The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration. This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they are on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.

    The biggest advantage of PaaS is that you do not have to worry about maintaining the environment: you can focus all your time on solving the business problems with your solution rather than on upkeep. If you decide to use a VM Role on Azure, you are asking for IaaS; more on this later. There is a nice blog post on the difference between SaaS, PaaS and IaaS.

    Now that we are convinced why we should be turning to the cloud, and why specifically Azure, let's discuss the test rig.

    The Load Test Rig – Topology

    Now the moment of truth. Of course, a big part of getting value from cloud computing is identifying the most adequate workloads to take to the cloud, so I've decided to try to make a load testing rig where the agents are running on Windows Azure.

    I'll talk you through the topology:
    - User: The user kicks off the load test run from the developer workstation on premise. This passes the request to the Test Controller.
    - Test Controller: The Test Controller is on premise, connected to the same domain as the developer workstation. As soon as the Test Controller receives the request, it makes use of the Windows Azure Connect service to orchestrate the test responsibilities to all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well. This allows IP connectivity between them and, given that the firewall is properly configured, allows the Controller to send workloads to the agents. In parallel, the Controller will collect the performance data from the agents, using the traditional WMI mechanisms.
    - Test Agents: The Test Agents are on the Windows Azure public cloud. As soon as the Test Controller issues instructions, the Test Agents start executing the load tests. The HTTP requests are issued against the web server on premise, the results are captured by the Test Agents, and finally the results are passed back to the Controller.
    - Servers: The web server and DB server are hosted on premise in the datacentre. This is usually the case with business-critical applications; you probably want to manage them yourself.

    Recap and what's next?

    So, in this introduction to the series of blog posts on load testing in the cloud, I highlighted why creating a test rig in the cloud is a good idea, what advantages Windows Azure offers, and the test rig topology that I will be using. I would also like to mention that I stumbled upon a video on Azure in a nutshell - a great watch if you are new to Windows Azure.

    In the next post I intend to start setting up the load test environment and discuss pricing with respect to the test agent machine types that will be used in the test rig. Hope you enjoyed this post! If you have any recommendations on things that I should consider, or any questions or feedback, feel free to add to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora. See you in Part II.


  • Are too many assertions a code smell?

    - by Florents
    I've really fallen in love with unit testing and TDD - I am test infected. However, unit testing is used for public methods. Sometimes, though, I do have to test some assumptions/assertions in private methods too, because some of them are "dangerous" and refactoring can't help further. (I know, testing frameworks allow testing private methods.) So it became a habit of mine that (almost always) the first and the last line of a private method are both assertions. I guess this couldn't be bad (right??). However, I've noticed that I also tend to use assertions in public methods too (as in the private ones) just "to be sure". Could this be "testing duplication", since the public method assumptions are already exercised by the unit testing framework? Could someone see too many assertions as a code smell?
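    A sketch of the pattern being described, in C# (all names invented): the private method is guarded by assertions on entry and exit, while the public method validates its inputs with real exceptions, which is also the contract the unit tests pin down.

        using System;
        using System.Diagnostics;

        public class Normalizer
        {
            public double[] Normalize(double[] values)
            {
                // Public contract: enforced with exceptions and covered by unit tests.
                if (values == null) throw new ArgumentNullException("values");
                return NormalizeCore(values);
            }

            private double[] NormalizeCore(double[] values)
            {
                // First line: assert the assumption the caller must guarantee.
                Debug.Assert(values != null, "caller guarantees non-null input");

                double sum = 0;
                foreach (double v in values) sum += v;

                double[] result = new double[values.Length];
                for (int i = 0; i < values.Length; i++)
                    result[i] = (sum == 0) ? 0 : values[i] / sum;

                // Last line: assert the method's own promise before returning.
                Debug.Assert(result.Length == values.Length, "output has the input's shape");
                return result;
            }
        }

    Seen this way, asserts in the public method duplicate what the argument checks and the tests already cover, whereas the private-method asserts document assumptions no caller outside the class can violate.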


  • Implementation of instance testing in Java, C++, C#

    - by Jake
    Out of curiosity, and to understand what they entail in a program, I'd like to know how instance testing (instanceof in Java, is in C#, dynamic_cast in C++) is implemented. I've tried to Google it (particularly for Java), but the only pages that come up are tutorials on how to use the operator. How do the implementations vary across those languages? How do they treat classes with identical signatures? Also, it's been drilled into my head that using instance testing is a mark of bad design. Why exactly is this? And when is it applicable - instanceof should still be used in methods like .equals() and such, right? I was also thinking of this in the context of exception handling, again particularly in Java. When you have multiple catch statements, how does that work? Is that instance testing, or is it just resolved during compilation where each thrown exception would go?
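    On the .equals() point: that is indeed the textbook place where a runtime type test is accepted. A minimal C# sketch (the class is invented) using as, which combines the is test with the cast:

        public class Point
        {
            private readonly int _x;
            private readonly int _y;

            public Point(int x, int y) { _x = x; _y = y; }

            public override bool Equals(object obj)
            {
                // Runtime type test and cast in one step; 'as' yields null when
                // obj is null or not a Point (the analogue of a failed
                // instanceof check or dynamic_cast).
                Point other = obj as Point;
                if (other == null) return false;
                return _x == other._x && _y == other._y;
            }

            public override int GetHashCode() { return _x * 31 + _y; }
        }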

