Search Results

Search found 39118 results on 1565 pages for 'boost unit test framework'.


  • typedef boost::shared_ptr<MyJob> Ptr; or #define Ptr boost::shared_ptr

    - by danio
    I've just started working on a new codebase where each class contains a shared_ptr typedef (similar to this) like: typedef boost::shared_ptr<MyClass> Ptr; Is the only purpose to save typing boost::shared_ptr? If that is the case, why not do #define Ptr boost::shared_ptr in one common header? Then you can do: Ptr<MyClass> myClass(new MyClass); which is no more typing than MyClass::Ptr myClass(new MyClass); and saves the Ptr definition in each class.
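
    Worth noting when weighing the two: #define Ptr boost::shared_ptr blindly rewrites every occurrence of the token Ptr in every translation unit that sees the header. A hedged sketch of the usual C++03 alternative (the name Ptr is illustrative):

        #include <boost/shared_ptr.hpp>

        class MyClass {};

        // C++03 has no template typedefs, so a metafunction-style wrapper
        // is the usual stand-in for a project-wide Ptr<T>:
        template<typename T>
        struct Ptr { typedef boost::shared_ptr<T> type; };

        int main() {
            Ptr<MyClass>::type p(new MyClass);   // vs. MyClass::Ptr p(new MyClass);
            return 0;
        }

        // C++11 and later allow the direct form:
        //   template<typename T> using Ptr = boost::shared_ptr<T>;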

    Read the article

  • How to use boost::fusion::transform on heterogeneous containers?

    - by Kyle
    Boost.org's example given for fusion::transform is as follows:

        struct triple {
            typedef int result_type;
            int operator()(int t) const { return t * 3; };
        };
        // ...
        assert(transform(make_vector(1,2,3), triple()) == make_vector(3,6,9));

    Yet I'm not "getting it." The vector in their example contains elements all of the same type, but a major point of using fusion is containers of heterogeneous types. What if they had used make_vector(1, 'a', "howdy") instead? int operator()(int t) would need to become template<typename T> T& operator()(const T& t). But how would I write the result_type? template<typename T> typedef T& result_type certainly isn't valid syntax, and it wouldn't make sense even if it were, because it's not tied to the function.
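
    For reference, a hedged sketch of the usual answer (C++03, following the boost::result_of protocol that fusion::transform consults; the functor name and behaviour are illustrative): make operator() a template and report the per-call return type through a nested result specialization:

        #include <boost/fusion/include/make_vector.hpp>
        #include <boost/fusion/include/transform.hpp>
        #include <boost/type_traits/remove_const.hpp>
        #include <boost/type_traits/remove_reference.hpp>
        #include <string>

        // Illustrative polymorphic functor: doubles ints, repeats strings.
        struct twice {
            // result_of protocol: fusion asks result<twice(Arg)>::type per element,
            // where Arg may be a (const) reference, hence the stripping below.
            template<typename Sig> struct result;

            template<typename Self, typename T>
            struct result<Self(T)> {
                typedef typename boost::remove_const<
                    typename boost::remove_reference<T>::type>::type type;
            };

            template<typename T>
            T operator()(const T& t) const { return t + t; }
        };

        // usage (assumed):
        //   namespace fusion = boost::fusion;
        //   fusion::transform(fusion::make_vector(1, std::string("ho")), twice());
        //   // yields a fusion sequence holding 2 and "hoho"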

    Read the article

  • Simple and efficient distribution of C++/Boost source code (amalgamation)

    - by Arrieta
    Hello: My job mostly consists of engineering analysis, but I find myself distributing code more and more frequently among my colleagues. A big pain is that not every user is proficient in the intricacies of compiling source code, and I cannot distribute executables. I've been working with C++ using Boost, and the problem is that I cannot ask the sysadmin of every network to install the libraries. Instead, I want to distribute a single source file (or as few as possible) so that the user can just g++ source.cpp -o program. So, the question is: can you pack the Boost libraries with your code and end up with a single file? I am talking about the Boost libraries which are "headers only" or "templates only". As an inspiration, please look at the distribution of SQLite or the Lemon Parser Generator; the authors amalgamate the code into a single source file which is trivial to compile. Thank you.
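
    A hedged pointer rather than a full answer: Boost ships a utility, bcp, built for this kind of subset extraction. It does not amalgamate to one file, but it copies the named modules plus all their dependencies into a self-contained tree you can ship alongside your source (module names and target path below are illustrative):

        bcp shared_ptr regex /path/to/my/project/boost-subset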

    Read the article

  • How to parse numbers from a string with Boost methods?

    - by mosg
    Problem: Visual C++ 10 project (using MFC and Boost libraries). In one of my methods I'm reading a simple test.txt file. Here is what's inside the file (std::string): 12 asdf789, 54,19 1000 nsfewer:22!13 Then I'm reading it and I have to convert all digits to int using Boost methods only. For example, here is the list of characters which I have to treat as delimiters: ( ’ ' ) ( [ ], ( ), { } ) ( : ) ( , ) ( ! ) ( . ) ( - ) ( ? ) ( ‘ ’, “ ”, « » ) ( ; ) ( / ) And after conversion I must have an array of int values, like this one: 12,789,54,19,1000,22,13 Maybe someone has already done this job? PS. I'm new to Boost. Thanks!
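
    A hedged sketch of one Boost-only approach (assuming Boost.Regex is acceptable and linked with -lboost_regex): treating every run of digits as a token sidesteps the delimiter list entirely:

        #include <boost/lexical_cast.hpp>
        #include <boost/regex.hpp>
        #include <cstddef>
        #include <iostream>
        #include <string>
        #include <vector>

        int main() {
            std::string line = "12 asdf789, 54,19 1000 nsfewer:22!13";
            boost::regex digits("\\d+");               // any run of digits
            std::vector<int> values;
            boost::sregex_iterator it(line.begin(), line.end(), digits), end;
            for (; it != end; ++it)
                values.push_back(boost::lexical_cast<int>(it->str()));
            for (std::size_t i = 0; i < values.size(); ++i)
                std::cout << values[i] << '\n';        // 12 789 54 19 1000 22 13
            return 0;
        }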

    Read the article

  • Problem with basic program using Boost Threads in c++

    - by Eternal Learner
    I have a simple program which creates and executes a thread using Boost.Threads in C++:

        #include <boost/thread/thread.hpp>
        #include <iostream>

        void hello() {
            std::cout << "Hello, i am a thread" << std::endl;
        }

        int main() {
            boost::thread th1(&hello);
            th1.join();
        }

    The build fails at link time with an error against the th1.join() line. It says: " Multiple markers at this line - undefined reference to `boost::thread::join()' - undefined reference to `boost::thread::~thread()' "
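
    For what it's worth, "undefined reference" here is a linker complaint, not a compiler one: the header was found, but the Boost.Thread binary was never linked. On a typical GCC setup the fix is adding the library to the link line (the exact name, boost_thread vs boost_thread-mt, depends on how Boost was built):

        g++ main.cpp -o hello -lboost_thread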

    Read the article

  • thread destructors in C++0x vs boost

    - by Abruzzo Forte e Gentile
    Hi All These days I am reading the pdf Designing MT programs. It explains that the user MUST explicitly call detach() on an object of class std::thread in C++0x before that object goes out of scope. If you don't call it, std::terminate() will be called and the application will die. I usually use boost::thread for threading in C++. Correct me if I am wrong, but a boost::thread object detaches automatically when it goes out of scope. It seems to me that the boost approach follows the RAII principle and the std one doesn't. Do you know if there is some particular reason for this? Kind Regards AFG
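
    To make the RAII point concrete, a hedged sketch (the wrapper name is illustrative) of a joining guard, the usual middle ground between Boost's auto-detach and the draft C++0x terminate behaviour:

        #include <boost/thread/thread.hpp>

        // Illustrative RAII wrapper: guarantees a join before the thread
        // object goes out of scope, success or exception.
        class scoped_joiner {
            boost::thread& t_;
        public:
            explicit scoped_joiner(boost::thread& t) : t_(t) {}
            ~scoped_joiner() { if (t_.joinable()) t_.join(); }
        private:
            scoped_joiner(const scoped_joiner&);            // non-copyable
            scoped_joiner& operator=(const scoped_joiner&);
        };

        // usage:
        //   boost::thread worker(&some_function);
        //   scoped_joiner guard(worker);   // joined on scope exit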

    Read the article

  • Linking error in OMNeT++ using Boost serialization library

    - by astriffe
    I'm very new to OMNeT++ and I'd like to use the serialization library contained in the Boost framework. However, when trying to use it, I get quite a few errors such as: Description Resource Path Location Type undefined reference to `boost::archive::archive_exception::~archive_exception()' OmCCN line 36, external location: /home/alexander/UniBE/BT/simulator/boost-compiledLibs /include/boost/serialization/throw_exception.hpp C/C++ Problem . I guess the problem is that I didn't yet link the compiled library in OMNeT++. I've had a look at the makefile, but any changes there are worthless since it is generated automatically by makemake. Furthermore, trying to access the menu item 'makemake' in the project properties of the OMNeT++ IDE throws an error (The currently displayed page contains invalid values). Can anyone give me a hint about what could cause the error, or how to link the compiled library correctly? Any hints are very appreciated! cheers alex

    Read the article

  • Boost Shared Pointers and Memory Management

    - by Izza
    I began using Boost rather recently and am impressed by the functionality and APIs provided. In using boost::shared_ptr, when I check the program with Valgrind, I find a considerable number of "still reachable" memory leaks. As per the documentation of Valgrind, these are not a problem. However, since I used to use the standard C++ library only, I always made sure that any program I wrote was completely free from memory leaks. My question is: are these memory leaks something to worry about? I tried using reset(); however, it only decrements the reference count, it doesn't deallocate the memory. Can I safely ignore these, or is there any way to forcibly deallocate the memory allocated by boost::shared_ptr? Thank you.

    Read the article

  • C++ Boost bind value type {solved}

    - by aaa
    hello. I have looked in the documentation and source code but cannot figure out how to get the return value type of a boost::bind functor. I am trying to accomplish the following:

        template<typename T, size_t N, class F>
        boost::array<typename F::value_type, N> make_array(T (&input)[N], F unary) {
            boost::array<typename F::value_type, N> array;
            std::transform(input, input + N, array.begin(), unary);
            return array;
        }

    where F can be a bind functor. The above does not work because the functor does not have value_type. For that matter, is there a standard interface for unary/binary functors as far as the return value goes? solution: it should be result_type. Also defined are argument_type for unary functors and first_argument_type/second_argument_type for binary ones. Thanks
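
    A hedged sketch of the fix described in the solution (the functor boost::bind returns does expose result_type, so only the typedef needs to change; the helper names below are illustrative):

        #include <algorithm>
        #include <cstddef>
        #include <boost/array.hpp>
        #include <boost/bind.hpp>

        // F::result_type is provided both by adaptable functors and by
        // the object boost::bind returns.
        template<typename T, std::size_t N, class F>
        boost::array<typename F::result_type, N> make_array(T (&input)[N], F unary) {
            boost::array<typename F::result_type, N> out;
            std::transform(input, input + N, out.begin(), unary);
            return out;
        }

        int twice(int x) { return x + x; }

        int main() {
            int data[3] = {1, 2, 3};
            boost::array<int, 3> doubled = make_array(data, boost::bind(&twice, _1));
            return doubled[2] == 6 ? 0 : 1;
        }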

    Read the article

  • Using Unit of Work design pattern / NHibernate Sessions in an MVVM WPF

    - by Echiban
    I think I am stuck in the paralysis of analysis. Please help! I currently have a project that: uses NHibernate on SQLite; implements the Repository and Unit of Work pattern: http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/04/10/nhibernate-and-the-unit-of-work-pattern.aspx ; uses an MVVM strategy in a WPF app. My Unit of Work implementation supports one NHibernate session at a time. I thought at the time that this made sense; it hides the inner workings of the NHibernate session from the ViewModel. Now, according to Oren Eini (Ayende): http://msdn.microsoft.com/en-us/magazine/ee819139.aspx He convinces the audience that NHibernate sessions should be created/disposed when the view associated with the presenter/viewmodel is disposed. He presents reasons why you don't want one session per Windows app, nor a session created/disposed per transaction. This unfortunately poses a problem because my UI can easily have 10+ views/viewmodels present in an app. He is presenting using an MVP strategy, but does his advice translate to MVVM? Does this mean that I should scrap the Unit of Work and have the viewmodel create NHibernate sessions directly? Should a WPF app only have one working session at a time? If that is true, when should I create/dispose an NHibernate session? And I still haven't considered how NHibernate stateless sessions fit into all this! My brain is going to explode. Please help!

    Read the article

  • JUnit: splitting integration tests and unit tests

    - by jeff porter
    Hello all, I've inherited a load of JUnit tests, but these tests (apart from most not working) are a mixture of actual unit tests and integration tests (requiring external systems, db etc). So I'm trying to think of a way to actually separate them out, so that I can run the unit tests nice and quickly and the integration tests after that. The options are:

        1: Split them into separate directories.
        2: Move to JUnit 4 and annotate the classes to separate them.
        3: Use a file naming convention to tell what a class is, i.e. AdapterATest and AdapterAIntegrationTest.

    Option 3 has the issue that Eclipse has the option to "Run all tests in the selected project/package or folder", so it would make it very hard to run just the integration tests. Option 2 runs the risk that developers might start writing integration tests in unit test classes and it just gets messy. Option 1 seems like the neatest solution, but my gut says there must be a better solution out there. So that is my question: how do you lot break apart integration tests and proper unit tests?

    Read the article

  • Unit Testing in the real world

    - by Malfist
    I manage a rather large application (50k+ lines of code) by myself, and it handles some rather critical business actions. To describe the program simply, I would say it's a fancy UI with the ability to display and change data from the database; it manages around 1,000 rental units, about 3,000 tenants, and all the finances. When I make changes, because the code base is so large, I sometimes break something somewhere else. I typically test by going through the stuff I changed at the functional level (i.e. I run the program and work through the UI), but I can't test every situation. That is why I want to get started with unit testing. However, this isn't a true three-tier program with a database tier, a business tier, and a UI tier: a lot of the business logic is performed in the UI classes, and many things are done on events. To complicate things, everything is database driven, and I've not seen (so far) good suggestions on how to unit test database interactions. What would be a good way to get started with unit testing for this application? Keep in mind that I've never done unit testing or TDD before. Should I rewrite it to remove the business logic from the UI classes (a lot of work)? Or is there a better way?

    Read the article

  • uninitialized constant Test::Unit::TestResult::TestResultFailureSupport

    - by Vitaly Kushner
    I get the error in subj when I'm trying to run specs or generators in a fresh Rails project. This happens when I add shoulda to the mix. I added the following to config/environment.rb:

        config.gem 'rspec', :version => '1.2.6', :lib => false
        config.gem 'rspec-rails', :version => '1.2.6', :lib => false
        config.gem "thoughtbot-shoulda", :version => "2.10.2", :lib => 'shoulda', :source => "http://gems.github.com"

    I'm on OSX: ruby 1.8.6 (2008-08-11 patchlevel 287), gems 1.3.5, rails 2.3.4, rspec 1.2.6, shoulda 2.10.2, test-unit 2.0.3. I'm aware of this, and adding config.gem 'test-unit', :lib => 'test/unit' indeed solves the generator problem in that it no longer throws an exception, but it prints "0 tests, 0 assertions, 0 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications" at the end of the run, so I suppose it tries to run tests, which is unexpected and undesired; also the specs stop running entirely. It seems like rspec is not running at all: when running rake spec I get the test-unit output again (with 0 tests, as there are only specs, no tests defined).

    Read the article

  • How to (unit-)test data intensive PL/SQL application

    - by doom2.wad
    Our team is willing to unit-test new code written under a running project extending an existing huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is no exception. Most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it), or transform one complicated data structure into another (complicated in a different way). What is the best approach to unit-test such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" stuff) and necessary data transformations are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a testing data set seems to be impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing data structures (add a column here, remove one there). I'd be glad for any suggestion or reference to help us. Some team members are growing tired of figuring out how to even start, since our experience with unit testing does not cover PL/SQL data-intensive legacy systems (only those "from-the-book" greenfield Java projects).

    Read the article

  • Need advice for unit testing using mock objects

    - by Andree
    Hi there, I just recently read about "mocking objects" for unit testing and currently I'm having difficulties implementing this approach in my application. Please let me explain my problem. I have a User model class which depends on two data sources (a database and the Facebook web service). The controller class simply uses this User model as an interface to access data and doesn't care about where the data came from. Currently I have never unit tested the User model because it depends on an external web service. But a while ago I read about object mocking, and now I know that it is a common approach to unit test a class that depends on external resources (like in my case). Now I want to create a unit test for the User model, but I have encountered a design issue: in order for the User model to use a mocked Facebook SDK, I have to inject the mocked SDK into the User object (probably using a setter). Therefore I can't construct the Facebook SDK inside the User object; I have to construct it outside the User object and inject it. The real client of my User model is the application's controller, so I would have to construct the Facebook SDK inside the controller and inject it into the User object. Well, this is a problem because I want my controller to be as clean as possible: I want it to be ignorant about the application's data sources. I'm not good at explaining something systematically, so you're probably sleeping before reading this last paragraph. But anyway, I want to ask if anyone here has ever encountered the same problem. How do you solve it? Regards, Andree
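
    A hedged sketch of the wiring being described, in C++ for concreteness (every name here is hypothetical): the model depends on an interface, tests inject a mock, and the controller stays clean by obtaining a ready-wired model from a factory:

        #include <string>

        struct FacebookGateway {                      // interface the model depends on
            virtual std::string fetchName(int userId) = 0;
            virtual ~FacebookGateway() {}
        };

        class User {
            FacebookGateway* fb_;                     // injected, never constructed here
        public:
            explicit User(FacebookGateway* fb) : fb_(fb) {}
            std::string displayName(int id) { return fb_->fetchName(id); }
        };

        // The unit test supplies a mock; no network involved.
        struct MockGateway : FacebookGateway {
            std::string fetchName(int) { return "stubbed name"; }
        };

        // The controller never touches the SDK: it asks a factory for a
        // ready-wired User, so only the factory knows the real data sources.
        //   User user = UserFactory::create();   // hypothetical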

    Read the article

  • How do you unit-test a method with complex input-output

    - by Dan
    When you have a simple method, for example sum(int x, int y), it is easy to write unit tests. You can check that the method correctly sums two sample integers, for example 2 + 3 should return 5, then check the same for some "extraordinary" numbers, for example negative values and zero. Each of these should be a separate unit test, as a single unit test should contain a single assert. What do you do when you have complex input-output? Take an XML parser for example. You can have a single method parse(String xml) that receives a String and returns a Dom object. You can write separate tests that check that a certain text node is parsed correctly, that attributes are parsed OK, that a child node belongs to its parent, etc. For all these I can write a simple input, for example <root><child/></root>, that will be used to check parent-child relationships between nodes, and so on for the rest of the expectations. Now, take a look at the following XML:

        <root>
          <child1 attribute11="attribute 11 value" attribute12="attribute 12 value">Text 1</child1>
          <child2 attribute21="attribute 21 value" attribute22="attribute 22 value">Text 2</child2>
        </root>

    In order to check that the method worked correctly, I need to check many complex conditions, like that attribute11 and attribute12 belong to child1, that Text 1 belongs to child1, etc. I do not want to put more than one assert in my unit test. How can I accomplish that?
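
    One commonly suggested shape for this, sketched here with Boost.Test (the parser stand-in is hypothetical): canonicalize the parsed result back to a string so the whole expected structure collapses into a single assert:

        #define BOOST_TEST_MODULE parser_sketch
        #include <boost/test/included/unit_test.hpp>
        #include <string>

        // Hypothetical stand-in for the parser under test: parse, then
        // re-serialize to a canonical string, so nesting, attributes and
        // text are all covered by one comparison.
        std::string parse_and_canonicalize(const std::string& xml) {
            return xml;  // placeholder; a real parser would normalize here
        }

        BOOST_AUTO_TEST_CASE(children_attributes_and_text_survive_parsing) {
            const std::string input =
                "<root><child1 attribute11=\"attribute 11 value\">Text 1</child1></root>";
            // One logical assert covering the whole expected structure.
            BOOST_CHECK_EQUAL(parse_and_canonicalize(input), input);
        }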

    Read the article

  • Get name of currently executing test in JUnit 4

    - by Dave Ray
    In JUnit 3, I could get the name of the currently running test like this: public class MyTest extends TestCase { public void testSomething() { System.out.println("Current test is " + getName()); ... } } which would print "Current test is testSomething". Is there any out-of-the-box or simple way to do this in JUnit 4? Background: Obviously, I don't want to just print the name of the test. I want to load test-specific data that is stored in a resource with the same name as the test. You know, convention over configuration and all that. Thanks!

    Read the article

  • VisualAssert Testing in C++, Loading a test fixture.

    - by C_Bevan
    Good day, I am learning testing in Visual Studio C++ and I have several tutorials which I have followed. I am trying to load a test fixture. I have tried to put the test .cpp file in many different places, but it still does not get picked up when I click on "Run Tests" or "Run Tests without debugging". In the tutorials I found, the fixtures seemed to load into the Test Explorer automatically, but in mine there is an icon with an X next to (PROJECTNAME).EXE, and when I hover over it I get "the process exited without registering with the agent... this is due to the model not containing any test fixtures...". How can I load my tests into the Test Explorer, or register them with my project? I've tried right-clicking and "Add Fixture...", but that just starts a new test file and I have the same problem. Anybody know how I can solve this issue?

    Read the article

  • Cocoa framework development: sharing between projects

    - by e.James
    I am currently developing a handful of similar Cocoa desktop apps. In an effort to share code between them, I have identified a set of core classes and functions that can be common across all of these applications. I would like to bundle this common code into a framework which all of my current applications (and any future ones) can link against. Now, here's the hard part: I'm going to be developing this framework as I go, so I need each of my desktop apps to have a reference to it, but I want to be able to edit the framework source code from within each of the app projects and have the framework automatically rebuilt as required. For example, let's say I have the Xcode project for DesktopAppNumberOne open, and I decide that one of my framework classes needs to be changed. I would like to: Open and edit the source file for that framework class without having to open the framework project in Xcode. Hit "build" on DesktopAppNumberOne, and see the framework rebuilt first (because one of its sources has changed), then see parts of DesktopAppNumberOne rebuilt (because one of the frameworks it links against has changed). I can see how to do this with only one app and one framework, but I'm having trouble figuring out how to do it with multiple apps that share a single framework. Has anyone had success with this approach? Am I perhaps going about this the wrong way? Any help would be appreciated.

    Read the article

  • How to build a test group in watir?

    - by karlthorwald
    I have some single watir.rb scripts that use IE and are written in the standard watir way. How do I create a test group that combines them? I want all of them to be executed by executing the main script. Is it possible to auto-include single test files into a test group by subdir? Is it possible to enumerate the files that should be included in the test group? Can I cascade (include other watir test groups in a watir test group)?

    Read the article

  • rake test fails

    - by Pavel K.
    I have a model (simplified):

        class Myfile < ActiveRecord::Base
          validates_attachment_size :body, :less_than => AdminOptions.first.max_file_size.megabytes
        end

    max_file_size is defined in the AdminOptions fixture, but when I try to run rake test, I get:

        /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/whiny_nil.rb:52:in `method_missing': undefined method `max_file_size' for nil:NilClass (NoMethodError)
        from /myapp/app/models/myfile.rb:1

    If I run ruby test/unit/myfile_test.rb, I get the same error. If I run RAILS_ENV=test rake db:load:fixtures and then ruby test/unit/myfile_test.rb, the tests execute properly. If I try RAILS_ENV=test rake db:load:fixtures followed by rake test, it fails with the same error. Does anyone know how to fix that?

    Read the article

  • How to prevent unit test from using util from test project?

    - by calucier
    I am using Eclipse and I have two projects, project1 and project1-test. Below is the example layout of my projects:

        project1
          src
            my.package
              MyClass.java
            my.package.util
              util.java

        project1-test
          src
            my.package
              MyClassTest.java
            my.package.util
              util.java

    MyClass.java makes a static call to the util.java in project1. MyClassTest.java tests MyClass.java. When the test class runs, it fails and complains that MyClass.java is referencing a method in util.java that doesn't exist. Under project1 the method being referenced exists in util.java, but under project1-test it doesn't. When I run MyClassTest.java, the util.java being referenced from MyClass.java is the one from project1-test, when it should be the one from project1. Is there some way to make MyClass.java not reference util.java from project1-test when running MyClassTest.java?

    Read the article

  • Test Driven Development (TDD) in Visual Studio 2010- Microsoft Mondays

    - by Hosam Kamel
    On November 14th I will be presenting at Microsoft Mondays a session about Test Driven Development (TDD) in Visual Studio 2010. Microsoft Mondays is a program consisting of a series of webcasts showcasing various Microsoft products and technologies. Each Monday we discuss a particular topic pertaining to development, infrastructure, Office tools, ERP, client/server operating systems, etc. The webcast will be broadcast via Lync and can be viewed from a web client. The idea behind the "Microsoft Mondays" program is to help you become more proficient in the products and technologies that you use and help you utilize their full potential.   Test Driven Development in Visual Studio 2010 Level – 300 (Intermediate – Advanced) Test Driven Development (TDD), also frequently referred to as Test Driven Design, is a development methodology where developers create software by first writing a unit test, then writing the actual system code to make the unit test pass. The unit test can be viewed as a small specification of how the system should behave; writing it first helps the developer focus on writing only enough code to make the test pass, thereby helping ensure a tight, lightweight system specifically focused on meeting the documented requirements. TDD follows a cadence of "Red, Green, Refactor." Red refers to the visual display of a failing test – the test you write first will not pass because you have not yet written any code for it. Green refers to the step of writing just enough code in your system to make your unit test pass – your test runner's UI will now show that test passing with a green icon. Refactor refers to the step of refactoring your code so it is tighter, cleaner, and more flexible. This cycle is repeated constantly throughout a TDD developer's workday. Date: November 14, 2011 Time: 10:00 a.m. – 11:00 a.m. (GMT+3)  http://www.eventbrite.com/event/2437620990/efbnen?ebtv=F   See you there! Hosam Kamel Originally posted at
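
    To make the cadence concrete, a hedged sketch in an xUnit-style framework (Boost.Test here purely for illustration; the session itself uses Visual Studio 2010's test tooling):

        #define BOOST_TEST_MODULE tdd_cadence
        #include <boost/test/included/unit_test.hpp>

        // Red: the test below is written first, before sum() exists,
        //      so the build (and therefore the test) fails.
        // Green: the simplest sum() that passes is then added.
        // Refactor: with the test as a safety net, clean up freely.
        int sum(int x, int y) { return x + y; }

        BOOST_AUTO_TEST_CASE(sum_adds_two_integers) {
            BOOST_CHECK_EQUAL(sum(2, 3), 5);
        }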

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 2

    - by Tarun Arora
    Welcome back. In part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing an application is important, the test tools available in Visual Studio Ultimate 2010, and various test rig topologies. In this blog post I'll get into the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application.

    Tools => Options => Test Tools

    Have you visited the treasures of the Visual Studio menu bar Tools => Options => Test Tools lately? The options to enable/disable prompts on creating, editing, deleting or running manual/automated tests can be controlled from here. The default test project language and the default test types created on new test project creation can be selected/unselected from here. Ever wondered how to change the default limit of 25 test results? This can again be changed from here. If you record a lot of web tests and wish for the web test recorder to start with "that" URL populated, well, this again can be specified from here. If you haven't so far, I would urge you to spend 2 minutes in the test tools options.

    Test Menu => Ready Steady Test Action!

    The test tools are under the Test menu in Visual Studio. Apart from being able to create a new test and test list, you can also load an existing vsmdi file, and you can manage your test controllers from here. A solution can have one or more test settings files, but there can only be one active test settings file at any time; again, this selection can be done from here. You can open the various test windows from under the Windows option in the Test menu. If you open the Test View window you will see that you have the option to group the tests by work items, project, test type, etc. You can set these properties by right-clicking a test in the test list and choosing Properties from the context menu.

    So, what is a vsmdi file? vsmdi stands for Visual Studio Test Metadata File. Placed under Solution Items, this file keeps track of the list of unit tests in your solution. If you open the vsmdi file as an xml file you will see a series of TestLink entries nested within the TestList tags, along with the RunConfiguration tag. When you run tests in Visual Studio, the IDE looks at the vsmdi file to see which tests need to be run. You also have the option of using the vsmdi file in your team builds to specify which tests need to run as part of the build. Refer here for a walkthrough from a fellow blogger on how to use the vsmdi file in team builds.

    Web Performance Test – The Truth!

    In Visual Studio 2010 "Web Tests" have been renamed to "Web Performance Tests". Apart from the renaming, this test type has seen several improvements in Visual Studio 2010. I am very active on the MSDN Visual Studio And Load Testing forum, and a frequent question from many users is "Do web tests support pages that run JavaScript?" I will start with a little bit of background before answering this question. Web performance tests operate at the HTTP layer, but why? To enable you to generate high loads with a relatively low amount of hardware, web performance tests are driven at the protocol layer rather than by instantiating a browser. The most common source of confusion is that users do not realize web performance tests work at the HTTP layer. The tool adds to that misconception.
    After all, you record in IE, and when running a web test you can select which browser to use, and then the result viewer shows the results in a browser window. So that means the tests run through the browser, right? NO! The web test engine works at the HTTP layer and does not instantiate a browser. What does that mean? In the diagram below, you can see there are no browsers running when the engine is sending and receiving requests.

    Does that mean I can't test pages that use JavaScript? The best example of JavaScript generating HTTP traffic is AJAX calls. The most common examples of browser plugins are Silverlight or Flash. The web test recorder will record HTTP traffic from AJAX calls and from most (but not all) browser plugins. This means you will still be able to performance test pages that use JavaScript or a plugin and play back the results, but the playback engine will not show the JavaScript or plugin results in the 'browser control'. If you want to test the page behaviour resulting from the JavaScript or plugin, consider using Coded UI Tests.

    This page looks like it failed, when in fact it succeeded! Looking closely at the response, and at subsequent requests, it is clear the operation succeeded. As stated above, the reason the browser control displays this message is that JavaScript has been disabled in this control. So, to reiterate, the web performance test engine: - Sends and receives data at the HTTP layer. - Does NOT run a browser. - Does NOT run JavaScript. - Does NOT host ActiveX controls or plugins. There is a great series of blog posts from Ed Glas; I would highly recommend his blog to anyone performing load/performance testing through Visual Studio.

    Demo – Web Performance Test

    [Demo] - Visual Studio Ultimate 2010: Test Settings and Configuration
    [Demo] - Visual Studio Ultimate 2010: Web Performance Test

    In this short video I try to answer the following questions: Why is performance testing important? How does Visual Studio help you performance test your applications? How do I record a web performance test? How do I make a web performance test data driven, transaction driven, loop driven, convert it to code, and add validations? What are the best practices for recording web performance tests?

    I have a web performance test, what next?

    Creating the web performance test was the first step towards load testing your application. Now that we have the base test we can test the page behaviour when N users access the page. Have you ever had the head of business call you and mention that the marketing team has done a fantastic job and is expecting increased traffic on the web site: can the website survive the weekend with that additional load? This is the perfect opportunity to capacity test your application to see how your website holds up under various levels of load; you can work the results backwards to see how much hardware you may need to scale up your application to survive the weekend. Apart from that, it is always a good idea to have some benchmarks around how the application performs under light load for a short duration and under heavy load for a long duration, and to soak test the application by running a constant load for a week or two to record the effects of constant load over really long durations; this is a great way of identifying how your application handles the default IIS application pool recycle, which by default is configured to occur once every 29 hours. These benchmarks will act as the perfect yardstick to measure performance gains when you start making improvements.
    BUT there are some best practices! => Goal Based Load Testing Approach

    Since the subject is vast and there are a lot of things to measure and analyse, it is very easy to get distracted from the real goal! You can optimize your application once you know where the pain points are. There is no point performing a load test of 5,000 users if your intranet application will only ever have 100 simultaneous users; it is important to keep focussed on the real goals of the project. So the idea is to have a user story around your load testing scenarios and to test realistically. It is an iterative process: refine your objectives, identify the key scenarios and the expected workload, decide which key metrics you want to report, record the web performance tests, simulate load and analyse the results.

    Is your application already deployed in production? This is great! You can analyse the IIS logs to understand the user behaviour… But what are IIS logs? The IIS logs allow you to record events for each application and web site on the web server. You can create separate logs for each of your applications and web sites. Logging information in IIS goes beyond the scope of the event logging or performance monitoring features provided by Windows. The IIS logs can include information such as who has visited your site, what the visitor viewed, and when the information was last viewed. You can also use the IIS logs to identify any attempts to gain unauthorized access to your web server.

    How to configure IIS logs? For those ninjas who already have IIS logs configured (by the way, it's on by default) and need a way to analyse them, there is the Windows IIS utility Log Parser. Log Parser is a very powerful tool that provides a generic SQL-like language on top of many types of data like IIS logs, Event Viewer entries, XML files, CSV files, the file system and others, and it allows you to export the results of queries to many output formats such as CSV, XML, SQL Server, charts and others; it works well with IIS 5, 6, 7 and 7.5. Frequently used Log Parser queries.

    Demo – Load Test

    [Demo] - Visual Studio Ultimate 2010: Load Testing

    In this short video I try to answer the following questions: What are the types of performance testing? How do I perform goal-driven load testing, analyse the test run results and generate a report?

    Recap

    A quick recap of what we have covered so far. Thank you for taking the time out and reading this blog post; in part III of this blog series I'll be getting into the details of test result analysis, test result drill-through, test report generation, test run comparison, and the ASP.NET Profiler. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc., please leave a comment. See you in Part III
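
    The "frequently used Log Parser queries" reference above gives many examples; as a hedged taste of the SQL-like syntax (standard IISW3C field names; adjust the log path to your server), the ten most requested URLs can be pulled with:

        LogParser -i:IISW3C "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits FROM ex*.log GROUP BY cs-uri-stem ORDER BY Hits DESC"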

    Read the article
