Search Results

Search found 26297 results on 1052 pages for 'unit test'.

  • Returning a mock object from a mock object

    - by Songo
    I'm trying to return an object when mocking a parser class. This is the test code, using PHPUnit 3.7:

        //set up the result object that I want returned from the call to the parse method
        $parserResult = new ParserResult();
        $parserResult->setSegment('some string');

        //set up the stub Parser object
        $stubParser = $this->getMock('Parser');
        $stubParser->expects($this->any())
                   ->method('parse')
                   ->will($this->returnValue($parserResult));

        //inject the stub into my client class
        $fileWriter = new FileWriter($stubParser);
        $output = $fileWriter->writeStringToFile();

    Inside my writeStringToFile() method I'm using $parserResult like this:

        public function writeStringToFile() {
            //Some code...
            $parserResult = $this->parser->parse();
            $segment = $parserResult->getSegment(); //that's why I set the segment in the test
        }

    Should I mock ParserResult in the first place, so that the mock returns a mock? Is it good design for mocks to return mocks? Is there a better approach to all of this?
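
    For comparison, the same stub-returns-a-prepared-object arrangement in .NET looks like the sketch below, using Moq (illustrative only; IParser, ParserResult and FileWriter are hypothetical stand-ins for the types above):

        using Moq;

        public interface IParser { ParserResult Parse(); }

        public class ParserResult
        {
            public string Segment { get; set; }
        }

        public class FileWriter
        {
            private readonly IParser parser;
            public FileWriter(IParser parser) { this.parser = parser; }

            public string WriteStringToFile()
            {
                // The client only ever touches the result object the parser hands back.
                return parser.Parse().Segment;
            }
        }

        public class FileWriterTests
        {
            public void WriteStringToFile_Uses_Parsed_Segment()
            {
                // The collaborator is stubbed; the result object is a real instance.
                var stubParser = new Mock<IParser>();
                stubParser.Setup(p => p.Parse())
                          .Returns(new ParserResult { Segment = "some string" });

                var output = new FileWriter(stubParser.Object).WriteStringToFile();
                // output == "some string"
            }
        }

    Note that the result object here is a real instance rather than a second mock; when the returned type is a plain data holder, stubbing only the collaborator that produces it is usually enough.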

  • Generating Report for NUnit

    - by thangchung
    All source code for this post can be found on my github. A while ago someone asked me how to generate reports from NUnit test results. In fact, I had never done it: in my own little programming world I only cared about the test results themselves - red, green, refactor - and that was it. The question caught me off guard. I knew I could use NCover to generate reports, but NCover's reports are too simple; they give no detail about the number of test cases, test methods, and so on. So I started looking into building a proper report for NUnit.

    I was lucky to find an open source project here. Its author calls it NUnit2Report, but it has one disadvantage: it only runs on .NET 1.0, which is far too old compared to the current version 4.0. I downloaded it to try it out, but could not get it to run, so I opened its source code and found that it uses XSLT to convert NUnit's XML result output to HTML. Nothing really special - I already knew that NUnit writes an XML output file after a run, and the author simply transforms that file into HTML with XSLT. I decided to port it to .NET 4.0 rather than code it from scratch. The conversion took some time, but in the end I got what I wanted. Thanks to Gilles for this OSS; I will mail him to thank him for the effort of putting it out there.

    Now I will show you how to do it. I used NAnt for the automated build, NUnit for running the test cases, and the Selenium testing framework. After writing three test cases using Selenium, I ran NUnit and got one failure and two passes. The bin directory of the project then contains the NUnit output file. I created a build file, plus a bat file for easy running (PowerShell works here too). Double-clicking the bat file generates the report; finally, open the index.html file in the output folder to view it. As you can see, the test cases are divided up very clearly, which meets my requirements. Once again, my thanks for NUnit2Report go to Gilles. You can contact him via the mail address [email protected] or the website http://nunit2report.sourceforge.net. It really is useful to anyone who has to deliver test reports to QA. Hopefully this post will help anyone interested in generating reports for NUnit.
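
    The core of what NUnit2Report does can be reproduced in a few lines. Here is a minimal sketch, assuming NUnit's default TestResult.xml output; the stylesheet name NUnitReport.xslt is my placeholder for whichever XSLT file ships with NUnit2Report:

        using System.Xml.Xsl;

        class NUnitReportGenerator
        {
            static void Main()
            {
                // Compile the stylesheet that maps NUnit's XML result schema to HTML.
                var xslt = new XslCompiledTransform();
                xslt.Load("NUnitReport.xslt");

                // TestResult.xml is NUnit's default result file; write the report next to it.
                xslt.Transform("TestResult.xml", "index.html");
            }
        }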

  • Returning JsonResult From ASP.NET MVC 2.0 Controller and Unit Testing

    This post will show how to return a simple Json result from an ASP.NET MVC 2.0 web project, and how to test that result inside a unit test by essentially picking apart the Json, just...
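
    A minimal sketch of the pattern the post describes (the names below are illustrative, not the article's actual code): an MVC 2 action that returns Json, and an NUnit test that pulls values back out of JsonResult.Data with reflection, since the anonymous type isn't statically visible to the test project:

        using System.Web.Mvc;
        using NUnit.Framework;

        public class PersonController : Controller
        {
            public JsonResult GetPerson()
            {
                // AllowGet is required in MVC 2 when returning Json to a GET request.
                return Json(new { Name = "Ada", Age = 36 }, JsonRequestBehavior.AllowGet);
            }
        }

        [TestFixture]
        public class PersonControllerTests
        {
            [Test]
            public void GetPerson_Returns_Expected_Name()
            {
                var result = new PersonController().GetPerson();

                // The Data property holds the anonymous object; read it via reflection.
                var name = result.Data.GetType()
                                      .GetProperty("Name")
                                      .GetValue(result.Data, null);

                Assert.AreEqual("Ada", name);
            }
        }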

  • migrating product and team from startup race to quality development

    - by thevikas
    This is year three and the product is selling well enough. Now we need to introduce good software development practices. The goals are to monitor incoming bug reports and reduce them, to keep delivering a never-ending stream of features, and to get ready to scale 10x. The phrases "test-driven development" and "continuous integration" are not even understood by the team, because they all spent the first two years in the product race. The tech team size is 5. My questions: how do I sell TDD, unit testing, coding standards and documentation to the team and management, in terms of economics? How do I train the team to do more than just feature coding and start writing unit tests alongside - which looks like more work, and therefore needs more time? And how do we plan for writing unit tests for all the backlog production code?

  • Area of testing

    I'm trying to understand which parts of my code I should test. Below is an example of my code, simplified just to convey the idea. Depending on some parameters I attach one currency or another to an "Event" and return its serialization from the controller. Which parts of this should I test: just the final serialization, only "Event", or every method - getJson, getRows, fillCurrency and setCurrency?

        class Controller {
            public function getJson() {
                $rows = $this->eventManager->getRows();
                return new JsonResponse($rows);
            }
        }

        class EventManager {
            public function getRows() {
                //some code here
                if ($parameter == true) {
                    $this->fillCurrency($event, $currency);
                }
            }

            public function fillCurrency($event, $currency) {
                //some code here
                if ($parameters == true) {
                    $event->setCurrency($currency);
                }
            }
        }

        class Event {
            public function setCurrency($currency) {
                $this->updatedAt = new DateTime();
                $this->currency = $currency;
            }
        }

  • How to organize unit/integration test in BDD

    - by whatf
    So, after a lot of reading, I have finally understood that the difference between BDD and TDD is the difference between the T and the B. Coming from a basic TDD background, what I was used to was: first write unit tests for the database models; then write tests for the views (starting integration tests at this point, alongside the unit tests); then write more integration tests for the UI. What would be the correct way to approach BDD? Say I have a simple blog application. Given: when a user logs in, he should be shown a list of all his posts. But for this I need a model for users and another for blog posts. So how do we go about writing the tests? When do we create fixtures? When do we write the integration (Selenium) tests?

  • MbUnit (Gallio) and Visual Studio .NET Tests Not Completing or Debugging

    - by Davy
    Hi, I'm using Gallio/MbUnit 3.1 with ReSharper and Visual Studio 2008. Everything is working well except this type of test:

        [Test]
        [Row("test@badEmail@_test.com")]
        [Row("test@badEmail@_test.")]
        public void IsValidEmail_Invalid_Emails_Should_Return_False(string invalidEmail)
        {
            Assert.IsFalse(AppHelper.IsValidEmail(invalidEmail),
                           "Email validation failed for " + invalidEmail);
        }

    The test doesn't complete or go into debug mode only when I pass in a parameter, e.g. 'string invalidEmail'. If I remove that parameter it seems to work normally. It will run the test if I have:

        [Test]
        public void IsValidEmail_Invalid_Emails_Should_Return_False()
        {
            var invalidEmail = "test@badEmail@_test.com";
            Assert.IsFalse(AppHelper.IsValidEmail(invalidEmail),
                           "Email validation failed for " + invalidEmail);
        }

    I appreciate that there may be better ways to achieve this test, but I'm trying to work my way through a book and this is how it explains things. Any help is appreciated. Davy
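
    One way to narrow this down, sketched on the assumption that Gallio's [Row] data binding is the part that hangs: keep the assertion in a shared helper and call it from plain, parameterless tests, so the same cases run without the parameterized runner being involved (AppHelper.IsValidEmail is the method under test from the question):

        [Test]
        public void IsValidEmail_Rejects_Second_At_Sign()
        {
            AssertEmailIsInvalid("test@badEmail@_test.com");
        }

        [Test]
        public void IsValidEmail_Rejects_Trailing_Dot()
        {
            AssertEmailIsInvalid("test@badEmail@_test.");
        }

        // Shared assertion used by both non-parameterized tests above.
        private static void AssertEmailIsInvalid(string invalidEmail)
        {
            Assert.IsFalse(AppHelper.IsValidEmail(invalidEmail),
                           "Email validation failed for " + invalidEmail);
        }

    If these pass while the [Row] version still hangs, the problem lies in the runner integration rather than in the test or the validator.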

  • Custom validation works in development but not in unit test

    - by Geolev
    I want to validate that at least one of two columns has a value in my model. I found somewhere on the web that I could create a custom validator as follows:

        # Check for the presence of one field or another:
        #   validates_presence_of_at_least_one_field :last_name, :company_name
        #     - requires either last_name or company_name to be filled in
        # Also works with arrays:
        #   validates_presence_of_at_least_one_field :email, [:name, :address, :city, :state]
        #     - requires email or a mailing-type address
        module ActiveRecord
          module Validations
            module ClassMethods
              def validates_presence_of_at_least_one_field(*attr_names)
                msg = attr_names.collect { |a| a.is_a?(Array) ? " ( #{a.join(", ")} ) " : a.to_s }.join(", ") +
                      "can't all be blank. At least one field must be filled in."
                configuration = { :on => :save, :message => msg }
                configuration.update(attr_names.extract_options!)

                send(validation_method(configuration[:on]), configuration) do |record|
                  found = false
                  attr_names.each do |a|
                    a = [a] unless a.is_a?(Array)
                    found = true
                    a.each do |attr|
                      value = record.respond_to?(attr.to_s) ? record.send(attr.to_s) : record[attr.to_s]
                      found = !value.blank?
                    end
                    break if found
                  end
                  record.errors.add_to_base(configuration[:message]) unless found
                end
              end
            end
          end
        end

    I put this in a file called lib/acs_validator.rb in my project and added "require 'acs_validator'" to my environment.rb. This does exactly what I want, and it works perfectly when I test it manually in the development environment, but when I write a unit test it breaks my test environment. This is my unit test:

        require 'test_helper'

        class CustomerTest < ActiveSupport::TestCase
          test "the truth" do
            assert true
          end

          test "customer not valid" do
            puts "customer not valid"
            customer = Customer.new
            assert !customer.valid?
            assert customer.errors.invalid?(:subdomain)
            assert_equal "Company Name and Last Name can't both be blank.",
                         customer.errors.on(:contact_lname)
          end
        end

    This is my model:

        class Customer < ActiveRecord::Base
          validates_presence_of :subdomain
          validates_presence_of_at_least_one_field :customer_company_name, :contact_lname,
            :message => "Company Name and Last Name can't both be blank."
          has_one :service_plan
        end

    When I run the unit test, I get the following error:

        DEPRECATION WARNING: Rake tasks in vendor/plugins/admin_data/tasks, vendor/plugins/admin_data/tasks, and vendor/plugins/admin_data/tasks are deprecated. Use lib/tasks instead. (called from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/tasks/rails.rb:10)
        Couldn't drop acs_test : #<ActiveRecord::StatementInvalid: PGError: ERROR: database "acs_test" is being accessed by other users
        DETAIL: There are 1 other session(s) using the database.
        : DROP DATABASE IF EXISTS "acs_test">
        acs_test already exists
        NOTICE: CREATE TABLE will create implicit sequence "customers_id_seq" for serial column "customers.id"
        NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "customers_pkey" for table "customers"
        NOTICE: CREATE TABLE will create implicit sequence "service_plans_id_seq" for serial column "service_plans.id"
        NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "service_plans_pkey" for table "service_plans"
        /usr/bin/ruby1.8 -I"lib:test" "/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/unit/customer_test.rb" "test/unit/service_plan_test.rb" "test/unit/helpers/dashboard_helper_test.rb" "test/unit/helpers/customers_helper_test.rb" "test/unit/helpers/service_plans_helper_test.rb"
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:1994:in `method_missing_without_paginate': undefined method `validates_presence_of_at_least_one_field' for #<Class:0xb7076bd0> (NoMethodError)
            from /usr/lib/ruby/gems/1.8/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:170:in `method_missing'
            from /home/george/projects/advancedcomfortcs/app/models/customer.rb:3
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:158:in `require'
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:265:in `require_or_load'
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:224:in `depend_on'
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:136:in `require_dependency'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:414:in `load_application_classes'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:413:in `each'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:413:in `load_application_classes'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:411:in `each'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:411:in `load_application_classes'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:197:in `process'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:113:in `send'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:113:in `run'
            from /home/george/projects/advancedcomfortcs/config/environment.rb:9
            from ./test/test_helper.rb:2:in `require'
            from ./test/test_helper.rb:2
            from ./test/unit/customer_test.rb:1:in `require'
            from ./test/unit/customer_test.rb:1
            from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5:in `load'
            from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5
            from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5:in `each'
            from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5
        rake aborted!
        Command failed with status (1): [/usr/bin/ruby1.8 -I"lib:test" "/usr/lib/ru...]
        (See full trace by running task with --trace)

    It seems to have stepped on will_paginate somehow. Does anyone have any suggestions? Is there another way to do the validation I'm attempting? Thanks, George

  • How has test first development changed the way you write software?

    - by Toran Billups
    I've started to find that I can't write software without writing a test first. I ask this subjective question because I want to hear what others in the community think about my reasons for not going back to writing production code without a test first:
    - If you can't write a test for something, you don't understand it.
    - Without a regression test, you can't clean up the code.
    - You are going to test it anyway; spend the time to do it right.
    - Evolutionary design is possible without fear.
    - You actually write less code yourself.
    - Fast feedback cycles save time and money.
    - Job security (fewer bugs make your boss happy).
    - It actually makes my work more enjoyable.

  • Unit tests logged (or run) multiple times

    - by HeavyWave
    I have this simple test:

        protected readonly ILog logger =
            LogManager.GetLogger(MethodBase.GetCurrentMethod().ReflectedType);
        private static int count = 0;

        [Test]
        public void TestConfiguredSuccessfully()
        {
            logger.Debug("in test method" + count++);
        }

    log4net is set up like this:

        [TestFixtureSetUp]
        public void SetUp()
        {
            log4net.Config.BasicConfigurator.Configure();
        }

    The problem is that if I run this test in NUnit once, I get the output I expect:

        1742 [TestRunnerThread] DEBUG Tests.TestSomthing (null) - in test method0

    But if I press RUN in nunit.exe again (or more times), I get the following:

        1742 [TestRunnerThread] DEBUG Tests.TestSomthing (null) - in test method1
        1742 [TestRunnerThread] DEBUG Tests.TestSomthing (null) - in test method1

    And so on (if I run it 5 times, I'll get 5 repeated lines). Now, if I run the same test alone from ReSharper, the output is fine and does not repeat. However, if I run this test alongside 2 other tests in the same class, the output is repeated three times. I am totally confused. What the hell is going on here?
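
    The symptom matches BasicConfigurator.Configure() appending a fresh ConsoleAppender to the repository on every fixture set-up within the same runner process, so each log call is written once per accumulated appender. A sketch of a guard, on that assumption (not a confirmed diagnosis):

        using System.Reflection;
        using log4net;
        using log4net.Config;
        using NUnit.Framework;

        [TestFixture]
        public class LoggingTests
        {
            private static readonly ILog logger =
                LogManager.GetLogger(MethodBase.GetCurrentMethod().ReflectedType);

            [TestFixtureSetUp]
            public void SetUp()
            {
                // BasicConfigurator.Configure() adds a new appender each time it is
                // called in the same process; only configure once per repository.
                if (!LogManager.GetRepository().Configured)
                {
                    BasicConfigurator.Configure();
                }
            }

            [Test]
            public void TestConfiguredSuccessfully()
            {
                logger.Debug("in test method");
            }
        }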

  • AgUnit - Silverlight unit testing with ReSharper

    - by koevoeter
    If you’re a ReSharper user and Silverlight 4 developer, you’ll probably like this add-in, which two of my co-workers created. It lets you run your Silverlight unit tests inside the Visual Studio IDE. You can get the add-in from CodePlex: http://agunit.codeplex.com/ (requires ReSharper 5 and Silverlight 4).

  • Should we exclude code from the code coverage analysis?

    - by romaintaz
    I'm working on several applications, mainly legacy ones. Currently their code coverage is quite low: generally between 10 and 50%. For several weeks we have had recurrent discussions with the Bangalore teams (the main part of development is done offshore in India) regarding the exclusion of packages or classes from Cobertura (our code coverage tool, even though we are currently migrating to JaCoCo). Their point of view is the following: since they will not write any unit tests on some layers of the application (1), those layers should simply be excluded from the code coverage measure. In other words, they want to limit the code coverage measure to the code that is tested or should be tested. Also, when they work on unit tests for a complex class, the benefit - purely in terms of code coverage - goes unnoticed in a large application; reducing the scope of the measure would make that kind of effort more visible. The appeal of this approach is that the coverage measure would indicate the current status of only the part of the application we consider testable. However, my view is that we would somehow be faking the figures: this solution is an easy way to reach a higher level of code coverage without any effort. Another point that bothers me is this: if we show a coverage increase from one week to the next, how can we tell whether the good news is due to the good work of the developers or simply due to new exclusions? In addition, we would no longer know exactly what is included in the code coverage measure. For example, if I have an application of 10,000 lines of code with 40% code coverage, I can deduce that 40% of my code base is tested (2). But what happens once we set exclusions? If the coverage is now 60%, what exactly can I deduce? That 60% of my "important" code base is tested? How can I interpret that figure without knowing exactly what was excluded? As far as I am concerned, I prefer to keep the "real" code coverage value, even if we can't be cheerful about it. And thanks to Sonar we can easily navigate our code base and see the coverage of any individual module, package or class, even though the global figure stays low. What is your opinion on this subject? How do you handle this on your projects? Thanks. (1) These layers are generally related to the UI, Java beans, etc. (2) I know that's not true. In fact, it only means that 40% of my code base is executed when the tests run, not that it is correctly verified.

  • Link between tests and user stories

    - by Sardathrion
    I have not seen these links explicitly stated in the Agile literature I have read, so I was wondering whether this approach is correct. Let a story be defined as "In order to [RESULT], [ROLE] needs to [ACTION]". Then: RESULT generates system tests; ROLE generates acceptance tests; ACTION generates component and unit tests. The definitions are the ones used in xUnit Patterns, which to be fair are fairly standard. Is this a correct interpretation, or did I misunderstand something?

  • When should and shouldn't you use the 'new' keyword?

    - by skizeey
    I watched a Google Tech Talk presentation on unit testing, given by Misko Hevery, in which he said to avoid using the new keyword in business logic code. I wrote a program, and I did end up using the new keyword here and there, but mostly for instantiating objects that hold data (i.e., they didn't have any functions or methods). I'm wondering: did I do something wrong by using the new keyword in my program? And where can we break that 'rule'?
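
    A small sketch of the distinction as I understand it (my own illustration, not code from the talk): newing up plain data holders inside business logic is harmless, while newing up collaborators there hard-wires the dependency and makes the class difficult to test in isolation:

        // Hard to test: the collaborator is constructed inside the business logic.
        public class InvoiceService
        {
            public decimal Total(Invoice invoice)
            {
                var taxCalculator = new TaxCalculator(); // hidden dependency
                return invoice.Subtotal + taxCalculator.TaxFor(invoice);
            }
        }

        // Easier to test: the collaborator is injected; 'new' moves to the composition root.
        public class TestableInvoiceService
        {
            private readonly ITaxCalculator taxCalculator;

            public TestableInvoiceService(ITaxCalculator taxCalculator)
            {
                this.taxCalculator = taxCalculator;
            }

            public decimal Total(Invoice invoice)
            {
                return invoice.Subtotal + taxCalculator.TaxFor(invoice);
            }
        }

        public interface ITaxCalculator { decimal TaxFor(Invoice invoice); }

        public class TaxCalculator : ITaxCalculator
        {
            public decimal TaxFor(Invoice invoice) { return invoice.Subtotal * 0.2m; }
        }

        // A plain data object: using 'new' for these in business logic is unproblematic.
        public class Invoice { public decimal Subtotal { get; set; } }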

  • ISO 12207: Verification of integration and Unit test validation

    - by user970696
    I have received comments from the supervisor reviewing my thesis. He asked two questions I cannot answer right now: 1. If ISO 12207 says under "Integration verification" that it "checks that components are correctly and completely integrated into a system", how can this be verified without testing, given that all testing is validation? Without testing, how can I know that the system is integrated correctly and completely? 2. If unit testing is validation, how does it match the ISO definition of validation - "that requirements for intended use were fulfilled" - when it operates at such a low level?

  • Economist Intelligence Unit to Present Preliminary Survey Findings at OHUG

    - by Jay Richey, HCM Product Marketing
    Oracle and IBM are sponsoring a luncheon at OHUG in Las Vegas for an exclusive preview of the forthcoming "C-level perspectives of the HR function", an Economist Intelligence Unit research program sponsored by IBM and Oracle. The speaker will be Gilda Stahl, Economist Editor, Thought Leadership, who will preview the study's findings and offer insights into whether CHROs are playing a central role in aligning companies' talent strategies with long-term business goals, and how technology innovation can help. Seating for this event is limited, so please register asap. http://www.oraclepartnerevent.com/2012/c-level-perspectives/

  • .NET processing unit [closed]

    - by configurator
    Do you think we'll ever see an IL (or other bytecode) processing unit? It sounds possible, and it would have a major benefit: we wouldn't need the JITter. This isn't the same as compiling .NET directly to machine code, since bytecode is designed to be programmed and disassembled easily, unlike the instruction encoding used in x86 processors, which is designed to execute fast. What's stopping Intel (for example) from partnering with Microsoft and making such a .NET-optimised processor?

  • What should I do to test EasyMock objects when using generics?

    - by Arthur Ronald F D Garcia
    See the code just below. Our generic interface:

        public interface Repository<INSTANCE_CLASS, INSTANCE_ID_CLASS> {
            void add(INSTANCE_CLASS instance);
            INSTANCE_CLASS getById(INSTANCE_ID_CLASS id);
        }

    And a single class:

        public class Order {
            private Integer id;
            private Integer orderNumber;

            // getters and setters

            public boolean equals(Object o) {
                if (o == null) return false;
                if (!(o instanceof Order)) return false;
                // business key
                if (getOrderNumber() == null) return false;
                final Order other = (Order) o;
                if (!(getOrderNumber().equals(other.getOrderNumber()))) return false;
                return true;
            }

            // hashCode
        }

    And when I run the following test:

        private Repository<Order, Integer> repository;

        @Before
        public void setUp() {
            repository = EasyMock.createMock(Repository.class);
            Order order = new Order();
            order.setOrderNumber(new Integer(1));
            repository.add(order);
            EasyMock.expectLastCall().once();
            EasyMock.replay(repository);
        }

        @Test
        public void addOrder() {
            Order order = new Order();
            order.setOrderNumber(new Integer(1));
            repository.add(order);
            EasyMock.verify(repository);
        }

    I get:

        Unexpected method call add(br.com.smac.model.domain.Order@ac66b62):
            add(br.com.smac.model.domain.Order@ac66b62): expected: 1, actual: 0

    Why does it not work as expected? What should I do to make the test pass?

  • How to test that an ASP.NET MVC route redirects to another site?

    - by Matt Lacey
    Due to a printing error in some promotional material, I have a site that is receiving a lot of requests which should be for one site but arrive at another. I.e. the valid sites are http://site1.com/abc and http://site2.com/def, but people are being told to go to http://site1.com/def. I have control over site1 but not site2. site1 contains logic, in an action filter, for checking that the first part of the route is valid, like this:

        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            base.OnActionExecuting(filterContext);

            if ((!filterContext.ActionParameters.ContainsKey("id"))
                || (!manager.WhiteLabelExists(filterContext.ActionParameters["id"].ToString())))
            {
                if (filterContext.ActionParameters["id"].ToString().ToLowerInvariant().Equals("def"))
                {
                    filterContext.HttpContext.Response.Redirect("http://site2.com/def", true);
                }

                filterContext.Result = new ViewResult { ViewName = "NoWhiteLabel" };
                filterContext.HttpContext.Response.Clear();
            }
        }

    I'm not sure how to test the redirection to the other site, though. I already have tests for the redirect to "NoWhiteLabel" using the MvcContrib test helpers, but as far as I can see these can't handle this situation. How do I test the redirection to another site?
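
    One way to cover the cross-site branch, sketched under some assumptions (the filter class name WhiteLabelFilter is hypothetical, and the filter's manager field must be stubbed so that WhiteLabelExists("def") returns false): MVC's context types expose virtual members precisely so they can be mocked, which means the call to Response.Redirect can be observed directly, e.g. with Moq:

        using System.Collections.Generic;
        using System.Web;
        using System.Web.Mvc;
        using Moq;
        using NUnit.Framework;

        [TestFixture]
        public class WhiteLabelFilterTests
        {
            [Test]
            public void Unknown_def_id_redirects_to_site2()
            {
                // Mock the response so the Redirect call can be observed.
                var response = new Mock<HttpResponseBase>();
                var httpContext = new Mock<HttpContextBase>();
                httpContext.Setup(c => c.Response).Returns(response.Object);

                // ActionExecutingContext's members are virtual, so it can be mocked too.
                var filterContext = new Mock<ActionExecutingContext>();
                filterContext.Setup(f => f.HttpContext).Returns(httpContext.Object);
                filterContext.Setup(f => f.ActionParameters)
                             .Returns(new Dictionary<string, object> { { "id", "def" } });

                // WhiteLabelFilter stands in for the attribute shown above; arrange
                // its manager so WhiteLabelExists("def") is false before this call.
                new WhiteLabelFilter().OnActionExecuting(filterContext.Object);

                // The filter should have issued the redirect to the second site.
                response.Verify(r => r.Redirect("http://site2.com/def", true));
            }
        }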

  • How to test a struts 2.1.x developer?

    - by Jason Pyeron
    We employ tests to filter out those who can't. The tests are designed to be very low effort for those who can, and too much effort for those who can't. Here is an example for a Java web application developer on an Oracle project:

    We only work with contractors who can use the tools we use. To determine whether you can use the tools, we have devised some very simple tests.

    Instructions: if you are prepared and knowledgeable this will take you about 2-5 minutes. Suggested knowledge and tools:
    - subversion 1.6, see http://subversion.tigris.org/ or http://cygwin.com/setup.exe
    - java 1.6, see http://java.sun.com/javase/downloads/index.jsp
    - oracle >= 10g, see http://www.oracle.com/technology/software/products/jdev/index.html
    - j2ee server, see http://tomcat.apache.org/download-55.cgi or http://www.jboss.org/jbossas/downloads/

    Steps:
    1. Check out svn://statics32.pdinc.us/home/subversion/guest
    2. Deploy the war file found at trunk/test.war
    3. Browse to the web application you installed from the war file and answer the one SQL question: how many rows are in the table 'testdata' where column 'value' ends with either an 'A' or an 'a'? The login credentials are in trunk/doc/oracle.txt
    4. Make a RESULTS HASH by submitting your answer to the form.
    5. Create a file at tmp/YourUserName.txt and put the RESULTS HASH in it, not the answer.
    6. Check in your file (don't forget to add the file first).
    7. Message me with the revision number of your check-in.

    As such, I am looking for ideas on how to devise a similar test for someone claiming Struts 2.1 with annotations.

  • JUnit confusion: use 'extends TestCase' or '@Test'?

    - by Rabarberski
    I've found the proper use (or at least the documentation) of JUnit very confusing. This question serves both as a future reference and as a real question. If I've understood correctly, there are two main approaches to creating and running a JUnit test.

    Approach A: create a class that extends TestCase, and start test methods with the word test. When running the class as a JUnit test (in Eclipse), all methods starting with the word test are run automatically.

        import junit.framework.TestCase;

        public class DummyTestA extends TestCase {
            public void testSum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Approach B: create a 'normal' class and prepend a @Test annotation to each test method. Note that you do NOT have to start the method name with the word test.

        import org.junit.*;
        import static org.junit.Assert.*;

        public class DummyTestB {
            @Test
            public void Sum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Mixing the two seems not to be a good idea; see e.g. this Stack Overflow question. Now, my questions: What is the preferred approach, and when would you use one instead of the other? Approach B allows testing for exceptions by extending the @Test annotation, as in @Test(expected = ArithmeticException.class) - but how do you test for exceptions when using approach A? When using approach A, you can group a number of test classes into a test suite:

        TestSuite suite = new TestSuite("All tests");
        suite.addTestSuite(DummyTestA.class);
        suite.addTestSuite(DummyTestAbis.class);

    But this can't be used with approach B (since addTestSuite expects each test class to subclass TestCase). What is the proper way to group tests for approach B?

  • BizTalk 2009 - BizTalk Benchmark Wizard: Running a Test

    - by StuartBrierley
    The BizTalk Benchmark Wizard is a utility that can be used to gain some validation of a BizTalk installation, giving a level of guidance on whether it is performing as might be expected. It should be used after BizTalk Server has been installed and before any solutions are deployed to the environment; this ensures that you get consistent and clean results from the wizard. The BizTalk Benchmark Wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios, performance counter information is collected and assessed against statistics appropriate to the BizTalk Server environment. For details on installing the Benchmark Wizard see my previous post. The wizard provides two simple test scenarios, one for messaging and one for orchestrations, which can be used to test your BizTalk implementation.

    Messaging:
    1. Loadgen generates a new XML message and sends it over NetTCP.
    2. A WCF-NetTCP receive location receives the XML document from Loadgen.
    3. The PassThruReceive pipeline performs no processing, and the message is published by the EPM to the MessageBox.
    4. The WCF one-way send port, the only subscriber to the message, retrieves the message from the MessageBox.
    5. The PassThruTransmit pipeline provides no additional processing.
    6. The message is delivered to the back-end WCF service by the WCF NetTCP adapter.

    Orchestrations:
    1. Loadgen generates a new XML message and sends it over NetTCP.
    2. A WCF-NetTCP receive location receives the XML document from Loadgen.
    3. The XMLReceive pipeline performs no processing, and the message is published by the EPM to the MessageBox.
    4. The message is delivered to a simple orchestration, which consists of a receive location and a send port.
    5. The WCF one-way send port, the only subscriber to the orchestration message, retrieves the message from the MessageBox.
    6. The PassThruTransmit pipeline provides no additional processing.
    7. The message is delivered to the back-end WCF service by the WCF NetTCP adapter.

    Below is a quick outline of how to run the BizTalk Benchmark Wizard on a single server, although it should be noted that this is not ideal, as the server is then both generating and processing the load. To separate the load out, you should run the "Indigo" service on a separate server. To start the BizTalk Benchmark Wizard click Start > All Programs > BizTalk Benchmark Wizard > BizTalk Benchmark Wizard. On this screen click Next; you will then get a pop-up window. Check the server and database names, and tick the "check prerequisites" check-box before pressing OK. The wizard will then check that the appropriate test scenarios are installed. You should then choose the test scenario you wish to run (messaging or orchestration) and the architecture that most closely matches your environment. You will then be asked to confirm the host server for each of the host instances. Next you will be presented with the prepare screen. You will need to start the Indigo service before pressing the Test Indigo Service button; if you are running the Indigo service on a separate server, you can enter the server name here. To start the Indigo service click Start > All Programs > BizTalk Benchmark Wizard > Start Indigo Service. While the test is running you will be presented with two speed-dial-type displays, one for received messages per second and one for processed messages per second. The green dial shows the current rate and the red dial shows the overall average rate. Optionally, you can view the CPU usage of the various servers involved in processing the tests. For my development environment I expected low results, and that is what I got - although looking at the online high-scores table and comparing against the quad-core system listed, the results are perhaps not really that bad. At some point I may look at what improvements I can make to this score; if you are interested in that now, take a look at Benchmark your BizTalk Server (Part 3).
