Search Results

Search found 10078 results on 404 pages for 'smoke testing'.


  • How to test a Grails Service that utilizes a criteria query (with spock)?

    - by user569825
    I am trying to test a simple service method. That method mainly just returns the results of a criteria query for which I want to test if it returns the one result or not (depending on what is queried for). The problem is that I am unaware of how to write the corresponding test correctly. I am trying to accomplish it via Spock, but doing the same with any other way of testing also fails. Can someone tell me how to amend the test in order to make it work for the task at hand? (BTW I'd like to keep it a unit test, if possible.) The EventService method: public HashSet<Event> listEventsForDate(Date date, int offset, int max) { date.clearTime() def c = Event.createCriteria() def results = c { and { le("startDate", date+1) // starts tonight at midnight or prior? ge("endDate", date) // ends today or later? } maxResults(max) order("startDate", "desc") } return results } The Spock specification: package myapp import grails.plugin.spock.* import spock.lang.* class EventServiceSpec extends Specification { def event def eventService = new EventService() def setup() { event = new Event() event.publisher = Mock(User) event.title = 'et' event.urlTitle = 'ut' event.details = 'details' event.location = 'location' event.startDate = new Date(2010,11,20, 9, 0) event.endDate = new Date(2011, 3, 7,18, 0) } def "list the Events of a specific date"() { given: "An event ranging over multiple days" when: "I look up a date for its respective events" def results = eventService.listEventsForDate(searchDate, 0, 100) then: "The event is found or not - depending on the requested date" numberOfResults == results.size() where: searchDate | numberOfResults new Date(2010,10,19) | 0 // one day before startDate new Date(2010,10,20) | 1 // at startDate new Date(2010,10,21) | 1 // one day after startDate new Date(2011, 1, 1) | 1 // someday during the event range new Date(2011, 3, 6) | 1 // one day before endDate new Date(2011, 3, 7) | 1 // at endDate new Date(2011, 3, 8) | 0 // one day after endDate } } The error: groovy.lang.MissingMethodException: No signature of method: static myapp.Event.createCriteria() is applicable for argument types: () values: [] at myapp.EventService.listEventsForDate(EventService.groovy:47) at myapp.EventServiceSpec.list the Events of a specific date(EventServiceSpec.groovy:29)


  • How can I get 100% test coverage in a Perl module that uses DBI?

    - by BrianH
    I am a bit new to the Devel::Cover module, but have found it very useful in making sure I am not missing tests. A problem I am running into is understanding the report from Devel::Cover. I've looked at the documentation, but can't figure out what I need to test to get 100% coverage. Here is the output from the cover report: line err stmt bran cond sub pod time code ... 36 sub connect_database { 37 3 3 1 1126 my $self = shift; 38 3 100 24 if ( !$self->{dsn} ) { 39 1 7 croak 'dsn not supplied - cannot connect'; 40 } 41 *** 2 33 21 $self->{dbh} = DBI->connect( $self->{dsn}, q{}, q{} ) 42 || croak "$DBI::errstr"; 43 1 11 return $self; 44 } ... line err % l !l&&r !l&&!r expr ----- --- ------ ------ ------ ------ ---- 41 *** 33 1 0 0 'DBI'->connect($$self{'dsn'}, '', '') || croak("$DBI::errstr") And here is and example of my code that tests this specific line: my $database = MyModule::Database->new( { dsn => 'Invalid DSN' }); throws_ok( sub { $database->connect_database() }, qr/Can't connect to data source/, 'Test connection exception (invalid dsn)' ); This test passes - the connect does throw an error and fulfills my "throws_ok" test. I do have some tests that test for a successful connection, which is why I think I have 33% coverage, but if I'm reading it correctly, cover thinks I am not testing the "|| croak" part of the statement. I thought I was, with the "throws_ok" test, but obviously I am missing something. Does anyone have advice on how I can test my DBI-connect line successfully? Thanks!


  • What would be a better implementation of shared variable among subclass

    - by Churk
    So currently I have a Spring unit testing application, and it requires me to get a session cookie from a foreign authentication source. The problem with that is that this authentication process is fairly expensive and time consuming, so I am trying to create a structure where I authenticate once, from any subclass, and any subsequently created subclass reuses this session cookie without hitting the authentication process again. My problem right now is that the static cookie is null each time another subclass is created. I have been reading that using a static as a global variable is a bad idea, but I couldn't think of another way to do this, because the Spring framework sets things up during run time and I don't see how else I would set the cookie so that all other classes can use it. Another piece of information: the variable is in use, but is changeable during run time. It is not a single user being signed in and used across the board; it is more like Sub1 would call login and we have a cookie, then multiple tests will be using that login until some SubX comes in and says, I am using different credentials, so I need to log in as something else. And it repeats. Here is an outline of my code: public class Parent implements InitializingBean { protected static String BASE_URL; public static Cookie cookie; ... All default InitializingBean methods ... afterPropertiesSet() { cookie = // login process returns a cookie } } public class Sub1 extends Parent { @resource public String baseURL; @PostConstruct public void init() { // set parents with my baseURL; BASE_URL = baseURL; } public void doSomething() { // Do something with cookie, because it should have been set by parent class } } public class Sub2 extends Parent { @resource public String baseURL; @PostConstruct public void init() { // set parents with my baseURL; BASE_URL = baseURL; } public void doSomethingElse() { // Do something with cookie, because it should have been set by parent class } }
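    One common alternative to the bare static field is a small cookie cache keyed by the credentials in use, registered as a singleton Spring bean and injected into every test class that needs it. The sketch below is only an illustration of that idea; CookieCache, LoginClient and the method names are invented for this sketch, not part of the poster's code.

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical collaborator that performs the real (expensive) login.
        interface LoginClient {
            String login(String username, String password);
        }

        // Hypothetical singleton bean: caches one session cookie per username,
        // so the expensive authentication runs once per distinct credential set.
        public class CookieCache {
            private final Map<String, String> cookiesByUser = new HashMap<String, String>();
            private final LoginClient loginClient;

            public CookieCache(LoginClient loginClient) {
                this.loginClient = loginClient;
            }

            public synchronized String cookieFor(String username, String password) {
                String cookie = cookiesByUser.get(username);
                if (cookie == null) {
                    cookie = loginClient.login(username, password); // happens only once per user
                    cookiesByUser.put(username, cookie);
                }
                return cookie;
            }
        }

    Each test class then asks the shared bean for the cookie matching its own credentials, so Sub1 and SubX can use different logins without clobbering each other, and no static state is involved.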


  • How to rewrite data-driven test suites of JUnit 3 in Junit 4?

    - by rics
    I am using data-driven test suites running JUnit 3 based on Rainsberger's JUnit Recipes. The purpose of these tests is to check whether a certain function is properly implemented related to a set of input-output pairs. Here is the definition of the test suite: public static Test suite() throws Exception { TestSuite suite = new TestSuite(); Calendar calendar = GregorianCalendar.getInstance(); calendar.set(2009, 8, 05, 13, 23); // 2009. 09. 05. 13:23 java.sql.Date date = new java.sql.Date(calendar.getTime().getTime()); suite.addTest(new DateFormatTestToString(date, JtDateFormat.FormatType.YYYY_MON_DD, "2009-SEP-05")); suite.addTest(new DateFormatTestToString(date, JtDateFormat.FormatType.DD_MON_YYYY, "05/SEP/2009")); return suite; } and the definition of the testing class: public class DateFormatTestToString extends TestCase { private java.sql.Date date; private JtDateFormat.FormatType dateFormat; private String expectedStringFormat; public DateFormatTestToString(java.sql.Date date, JtDateFormat.FormatType dateFormat, String expectedStringFormat) { super("testGetString"); this.date = date; this.dateFormat = dateFormat; this.expectedStringFormat = expectedStringFormat; } public void testGetString() { String result = JtDateFormat.getString(date, dateFormat); assertTrue( expectedStringFormat.equalsIgnoreCase(result)); } } How is it possible to test several input-output parameters of a method using JUnit 4? This question and the answers explained to me the distinction between JUnit 3 and 4 in this regard. This question and the answers describe the way to create test suite for a set of class but not for a method with a set of different parameters.
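    In JUnit 4 the same data-driven suite is usually expressed with the built-in Parameterized runner: a @Parameters method supplies the input/output rows and the constructor receives one row per test instance. A rough sketch, assuming the poster's JtDateFormat class is on the classpath (the test class and method names here are only illustrative):

        import static org.junit.Assert.assertTrue;

        import java.util.Arrays;
        import java.util.Collection;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        @RunWith(Parameterized.class)
        public class DateFormatToStringTest {

            private final java.sql.Date date;
            private final JtDateFormat.FormatType format;
            private final String expected;

            public DateFormatToStringTest(java.sql.Date date,
                                          JtDateFormat.FormatType format,
                                          String expected) {
                this.date = date;
                this.format = format;
                this.expected = expected;
            }

            // Each Object[] is one input/output pair; JUnit instantiates the class
            // once per row and runs every @Test against that instance.
            @Parameters
            public static Collection<Object[]> data() {
                java.sql.Date date = java.sql.Date.valueOf("2009-09-05"); // same date as the JUnit 3 suite
                return Arrays.asList(new Object[][] {
                    { date, JtDateFormat.FormatType.YYYY_MON_DD, "2009-SEP-05" },
                    { date, JtDateFormat.FormatType.DD_MON_YYYY, "05/SEP/2009" },
                });
            }

            @Test
            public void getStringFormatsTheDate() {
                assertTrue(expected.equalsIgnoreCase(JtDateFormat.getString(date, format)));
            }
        }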


  • Creating mock Objects in PHP unit

    - by Mike
    Hi, I've searched but can't quite find what I'm looking for and the manual isn't much help in this respect. I'm fairly new to unit testing, so not sure if I'm on the right track at all. Anyway, onto the question. I have a class: <?php class testClass { public function doSomething($array_of_stuff) { return AnotherClass::returnRandomElement($array_of_stuff); } } ?> Now, clearly I want the AnotherClass::returnRandomElement($array_of_stuff); to return the same thing every time. My question is, in my unit test, how do I mockup this object? I've tried adding the AnotherClass to the top of the test file, but when I want to test AnotherClass I get the "Cannot redeclare class" error. I think I understand factory classes, but I'm not sure how I would apply that in this instance. Would I need to write an entirely seperate AnotherClass class which contained test data and then use the Factory class to load that instead of the real AnotherClass? Or is using the Factory pattern just a red herring. I tried this: $RedirectUtils_stub = $this->getMockForAbstractClass('RedirectUtils'); $o1 = new stdClass(); $o1->id = 2; $o1->test_id = 2; $o1->weight = 60; $o1->data = "http://www.google.com/?ffdfd=fdfdfdfd?route=1"; $RedirectUtils_stub->expects($this->any()) ->method('chooseRandomRoot') ->will($this->returnValue($o1)); $RedirectUtils_stub->expects($this->any()) ->method('decodeQueryString') ->will($this->returnValue(array())); in the setUp() function, but these stubs are ignored and I can't work out whether it's something I'm doing wrong, or the way I'm accessing the AnotherClass methods. Help! This is driving me nuts.


  • A good way to write unit tests

    - by bobobobo
    So, I previously wasn't really in the practice of writing unit tests - now I kind of am and I need to check if I'm on the right track. Say you have a class that deals with math computations. class Vector3 { public: // Yes, public. float x,y,z ; // ... ctors ... } ; Vector3 operator+( const Vector3& a, const Vector3 &b ) { return Vector3( a.x + b.y /* oops!! hence the need for unit testing.. */, a.y + b.y, a.z + b.z ) ; } There are 2 ways I can really think of to do a unit test on a Vector class: 1) Hand-solve some problems, then hard code the numbers into the unit test and pass only if equal to your hand and hard-coded result bool UnitTest_ClassVector3_operatorPlus() { Vector3 a( 2, 3, 4 ) ; Vector3 b( 5, 6, 7 ) ; Vector3 result = a + b ; // "expected" is computed outside of computer, and // hard coded here. For more complicated operations like // arbitrary axis rotation this takes a bit of paperwork, // but only the final result will ever be entered here. Vector3 expected( 7, 9, 11 ) ; if( result.isNear( expected ) ) return PASS ; else return FAIL ; } 2) Rewrite the computation code very carefully inside the unit test. bool UnitTest_ClassVector3_operatorPlus() { Vector3 a( 2, 3, 4 ) ; Vector3 b( 5, 6, 7 ) ; Vector3 result = a + b ; // "expected" is computed HERE. This // means all you've done is coded the // same thing twice, hopefully not having // repeated the same mistake again Vector3 expected( 2 + 5, 6 + 3, 4 + 7 ) ; if( result.isNear( expected ) ) return PASS ; else return FAIL ; } Or is there another way to do something like this?


  • Ways to support manually executed tests? (that can be used on a Mac)

    - by Rinzwind
    Are there any tools that can be used on a Mac to support manually executed tests? I have a number of tests that I'm executing manually and which I'm currently documenting using merely a plain text file. "Tools" can be interpreted rather loosely here, anything that's a step up from the plain text file would be useful: a template for some suitable application, supporting AppleScript scripts, a web-based system, a full-blown application ... Some things that would be great to have better support for (see also the example below): Checking off each step while you're manually executing the test. Showing the next step(s) in a small window that is always kept in front of all other windows. Automatically updating the 'last tested' and 'using svn revision' info. Keeping a record of all previous testing rounds (not just the last one). ... Any suggestions for any such "tools" that can be used on a Mac? An example (faked) entry from the plain text file to give you a better idea of what I'm looking for: - Check that exported web pages render properly in Safari. Last tested: 2010-03-24 Using SVN revision: 1000 Steps: - Open a new document. - Add some items to the document. - Export the document to a web page "Test.html" in a new folder "Export Test" on the Desktop. - Open the web page in Safari, script: tell application "Finder" open file "Test.html" of folder "Export Test" of desktop end tell Expected results: - The web page should appear properly with all items shown. Clean up steps: - Remove the folder "Export Test" from the Desktop. ( Note: for those unaware, the snippet of AppleScript in the above can be executed from most text editing applications through the Services menu by selecting the snippet and using: the application menu Services Script Editor Run as AppleScript. This is quite useful to automate some steps for tests that are difficult to automate as a whole. )


  • Where to start with the development of first database driven Web App (long question)?

    - by Ryan
    Hi all, I've decided to develop a database driven web app, but I'm not sure where to start. The end goal of the project is three-fold: 1) to learn new technologies and practices, 2) deliver an unsolicited demo to management that would show how information that the company stores as office documents spread across a cumbersome network folder structure can be consolidated and made easier to access and maintain, and 3) show my co-workers how Test Driven Development and prototyping via class diagrams can be very useful and reduce future maintenance headaches. I think this ends up being a basic CMS, for which I have generated a set of features; see below. 1) Create a database to store the site structure (organized as a tree with a 'project group'-project structure). 2) Pull the site structure from the database and display it as a tree using basic front end technologies. 3) Add administrator privileges/tools for modifying the site structure. 4) Auto-create required sub pages* when an admin adds a new project. 4.1) There will be several sub pages under each project and the content for each sub page is different. 5) Add user privileges for assigning read and write privileges to sub pages. What I would like to do is use Test Driven Development and class diagramming as part of my process for developing this project. My problem: I'm not sure where to start. I have read up on Unit Testing and UML, but never used them in practice. Also, having never worked with databases before, how do I incorporate these items into the models and test units? Thank you all in advance for your expertise.


  • Unit Tests Architecture Question

    - by Tom Tresansky
    So I've started to layout unit tests for the following bit of code: public interface MyInterface { void MyInterfaceMethod1(); void MyInterfaceMethod2(); } public class MyImplementation1 implements MyInterface { void MyInterfaceMethod1() { // do something } void MyInterfaceMethod2() { // do something else } void SubRoutineP() { // other functionality specific to this implementation } } public class MyImplementation2 implements MyInterface { void MyInterfaceMethod1() { // do a 3rd thing } void MyInterfaceMethod2() { // do something completely different } void SubRoutineQ() { // other functionality specific to this implementation } } with several implementations and the expectation of more to come. My initial thought was to save myself time re-writing unit tests with something like this: public abstract class MyInterfaceTester { protected MyInterface m_object; @Setup public void setUp() { m_object = getTestedImplementation(); } public abstract MyInterface getTestedImplementation(); @Test public void testMyInterfaceMethod1() { // use m_object to run tests } @Test public void testMyInterfaceMethod2() { // use m_object to run tests } } which I could then subclass easily to test the implementation specific additional methods like so: public class MyImplementation1Tester extends MyInterfaceTester { public MyInterface getTestedImplementation() { return new MyImplementation1(); } @Test public void testSubRoutineP() { // use m_object to run tests } } and likewise for implmentation 2 onwards. So my question really is: is there any reason not to do this? JUnit seems to like it just fine, and it serves my needs, but I haven't really seen anything like it in any of the unit testing books and examples I've been reading. Is there some best practice I'm unwittingly violating? Am I setting myself up for heartache down the road? Is there simply a much better way out there I haven't considered? Thanks for any help.
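    For what it's worth, the abstract-base approach shown in the question is a recognized pattern, often called a contract or abstract test. An alternative that avoids one subclass per implementation is to drive a single test class with the implementations as parameters; a rough JUnit 4 sketch, reusing the interface and class names from the question:

        import java.util.Arrays;
        import java.util.Collection;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        @RunWith(Parameterized.class)
        public class MyInterfaceContractTest {

            private final MyInterface implementation;

            public MyInterfaceContractTest(MyInterface implementation) {
                this.implementation = implementation;
            }

            // One row per implementation; add new implementations here as they appear.
            @Parameters
            public static Collection<Object[]> implementations() {
                return Arrays.asList(new Object[][] {
                    { new MyImplementation1() },
                    { new MyImplementation2() },
                });
            }

            @Test
            public void myInterfaceMethod1DoesNotThrow() {
                // Shared contract check only; implementation-specific tests
                // (SubRoutineP, SubRoutineQ) still belong in their own classes.
                implementation.MyInterfaceMethod1();
            }
        }

    The abstract base tends to win once the implementation-specific tests outnumber the shared ones, so there is no obvious reason to abandon it.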


  • Rails Functional test assert_select javascript respond_to

    - by Macint
    Hello, I am currently trying to write functional tests for a charging form which gets loaded on to the page via AJAX(jQuery). It loads the form from the charge_form action which returns the consult_form.js.erb view. This all works, but I am having trouble with my testing. In the functional I can go to the action but I cannot use assert_select to find a an element and verify that the form is in fact there. Error: 1) Failure: test_should_create_new_consult(ConsultsControllerTest) [/test/functional/consults_controller_test.rb:8]: Expected at least 1 element matching "h4", found 0. <false> is not true. This is the view. consult_form.js.erb: <div id="charging_form"> <h4>Charging form</h4> <div class="left" id="charge_selection"> <%= select_tag("select_category", options_from_collection_for_select(@categories, :id, :name)) %><br/> ... consults_controller_test.rb: require 'test_helper' class ConsultsControllerTest < ActionController::TestCase def test_should_create_new_consult get_with_user :charge_form, :animal_id => animals(:one), :id => consults(:one), :format => 'js' assert_response :success assert_select 'h4', "Charging form" #can't find h4 end end Is there a problem with using assert_select with types other than html? Thank you for any help!


  • Mock implementations in C++

    - by forneo
    Hi guys, I need a mock implementation of a class - for testing purposes - and I'm wondering how I should best go about doing that. I can think of two general ways: Create an interface that contains all public functions of the class as pure virtual functions, then create a mock class by deriving from it. Mark all functions (well, at least all that are to be mocked) as virtual. I'm used to doing it the first way in Java, and it's quite common too (probably since they have a dedicated interface type). But I've hardly ever seen such interface-heavy designs in C++, thus I'm wondering. The second way will probably work, but I can't help but think of it as kind of ugly. Is anybody doing that? If I follow the first way, I need some naming assistance. I have an audio system that is responsible for loading sound files and playing the loaded tracks. I'm using OpenAL for that, thus I've called the interface "Audio" and the implementation "OpenALAudio". However, this implies that all OpenAL-specific code has to go into that class, which feels kind of limiting. An alternative would be to leave the class' name "Audio" and find a different one for the interface, e.g. "AudioInterface" or "IAudio". Which would you suggest, and why?


  • How do I grab hold of a pop-up that is opened from a frame?

    - by KLA
    I am testing a website using WatiN. On one of the pages I get a "report" in an Iframe, within this I frame there is a link to download and save the report. But since the only way to get to the link is to use frame.Link(...) the pop-up closes immediately after opening; Code snippet below //Click the create graph button ie.Button(Find.ById("ctl00_ctl00_ContentPlaceHolder1_TopBoxContentPlaceHolder_btnCreateGraph")).Click(); //Lets export the data ie.Div(Find.ById("colorbox")); ie.Div(Find.ById("cboxContent")); ie.Div(Find.ById("cboxLoadedContent")); Thread.Sleep(1000);//Used to cover performance issues Frame frame = ie.Frame(Find.ByName(frameNameRegex)); for (int Count = 0; Count < 10000000; Count++) {double nothing = (Count/12); }//Do nothing I just need a short pause //SelectList waits for a postback which does not occur. try { frame.SelectList(Find.ById("rvReport_ctl01_ctl05_ctl00")).SelectByValue("Excel"); } catch (Exception) { //Do nothing } //Now click export frame.Link(Find.ById("rvReport_ctl01_ctl05_ctl01")).ClickNoWait(); IE ieNewBrowserWindow = IE.AttachTo<IE>(Find.ByUrl(urlRegex)); fileDownloadHandler.WaitUntilFileDownloadDialogIsHandled(150); fileDownloadHandler.WaitUntilDownloadCompleted(200); I have tried using ie instead of frame which is why all those ie.Div's are present. if I use frame the pop-up window opens and closes instantly. If I use ie I get a link not found error. If I click on the link manually, while the test is "trying to find the link" the file will download correctly.


  • Rails test case fixtures not loading

    - by Deano
    Rails appears to not be loading any fixtures for unit or functional tests. I have a simple 'products.yml' that parses and appears correct: ruby: title: Programming Ruby 1.9 description: Ruby is the fastest growing and most exciting dynamic language out there. If you need to get working programs delivered fast, you should add Ruby to your toolbox. price: 49.50 image_url: ruby.png My controller functional test begins with: require 'test_helper' class ProductsControllerTest < ActionController::TestCase fixtures :products setup do @product = products(:one) @update = { :title => 'Lorem Ipsum' , :description => 'Wibbles are fun!' , :image_url => 'lorem.jpg' , :price => 19.95 } end According to the book, Rails should "magically" load the fixtures (as my test_helper.rb has fixtures :all in it. I also added the explicit fixtures load (seen above). Yes Rails complains: user @ host ~/Dropbox/Rails/depot > rake test:functionals (in /Somewhere/Users/user/Dropbox/Rails/depot) /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -Ilib:test "/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake/rake_test_loader.rb" "test/functional/products_controller_test.rb" Loaded suite /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake/rake_test_loader Started EEEEEEE Finished in 0.062506 seconds. 1) Error: test_should_create_product(ProductsControllerTest): NoMethodError: undefined method `products' for ProductsControllerTest:Class /test/functional/products_controller_test.rb:7 2) Error: test_should_destroy_product(ProductsControllerTest): NoMethodError: undefined method `products' for ProductsControllerTest:Class /test/functional/products_controller_test.rb:7 ... I did come across the other Rails test fixture question http://stackoverflow.com/questions/1547634/rails-unit-testing-doesnt-load-fixtures, but that leads to a plugin issue (something to do with the order of loading fixtures). BTW, I am developing on Mac OS X 10.6 with Rail 2.3.5 and Ruby 1.8.7, no additional plugins (beyond the base install). Any pointers on how to debug, why the magic of Rails appears to be failing here? Is it a version problem? Can I trace code into the libraries and find the answer? There are so many "mixin" modules I can't find where the fixtures method really lives.


  • How to break a Hibernate session?

    - by Péter Török
    In the Hibernate reference, it is stated several times that All exceptions thrown by Hibernate are fatal. This means you have to roll back the database transaction and close the current Session. You aren’t allowed to continue working with a Session that threw an exception. One of our legacy apps uses a single session to update/insert many records from files into a DB table. Each recourd update/insert is done in a separate transaction, which is then duly committed (or rolled back in case an error occurred). Then for the next record a new transaction is opened etc. But the same session is used throughout the whole process, even if a HibernateException was caught in the middle. We are using Oracle 9i btw with Hibernate 3.24.sp1 on JBoss 4.2. Reading the above in the book, I realized that this design may fail. So I refactored the app to use a separate session for each record update. In a unit test with a mock session factory, I could prove that it is now requesting a new session for each record update. So far, so good. However, we found no way to reproduce the session failure while testing the whole app (would this be a stress test btw, or ...?). We thought of shutting down the listener of the DB but we realized that the app is keeping a bunch of connections open to the DB, and the listener would not affect those connections. (This is a web app, activated once every night by a scheduler, but it can also be activated via the browser.) Then we tried to kill some of those connections in the DB while the app was processing updates - this resulted in some failed updates, but then the app happily continued. Apparently Hibernate is clever enough to reopen broken connections under the hood without breaking the whole session. So this might not be a critical issue, as our app seems to be robust enough even in its original form. However, the issue keeps bugging me. I would like to know: Under what circumstances does the Hibernate session really become unusable after a HibernateException was thrown? How to reproduce this in a test? (What's the proper term for such a test?)
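    For reference, the refactored session-per-record shape usually looks something like the sketch below (a minimal outline against the Hibernate 3 API; RecordUpdater and applyUpdate are invented names, and the entity handling is reduced to a single saveOrUpdate):

        import org.hibernate.HibernateException;
        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;

        public class RecordUpdater {

            private final SessionFactory sessionFactory;

            public RecordUpdater(SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }

            // One short-lived session and transaction per record, so a failure
            // poisons only that session and the next record starts clean.
            public void applyUpdate(Object record) {
                Session session = sessionFactory.openSession();
                Transaction tx = null;
                try {
                    tx = session.beginTransaction();
                    session.saveOrUpdate(record);
                    tx.commit();
                } catch (HibernateException e) {
                    if (tx != null) {
                        tx.rollback();
                    }
                    throw e; // or log and move on to the next record
                } finally {
                    session.close();
                }
            }
        }

    On the original questions: the session generally becomes unusable once an exception leaves it with a broken transaction or an inconsistent persistence context (a constraint violation, or a connection dropped mid-flush), and deliberately provoking that, for instance by killing the JDBC connection while a transaction is in flight, is usually called fault-injection or robustness testing rather than stress testing.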


  • Using JUnit as an acceptance test framework

    - by Chris Knight
    OK, so I work for a company who has openly adopted agile practices for development in recent years. Our unit tests and code quality are improving. One area we still are working on is to find what works best for us in the automated acceptance test arena. We want to take our well formed user stories and use these to drive the code in a test driven manner. This will also give us acceptance level tests for each user story which we can then automate. To date, we've tried Fit, Fitnesse and Selenium. Each have their advantages, but we've also had real issues with them as well. With Fit and Fitnesse, we can't help but feel they overcomplicate things and we've had many technical issues using them. The business haven't fully bought in these tools and aren't particularly keen on maintaining the scripts all the time (and aren't big fans of the table style). Selenium is really good, but slow and relies on real time data and resources. One approach we are now considering is the use of the JUnit framework to provide similiar functionality. Rather than testing just a small unit of work using JUnit, why not use it to write a test (using the JUnit framework) to cover an acceptance level swath of the application? I.e. take a new story ("As a user I would like to see basic details of my policy...") and write a test in JUnit which starts executing application code at the point of entry for the policy details link but covers all code and logic down to the stubbed data access layer and back to the point of forwarding to the next page in the application, asserting on what data the user should see on that page. This seems to me to have the following advantages: Simplicity (no additional frameworks required) Zero effort to integrate with our Continuous Integration build server (since it already handles our JUnit tests) Full skillset already present in the team (its just a JUnit test after all) And the downsides being: Less customer involvement (though they are heavily involved in writing the user stories in the first place from which the acceptance tests will be written) Perhaps more difficult to understand (or make understood) the user story and acceptance criteria in a JUnit class verses a freetext specification ala Fit or Fitnesse So, my question is really, have you ever tried this method? Ever considered it? What are your thoughts? What do you like and dislike about this approach? Finally, please only mention alternative frameworks if you can say why you like or dislike them more than this approach.
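    To make the proposal concrete, a story-level JUnit test of the kind described might look like the sketch below. Every class and value in it is invented purely for illustration; the point is only the given/when/then shape expressed with plain JUnit, from the entry point down to a stubbed data access layer and back:

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        public class ViewPolicyDetailsAcceptanceTest {

            // Hypothetical application types, stubbed inline so the sketch stands alone.
            interface PolicyRepository { String[] findDetails(String policyNumber); }

            static class PolicyController {
                private final PolicyRepository repository;
                PolicyController(PolicyRepository repository) { this.repository = repository; }
                String[] showDetails(String policyNumber) { return repository.findDetails(policyNumber); }
            }

            // "As a user I would like to see basic details of my policy..."
            @Test
            public void userSeesBasicPolicyDetails() {
                // given: a stubbed data access layer with one known policy
                PolicyRepository stub = new PolicyRepository() {
                    public String[] findDetails(String policyNumber) {
                        return new String[] { policyNumber, "Home insurance" };
                    }
                };

                // when: the user follows the policy details link
                String[] page = new PolicyController(stub).showDetails("POL-123");

                // then: the next page shows the data the user should see
                assertEquals("POL-123", page[0]);
                assertEquals("Home insurance", page[1]);
            }
        }

    The main trade-off stays as stated in the question: the business cannot read or edit these the way they can a Fit table, so the user story text has to remain the shared artifact.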


  • Mocking objects with complex Lambda Expressions as parameters

    - by iCe
    Hi there, I'm encountering this problem trying to mock some objects that receive complex lambda expressions in my projects, mostly with proxy objects that receive this type of delegate: Func<Tobj, Func<TParam1, TParam2, TResult>>. I have tried to use Moq as well as RhinoMocks to accomplish mocking those types of objects, however both fail (Moq fails with a NotSupportedException, and in RhinoMocks it simply does not satisfy the expectation). This is a simplified example of what I'm trying to do: I have a Calculator object that does calculations: public class Calculator { public Calculator() { } public int Add(int x, int y) { var result = x + y; return result; } public int Substract(int x, int y) { var result = x - y; return result; } } I need to validate parameters on every method in the Calculator class, so to keep with the Single Responsibility principle, I create a validator class. I wire everything up using a proxy class, which prevents having duplicate code: public class CalculatorProxy : CalculatorExample.ICalculatorProxy { private ILimitsValidator _validator; public CalculatorProxy(Calculator _calc, ILimitsValidator _validator) { this.Calculator = _calc; this._validator = _validator; } public int Operation(Func<Calculator, Func<int, int, int>> operation, int x, int y) { _validator.ValidateArgs(x, y); var calcMethod = operation(this.Calculator); var result = calcMethod(x, y); _validator.ValidateResult(result); return result; } public Calculator Calculator { get; private set; } } Now, I'm testing a component that uses the CalculatorProxy, so I want to mock it, for example using Rhino Mocks: [TestMethod] public void ParserWorksWithCalcultaroProxy() { var calculatorProxyMock = MockRepository.GenerateMock<ICalculatorProxy>(); calculatorProxyMock.Expect(x => x.Calculator).Return(_calculator); calculatorProxyMock.Expect(x => x.Operation(c => c.Add, 2, 2)).Return(4); var mathParser = new MathParser(calculatorProxyMock); mathParser.ProcessExpression("2 + 2"); calculatorProxyMock.VerifyAllExpectations(); } However I cannot get it to work! Any ideas about how this can be done? Thanks a lot!


  • Help needed with the XPath statement for a Selenium test

    - by mgeorge
    I am testing a calendar component using selenium.In my test i want to click on the current date.Please help me with the XPath statement for doing that.I am adding the HTML for the calender component <input id="event_date" type="text" on="click then l:show.event.calendar" style="border: 1px solid rgb(187, 187, 187); width: 100px;" fieldset="new_event" decorator="redbox" validator="date"/> <img id="app_136" style="position: relative; top: 2px;" on="click then l:show.event.calendar" src="images/calendar.png"/> <div id="app_137" style="margin: 0pt; padding: 0pt;"> <div id="app_calendar_2" class="yui-calcontainer single withtitle" style="position: absolute; z-index: 1000;"> <div class="title">Select Event Date</div> <table id="app_calendar_2_cal" class="yui-calendar y2010" cellspacing="0"> <thead> <tr> </tr> <tr class="calweekdayrow"> <th class="calweekdaycell">Su</th> <th class="calweekdaycell">Mo</th> <th class="calweekdaycell">Tu</th> <th class="calweekdaycell">We</th> <th class="calweekdaycell">Th</th> <th class="calweekdaycell">Fr</th> <th class="calweekdaycell">Sa</th> </tr> </thead> <tbody class="m6 calbody"> <tr class="w22"> <td id="app_calendar_2_cal_cell0" class="calcell oom calcelltop calcellleft">30</td> <td id="app_calendar_2_cal_cell1" class="calcell oom calcelltop">31</td> <td id="app_calendar_2_cal_cell2" class="calcell wd2 d1 selectable calcelltop"> </td> <td id="app_calendar_2_cal_cell3" class="calcell wd3 d2 today selectable calcelltop selected"> <a class="selector" href="#">2</a> </td> I want to click the date component described in <td id="app_calendar_2_cal_cell3" class="calcell wd3 d2 today selectable calcelltop selected"> <a class="selector" href="#">2</a> </td> Thanks in advance mgeorge
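    An XPath that keys off the cell's "today" class (rather than the generated cell id, which changes from month to month) is usually the sturdiest locator here. A rough sketch using the Selenium RC Java client; the host, port and page URL are placeholders:

        import com.thoughtworks.selenium.DefaultSelenium;
        import com.thoughtworks.selenium.Selenium;

        public class ClickTodayInCalendar {
            public static void main(String[] args) {
                Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost:8080/");
                selenium.start();
                selenium.open("/calendar-page"); // placeholder for the page under test

                // The <td> for the current date carries the "today" class, and the
                // clickable element inside it is the <a class="selector"> link.
                selenium.click("xpath=//td[contains(@class, 'today')]/a[@class='selector']");

                selenium.stop();
            }
        }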


  • Hide public method used to help test a .NET assembly

    - by ChrisW
    I have a .NET assembly, to be released. Its release build includes: A public, documented API of methods which people are supposed to use A public but undocumented API of other methods, which exist only in order to help test the assembly, and which people are not supposed to use The assembly to be released is a custom control, not an application. To regression-test it, I run it in a testing framework/application, which uses (in addition to the public/documented API) some advanced/undocumented methods which are exported from the control. For the public methods which I don't want people to use, I excluded them from the documentation using the <exclude> tag (supported by the Sandcastle Help File Builder), and the [EditorBrowsable] attribute, for example like this: /// <summary> /// Gets a <see cref="IEditorTransaction"/> instance, which helps /// to combine several DOM edits into a single transaction, which /// can be undone and redone as if they were a single, atomic operation. /// </summary> /// <returns>A <see cref="IEditorTransaction"/> instance.</returns> IEditorTransaction createEditorTransaction(); /// <exclude/> [EditorBrowsable(EditorBrowsableState.Never)] void debugDumpBlocks(TextWriter output); This successfully removes the method from the API documentation, and from Intellisense. However, if in a sample application program I right-click on an instance of the interface to see its definition in the metadata, I can still see the method, and the [EditorBrowsable] attribute as well, for example: // Summary: // Gets a ModelText.ModelDom.Nodes.IEditorTransaction instance, which helps // to combine several DOM edits into a single transaction, which can be undone // and redone as if they were a single, atomic operation. // // Returns: // A ModelText.ModelDom.Nodes.IEditorTransaction instance. IEditorTransaction createEditorTransaction(); // [EditorBrowsable(EditorBrowsableState.Never)] void debugDumpBlocks(TextWriter output); Questions: Is there a way to hide a public method, even from the meta data? If not then instead, for this scenario, would you recommend making the methods internal and using the InternalsVisibleTo attribute? Or would you recommend some other way, and if so what and why? Thank you.


  • Where do you put your unit test?

    - by soulmerge
    I have found several conventions to housekeeping unit tests in a project and I'm not sure which approach would be suitable for our next PHP project. I am trying to find the best convention to encourage easy development and accessibility of the tests when reviewing the source code. I would be very interested in your experience/opinion regarding each: One folder for productive code, another for unit tests: This separates unit tests from the logic files of the project. This separation of concerns is as much a nuisance as it is an advantage: Someone looking into the source code of the project will - so I suppose - either browse the implementation or the unit tests (or more commonly: the implementation only). The advantage of unit tests being another viewpoint to your classes is lost - those two viewpoints are just too far apart IMO. Annotated test methods: Any modern unit testing framework I know allows developers to create dedicated test methods, annotating them (@test) and embedding them in the project code. The big drawback I see here is that the project files get cluttered. Even if these methods are separated using a comment header (like UNIT TESTS below this line) it just bloats the class unnecessarily. Test files within the same folders as the implementation files: Our file naming convention dictates that PHP files containing classes (one class per file) should end with .class.php. I could imagine that putting unit tests regarding a class file into another one ending on .test.php would render the tests much more present to other developers without tainting the class. Although it bloats the project folders, instead of the implementation files, this is my favorite so far, but I have my doubts: I would think others have come up with this already, and discarded this option for some reason (i.e. I have not seen a java project with the files Foo.java and FooTest.java within the same folder.) Maybe it's because java developers make heavier use of IDEs that allow them easier access to the tests, whereas in PHP no big editors have emerged (like eclipse for java) - many devs I know use vim/emacs or similar editors with little support for PHP development per se. What is your experience with any of these unit test placements? Do you have another convention I haven't listed here? Or am I just overrating unit test accessibility to reviewers?


  • TestNG - Factories and Dataproviders

    - by Tim K
    Background Story I'm working at a software firm developing a test automation framework to replace our old spaghetti tangled system. Since our system requires a login for almost everything we do, I decided it would be best to use @BeforeMethod, @DataProvider, and @Factory to setup my tests. However, I've run into some issues. Sample Test Case Lets say the software system is a baseball team roster. We want to test to make sure a user can search for a team member by name. (Note: I'm aware that BeforeMethods don't run in any given order -- assume that's been taken care of for now.) @BeforeMethod public void setupSelenium() { // login with username & password // acknowledge announcements // navigate to search page } @Test(dataProvider="players") public void testSearch(String playerName, String searchTerm) { // search for "searchTerm" // browse through results // pass if we find playerName // fail (Didn't find the player) } This test case assumes the following: The user has already logged on (in a BeforeMethod, most likely) The user has already navigated to the search page (trivial, before method) The parameters to the test are associated with the aforementioned login The Problems So lets try and figure out how to handle the parameters for the test case. Idea #1 This method allows us to associate dataproviders with usernames, and lets us use multiple users for any specific test case! @Test(dataProvider="players") public void testSearch(String user, String pass, String name, String search) { // login with user/pass // acknowledge announcements // navigate to search page // ... } ...but there's lots of repetition, as we have to make EVERY function accept two extra parameters. Not to mention, we're also testing the acknowledge announcements feature, which we don't actually want to test. Idea #2 So lets use the factory to initialize things properly! class BaseTestCase { public BaseTestCase(String user, String password, Object[][] data); } class SomeTest { @Factory public void ... } With this, we end up having to write one factory per test case... Although, it does let us have multiple users per test-case. Conclusion I'm about fresh out of ideas. There was another idea I had where I was loading data from an XML file, and then calling the methods from a program... but its getting silly. Any ideas?
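    One arrangement that seems to fit: move the credentials into the test class via an @Factory in a separate class, do the login once per instance in @BeforeClass, and keep the per-test rows in a @DataProvider. A rough sketch with invented credentials and players; the exact interaction of @BeforeClass with factory-created instances is worth verifying against the TestNG version in use:

        import org.testng.annotations.BeforeClass;
        import org.testng.annotations.DataProvider;
        import org.testng.annotations.Factory;
        import org.testng.annotations.Test;

        public class RosterSearchTest {

            private final String user;
            private final String pass;

            public RosterSearchTest(String user, String pass) {
                this.user = user;
                this.pass = pass;
            }

            // Intended to run once per factory-created instance, so each credential set
            // pays the login/announcements/navigation cost exactly once.
            @BeforeClass
            public void logInAndNavigate() {
                // log in with user/pass, acknowledge announcements, go to the search page
            }

            @DataProvider(name = "players")
            public Object[][] players() {
                return new Object[][] {
                    { "Babe Ruth", "ruth" },
                    { "Lou Gehrig", "gehrig" },
                };
            }

            @Test(dataProvider = "players")
            public void testSearch(String playerName, String searchTerm) {
                // search for searchTerm, pass if playerName shows up in the results
            }
        }

        // Separate factory class so TestNG builds one RosterSearchTest per user.
        class RosterSearchTestFactory {
            @Factory
            public Object[] createTests() {
                return new Object[] {
                    new RosterSearchTest("coach", "secret1"),
                    new RosterSearchTest("scout", "secret2"),
                };
            }
        }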


  • JUnit for Functions with Void Return Values

    - by RobotNerd
    I've been working on a Java application where I have to use JUnit for testing. I am learning it as I go. So far I find it to be useful, especially when used in conjunction with the Eclipse JUnit plugin. After playing around a bit, I developed a consistent method for building my unit tests for functions with no return values. I wanted to share it here and ask others to comment. Do you have any suggested improvements or alternative ways to accomplish the same goal? Common Return Values First, there's an enumeration which is used to store values representing test outcomes. public enum UnitTestReturnValues { noException, unexpectedException // etc... } Generalized Test Let's say a unit test is being written for: public class SomeClass { public void targetFunction (int x, int y) { // ... } } The JUnit test class would be created: import junit.framework.TestCase; public class TestSomeClass extends TestCase { // ... } Within this class, I create a function which is used for every call to the target function being tested. It catches all exceptions and returns a message based on the outcome. For example: public class TestSomeClass extends TestCase { private UnitTestReturnValues callTargetFunction (int x, int y) { UnitTestReturnValues outcome = UnitTestReturnValues.noException; SomeClass testObj = new SomeClass (); try { testObj.targetFunction (x, y); } catch (Exception e) { UnitTestReturnValues.unexpectedException; } return outcome; } } JUnit Tests Functions called by JUnit begin with a lowercase "test" in the function name, and they fail at the first failed assertion. To run multiple tests on the targetFunction above, it would be written as: public class TestSomeClass extends TestCase { public void testTargetFunctionNegatives () { assertEquals ( callTargetFunction (-1, -1), UnitTestReturnValues.noException); } public void testTargetFunctionZeros () { assertEquals ( callTargetFunction (0, 0), UnitTestReturnValues.noException); } // and so on... } Please let me know if you have any suggestions or improvements. Keep in mind that I am in the process of learning how to use JUnit, so I'm sure there are existing tools available that might make this process easier. Thanks!
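    One note on the pattern described above: JUnit already records any unexpected exception as an error, so the UnitTestReturnValues enum and the catch-all helper can usually be dropped. A leaner JUnit 4 sketch against the poster's SomeClass; the expected-exception case at the end is purely illustrative and only applies if targetFunction is actually specified to throw:

        import org.junit.Test;

        public class SomeClassTest {

            // If targetFunction throws, JUnit reports the test as an error on its own;
            // no try/catch or return-value bookkeeping is required.
            @Test
            public void targetFunctionAcceptsNegatives() {
                new SomeClass().targetFunction(-1, -1);
            }

            @Test
            public void targetFunctionAcceptsZeros() {
                new SomeClass().targetFunction(0, 0);
            }

            // Illustrative only: keep a test like this when an exception is the documented behaviour.
            @Test(expected = IllegalArgumentException.class)
            public void targetFunctionRejectsInvalidInput() {
                new SomeClass().targetFunction(Integer.MIN_VALUE, Integer.MIN_VALUE);
            }
        }

    For void methods the more valuable assertions are usually about observable side effects (state changes, calls to collaborators) rather than about "no exception was thrown".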


  • organizing unit test

    - by soulmerge
    I have found several conventions to housekeeping unit tests in a project and I'm not sure which approach would be suitable for our next PHP project. I am trying to find the best convention to encourage easy development and accessibility of the tests when reviewing the source code. I would be very interested in your experience/opinion regarding each: One folder for productive code, another for unit tests: This separates unit tests from the logic files of the project. This separation of concerns is as much a nuisance as it is an advantage: Someone looking into the source code of the project will - so I suppose - either browse the implementation or the unit tests (or more commonly: the implementation only). The advantage of unit tests being another viewpoint to your classes is lost - those two viewpoints are just too far apart IMO. Annotated test methods: Any modern unit testing framework I know allows developers to create dedicated test methods, annotating them (@test) and embedding them in the project code. The big drawback I see here is that the project files get cluttered. Even if these methods are separated using a comment header (like UNIT TESTS below this line) it just bloats the class unnecessarily. Test files within the same folders as the implementation files: Our file naming convention dictates that PHP files containing classes (one class per file) should end with .class.php. I could imagine that putting unit tests regarding a class file into another one ending on .test.php would render the tests much more present to other developers without tainting the class. Although it bloats the project folders, instead of the implementation files, this is my favorite so far, but I have my doubts: I would think others have come up with this already, and discarded this option for some reason (i.e. I have not seen a java project with the files Foo.java and FooTest.java within the same folder.) Maybe it's because java developers make heavier use of IDEs that allow them easier access to the tests, whereas in PHP no big editors have emerged (like eclipse for java) - many devs I know use vim/emacs or similar editors with little support for PHP development per se. What is your experience with any of these unit test placements? Do you have another convention I haven't listed here? Or am I just overrating unit test accessibility to reviewing developers?


  • RSpec test failing looking for a new set of eyes

    - by TheDelChop
    Guys, Here my issuse: I've got two models: class User < ActiveRecord::Base # Setup accessible (or protected) attributes for your model attr_accessible :email, :username has_many :tasks end class Task < ActiveRecord::Base belongs_to :user end with this simple routes.rb file TestProj::Application.routes.draw do |map| resources :users do resources :tasks end end this schema: ActiveRecord::Schema.define(:version => 20100525021007) do create_table "tasks", :force => true do |t| t.string "name" t.integer "estimated_time" t.datetime "created_at" t.datetime "updated_at" t.integer "user_id" end create_table "users", :force => true do |t| t.string "email" t.string "password" t.string "password_confirmation" t.datetime "created_at" t.datetime "updated_at" t.string "username" end add_index "users", ["email"], :name => "index_users_on_email", :unique => true add_index "users", ["username"], :name => "index_users_on_username", :unique => true end and this controller for my tasks: class TasksController < ApplicationController before_filter :load_user def new @task = @user.tasks.new end private def load_user @user = User.find(params[:user_id]) end end Finally here is my test: require 'spec_helper' describe TasksController do before(:each) do @user = Factory(:user) @task = Factory(:task) end #GET New describe "GET New" do before(:each) do User.stub!(:find).with(@user.id.to_s).and_return(@user) @user.stub_chain(:tasks, :new).and_return(@task) end it "should return a new Task" do @user.tasks.should_receive(:new).and_return(@task) get :new, :user_id => @user.id end end end This test fails with the following output: 1) TasksController GET New should return a new Task Failure/Error: get :new, :user_id => @user.id undefined method `abstract_class?' for Object:Class # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:1234:in `class_of_active_record_descendant' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:900:in `base_class' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:655:in `reset_table_name' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:647:in `table_name' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:932:in `arel_table' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:927:in `unscoped' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/named_scope.rb:30:in `scoped' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:405:in `find' # ./app/controllers/tasks_controller.rb:15:in `load_user' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:431:in `_run__1954900289__process_action__943997142__callbacks' # 
/home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:405:in `send' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:405:in `_run_process_action_callbacks' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:88:in `send' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:88:in `run_callbacks' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/abstract_controller/callbacks.rb:17:in `process_action' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/action_controller/metal/rescue.rb:8:in `process_action' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/abstract_controller/base.rb:113:in `process' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/abstract_controller/rendering.rb:39:in `sass_old_process' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/gems/haml-3.0.0.beta.3/lib/sass/plugin/rails.rb:26:in `process' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/action_controller/metal/testing.rb:12:in `process_with_new_base_test' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/action_controller/test_case.rb:390:in `process' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/action_controller/test_case.rb:328:in `get' # ./spec/controllers/tasks_controller_spec.rb:20 # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:209:in `inject' Can anybody help me understand what's going on here? It seems to be an RSpec problem since the controller action actually works, but I could be wrong. Thanks, Joe


  • How to unit test synchronized code

    - by gillJ
    Hi, I am new to Java and junit. I have the following peice of code that I want to test. Would appreciate if you could send your ideas about what's the best way to go about testing it. Basically, the following code is about electing a leader form a Cluster. The leader holds a lock on the shared cache and services of the leader get resumed and disposed if it somehow looses the lock on the cache. How can i make sure that a leader/thread still holds the lock on the cache and that another thread cannot get its services resumed while the first is in execution? public interface ContinuousService { public void resume(); public void pause(); } public abstract class ClusterServiceManager { private volatile boolean leader = false; private volatile boolean electable = true; private List<ContinuousService> services; protected synchronized void onElected() { if (!leader) { for (ContinuousService service : services) { service.resume(); } leader = true; } } protected synchronized void onDeposed() { if (leader) { for (ContinuousService service : services) { service.pause(); } leader = false; } } public void setServices(List<ContinuousService> services) { this.services = services; } @ManagedAttribute public boolean isElectable() { return electable; } @ManagedAttribute public boolean isLeader() { return leader; } public class TangosolLeaderElector extends ClusterServiceManager implements Runnable { private static final Logger log = LoggerFactory.getLogger(TangosolLeaderElector.class); private String election; private long electionWaitTime= 5000L; private NamedCache cache; public void start() { log.info("Starting LeaderElector ({})",election); Thread t = new Thread(this, "LeaderElector ("+election+")"); t.setDaemon(true); t.start(); } public void run() { // Give the connection a chance to start itself up try { Thread.sleep(1000); } catch (InterruptedException e) {} boolean wasElectable = !isElectable(); while (true) { if (isElectable()) { if (!wasElectable) { log.info("Leadership requested on election: {}",election); wasElectable = isElectable(); } boolean elected = false; try { // Try and get the lock on the LeaderElectorCache for the current election if (!cache.lock(election, electionWaitTime)) { // We didn't get the lock. cycle round again. // This code to ensure we check the electable flag every now & then continue; } elected = true; log.info("Leadership taken on election: {}",election); onElected(); // Wait here until the services fail in some way. while (true) { try { Thread.sleep(electionWaitTime); } catch (InterruptedException e) {} if (!cache.lock(election, 0)) { log.warn("Cache lock no longer held for election: {}", election); break; } else if (!isElectable()) { log.warn("Node is no longer electable for election: {}", election); break; } // We're fine - loop round and go back to sleep. } } catch (Exception e) { if (log.isErrorEnabled()) { log.error("Leadership election " + election + " failed (try bfmq logs for details)", e); } } finally { if (elected) { cache.unlock(election); log.info("Leadership resigned on election: {}",election); onDeposed(); } // On deposition, do not try and get re-elected for at least the standard wait time. try { Thread.sleep(electionWaitTime); } catch (InterruptedException e) {} } } else { // Not electable - wait a bit and check again. 
if (wasElectable) { log.info("Leadership NOT requested on election ({}) - node not electable",election); wasElectable = isElectable(); } try { Thread.sleep(electionWaitTime); } catch (InterruptedException e) {} } } } public void setElection(String election) { this.election = election; } @ManagedAttribute public String getElection() { return election; } public void setNamedCache(NamedCache nc) { this.cache = nc; }
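    The leader-flag guard itself can be unit tested by hammering onElected() from several threads and counting how often the services are actually resumed; the cache locking really needs an integration test against a live cache. A rough sketch of the mutual-exclusion part, assuming the poster's ClusterServiceManager and ContinuousService are on the classpath (test class and helper names are invented):

        import static org.junit.Assert.assertEquals;

        import java.util.Collections;
        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicInteger;

        import org.junit.Test;

        public class ClusterServiceManagerTest {

            // Minimal concrete subclass so onElected() can be driven directly.
            static class TestableManager extends ClusterServiceManager {
                void elect() { onElected(); }
            }

            @Test
            public void concurrentElectionResumesServicesOnlyOnce() throws InterruptedException {
                final AtomicInteger resumeCalls = new AtomicInteger();
                ContinuousService countingService = new ContinuousService() {
                    public void resume() { resumeCalls.incrementAndGet(); }
                    public void pause() { }
                };

                final TestableManager manager = new TestableManager();
                manager.setServices(Collections.singletonList(countingService));

                // Release all threads at once to maximise the chance of a real race.
                final CountDownLatch start = new CountDownLatch(1);
                ExecutorService pool = Executors.newFixedThreadPool(8);
                for (int i = 0; i < 8; i++) {
                    pool.execute(new Runnable() {
                        public void run() {
                            try {
                                start.await();
                                manager.elect();
                            } catch (InterruptedException ignored) { }
                        }
                    });
                }
                start.countDown();
                pool.shutdown();
                pool.awaitTermination(5, TimeUnit.SECONDS);

                // The synchronized leader flag must ensure resume() ran exactly once.
                assertEquals(1, resumeCalls.get());
            }
        }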


  • Kernel Logging disabled?

    - by Tiffany Walker
    uname -a Linux host 2.6.32-279.9.1.el6.i686 #1 SMP Tue Sep 25 20:26:47 UTC 2012 i686 i686 i386 GNU/Linux And start ups: ls /etc/init.d/ abrt-ccpp certmonger dovecot irqbalance matahari-broker mdmonitor nfs proftpd rpcbind single ypbind abrtd cgconfig functions kdump matahari-host messagebus nfslock psacct rpcgssd smartd abrt-oops cgred haldaemon killall matahari-network mysqld ntpd qpidd rpcidmapd sshd acpid cpuspeed halt ktune matahari-rpc named ntpdate quota_nld rpcsvcgssd sssd atd crond httpd lfd ma tahari-service netconsole oddjobd rdisc rsyslog sysstat auditd csf ip6tables lvm2-lvmetad matahari-sysconfig netfs portreserve restorecond sandbox tuned autofs cups iptables lvm2-monitor matahari-sysconfig-console network postfix rngd saslauthd udev-post But when I installed CSF/LFD I am getting nothing. LFD does not create lfd.log and nor are any blocks being logged in /var/log/messages either from the firewall. This is not natural. I looked for klogd but maybe I am looking in the wrong place for it to see if it is enabled? ls /etc/init.d/syslog ls: cannot access /etc/init.d/syslog: No such file or directory Also noticed no syslog? Also noticed this: csf -d 84.113.21.201 Adding 84.113.21.201 to csf.deny and iptables DROP... iptables: No chain/target/match by that name. iptables: No chain/target/match by that name. I've never seen this before and this is a dedicated box. Also: ./csftest.pl Testing ip_tables/iptable_filter...OK Testing ipt_LOG...OK Testing ipt_multiport/xt_multiport...OK Testing ipt_REJECT...OK Testing ipt_state/xt_state...OK Testing ipt_limit/xt_limit...OK Testing ipt_recent...OK Testing xt_connlimit...OK Testing ipt_owner/xt_owner...OK Testing iptable_nat/ipt_REDIRECT...OK Testing iptable_nat/ipt_DNAT...OK RESULT: csf should function on this server iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination

