Search Results

Search found 10078 results on 404 pages for 'smoke testing'.


  • What kind of credentials/prerequisites do you need to be a professional penetration tester?

    - by dfafa
    Does it take more than knowing BT4? Are there people who just run a scanner, with no real labor involved? Would you be expected to code your own exploits rather than downloading them from milw0rm, and to discover entry into a system by yourself? In other words, do you have to think outside the box even when there are so many tools that make the job a lot easier? Would you ever be expected to write your own scanners, exploits and so on? I am also curious how people are able to write long pages of hex addresses that magically cause some type of memory overflow; how are people guessing the hex values for game hacks, for instance? Are certifications important? What about formal school education? I am a CS major.

    Read the article

  • Multiple asserts in single test?

    - by Gern Blandston
    Let's say I want to write a function that validates an email address with a regex. I write a little test to check my function, then write the actual function and make it pass. However, I can come up with a bunch of different ways to test the same function ([email protected]; [email protected]; test.test.com, etc.). Do I put all the incantations I need to check into the same single test, with several asserts, or do I write a new test for every single thing I can think of? Thanks!
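
    A minimal sketch of the data-driven middle ground (one test method, one logical assertion looped over several inputs); the EmailValidator class, its isValid method and the sample addresses are hypothetical stand-ins for the function under test:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class EmailValidatorTest {

            // Hypothetical validator standing in for the real function under test.
            static class EmailValidator {
                boolean isValid(String address) {
                    return address != null && address.matches("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");
                }
            }

            @Test
            public void recognisesValidAndInvalidAddresses() {
                EmailValidator validator = new EmailValidator();

                // Each row: input address, expected result.
                Object[][] cases = {
                    { "name@example.com", true },
                    { "first.last@example.co.uk", true },
                    { "test.test.com", false },   // no @ at all
                    { "name@", false },           // missing domain
                };

                for (Object[] c : cases) {
                    assertEquals("address: " + c[0], c[1], validator.isValid((String) c[0]));
                }
            }
        }

    The failure message names the offending address, which keeps most of the diagnostic value of one-test-per-case without the boilerplate.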

    Read the article

  • How to test the performance of a user's PC in/for Flash?

    - by Jan P.
    Hey, I'm a developer on a nice space MMO built in Flash. On new PCs performance is quite good, but some features shouldn't be enabled on older PCs because the framerate drops badly when we do. Flash wasn't made for this, but hey, pushing boundaries is fun. An example is fullscreen mode. Of course every user can manually enable it, but "advertising" it to a user with an older PC would be a bad idea, while for the Alienware crowd it would be dumb not to. So I want to find out how "capable" a user's PC is, to decide whether I should enable or disable certain features for them. Any ideas? Thanks, Sujan

    Read the article

  • Grails 1.3.3: controller.redirectArgs.action not populated

    - by Matthias Hryniszak
    Does anyone know what happened to controller.redirectArgs.action in the latest version of Grails (1.3.3)? It used to work properly, but now I get an NPE when I use it.

        class FooController {
            def someRedirect = {
                redirect(action: "bar")
            }
        }

        class FooControllerTests extends grails.test.ControllerUnitTestCase {
            void testSomeRedirect() {
                controller.someRedirect()
                assertEquals "bar", controller.redirectArgs.action
            }
        }

    In this case controller.redirectArgs is already null...

    Read the article

  • How to unit-test a Wicket component with AbstractAjaxTimerBehavior?

    - by Juha Syrjälä
    I have a Wicket panel with an AbstractAjaxTimerBehavior that I'd like to unit test. How can I trigger an Ajax event during the unit test that ends up calling AbstractAjaxTimerBehavior's onTimer(AjaxRequestTarget target) method?

        behavior = new AbstractAjaxTimerBehavior(Duration.seconds(pollingPeriodInSeconds)) {
            protected void onTimer(AjaxRequestTarget target) {
                // how to unit test this?
            }
        };
        add(behavior);
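
    One possible approach, sketched under two assumptions: that the Wicket version in use has WicketTester#startPanel and WicketTester#executeBehavior, and that the panel keeps a reference to its timer behavior and exposes it for tests. PollingPanel and getTimerBehavior() below are hypothetical placeholders for your own panel, so this will not compile as-is:

        import org.apache.wicket.util.tester.WicketTester;
        import org.junit.Test;

        public class PollingPanelTest {

            @Test
            public void timerCallbackRefreshesThePanel() {
                WicketTester tester = new WicketTester();

                // PollingPanel and getTimerBehavior() are placeholders: the panel
                // keeps a field referencing its AbstractAjaxTimerBehavior and exposes
                // it (package-private is enough) so the test can reach it.
                PollingPanel panel = (PollingPanel) tester.startPanel(PollingPanel.class);

                // Drive the behavior's Ajax callback directly instead of waiting for
                // the real timer; this ends up in onTimer(AjaxRequestTarget).
                tester.executeBehavior(panel.getTimerBehavior());

                // ...assert on whatever onTimer() is supposed to change...
            }
        }

    The key design choice is making the behavior reachable from the test rather than keeping it as an anonymous local variable.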

    Read the article

  • Is Assert.Fail() considered bad practice?

    - by Mendelt
    I use Assert.Fail a lot when doing TDD. I'm usually working on one test at a time, but when I get ideas for things I want to implement later I quickly write an empty test whose method name indicates what I want to implement, as a sort of todo list. To make sure I don't forget, I put an Assert.Fail() in the body. When trying out xUnit.net I found they hadn't implemented Assert.Fail. Of course you can always Assert.IsTrue(false), but this doesn't communicate my intention as well. I got the impression Assert.Fail wasn't implemented on purpose. Is this considered bad practice? If so, why?

    @Martin Meredith: That's not exactly what I do. I do write a test first and then implement code to make it work. Usually I think of several tests at once, or I think about a test to write while I'm working on something else. That's when I write an empty failing test to remember it. By the time I get to writing the test, I neatly work test-first.

    @Jimmeh: That looks like a good idea. Ignored tests don't fail, but they still show up in a separate list. I'll have to try that out.

    @Matt Howells: Great idea. NotImplementedException communicates intention better than Assert.Fail() in this case.

    @Mitch Wheat: That's what I was looking for. It seems it was left out to prevent it being abused in another way than the way I abuse it.
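
    For comparison, a minimal sketch of the same todo-list idea expressed in JUnit terms rather than xUnit.net (the test name is made up): an ignored placeholder stays visible in the report without failing the build, and an explicit exception documents that the body is intentionally unwritten.

        import org.junit.Ignore;
        import org.junit.Test;

        public class AccountServiceTodoTest {

            // Placeholder for a test to be written later: @Ignore keeps it listed
            // as skipped instead of failing, and the exception makes the intent
            // obvious if the annotation is ever removed.
            @Ignore("TODO: implement overdraft handling")
            @Test
            public void rejectsWithdrawalBeyondOverdraftLimit() {
                throw new UnsupportedOperationException("not implemented yet");
            }
        }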

    Read the article

  • JUnit terminates child threads

    - by Marco
    Hi to all, when I test the execution of a method that creates a child thread, the JUnit test ends before the child thread does and kills it. How do I force JUnit to wait for the child thread to complete its execution? Thanks
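
    A minimal sketch of the usual remedy: give the test a handle on the background work and block until it finishes, for example with a CountDownLatch (or Thread.join()). The worker below is a stand-in for whatever thread the method under test spawns.

        import static org.junit.Assert.assertTrue;

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.TimeUnit;

        import org.junit.Test;

        public class BackgroundWorkTest {

            @Test
            public void waitsForChildThreadToFinish() throws Exception {
                CountDownLatch done = new CountDownLatch(1);

                Thread worker = new Thread(() -> {
                    // ...the work the method under test normally spawns...
                    done.countDown();
                });
                worker.start();

                // Block the test thread until the child signals completion, with a
                // timeout so a hung worker fails the test rather than stalling it.
                assertTrue("worker did not finish in time", done.await(5, TimeUnit.SECONDS));
                worker.join();
            }
        }

    If the method under test does not expose its thread, the same idea applies one level up: let it accept an Executor or a completion callback so the test can supply one it can wait on.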

    Read the article

  • Passing a parameter/object to a Ruby unit test before running it using TestRunner

    - by Nahir Khan
    I'm building a tool that automates a process, then runs some tests on its own results, then goes on to do some other stuff. In trying to clean up my code I have created a separate file that just has the test case class. Before I can run these tests, I have to pass the class a couple of parameters/objects. The problem is that I can't seem to find a way to pass a parameter/object to the test class. Right now I am thinking of generating a YAML file and reading it in the test class, but it feels "wrong" to use a temporary file for this. If anyone has a nicer solution that would be great!

    Edit: example code of what I am doing right now:

        #!/usr/bin/ruby
        require 'test/unit/ui/console/testrunner'
        require 'yaml'
        require 'TS_SampleTestSuite'

        automatingSomething()
        importantInfo = getImportantInfo()

        File.open('filename.yml', 'w') do |f|
          f.puts importantInfo.to_yaml
        end

        Test::Unit::UI::Console::TestRunner.run(TS_SampleTestSuite)

    In the example above TS_SampleTestSuite needs importantInfo, so the first "test case" is a method that just reads the information back in from the YAML file filename.yml. I hope that clears up some confusion.

    Read the article

  • How to test a site rigorously?

    - by Sarfraz
    Hello, I recently created a big portal site and it's time to put it to the test. How do you test a site rigorously? What are the ways and tools for that? Can we somehow mimic hundreds of virtual users visiting the site to see how it handles the load? The test should cover both security and speed. Thanks in advance.
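
    Dedicated tools (JMeter, ab, Gatling and the like) are the usual answer for the load part. As a rough sketch of what they do under the hood, here is a minimal Java snippet that fires a burst of concurrent requests and prints each response time; the URL and user count are placeholders, and a real load test also needs ramp-up, think time and error accounting.

        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class MiniLoadTest {

            public static void main(String[] args) throws Exception {
                final String target = "http://localhost:8080/";   // placeholder URL
                int users = 100;                                   // simulated concurrent users

                ExecutorService pool = Executors.newFixedThreadPool(users);
                List<Future<Long>> latencies = new ArrayList<>();

                for (int i = 0; i < users; i++) {
                    latencies.add(pool.submit(() -> {
                        long start = System.nanoTime();
                        HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
                        conn.getResponseCode();                    // issue the request, wait for the reply
                        conn.disconnect();
                        return (System.nanoTime() - start) / 1_000_000;   // latency in ms
                    }));
                }

                for (Future<Long> latency : latencies) {
                    System.out.println("response time: " + latency.get() + " ms");
                }
                pool.shutdown();
            }
        }

    For the security side, a scanner plus manual review is the usual complement; a load script like this only covers speed.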

    Read the article

  • Where is the help.py for Android's monkeyrunner?

    - by Keyboardsurfer
    Hi, I just can't find the help.py file needed to create the API reference for monkeyrunner. The command described in the Android reference, monkeyrunner <format> help.py <outfile>, does not work when I call monkeyrunner html help.py /path/to/place/the/doc.html. It's quite obvious that the help.py file is not found; monkeyrunner also tells me "Can't open specified script file". But a locate on my system doesn't turn up any help.py that has anything to do with monkeyrunner or Android. So my question is: where did they hide the help.py file for creating the API reference?

    Read the article

  • How to access objects at run-time in QTP?

    - by Onnesh
    We have a function that accesses two types of controls, a button and a list box, in a standard Windows app. The function takes only the control name as an argument, so there is no way QTP could understand what type of control it is. How do I resolve this? Should I write two separate functions, one for the button and another for the list box?

    Read the article

  • Test assertions for tuples with floats

    - by Space_C0wb0y
    I have a function that returns a tuple that, among other things, contains a float value. Usually I use assertAlmostEquals to compare floats, but that does not work with tuples. Also, the tuple contains other data types as well. Currently I am asserting on every element of the tuple individually, but that gets to be too much for a list of such tuples. Is there any good way to write assertions for such cases?

    Read the article

  • How can I unit test a PHP class method that executes a command-line program?

    - by acoulton
    For a PHP application I'm developing, I need to read the current git revision SHA, which of course I can get easily by using shell_exec or backticks to execute the git command-line client. I have put this call into a method of its own so that I can easily isolate and mock it for the rest of my unit tests. So my class looks a bit like this:

        class Task_Bundle {

            public function execute() {
                // Do things
                $revision = $this->git_sha();
                // Do more things
            }

            protected function git_sha() {
                return `git rev-parse --short HEAD`;
            }
        }

    Of course, although I can test most of the class by mocking git_sha, I'm struggling to see how to test the actual git_sha() method, because I don't see a way to create a known state for it. I don't think there's any real value in a unit test that also calls git rev-parse and compares the results. I was wondering about at least asserting that the command had been run, but I can't see any way to get a history of shell commands executed by PHP; even if I specify that PHP should use bash rather than sh, the history list comes up empty, I presume because the separate backtick executions are separate shell sessions. I'd love to hear any suggestions for how I might test this, or is it OK to just leave the method untested and be careful with it when the app is maintained in the future?

    Read the article

  • JMeter HTTP Cache Manager: unable to cache everything that is cached by the browser

    - by chinmay brahma
    I used the HTTP Cache Manager to cache the files that are cached by the browser. For some pages I am successful: the number of files cached in JMeter equals the number of files cached by the browser. But in some cases I found that fewer files are cached than the browser caches: using JMeter I found only 5 files being cached, while in a real browser 12 files get cached. Thanks in advance

    Read the article

  • How to unit test private methods in BDD / TDD?

    - by robert_d
    I am trying to program according to behavior-driven development, which says that no line of code should be written without first writing a failing unit test. My question is, how do I use BDD with private methods? How can I unit test private methods? Is there a better solution than (a) making private methods public first and then making them private again once I write the public method that uses them, or (b) in C#, making all private methods internal and using the InternalsVisibleTo attribute? Robert
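
    Purely as an illustration of the mechanics (and in Java rather than C#): test code that does reach into private methods usually goes through reflection, as in the sketch below, where PriceCalculator and applyDiscount are made-up examples. Whether this fits BDD at all is exactly the question; the more common advice is to exercise private code through the public method that uses it.

        import static org.junit.Assert.assertEquals;

        import java.lang.reflect.Method;

        import org.junit.Test;

        public class PriceCalculatorTest {

            // Hypothetical class under test with a private helper.
            static class PriceCalculator {
                private double applyDiscount(double price) {
                    return price * 0.9;
                }
            }

            @Test
            public void appliesTenPercentDiscount() throws Exception {
                // Reflection makes the private method callable from the test.
                Method m = PriceCalculator.class.getDeclaredMethod("applyDiscount", double.class);
                m.setAccessible(true);

                double result = (Double) m.invoke(new PriceCalculator(), 100.0);
                assertEquals(90.0, result, 0.0001);
            }
        }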

    Read the article

  • How to configure an OCUnit test bundle for a framework?

    - by GuidoMB
    I've been developing a Mac OS X framework and I want to use OCUnit in my Xcode 3.2.1 project. I've followed several tutorials on how to configure an OCUnit test bundle. The problem is that when I create a test case that uses a function defined in one of the framework's sources, I get a build error telling me that the symbol is not found. I made the test bundle dependent on my project's target as the tutorial said, but that doesn't seem to be the problem. First I thought I could solve this by dragging the framework's source files into the Compile Sources section of the test bundle target, but then all the symbols referenced from those source files started to show up in the build errors, so that does not seem to be a good solution/idea. How can I configure my unit test bundle so it builds properly?

    Read the article

  • Convert C# unit test names to English (testdox style)

    - by Igor Zevaka
    I have a whole bunch of unit tests written in MbUnit and I would like to generate plain English sentences from the test names. The concept is introduced here: http://dannorth.net/introducing-bdd. This is from the article:

        public class CustomerLookupTest extends TestCase {
            testFindsCustomerById() {
                ...
            }
            testFailsForDuplicateCustomers() {
                ...
            }
            ...
        }

    renders something like this:

        CustomerLookup
        - finds customer by id
        - fails for duplicate customers
        - ...

    Unfortunately the tool quoted in the above article (testdox) is Java-based. Is there one for .NET? It sounds like something pretty simple to write, but I simply don't have the bandwidth and want to use something already written.
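
    The name-to-sentence transformation itself is only a little string manipulation. A rough sketch of the splitting logic, shown here in Java since that is where testdox comes from (a .NET port of this part would be near-identical):

        public class TestDoxNames {

            // Turns "testFindsCustomerById" into "finds customer by id".
            static String toSentence(String methodName) {
                return methodName
                        .replaceFirst("^test", "")                   // drop the "test" prefix
                        .replaceAll("([a-z0-9])([A-Z])", "$1 $2")    // split camelCase words
                        .toLowerCase();
            }

            public static void main(String[] args) {
                System.out.println(toSentence("testFindsCustomerById"));          // finds customer by id
                System.out.println(toSentence("testFailsForDuplicateCustomers")); // fails for duplicate customers
            }
        }

    The remaining work in a real tool is enumerating the test classes and methods via reflection and grouping the sentences under each fixture name.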

    Read the article

  • Is PetraVM Jinx Beta 1 good?

    - by Brian T Hannan
    PetraVM recently came out with a beta release of their Jinx product. Has anyone checked it out yet? Any feedback? By good, I mean: (1) easy to use, (2) intuitive, (3) useful, (4) doesn't take a lot of code to integrate... those kinds of things. Thanks guys!

    Read the article

  • ASP.NET MVC - How to Unit Test boundaries in the Repository pattern?

    - by JK
    Given a basic repository interface:

        public interface IPersonRepository
        {
            void AddPerson(Person person);
            List<Person> GetAllPeople();
        }

    with a basic implementation:

        public class PersonRepository : IPersonRepository
        {
            public void AddPerson(Person person)
            {
                ObjectContext.AddObject(person);
            }

            public List<Person> GetAllPeople()
            {
                return ObjectSet.AsQueryable().ToList();
            }
        }

    how can you unit test this in a meaningful way? Since it crosses the boundary and physically updates and reads from the database, that's not a unit test, it's an integration test. Or is it wrong to want to unit test this in the first place? Should I only have integration tests on the repository? I've been Googling the subject, and blogs often say to make a stub that implements the IRepository:

        public class PersonRepositoryTestStub : IPersonRepository
        {
            private List<Person> people = new List<Person>();

            public void AddPerson(Person person)
            {
                people.Add(person);
            }

            public List<Person> GetAllPeople()
            {
                return people;
            }
        }

    But that doesn't unit test PersonRepository; it tests the implementation of PersonRepositoryTestStub (not very helpful).

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (Perl 5.8, with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan; it's generally a bad idea not to catch your unit test dying prematurely. What is the most idiomatic way of running a configurable number of tests, assuming neither no_plan nor done_testing() is used?

    Details: my unit tests usually take the form:

        use Test::More;

        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #2", $param1, $param2, ... ]
            # ,...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in a BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = ();   # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set;   # Must do!!! Since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) {   # Same
            run_test($test);
        }

        sub run_test { }   # Same

    I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration, once in BEGIN {} and once after it.

    Read the article
