Search Results

Search found 24508 results on 981 pages for 'joel test'.

  • Trying to find resources to learn how to test software [closed]

    - by Davek804
    First off, yes, this is a general question, and I'd be perfectly happy to move this to another portion of SE, but I didn't see a more fitting sub. Basically, I am hoping a more experienced QA tester can come along and really fill in some basics for me. So far, websites seem to be sparse in terms of explaining the languages involved, basic practices, etc. So I'm sorry in advance if this is too general, but towards the end of this post I ask some specific questions, in case it's just absolutely unacceptable to speak in general terms.

    I just landed a position as Junior Systems and QA Engineer with a social media startup. Their QA and testing is almost nonexistent, so if I do a good job, I imagine I'll find a lot of bugs and have a secure role in the business. I'm pretty good with the systems aspect of my role, but I need to learn more about the QA and testing aspects. We run hardware that's touchscreen based – the user can use and interact with the devices. So, in terms of my QA role, in the short term I need to build scripts to test the hardware/software as a 'user' to try to uncover bugs. First off, what language should these scripts be written in? Does anyone have some examples? What about the longer-term 'automated testing'? I'm familiar with regression testing as the developer adds in new features, sure, but the 50,000 other types of testing, not so much.

    Most of our hardware runs .NET/C# code, with some of the servers running Java – but I don't expect to need to run tests on the Java side at this point. I hope to meet with one developer today and try to get a good idea of the output from the hardware, so that I can 'mock' the data that gets sent to the servers, to try to bug-test. Eventually, we will be moving the hardware closer to where I live and work, so that I can test both virtually and on real hardware. A lot of the bugs we're dealing with now are like this: the local server, which the kiosks report their data to, gets updated from the kiosks, but the remote server does not. Or, vice versa: when the user registers on a kiosk, the remote server updates but the local server does not. But yeah, without much more detail, I imagine a lot of this info isn't helpful.

    I've bought the book "How Google Tests Software", but it's really more a book about how their software testing is different from Microsoft's. It doesn't teach how to test so much as why their methods are better. Does anyone have a good book that I can buy? An ebook maybe? My local Barnes and Noble kind of had a terrible selection. I also figure a book from 2005 is not necessarily that good either.

  • How can I test when a feature was added to Perl?

    - by Eric Strom
    Are there any services similar to codepad that will allow you to test Perl constructs on old versions of perl? Ideally, a system where you could enter an expression and it will tell you the oldest version of perl that it will work with. Of course it's possible to use CPANTS for this, but that seems like an abuse of the service (if only for making the BackPan bigger). And it could take several days/weeks to get decent test coverage on old versions.
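    In the meantime, a minimal local sketch of the brute-force approach (assuming you have a handful of old interpreters installed, e.g. via perlbrew; the install paths below are hypothetical):

        use strict;
        use warnings;

        # Candidate interpreters, oldest first (hypothetical paths).
        my @perls = (
            '/opt/perls/5.6.2/bin/perl',
            '/opt/perls/5.8.9/bin/perl',
            '/opt/perls/5.10.1/bin/perl',
        );

        # The construct to probe: 'say' only compiles on 5.10+.
        my $construct = 'use feature q(say); say q(hello)';

        for my $perl (@perls) {
            # system() returns 0 when the snippet compiles and runs cleanly.
            my $status = system($perl, '-e', $construct);
            printf "%s: %s\n", $perl, $status == 0 ? 'works' : 'fails';
            last if $status == 0;   # oldest perl that accepts the construct
        }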

  • Is there a command to test an SQL query without executing it? ( MySQL or ANSI SQL )

    - by Petruza
    Is there anything like this: TEST DELETE FROM user WHERE somekey = 45; that would return any errors (for example, that somekey doesn't exist, or some constraint violation, or anything) and report how many rows would be affected, but not execute the query? I know you can easily turn any query into a SELECT query that has no write or delete effect on any row, but that can lead to errors, and it's not very practical if you want to test and debug many queries.
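    Not a built-in TEST keyword, as far as I know, but one common sketch (assuming a transactional engine such as InnoDB) is to run the statement inside a transaction and roll it back:

        START TRANSACTION;
        DELETE FROM user WHERE somekey = 45;  -- errors (unknown column, constraint violations) surface here
        SELECT ROW_COUNT();                   -- how many rows the DELETE touched
        ROLLBACK;                             -- nothing is persisted

    Note this does execute the query (triggers fire, locks are taken) before undoing it, so it is a dry run only in the sense that the data ends up unchanged.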

  • TFS: Branching. How to map a branch to IIS for local test

    - by DarkJackO
    Hi, I think there's something I don't understand about branching. How can I run my website from localhost to test changes made on a branch? Let's say my branch structure is:

        Dev
          - UI
          - App
        Main
          - UI
          - App

    The UI and App projects from Main are mapped in my IIS, and it's all working well. Now I want to make some changes in the UI project on the Dev branch, and I want to test those changes before I merge them to Main. Thanks
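    One possible setup (a hedged sketch; the site name, port, and workspace path below are hypothetical): create a second IIS site that points at the Dev branch's local workspace folder, so the branch runs side by side with Main:

        %windir%\system32\inetsrv\appcmd add site /name:"MySite-Dev" /bindings:http/*:8081: /physicalPath:"C:\workspaces\Dev\UI"

    Then http://localhost:8081 serves the Dev branch while the existing site keeps serving Main, and nothing has to be remapped when you merge.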

  • sysbench memory test on ec2 small instance

    - by caribio
    I'm seeing a problem with the sysbench memory test (the default version that's compiled in). This is on Ubuntu Maverick, sysbench installed via apt-get install sysbench. Running the same thing on Ubuntu @ Rackspace worked just as expected. While the CPU and I/O tests worked fine on EC2 servers, the memory test just runs without doing anything (notice the 0M in the test results). The instance used was the publicly available 'stock' Ubuntu image with no changes to it:

        ./ec2-run-instances ami-ccf405a5 --instance-type m1.small --region us-east-1 --key mykey

    Supplying more arguments (such as: --memory-block-size=1K --memory-total-size=102400M) didn't help. What am I doing wrong? Thanks.

        sysbench --num-threads=4 --test=memory run
        sysbench 0.4.12:  multi-threaded system evaluation benchmark

        Running the test with following options:
        Number of threads: 4

        Doing memory operations speed test
        Memory block size: 1K
        Memory transfer size: 0M
        Memory operations type: write
        Memory scope type: global
        Threads started!
        Done.

        Operations performed: 0 (0.00 ops/sec)
        0.00 MB transferred (0.00 MB/sec)

        Test execution summary:
            total time:                          0.0003s
            total number of events:              0
            total time taken by event execution: 0.0000
            per-request statistics:
                 min:                            18446744073709.55ms
                 avg:                            0.00ms
                 max:                            0.00ms

        Threads fairness:
            events (avg/stddev):           0.0000/0.00
            execution time (avg/stddev):   0.0000/0.00

  • java development of products and automation development

    - by momo
    I'm a Java developer working on J2EE development, on real products (not in-house tools). I found another job working on the development of test automation frameworks / continuous integration. Will developing test automation frameworks affect my skill set? Is it considered less reputable and less needed? (The reason I'm confused is that the new role's salary is higher.) Do you think I should give up this offer and continue seeking a development role within the domain technologies (Java / J2EE)?

  • Thinking Sphinx not working in test mode

    - by J. Pablo Fernández
    I'm trying to get Thinking Sphinx to work in test mode in Rails. Basically this:

        ThinkingSphinx::Test.init
        ThinkingSphinx::Test.start

    freezes and never comes back. My database configuration is the same for test and development:

        dry_setting: &dry_setting
          adapter: mysql
          host: localhost
          encoding: utf8
          username: rails
          password: blahblah

        development:
          <<: *dry_setting
          database: proj_devel
          socket: /tmp/mysql.sock # sphinx requires it

        test:
          <<: *dry_setting
          database: proj_test
          socket: /tmp/mysql.sock # sphinx requires it

    and sphinx.yml:

        development:
          enable_star: 1
          min_infix_len: 2
          bin_path: /opt/local/bin

        test:
          enable_star: 1
          min_infix_len: 2
          bin_path: /opt/local/bin

        production:
          enable_star: 1
          min_infix_len: 2

    The generated config files, config/development.sphinx.conf and config/test.sphinx.conf, only differ in database names, directories and similar things; nothing functional. Generating the index for development goes without an issue:

        $ rake ts:in
        (in /Users/pupeno/proj)
        default config
        Generating Configuration to /Users/pupeno/proj/config/development.sphinx.conf
        Sphinx 0.9.8.1-release (r1533)
        Copyright (c) 2001-2008, Andrew Aksyonoff

        using config file '/Users/pupeno/proj/config/development.sphinx.conf'...
        indexing index 'user_core'...
        collected 7 docs, 0.0 MB
        collected 0 attr values
        sorted 0.0 Mvalues, 100.0% done
        sorted 0.0 Mhits, 99.8% done
        total 7 docs, 422 bytes
        total 0.098 sec, 4320.80 bytes/sec, 71.67 docs/sec
        indexing index 'user_delta'...
        collected 0 docs, 0.0 MB
        collected 0 attr values
        sorted 0.0 Mvalues, nan% done
        total 0 docs, 0 bytes
        total 0.010 sec, 0.00 bytes/sec, 0.00 docs/sec
        distributed index 'user' can not be directly indexed; skipping.

    but when I try to do it for test it freezes:

        $ RAILS_ENV=test rake ts:in
        (in /Users/pupeno/proj)
        DEPRECATION WARNING: require "activeresource" is deprecated and will be removed in Rails 3. Use require "active_resource" instead.. (called from /Users/pupeno/.rvm/gems/ruby-1.8.7-p249/gems/activeresource-2.3.5/lib/activeresource.rb:2)
        default config
        Generating Configuration to /Users/pupeno/proj/config/test.sphinx.conf
        Sphinx 0.9.8.1-release (r1533)
        Copyright (c) 2001-2008, Andrew Aksyonoff

        using config file '/Users/pupeno/proj/config/test.sphinx.conf'...
        indexing index 'user_core'...

    It's been there for more than 10 minutes; the user table has 4 records. The database directories look quite different, but I don't know what to make of it:

        $ ls -l db/sphinx/development/
        total 96
        -rw-r--r-- 1 pupeno staff  196 Mar 11 18:10 user_core.spa
        -rw-r--r-- 1 pupeno staff 4982 Mar 11 18:10 user_core.spd
        -rw-r--r-- 1 pupeno staff  417 Mar 11 18:10 user_core.sph
        -rw-r--r-- 1 pupeno staff 3067 Mar 11 18:10 user_core.spi
        -rw-r--r-- 1 pupeno staff   84 Mar 11 18:10 user_core.spm
        -rw-r--r-- 1 pupeno staff 6832 Mar 11 18:10 user_core.spp
        -rw-r--r-- 1 pupeno staff    0 Mar 11 18:10 user_delta.spa
        -rw-r--r-- 1 pupeno staff    1 Mar 11 18:10 user_delta.spd
        -rw-r--r-- 1 pupeno staff  417 Mar 11 18:10 user_delta.sph
        -rw-r--r-- 1 pupeno staff    1 Mar 11 18:10 user_delta.spi
        -rw-r--r-- 1 pupeno staff    0 Mar 11 18:10 user_delta.spm
        -rw-r--r-- 1 pupeno staff    1 Mar 11 18:10 user_delta.spp

        $ ls -l db/sphinx/test/
        total 0
        -rw-r--r-- 1 pupeno staff 0 Mar 11 18:11 user_core.spl
        -rw-r--r-- 1 pupeno staff 0 Mar 11 18:11 user_core.tmp0
        -rw-r--r-- 1 pupeno staff 0 Mar 11 18:11 user_core.tmp1
        -rw-r--r-- 1 pupeno staff 0 Mar 11 18:11 user_core.tmp2
        -rw-r--r-- 1 pupeno staff 0 Mar 11 18:11 user_core.tmp7

    Nothing gets added to a log when this happens. Any ideas where to go from here?

    I can run the command line manually:

        /opt/local/bin/indexer --config config/test.sphinx.conf --all

    which generates the same output as rake ts:in, so no help there.

  • simpletest - Why does setReturnValue() seem to change behaviour depending on whether the test is run in isolation?

    - by JW
    I am using SimpleTest version 1.0.1 for a unit test. I create a new mock object within a test method and on it I do:

        $MockDbAdaptor->setReturnValue('query', 1);

    Now, when I run this in a standalone unit test, my tested object is happy to see 1 returned when query() is called on the mock db adaptor. However, when this exact same test is run as part of my 'all_tests' TestSuite, the test fails. This happens because a call to the mock's query() method does not appear to return any value, thus causing my test subject to complain and trigger an unexpected exception that fails the test. So the behaviour of setReturnValue() seems to change depending on whether the test is run in isolation or not. I can get it to work in both standalone and TestSuite contexts by using this instead:

        $MockDbAdaptor->setReturnValueAt(0, 'query', 1);

    So my immediate problem can be fixed... but it feels like a hack. If I create a new mock within a test method, then why is the setReturnValue() behaviour affected by the context in which the test class instance is run? It feels like a bug.

  • C# unit test code questions

    - by 5YrsLaterDBA
    We have started using C#'s built-in unit test functionality. I had Visual Studio 2008 generate the unit test code for me, and I have a few questions about the generated code. The following is code I copied from the generated file:

        #region Additional test attributes
        //
        //You can use the following additional attributes as you write your tests:
        //
        //Use ClassInitialize to run code before running the first test in the class
        //[ClassInitialize()]
        //public static void MyClassInitialize(TestContext testContext)
        //{
        //}
        //
        //Use ClassCleanup to run code after all tests in a class have run
        //[ClassCleanup()]
        //public static void MyClassCleanup()
        //{
        //}
        //
        //Use TestInitialize to run code before running each test
        //[TestInitialize()]
        //public void MyTestInitialize()
        //{
        //}
        //
        //Use TestCleanup to run code after each test has run
        //[TestCleanup()]
        //public void MyTestCleanup()
        //{
        //}
        //
        #endregion

    If I need the initialize and cleanup methods, do I need to remove the "My" from the method names when I enable them?

        //Use ClassInitialize to run code before running the first test in the class
        //[ClassInitialize()]
        //public static void MyClassInitialize(TestContext testContext)
        //{
        //}

    Do I need to call the MyClassInitialize method somewhere before running the first test, or will it be called automatically before the other methods are called? Similar questions for the other three methods: are they called automatically at the right times?
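    For reference, a minimal sketch of how these hooks are typically enabled. MSTest locates them by attribute, not by name, and invokes them automatically at the right points, so the "My..." names can be changed freely (the class and method names below are arbitrary, not from the generated file):

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class WidgetTests
        {
            // Runs once, automatically, before the first test in this class.
            [ClassInitialize]
            public static void SetUpFixture(TestContext testContext) { }

            // Runs automatically before each [TestMethod].
            [TestInitialize]
            public void SetUp() { }

            [TestMethod]
            public void DoesSomething() { }

            // Runs automatically after each [TestMethod].
            [TestCleanup]
            public void TearDown() { }

            // Runs once after all tests in this class have finished.
            [ClassCleanup]
            public static void TearDownFixture() { }
        }

    You never call any of these yourself; the test runner does.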

  • How can I effectively test against the Windows API?

    - by Billy ONeal
    I'm still having issues justifying TDD to myself. As I have mentioned in other questions, 90% of the code I write does absolutely nothing but Call some Windows API functions and Print out the data returned from said functions. The time spent coming up with the fake data that the code needs to process under TDD is incredible -- I literally spend 5 times as much time coming up with the example data as I would spend just writing application code. Part of this problem is that often I'm programming against APIs with which I have little experience, which forces me to write small applications that show me how the real API behaves so that I can write effective fakes/mocks on top of that API. Writing implementation first is the opposite of TDD, but in this case it is unavoidable: I do not know how the real API behaves, so how on earth am I going to be able to create a fake implementation of the API without playing with it? I have read several books on the subject, including Kent Beck's Test Driven Development, By Example, and Michael Feathers' Working Effectively with Legacy Code, which seem to be gospel for TDD fanatics. Feathers' book comes close in the way it describes breaking out dependencies, but even then, the examples provided have one thing in common: The program under test obtains input from other parts of the program under test. My programs do not follow that pattern. Instead, the only input to the program itself is the system upon which it runs. How can one effectively employ TDD on such a project?
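    For what it's worth, the usual answer from the Feathers school is to put a thin seam in front of the API once the throwaway spike programs have taught you how it behaves, and then test the formatting/processing logic against a fake. A minimal sketch (the interface and names here are illustrative, not from the question):

        #include <string>
        #include <vector>

        // Seam: the only thing the application logic sees.
        struct ProcessLister {
            virtual ~ProcessLister() {}
            virtual std::vector<std::wstring> RunningProcesses() = 0;
        };

        // The production implementation wraps the real Win32 calls
        // (EnumProcesses and friends); it stays thin and is exercised
        // manually or by integration tests, not unit tests.

        // Test fake: hand-rolled canned data, no Windows API involved.
        struct FakeProcessLister : ProcessLister {
            std::vector<std::wstring> canned;
            std::vector<std::wstring> RunningProcesses() { return canned; }
        };

        // The printing logic takes a ProcessLister&, so unit tests feed it
        // a FakeProcessLister while the spike knowledge lives in the wrapper.

    This doesn't remove the need to learn the API first; it just keeps that learning out of the unit-tested core.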

  • Eclipse doesn't see my new junit test

    - by morgancodes
    I'm using Eclipse to run the tests in a single JUnit (4) test class. The tests in the class all run just fine. Then I add an additional test and run the class through the test runner in Eclipse again. Only the old tests are run; the new test isn't seen by Eclipse. There's no error or anything; it's just as if Eclipse is looking at an old version of the test. If I run the tests using Maven, everything works fine. Additionally, after I run the tests in Maven, Eclipse can see and run the new test correctly. Any ideas what's going on? Any ideas how to get Eclipse's test runner to see my new test cases?

  • Can Django flush its database(s) between every unit test

    - by mikem
    Django (1.2 beta) will reset the database(s) between every test that runs, meaning each test runs on an empty DB. However, the database(s) are not flushed. One of the effects of flushing the database is that the auto_increment counters are reset. Consider a test which pulls data out of the database by primary key:

        class ChangeLogTest(django.test.TestCase):
            def test_one(self):
                do_something_which_creates_two_log_entries()
                log = LogEntry.objects.get(id=1)
                assert_log_entry_correct(log)
                log = LogEntry.objects.get(id=2)
                assert_log_entry_correct(log)

    This will pass because only two log entries were ever created. However, if another test is added to ChangeLogTest and it happens to run before test_one, the primary keys of the log entries are no longer 1 and 2; they might be 2 and 3. Now test_one fails. This is actually a two-part question:

    1. Is it possible to force ./manage.py test to flush the database between each test case?
    2. Since Django doesn't flush the DB between each test by default, maybe there is a good reason. Does anyone know?
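    A hedged sketch of one workaround, reusing the model and helper names from the question: django.test.TransactionTestCase flushes the tables between tests instead of rolling back a wrapping transaction (on most backends that flush also resets the auto_increment counters, at a real cost in speed), and ordering by id avoids baking absolute key values into the assertions at all:

        from django.test import TransactionTestCase

        class ChangeLogTest(TransactionTestCase):
            def test_one(self):
                do_something_which_creates_two_log_entries()
                # Look the entries up relative to what this test created,
                # rather than assuming their ids are literally 1 and 2.
                first, second = LogEntry.objects.order_by('id')
                assert_log_entry_correct(first)
                assert_log_entry_correct(second)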

  • Test All Features of Windows Phone 7 On Your PC

    - by Matthew Guay
    Are you a developer, or just excited about the upcoming Windows Phone 7, and want to try it out now? Thanks to free developer tools from Microsoft and a new unlocked emulator rom, you can try out most of the exciting features today from your PC.

    Last week we showed you how to try out Windows Phone 7 on your PC and get started developing for the upcoming new devices. We noticed, however, that the emulator only contains Internet Explorer Mobile and some settings. This is still interesting to play around with, but it isn't the full Windows Phone 7 experience. Some enterprising tweakers discovered that more applications were actually included in the emulator, but were simply hidden from users. Developer Dan Ardelean then figured out how to re-enable these features, and released a tweaked emulator rom so everyone can try out all of the Windows Phone 7 features for themselves. Here we'll look at how you can run this new emulator image on your PC, and then look at some interesting features in Windows Phone 7.

    Editor note: this modified emulator image is not official, and isn't sanctioned by Microsoft. Use your own judgment when choosing to download and use the emulator.

    Setting Up the Emulator Rom

    To test-drive Windows Phone 7 on your PC, you must first download and install the Windows Phone Developer Tools CTP (link below). Follow the steps we showed you last week in: Try out Windows Phone 7 on your PC today. Once it's installed, go ahead and run the default emulator as we showed, to make sure everything works OK.

    Once the Windows Phone Developer Tools are installed and running, download the new emulator rom from XDA Forums (link below). This will be a zip file, so extract it first. Note where you save the file, as you will need the path in the next step.

    Now, to run our new emulator image, we need to launch the emulator from the command line and point it at the new rom image. To do this, browse to the correct directory, depending on whether you're running a 32-bit or 64-bit version of Windows:

        32 bit: C:\Program Files\Microsoft XDE\1.0\
        64 bit: C:\Program Files (x86)\Microsoft XDE\1.0\

    Hold your Shift key down, right-click in the folder, and choose "Open command window here". At the command prompt, enter XDE.exe followed by the location of your new rom image. Here, we downloaded the rom to our download folder, so at the command prompt we entered:

        XDE.exe C:\Users\Matthew\Downloads\WM70Full\WM70Full.bin

    The emulator loads … with the full Windows Phone 7 experience!

    To make it easier, let's make a shortcut on our desktop that loads the emulator with the new rom directly. Right-click on your desktop (or any folder you want to create the shortcut in), select New, and then Shortcut. Now, in the box, we need to enter the path of the emulator executable followed by the location of our rom; both items must be in quotes. So, in our test, we entered the following:

        32 bit: "C:\Program Files\Microsoft XDE\1.0\XDE.exe" "C:\Users\Matthew\Downloads\WM70Full\WM70Full.bin"
        64 bit: "C:\Program Files (x86)\Microsoft XDE\1.0\XDE.exe" "C:\Users\Matthew\Downloads\WM70Full\WM70Full.bin"

    Make sure to enter the correct location of the new emulator rom for your computer, and keep the two items in separate quotes. Click Next when you've entered the location, name the shortcut (we named it Windows Phone 7, but enter whatever you'd like), and click Finish when you're done. You should now have a nice Windows Phone icon and a fully functional shortcut! Double-click it to run the Windows Phone 7 emulator as above.
    Features in the Unlocked Windows Phone 7 Emulator

    So let's look at what you can do with this new emulator. Almost everything you've seen in demos from the Mobile World Congress and Mix'10 is right here for you to play with. Here's the application menu, which you can access by clicking on the arrow at the top of the home screen – it shows how much they've packed into this! And, of course, even the home screen itself shows much more activity than it did in the original emulator.

    Let's check out some of these sections. Here's Zune running on Windows Phone 7, and the Zune Marketplace. The animations are beautiful, so be sure to check this out yourself. The new picture hub is much nicer than any picture viewer included with Windows Mobile in the past… Stay productive, and on schedule, with the new Calendar. The XBOX hub gives us only a hint of things to come; the links to games are simply placeholders for now.

    Here's a look at the Office hub. This doesn't show up on the home screen right now, but you can access it in the applications menu. Office obviously still has a lot of work left on it, but even at a glance it looks like it includes a lot more functionality than Office Mobile in Windows Mobile 6. Here's a look at each of the three apps – Word, Excel, and OneNote – and the formatting palette in the Office apps.

    This emulator also includes a lot more settings than the default one, including settings for individual applications. You can even activate the screen lock, and try out the lift-to-peek-or-unlock feature… Finally, this version of Windows Phone 7 includes a very nice SystemInfo app with an advanced task manager. We hope this is still available when the actual phones are released.

    Conclusion

    If you're excited about the upcoming Windows Phone 7 series, or simply want to learn more about what's coming, this is a great way to test it out. With these exciting new hubs and applications, there's something here for everyone. Let us know what you like most about Windows Phone 7 and what your favorite app or hub is.

    Links

    Please note: these roms are not officially supported by Microsoft, and could be taken down.

    Download the unlocked Windows Phone 7 emulator from XDA Forums – click the link in this post to download
    How the unlocked emulator image was created

  • DTracing a PHPUnit Test: Looking at Functional Programming

    - by cj
    Here's a quick example of using DTrace Dynamic Tracing to work out what a PHP code base does. I was reading the article Functional Programming in PHP by Patkos Csaba and wondering how efficient this style of programming is. I thought this would be a good time to fire up DTrace and see what is going on. Since DTrace is "always available", even on production machines (once PHP is compiled with --enable-dtrace), this was easy to do.

    I have Oracle Linux with the UEK3 kernel and PHP 5.5 with DTrace static probes enabled, as described in DTrace PHP Using Oracle Linux 'playground' Pre-Built Packages. I installed the Functional Programming sample code and Sebastian Bergmann's PHPUnit. Although PHPUnit is included in the Functional Programming example, I found it easier to separately download and use its phar file:

        cd ~/Desktop
        wget -O master.zip https://github.com/tutsplus/functional-programming-in-php/archive/master.zip
        wget https://phar.phpunit.de/phpunit.phar
        unzip master.zip

    I created a DTrace D script, functree.d:

        #pragma D option quiet

        self int indent;

        BEGIN {
            topfunc = $1;
        }

        php$target:::function-entry
        /copyinstr(arg0) == topfunc/
        {
            self->follow = 1;
        }

        php$target:::function-entry
        /self->follow/
        {
            self->indent += 2;
            printf("%*s %s%s%s\n", self->indent, "->",
                arg3 ? copyinstr(arg3) : "", arg4 ? copyinstr(arg4) : "", copyinstr(arg0));
        }

        php$target:::function-return
        /self->follow/
        {
            printf("%*s %s%s%s\n", self->indent, "<-",
                arg3 ? copyinstr(arg3) : "", arg4 ? copyinstr(arg4) : "", copyinstr(arg0));
            self->indent -= 2;
        }

        php$target:::function-return
        /copyinstr(arg0) == topfunc/
        {
            self->follow = 0;
        }

    This prints a PHP script function call tree starting from a given PHP function name. This name is passed as a parameter to DTrace, and assigned to the variable topfunc when the DTrace script starts. With this D script, choose a PHP function that isn't recursive, or modify the script to set self->follow = 0 only when all calls to that function have unwound.

    From looking at the sample FunSets.php code and its PHPUnit test driver FunSetsTest.php, I settled on one test function to trace:

        function testUnionContainsAllElements() { ... }

    I invoked DTrace to trace function calls invoked by this test with:

        # dtrace -s ./functree.d -c 'php phpunit.phar \
            /home/cjones/Desktop/functional-programming-in-php-master/FunSets/Tests/FunSetsTest.php' \
            '"testUnionContainsAllElements"'

    The core of this command is a call to PHP to run PHPUnit on the FunSetsTest.php script. Outside that, DTrace is called and the PID of PHP is passed to the D script $target variable so the probes fire just for this invocation of PHP. Note the quoting around the PHP function name passed to DTrace: the parameter must include double quotes so DTrace knows it is a string. The output is:

        PHPUnit 3.7.28 by Sebastian Bergmann.
        ......
        -> FunSetsTest::testUnionContainsAllElements
          -> FunSets::singletonSet
          <- FunSets::singletonSet
          -> FunSets::singletonSet
          <- FunSets::singletonSet
          -> FunSets::union
          <- FunSets::union
          -> FunSets::contains
            -> FunSets::{closure}
              -> FunSets::contains
                -> FunSets::{closure}
                <- FunSets::{closure}
              <- FunSets::contains
            <- FunSets::{closure}
          <- FunSets::contains
          -> PHPUnit_Framework_Assert::assertTrue
            -> PHPUnit_Framework_Assert::isTrue
            <- PHPUnit_Framework_Assert::isTrue
            -> PHPUnit_Framework_Assert::assertThat
              -> PHPUnit_Framework_Constraint::count
              <- PHPUnit_Framework_Constraint::count
              -> PHPUnit_Framework_Constraint::evaluate
                -> PHPUnit_Framework_Constraint_IsTrue::matches
                <- PHPUnit_Framework_Constraint_IsTrue::matches
              <- PHPUnit_Framework_Constraint::evaluate
            <- PHPUnit_Framework_Assert::assertThat
          <- PHPUnit_Framework_Assert::assertTrue
          -> FunSets::contains
            -> FunSets::{closure}
              -> FunSets::contains
                -> FunSets::{closure}
                <- FunSets::{closure}
              <- FunSets::contains
              -> FunSets::contains
                -> FunSets::{closure}
                <- FunSets::{closure}
              <- FunSets::contains
            <- FunSets::{closure}
          <- FunSets::contains
          -> PHPUnit_Framework_Assert::assertTrue
            -> PHPUnit_Framework_Assert::isTrue
            <- PHPUnit_Framework_Assert::isTrue
            -> PHPUnit_Framework_Assert::assertThat
              -> PHPUnit_Framework_Constraint::count
              <- PHPUnit_Framework_Constraint::count
              -> PHPUnit_Framework_Constraint::evaluate
                -> PHPUnit_Framework_Constraint_IsTrue::matches
                <- PHPUnit_Framework_Constraint_IsTrue::matches
              <- PHPUnit_Framework_Constraint::evaluate
            <- PHPUnit_Framework_Assert::assertThat
          <- PHPUnit_Framework_Assert::assertTrue
          -> FunSets::contains
            -> FunSets::{closure}
              -> FunSets::contains
                -> FunSets::{closure}
                <- FunSets::{closure}
              <- FunSets::contains
              -> FunSets::contains
                -> FunSets::{closure}
                <- FunSets::{closure}
              <- FunSets::contains
            <- FunSets::{closure}
          <- FunSets::contains
          -> PHPUnit_Framework_Assert::assertFalse
            -> PHPUnit_Framework_Assert::isFalse
              -> {closure}
                -> main
                <- main
              <- {closure}
            <- PHPUnit_Framework_Assert::isFalse
            -> PHPUnit_Framework_Assert::assertThat
              -> PHPUnit_Framework_Constraint::count
              <- PHPUnit_Framework_Constraint::count
              -> PHPUnit_Framework_Constraint::evaluate
                -> PHPUnit_Framework_Constraint_IsFalse::matches
                <- PHPUnit_Framework_Constraint_IsFalse::matches
              <- PHPUnit_Framework_Constraint::evaluate
            <- PHPUnit_Framework_Assert::assertThat
          <- PHPUnit_Framework_Assert::assertFalse
        <- FunSetsTest::testUnionContainsAllElements
        ...

        Time: 1.85 seconds, Memory: 3.75Mb

        OK (9 tests, 23 assertions)

    The periods correspond to the successful tests before and after (and from) the test I was tracing. You can see the function entry ("->") and return ("<-") points. Cross-checking with the testUnionContainsAllElements() source code confirms the two singletonSet() calls, one union() call, two assertTrue() calls and finally an assertFalse() call. These assertions have a contains() call as a parameter, so contains() is called before the PHPUnit assertion functions are run. You can see contains() being called recursively, and how the closures are invoked. If you want to focus on the application logic and suppress the PHPUnit function trace, you could turn off tracing when assertions are being checked by adding D clauses that check the entry and exit of assertFalse() and assertTrue().
    But if you want to see all of PHPUnit's code flow, you can remove the functree.d clauses that set and unset self->follow, and instead toggle the variable in the request-startup and request-shutdown probes:

        php$target:::request-startup
        {
            self->follow = 1;
        }

        php$target:::request-shutdown
        {
            self->follow = 0;
        }

    Be prepared for a large amount of output!

  • Install Quartz.Net as a windows service and Test installation

    - by Tarun Arora
    In this blog post I'll be covering:

    01: Where to download Quartz.net from
    02: How to install Quartz.net as a Windows service
    03: How to test the Quartz.net installation

    If you are new to Quartz.net, I would recommend reading my blog post giving a brief introduction to Quartz.net.

    01 – Where to download Quartz.net from?

        http://sourceforge.net/projects/quartznet/files/quartznet/

    Currently, Quartz.Net 2.0.1 is the recommended download version.

    02 – How to install Quartz.net as a Windows service

    1. Go to the download location and unzip the Quartz.net package.
    2. Navigate to the folder Quartz.Net \ Server \ bin – this is where you will find the installers for the different .NET versions of the Quartz.net packages. For example, in the screenshot above you can see the Quartz.net .NET 3.5 and .NET 4 packages.
    3. Open up the Quartz.net .NET 4.0 folder; this folder contains the files you need to install Quartz.net as a Windows service.
    4. Copy the contents of the folder Downloads\Quartz.NET-2.0.1\server\bin\4.0 to the folder %program files%\Quartz.net.
    5. Open up a new CMD prompt as an administrator and run the below command to install Quartz.net as a Windows service:

        Quartz.Server.exe install

    6. How do I know that the Quartz.Net service has been installed as a Windows service? Go to the run prompt and type 'services.msc'; you should now see all the Windows services installed on your machine. Navigate down to look for Quartz.Net. The service installs itself with an automatic startup type and logs on as 'Local System'. You can easily change this to whichever account you would prefer to run the service as.

    If you wanted to name the Quartz service something else, that's also possible… Can I change the default display name of the Quartz.net Windows service? Yes, you can! Navigate to C:\Program Files (x86)\Quartz.Net\ and open up the config file 'quartz.config':

    - You can change the instance name
    - You can change the default thread count of 10
    - You can change the port that the service listens on (by default this is port 555)

    A blog post with more configuration details can be found here.

    03 – Test the Quartz.Net Windows service installation

    So, I have installed Quartz.Net as a Windows service; how do I test whether my installation has been successful? Open up CMD as an administrator and run the below command:

        C:\Program Files (x86)\Quartz.Net> Quartz.Server.exe –i

    Since by default the Quartz.net Windows service writes INFO level diagnostics (this can be changed from Quartz.Server.exe.config), you should see the service information show up on the console. For instance, in the example above I can see that the service is running in NON CLUSTERED mode, is currently not started, and is in standby mode with 0 jobs executed so far…

    This was the second in a series of posts on enterprise scheduling using Quartz.net; in the next post I'll be covering how to run your first scheduled task using the Quartz.net Windows service. Thank you for taking the time to read this blog post. If you enjoyed it, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
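    As a teaser for that next post, here is a minimal sketch of scheduling a job against the Quartz.Net 2.x API (assuming the Quartz assemblies from the package above are referenced; the job and identity names are illustrative):

        using System;
        using System.Threading;
        using Quartz;
        using Quartz.Impl;

        public class HelloJob : IJob
        {
            public void Execute(IJobExecutionContext context)
            {
                Console.WriteLine("Hello from Quartz.Net at {0}", DateTime.Now);
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
                scheduler.Start();

                IJobDetail job = JobBuilder.Create<HelloJob>()
                    .WithIdentity("helloJob").Build();
                ITrigger trigger = TriggerBuilder.Create()
                    .StartNow()
                    .WithSimpleSchedule(x => x.WithIntervalInSeconds(10).RepeatForever())
                    .Build();

                scheduler.ScheduleJob(job, trigger);

                // Keep the console demo alive long enough to see a few firings.
                Thread.Sleep(TimeSpan.FromSeconds(30));
                scheduler.Shutdown();
            }
        }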

  • Rescue overdue offshore projects and convince management to use automated tests

    - by oazabir
    I have published two articles on CodeProject recently. One is a story where an offshore project was two months overdue; my friend who runs it was paying the team from his own pocket and drowning in an ever increasing number of change requests, and it tells how we brainstormed together to come out of that situation: Tips and Tricks to rescue overdue projects. The next one is about convincing management to go for automated tests and give developers extra time per sprint, at the cost of reduced productivity for a couple of sprints. It's hard to negotiate this with even dev leads, let alone managers. Whenever you tell them there are going to be fewer features/bug fixes delivered for the next 3 or 4 sprints because we want to automate the tests and reduce manual QA effort, everyone gets furious and kicks you out of the meeting. Especially in a startup, where every sprint is jam-packed with new features and priority bug fixes to satisfy various stakeholders, including the VCs, it's very hard to communicate the benefits of automated tests across the board. Let me tell you the story of one of my startups, where I had the pleasure to argue about this and came out victorious: How to convince developers and management to use automated tests instead of manual tests. If you like these, please vote for me!

  • What are the disadvantages of automated testing?

    - by jkohlhepp
    There are a number of questions on this site that give plenty of information about the benefits that can be gained from automated testing. But I didn't see anything that represented the other side of the coin: what are the disadvantages? Everything in life is a tradeoff and there are no silver bullets, so surely there must be some valid reasons not to do automated testing. What are they? Here are a few that I've come up with:

    - Requires more initial developer time for a given feature
    - Requires a higher skill level of team members
    - Increased tooling needs (test runners, frameworks, etc.)
    - Complex analysis is required when a failed test is encountered – is this test obsolete due to my change, or is it telling me I made a mistake?

    Edit: I should say that I am a huge proponent of automated testing, and I'm not looking to be convinced to do it. I'm looking to understand what the disadvantages are, so that when I go to my company to make a case for it, I don't look like I'm throwing around the next imaginary silver bullet. Also, I'm explicitly not looking for someone to dispute my examples above. I am taking as true that there must be some disadvantages (everything has trade-offs), and I want to understand what those are.

  • Database Connectivity Test with UDL File

    - by Ben Griswold
    I bounced around between projects a lot last week. What each project had in common was the need to validate at least one SQL connection. Whether you have SQL tools like SSMS installed or not, this is a very easy task if you are aware of UDL (Universal Data Link) files. Create a new file and name it anything, as long as it has the .udl extension. Open the file and choose a provider. Click Next >> or navigate to the Connection tab to provide connection information. Once you provide server and login credentials, the database list will populate. At this point, you know the connection is valid, but go ahead and click the Test Connection button anyway. On the final tab, you can provide extra connection information like Application Name, which can come in handy. The All tab is beneficial if you want to build a valid connection string to include in your own applications. If you save the file and then open it in Notepad, you'll find said connection string:

        Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=(local);Application Name=TestApp

    I hope this tip helps save you some time. How do you test if you don't have SSMS installed?
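    Since the UDL output is a standard OLE DB connection string, it can be dropped straight into code as well — a quick sketch (the connection string is the one produced above):

        using System;
        using System.Data.OleDb;

        class UdlSmokeTest
        {
            static void Main()
            {
                // Same string the UDL file produced above.
                const string connStr =
                    "Provider=SQLOLEDB.1;Integrated Security=SSPI;" +
                    "Persist Security Info=False;Initial Catalog=master;" +
                    "Data Source=(local);Application Name=TestApp";

                using (var conn = new OleDbConnection(connStr))
                {
                    conn.Open();   // throws if the server or credentials are wrong
                    Console.WriteLine("Connected: " + conn.State);
                }
            }
        }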

  • Silverlight 4, MVVM and Test-Driven Development

    - by Martin Hinshelwood
    As part of his UK tour, Microsoft's Jesse Liberty will be talking in Edinburgh for an evening on Silverlight 4. [Register Now, there are some places left]

    The Talk

    - MVVM and Silverlight to build test-driven programs
    - Understanding refactoring and dependency injection
    - A walk-through of a non-trivial application

    The Speaker

    Jesse Liberty, Silverlight Geek, is a Developer Community Program Manager for Microsoft (US). Lately he has been focused on component-based, test-driven, cross-platform line-of-business application development, and has led the development of the open source Silverlight HyperVideo Platform. Liberty is the author of over two dozen books, and his blog is a required resource for Silverlight programmers. His twenty years of programming experience include stints as a Distinguished Software Engineer at AT&T, Vice President of Human-Computer Interaction at Citibank, and Software Architect at PBS/Learning Link.

    The Venue

    We are meeting at Microsoft's offices in Edinburgh in Waterloo Place. This is the building on the corner of North Bridge at the east end of Princes Street. Parking can be found at the nearby Greenside Row car park, just off Leith Walk (used for the Omni Centre). The venue is approximately 2-3 minutes' walk from Edinburgh Waverley train station.

    The Agenda

    18:30 Doors open
    19:00 Welcome
    19:10 Part 1
    20:00 Break
    20:10 Part 2
    20:50 Feedback and prizes
    21:00 End

    [Register Now, there are some places left]

  • A Generic RIDC Test Program

    - by Kevin Smith
    Many times I have found it useful to use a Java program that communicates with WebCenter Content (WCC) using RIDC for testing. I might not have access to the web GUI, or I might need to test a service running as a specific user. In the past I had created a number of "one off" programs that submitted specific services, e.g. GET_SEARCH_RESULTS, DOCINFO, etc. Recently I decided to create a generic RIDC test program that could submit any service with the desired parameters, based on a configuration file. The program gets the following information from the configuration file:

    - WCC connection information (host, port)
    - User to run the service as
    - Service to run
    - Any parameters for the service

    The program will make a connection to the WCC server, send the service request, and print the results of the service call using the getResponseAsString() method. Here is a sample configuration file:

        ridc.host=localhost
        ridc.port=4444
        ridc.user=sysadmin
        ridc.idcservice=GET_SEARCH_RESULTS
        idcservice.QueryText=dDocType <matches> `Document`
        idcservice.SortField=dDocName
        idcservice.SortDesc=ASC

    There is a readme file included in the zip with instructions for how to configure and run the program. The program takes one command line argument, the configuration file name. The configuration file name is optional and defaults to config.properties. If you have any suggestions for improvements, let me know. Right now it only submits a single service call each time you run it. One enhancement I have already thought about would be to allow you to specify multiple services to run in the configuration file. You can do that with the current program by having multiple configuration files and running the program multiple times, each with a different configuration file. You can download the program here.
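    For anyone who hasn't used RIDC before, the core of such a program is only a few lines. Here is a sketch of the service call itself (host, port, user, and parameters are taken from the sample configuration above; it assumes the RIDC jar is on the classpath):

        import oracle.stellent.ridc.IdcClient;
        import oracle.stellent.ridc.IdcClientManager;
        import oracle.stellent.ridc.IdcContext;
        import oracle.stellent.ridc.model.DataBinder;
        import oracle.stellent.ridc.protocol.ServiceResponse;

        public class RidcSmokeTest {
            public static void main(String[] args) throws Exception {
                // Connect over the intradoc protocol to the WCC server.
                IdcClientManager manager = new IdcClientManager();
                IdcClient client = manager.createClient("idc://localhost:4444");
                IdcContext ctx = new IdcContext("sysadmin");

                // Build the service request from the configured values.
                DataBinder binder = client.createBinder();
                binder.putLocal("IdcService", "GET_SEARCH_RESULTS");
                binder.putLocal("QueryText", "dDocType <matches> `Document`");
                binder.putLocal("SortField", "dDocName");
                binder.putLocal("SortDesc", "ASC");

                ServiceResponse response = client.sendRequest(ctx, binder);
                System.out.println(response.getResponseAsString());
            }
        }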

  • What if I can't make my unit test fail in "Red, Green, Refactor" of TDD?

    - by Joshua Harris
    So let's say that I have a test:

        @Test
        public void MoveY_MoveZero_DoesNotMove() {
            Point p = new Point(50.0, 50.0);
            p.MoveY(0.0);
            Assert.assertEquals(50.0, p.Y, 0.0);
        }

    This test then causes me to create the class Point:

        public class Point {
            double X;
            double Y;

            public Point(double x, double y) { X = x; Y = y; }

            public void MoveY(double yDisplace) {
                throw new UnsupportedOperationException("not yet implemented");
            }
        }

    OK. It fails. Good. Then I remove the exception and I get green. Great, but of course I need to test whether it changes the value. So I write a test that calls p.MoveY(10.0) and checks that p.Y is equal to 60.0. It fails, so then I change the function to look like so:

        public void MoveY(double yDisplace) {
            Y += yDisplace;
        }

    Great, now I have green again and I can move on. I've tested not moving and moving in the positive direction, so naturally I should test a negative value. The only problem with this test is that, if I wrote the test correctly, it doesn't fail at first. That means it doesn't fit the principle of "Red, Green, Refactor." Of course, this is a first-world problem of TDD, but getting a fail at first is helpful in that it shows that your test can fail. Otherwise a seemingly innocent test that is passing for incorrect reasons could fail later because it was written wrong. That might not be a problem if it happened 5 minutes later, but what if it happens to the poor sap who inherited your code two years later? What he knows is that MoveY does not work with negative values, because that is what the test is telling him. But it really could work, and the bug could just be in the test. I don't think that would happen in this particular case because the code sample is so simple, but if it were a large complicated system, that might not be the case. It seems crazy to say that I want to fail my tests, but that is an important step in TDD, for good reasons.
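    One pragmatic way to get the "red" anyway, as a sanity check rather than dogma (a sketch, not from the question): temporarily sabotage the implementation, confirm that the new negative-value test is the one that fails, then restore it.

        public void MoveY(double yDisplace) {
            Y -= yDisplace;  // deliberate sabotage: sign flipped
            // A correct negative-value test must now go red; once it
            // does, restore the real body:
            // Y += yDisplace;
        }

    This is essentially a one-off manual mutation test: it proves the new test can fail, and fail for the right reason.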

  • Requesting quality analysis test cases up front of implementation/change

    - by arin
    Recently I was assigned to work on a major requirement that falls somewhere between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps in approaching this problem:

    Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes".

    Knowing the widespread use and effects of this requirement, in case it could not be finished prior to release, I asked if it would be a viable option to scrap the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no".

    Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written (by QA) prior to the implementation and given to me, to aid my comprehension of the task. This was a big no-no for the folks in management, as they failed to understand the approach. Knowing that I had to insist on my request, given my responsibility for the requirement, I insisted, and have fallen out of favor with some of the folks, leaving me in a state of "baffledness".

    Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement, and trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly?

    P.S.: The change request/improvement was cancelled and the implementation was reverted to the prior state, due to the complexity of the problem and lack of time. This only happened after a two-hour-long meeting with other seniors in order to convince the aforementioned folks.
