Search Results

Search found 12844 results on 514 pages for 'manual testing'.


  • man kaio: No manual entry for kaio.

    - by Daniel
    I trussed a process and saw lines like the ones below. I want to know the definition of kaio, but there is no manual entry for it, so where can I find the definition?

        /1: kaio(AIOWRITE, 259, 0x3805B2A00, 8704, 0x099C9E000755D3C0) = 0
        /1: kaio(AIOWRITE, 259, 0x380CF9200, 14336, 0x099CC0000755D5B8) = 0
        /1: kaio(AIOWRITE, 259, 0x381573600, 8704, 0x099CF8000755D7B0) = 0
        /1: kaio(AIOWRITE, 259, 0x381ACA600, 8192, 0x099D1A000755D9A8) = 0
        /1: kaio(AIOWAIT, 0xFFFFFFFF7FFFD620) = 4418032576
        /1:     timeout: 600.000000 sec
        /1: kaio(AIOWAIT, 0xFFFFFFFF7FFFD620) = 4418033080
        /1:     timeout: 600.000000 sec
        /1: kaio(AIOWAIT, 0xFFFFFFFF7FFFD620) = 4418033584
        /1:     timeout: 600.000000 sec

    Read the article

  • Manual alternative to mod_deflate

    - by Bobby Jack
    Say I don't have mod_deflate compiled into apache, and I don't feel like recompiling right now. What are the downsides to a manual approach, e.g. something like:

        AddEncoding x-gzip .gz
        RewriteCond %{HTTP_ACCEPT_ENCODING} gzip
        RewriteRule ^/css/styles.css$ /css/styles.css.gz

    (Note: I'm aware that the specifics of that RewriteCond need to be tweaked slightly.)

    Read the article

  • A good manual on c#

    - by I_S_W
    Hey all; I have learned the C# language after moving away from Java. I have learned what's new in the language and found it pretty interesting, and I'm enthusiastic to attempt implementing some projects, but I have no ideas in mind yet. Does anyone happen to know a good lab manual or something similar? Thanks a lot.

    Read the article

  • how to enter a manual timestamp in getdate()

    - by Arunachalam
    How can I enter a manual timestamp into getdate()?

        select convert(varchar(10), getdate(), 120)

    returns 2010-06-07. Now I want to append my own time to this, like 2010-06-07 10.00.00.000, because I use it in

        select * from sample table where time_stamp = '2010-06-07 10.00.00.000'

    Since I am trying to automate this query I need the current date, but with a different time component. Can it be done?

    Read the article

  • Regression testing with Selenium GRID

    - by Ben Adderson
    A lot of software teams out there are tasked with supporting and maintaining systems that have grown organically over time, and the web team here at Red Gate is no exception. We're about to embark on our first significant refactoring endeavour for some time, and as such it's clearly paramount that the code be tested thoroughly for regressions. Unfortunately we currently find ourselves with a codebase that isn't very testable - the three layers (database, business logic and UI) are currently tightly coupled. This leaves us with the unfortunate problem that, in order to confidently refactor the code, we need unit tests. But in order to write unit tests, we need to refactor the code :S

    To try and ease the initial pain of decoupling these layers, I've been looking into the idea of using UI automation to provide a sort of system-level regression test suite. The idea being that these tests can help us identify regressions whilst we work towards a more testable codebase, at which point the more traditional combination of unit and integration tests can take over. Ending up with a strong battery of UI tests is also a nice bonus :)

    Following on from my previous posts (here, here and here) I knew I wanted to use Selenium. I also figured that this would be a good excuse to put my xUnit [Browser] attribute to good use. Pretty quickly, I had a raft of tests that looked like the following (this particular example uses Reflector Pro). In a nutshell the test traverses our shopping cart and, for a particular combination of number of users and months of support, checks that the price calculations all come up with the correct values.

        [BrowserTheory]
        [Browser(Browsers.Firefox3_6, "http://www.red-gate.com")]
        public void Purchase1UserLicenceNoSupport(SeleniumProvider seleniumProvider)
        {
            //Arrange
            _browser = seleniumProvider.GetBrowser();
            _browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");

            //Act
            _browser = ShoppingCartHelpers.TraverseShoppingCart(_browser, 1, 0, ".NET Reflector Pro");

            //Assert
            var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);

            Assert.Equal(priceResult.Price, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));
            Assert.Equal(priceResult.Tax, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));
            Assert.Equal(priceResult.Total, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total"));
        }

    These tests are pretty concise, with much of the common code in the TraverseShoppingCart() and GetNewPurchasePrice() methods. The (inevitable) problem arose when it came to executing these tests en masse. Selenium is a very slick tool, but it can't mask the fact that UI automation is very slow. To give you an idea, the set of cases that covers all of our products, for all combinations of users and support, came to 372 tests (for now only considering purchases in dollars). In the world of automated integration tests, that's a very manageable number. For unit tests, it's a trifle. However for UI automation, those 372 tests were taking just over two hours to run. Two hours may not sound like a lot, but those cases only cover one of the three currencies we deal with, and only one of the many different ways our systems can be asked to calculate a price. It was already pretty clear at this point that in order for this approach to be viable, I was going to have to find a way to speed things up.
    Up to this point I had been using Selenium Remote Control to automate Firefox, as this was the approach I had used previously and it had worked well. Fortunately, the guys at SeleniumHQ also maintain a tool for executing multiple Selenium RC tests in parallel: Selenium Grid. Selenium Grid uses a central 'hub' to handle allocation of Selenium tests to individual RCs. The Remote Controls simply register themselves with the hub when they start, and then wait to be assigned work. The (for me) really clever part is that, as far as the client driver library is concerned, the grid hub looks exactly the same as a vanilla remote control. To create a new browser session against Selenium RC, the following C# code suffices:

        new DefaultSelenium("localhost", 4444, "*firefox", "http://www.red-gate.com");

    This assumes that the RC is running on the local machine, and is listening on port 4444 (the default). Assuming the hub is running on your local machine, then to create a browser session in Selenium Grid, via the hub rather than directly against the control, the code is exactly the same! Behind the scenes, the hub will take this request and hand it off to one of the registered RCs that provides the "*firefox" execution environment. It will then pass all communications back and forth between the test runner and the remote control transparently. This makes running existing RC tests on a Selenium Grid a piece of cake, as the developers intended. For a more detailed description of exactly how Selenium Grid works, see this page.

    Once I had a test environment capable of running multiple tests in parallel, I needed a test runner capable of doing the same. Unfortunately, this does not currently exist for xUnit (boo!). MbUnit, on the other hand, has the concept of concurrent execution baked right into the framework. So after swapping out my assembly references, and fixing up the resulting mismatches in assertions, my example test now looks like this:

        [Test]
        public void Purchase1UserLicenceNoSupport()
        {
            //Arrange
            ISelenium browser = BrowserHelpers.GetBrowser();
            var db = DbHelpers.GetWebsiteDBDataContext();
            browser.Start();
            browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");

            //Act
            browser = ShoppingCartHelpers.TraverseShoppingCart(browser, 1, 0, ".NET Reflector Pro");
            var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);

            //Assert
            Assert.AreEqual(priceResult.Price, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));
            Assert.AreEqual(priceResult.Tax, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));
            Assert.AreEqual(priceResult.Total, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total"));
        }

    This is pretty much the same as the xUnit version. The exceptions are that the attributes have changed, the //Arrange phase now has to handle setting up the ISelenium object, as the attribute that previously did this has gone away, and the test now sets up its own database connection. Previously I was using a shared database connection, but this approach becomes more complicated when tests are being executed concurrently. To avoid complexity each test has its own connection, which it is responsible for closing. For the sake of readability, I snipped out the code that closes the browser session and the db connection at the end of the test.
    With all that done, there was only one more step required before the tests would execute concurrently. It is necessary to tell the test runner which tests are eligible to run in parallel, via the [Parallelizable] attribute. This can be done at the test, fixture or assembly level. Since I wanted to run all tests concurrently, I marked mine at the assembly level in AssemblyInfo.cs using the following:

        [assembly: DegreeOfParallelism(3)]
        [assembly: Parallelizable(TestScope.All)]

    The second attribute marks all tests in the assembly as [Parallelizable], whilst the first tells the test runner how many concurrent threads to use when executing the tests. I set mine to three since I was using 3 RCs in separate VMs. With everything now in place, I fired up the Icarus* test runner that comes with MbUnit. Executing my 372 tests three at a time instead of one at a time reduced the running time from 2 hours 10 minutes to 55 minutes, an improvement of about 58%! I'd like to have seen an improvement of 66%, but I can understand that either inefficiencies in the hub code, my test environment or the test runner code (or some combination of all three, most likely) contributes to a slightly diminished improvement. That said, I'd love to hear about any experience you have in upping this efficiency. Ultimately though, it was a saving that was most definitely worth having. It makes regression testing via UI automation a far more plausible prospect. The other obvious point to make is that this approach scales far better than executing tests serially. So if ever we need to improve performance, we just register additional RCs with the hub, and up the DegreeOfParallelism.

    * This was just my personal preference for a GUI runner. The MbUnit/Gallio installer also provides a command line runner, a TestDriven.net runner, and a Resharper 4.5 runner. For now at least, Resharper 5 isn't supported.

    Read the article

  • Are "TDD Tests" different to Unit Tests?

    - by asgeo1
    I read this article about TDD and unit testing: http://stephenwalther.com/blog/archive/2009/04/11/tdd-tests-are-not-unit-tests.aspx I think it was an excellent article. The author makes a distinction between what he calls "TDD Tests" and unit testing. They appear to be different tests to him. Previous to reading this article I thought unit tests were a by-product of TDD. I didn't realise you might also create "TDD tests". The author seems to imply that creating unit tests is not enough for TDD as the granularity of a unit test is too small for what we are trying to achieve with TDD. So his TDD tests might test a few classes at once. At the end of the article there is some discussion from the author with some other people about whether there really is a distinction between "TDD Tests" and unit testing. Seems to be some contention around this idea. The example "TDD tests" the author showed at the end of the article just looked like normal MVC unit tests to me - perhaps "TDD tests" vs unit tests is just a matter of semantics? I would like to hear some more opinions on this, and whether there is / isn't a distinction between the two tests.
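
    One way to see the granularity difference being discussed is a minimal C# sketch (all class names are invented for illustration, not taken from the article): the first test pins down a single class in isolation, which is the classic unit-test scope, while the second drives two collaborating classes through one behaviour, which is closer to the coarser-grained style the article calls a "TDD test".

        using Xunit;

        // Hypothetical domain code, invented purely for illustration.
        public class DiscountCalculator
        {
            public decimal DiscountFor(decimal orderTotal)
            {
                return orderTotal >= 100m ? orderTotal * 0.10m : 0m;
            }
        }

        public class OrderService
        {
            private readonly DiscountCalculator _discounts;

            public OrderService(DiscountCalculator discounts)
            {
                _discounts = discounts;
            }

            // Returns the amount actually due after any discount.
            public decimal PlaceOrder(decimal orderTotal)
            {
                return orderTotal - _discounts.DiscountFor(orderTotal);
            }
        }

        public class GranularityExamples
        {
            // Classic unit test: one class, one method, no collaborators.
            [Fact]
            public void Discount_is_ten_percent_for_orders_of_100_or_more()
            {
                Assert.Equal(10m, new DiscountCalculator().DiscountFor(100m));
            }

            // Coarser, behaviour-level test: two real classes wired together,
            // asserting on the outcome rather than on a single unit.
            [Fact]
            public void Placing_a_120_order_charges_108_after_the_discount()
            {
                var service = new OrderService(new DiscountCalculator());
                Assert.Equal(108m, service.PlaceOrder(120m));
            }
        }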

    Read the article

  • Oracle Database 11g R2 is now supported under SAP as well

    - by Lajos Sárecz
    Since Easter, Oracle Database 11g R2 can also be used under SAP. It is well known that SAP only certifies Release 2, so this is genuinely good news for SAP users, who can now take advantage of the following 11g R2 features in an SAP environment:

      • Advanced Compression option (for tables, RMAN backups, expdp, Data Guard network traffic)
      • Real Application Testing
      • Oracle Database 11g Release 2 Database Vault
      • Oracle Database 11g Release 2 RAC
      • Advanced Encryption for tablespaces, RMAN backups, expdp, Data Guard network traffic
      • Direct NFS
      • Deferred Segments
      • Online Patching

    This means, for example, that the SAP database, or the backups made from it, can be compressed. Experience so far shows a compression ratio of 2-4x, depending on the database. The risk of a database upgrade, and of any other change affecting the database infrastructure, can be reduced significantly by using Real Application Testing. Administrator roles can be separated with Database Vault. The new features of Real Application Clusters 11g R2 also become available. With Transparent Data Encryption, tablespaces and backups can be encrypted so that everything remains transparent to the application, while the data cannot be decrypted by accessing the media directly. The Direct NFS client is now supported, which significantly improves NFS access speed. With Deferred Segments, table segments are only allocated when data is actually inserted into the table; this is useful because application installers typically create every table, yet many tables never receive any data, so both installation time and database size can be reduced. Online Patching makes it possible to install patches without downtime.

    I think these are attractive options, and it is worth planning an upgrade of the database under your SAP systems for the near future, since Premier Support for the 10g release expires this summer. For the upgrade I definitely recommend Real Application Testing, which lets you test the upgrade in a test environment under real production workload. Unfortunately, the Sun Oracle Database Machine and Exadata are not yet supported under SAP, because the ASM certification has not yet been completed. According to the news, this is expected to happen by early 2011.

    Read the article

  • Introducing NFakeMail

    - by João Angelo
    Ever had to resort to custom code to control emails sent by an application during integration and/or system testing? If you answered yes then you should definitely continue reading. NFakeMail makes it easier for developers to do integration/system testing on software that sends emails by providing a fake SMTP server. You'll no longer have to manually validate the email sending process. It's developed in C# and IronPython and targets the .NET 4.0 framework. With NFakeMail you can easily automate the testing of components that rely on sending mail as part of their job. Let's take a look at some sample code; we start with a simple class containing a method that sends emails.

        class Notifier
        {
            public void Notify()
            {
                using (var smtpClient = new SmtpClient("localhost", 10025))
                {
                    smtpClient.Send("[email protected]", "[email protected]", "S1", ".");
                    smtpClient.Send("[email protected]", "[email protected]", "S2", "..");
                }
            }
        }

    Then to automate the tests for this method we only need to do the following:

        [Test]
        public void Notify_T001()
        {
            using (var server = new FakeSmtpServer(10025))
            {
                new Notifier().Notify();

                // Verifies two messages are received in the next five seconds
                var messages = server.WaitForMessages(count: 2, timeout: 5000);

                // Verifies the message sender
                Debug.Assert(messages.All(m => m.From.Address == "[email protected]"));
            }
        }

    The created FakeSmtpServer instance will act as a simple SMTP server and intercept the messages sent by the Notifier class. It's even possible to verify some fields of each intercepted message, and by default all intercepted messages are saved to the file system in MIME format.

    Read the article

  • TDD - Outside In vs Inside Out

    - by Songo
    What is the difference between building an application Outside In vs building it Inside Out using TDD? These are the books I have read about TDD and unit testing:

      - Test Driven Development: By Example
      - Test-Driven Development: A Practical Guide
      - Real-World Solutions for Developing High-Quality PHP Frameworks and Applications
      - Test-Driven Development in Microsoft .NET
      - xUnit Test Patterns: Refactoring Test Code
      - The Art of Unit Testing: With Examples in .NET
      - Growing Object-Oriented Software, Guided by Tests (this one was really hard to understand since Java isn't my primary language :))

    Almost all of them explain TDD basics and unit testing in general, but with little mention of the different ways the application can be constructed. Another thing I noticed is that most of these books (if not all) ignore the design phase when writing the application. They focus more on writing the test cases quickly and letting the design emerge by itself. However, I came across a paragraph in xUnit Test Patterns that discussed the ways people approach TDD: there are two schools out there, Outside In vs Inside Out. Sadly the book doesn't elaborate more on this point. I wish to know the main difference between these two approaches. When should I use each one of them? For a TDD beginner, which one is easier to grasp? What are the drawbacks of each method? Are there any materials out there that discuss this topic specifically?

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I am developing high-volume processing systems: mathematical models that calculate various parameters based on millions of records, derived fields computed over millions of records, processing of huge transaction files, and so on. I am well aware of unit testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes etc.), or some SAS process. What approach do you use when developing such systems? I usually develop several tests as stored procedures in a dedicated schema (TEST), run them automatically overnight and check the results. But this is only for T-SQL, and continuous integration is hard. The real problem is testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially if you need a lot of data initialization)? I have an approach derived over the years, but maybe I am just not reading enough articles. So, banking, telecom and risk developers out there: how do you test your mission-critical apps that process millions of records at end of day, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct as you develop it? How do you achieve continuous integration in such an environment (personally I never got there)? How do you test your map-reduce jobs, for example (I do not use Hadoop but it is quite similar)? I hope this is not too open-ended a question. - luke

    Read the article

  • What are the design principles that promote testable code? (designing testable code vs driving design through tests)

    - by bot
    Most of the projects that I work on treat development and unit testing in isolation, which makes writing unit tests at a later stage a nightmare. My objective is to keep testing in mind during the high-level and low-level design phases themselves. I want to know if there are any well-defined design principles that promote testable code. One such principle that I have come to understand recently is Dependency Inversion, through Dependency Injection and Inversion of Control. I have read that there is something known as SOLID. I want to understand whether following the SOLID principles indirectly results in code that is easily testable, and if not, whether there are any well-defined design principles that promote testable code. I am aware that there is something known as Test Driven Development. However, I am more interested in designing code with testing in mind during the design phase itself, rather than driving design through tests. I hope this makes sense. One more question related to this topic: is it alright to refactor an existing product/project, and make changes to code and design, for the purpose of being able to write a unit test case for each module?
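
    As a small illustration of the Dependency Inversion / Dependency Injection point mentioned above, here is a minimal C# sketch (the types and the xUnit-style test are invented for illustration): because the class under test depends on an abstraction supplied through its constructor, a test can substitute a fake and exercise the logic without any real infrastructure.

        using Xunit;

        // The abstraction the business code depends on (Dependency Inversion).
        public interface IPaymentGateway
        {
            bool Charge(decimal amount);
        }

        // Production code receives its dependency through the constructor (Dependency Injection).
        public class CheckoutService
        {
            private readonly IPaymentGateway _gateway;

            public CheckoutService(IPaymentGateway gateway)
            {
                _gateway = gateway;
            }

            public string Checkout(decimal amount)
            {
                return _gateway.Charge(amount) ? "OK" : "Declined";
            }
        }

        // A hand-rolled fake makes the class trivially testable: no real payment system is needed.
        public class FakeGateway : IPaymentGateway
        {
            public bool Result;

            public bool Charge(decimal amount)
            {
                return Result;
            }
        }

        public class CheckoutServiceTests
        {
            [Fact]
            public void Declined_charge_reports_declined()
            {
                var service = new CheckoutService(new FakeGateway { Result = false });
                Assert.Equal("Declined", service.Checkout(100m));
            }
        }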

    Read the article

  • How can "today's date" be varied for unit testing purposes?

    - by ck
    I use VS2008 targeting the .NET 2.0 Framework and, just in case, no, I can't change this :) I have a DateCalculator class. Its method GetNextExpirationDate attempts to determine the next expiration, internally using DateTime.Today as a baseline date. As I was writing unit tests, I realized that I wanted to test GetNextExpirationDate for different 'today' dates. What's the best way to do this? Here are some alternatives I've considered:

      - Expose a property/overloaded method with a baselineDate argument and only use it from the unit test. In actual client code, disregard the property/overloaded method in favour of the method that defaults baselineDate to DateTime.Today. I'm reluctant to do this as it makes the public interface of the DateCalculator class awkward.
      - Create a protected field called baselineDate that is internally set to DateTime.Today. When testing, derive a DateCalculatorForTesting from DateCalculator and set baselineDate via the constructor. It keeps the public interface clean, but still isn't great - baselineDate was made protected and a derived class is required, both solely for testing.
      - Use extension methods. I tried this after adding the ExtensionAttribute, then realized it wouldn't work because extension methods can't access private/protected variables. I initially thought this was really quite an elegant solution. :(

    I'd be interested in hearing what others think.
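
    For what it's worth, a common variation on the second alternative is to route the "what is today" decision through a protected virtual member and override it in a test-only subclass. A minimal C# 2.0-style sketch follows; the expiration rule itself is invented purely for illustration:

        using System;

        public class DateCalculator
        {
            // Production code always asks this hook for "today".
            protected virtual DateTime BaselineDate
            {
                get { return DateTime.Today; }
            }

            public DateTime GetNextExpirationDate()
            {
                // Hypothetical rule, purely for illustration: expiry is the first day of the next month.
                DateTime baseline = BaselineDate;
                return new DateTime(baseline.Year, baseline.Month, 1).AddMonths(1);
            }
        }

        // Lives only in the test project; lets each test pick its own "today".
        public class DateCalculatorForTesting : DateCalculator
        {
            private readonly DateTime _fakeToday;

            public DateCalculatorForTesting(DateTime fakeToday)
            {
                _fakeToday = fakeToday;
            }

            protected override DateTime BaselineDate
            {
                get { return _fakeToday; }
            }
        }

    A test could then assert, for example, that new DateCalculatorForTesting(new DateTime(2010, 6, 7)).GetNextExpirationDate() returns new DateTime(2010, 7, 1), without the production class ever exposing the baseline date publicly.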

    Read the article

  • Django unit testing: South-migrated DB works in MySQL, throws duplicate PK error in PostGreSQL. Am I

    - by unclaimedbaggage
    Hi folks, (worth starting off with a disclaimer: I'm very new to PostgreSQL.) I have a Django site which involves a standard app/tests.py testing file. If I migrate the DB to MySQL (through South), the tests all pass. However in PostgreSQL, I'm getting the following error:

        IntegrityError: duplicate key value violates unique constraint "business_contact_pkey"

    Note this happens while unit testing only - the actual page runs fine in both MySQL and PostgreSQL. Really having a heckuva time figuring this one out. Anyone have ideas? Below are the PostgreSQL "\d business_contact" output and the offending tests.py method, if they help. No changes were made to either DB except the (same) South migrations. Thanks.

        first_name   | character varying(200)   | not null
        mobile_phone | character varying(100)   |
        surname      | character varying(200)   | not null
        business_id  | integer                  | not null
        created      | timestamp with time zone | not null
        deleted      | boolean                  | not null default false
        updated      | timestamp with time zone | not null
        slug         | character varying(150)   | not null
        phone        | character varying(100)   |
        email        | character varying(75)    |
        id           | integer                  | not null default nextval('business_contact_id_seq'::regclass)
        Indexes:
            "business_contact_pkey" PRIMARY KEY, btree (id)
            "business_contact_slug_key" UNIQUE, btree (slug)
            "business_contact_business_id" btree (business_id)
        Foreign-key constraints:
            "business_id_refs_id_772cc1b7b40f4b36" FOREIGN KEY (business_id) REFERENCES business(id) DEFERRABLE INITIALLY DEFERRED
        Referenced by:
            TABLE "business" CONSTRAINT "primary_contact_id_refs_id_dfaf59c4041c850" FOREIGN KEY (primary_contact_id) REFERENCES business_contact(id) DEFERRABLE INITIALLY DEFERRED

    TEST DEF:

        def test_add_business_contact(self):
            """ Add a business contact """
            contact_slug = 'test-new-contact-added-new-adf'
            business_id = 1
            business = Business.objects.get(id=business_id)
            postdata = {
                'first_name': 'Test',
                'surname': 'User',
                'business': '1',
                'slug': contact_slug,
                'email': '[email protected]',
                'phone': '12345678',
                'mobile_phone': '9823452',
                'business': 1,
                'business_id': 1,
            }
            # Test to ensure contacts that should not exist are not returned
            contact_not_exists = Contact.objects.filter(slug=contact_slug)
            self.assertFalse(contact_not_exists)
            # Add the contact and ensure it is present in the DB afterwards
            contact_add_url = '%s%s/contact/add/' % (settings.BUSINESS_URL, business.slug)
            self.client.post(contact_add_url, postdata)
            added_contact = Contact.objects.filter(slug=contact_slug)
            print added_contact
            try:
                self.assertTrue(added_contact)
            except:
                formset = ContactForm(postdata)
                print formset.errors
                self.assertFalse(True, "Contact not found in the database - most likely, the post values in the test didn't validate against the form")

    Read the article

  • What's the state of PHP unit testing frameworks in 2010?

    - by Pekka
    As far as I can see, PHPUnit is the only serious product in the field at the moment. It is widely used, is integrated into Continuous Integration suites like phpUnderControl, and well regarded. The thing is, I don't really like working with PHPUnit. I find it hard to set up (PEAR is the only officially supported installation method, and I hate PEAR), sometimes complicated to work with and, correct me if I'm wrong, lacking executability from a web page context (i.e. no way to run it from a web page rather than the CLI, which would really be nice when developing a web app). The only competition I can see is SimpleTest, which looks very nice but hasn't seen a new release for almost two years, which tends to rule it out for me. Unit testing is quite a static field, true, but as I will be deploying those tests alongside web applications, I would like to see active development on the project, at least for security updates and such. There is a SO question that pretty much confirms what I'm saying: Simple test vs PHPunit. Seeing that that is almost two years old as well, though, I think it's time to ask again: Does anybody know any other serious, feature-complete unit testing frameworks? Am I wrong in my criticism of PHPUnit? Is there still development going on for SimpleTest?

    Read the article

  • Benefits of Behavior Driven Development

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/07/26/benefits-of-behavior-driven-development.aspx

    Continuing my previous article on BDD, I wanted to point out some benefits of BDD and, since BDD is an extension of Test Driven Development (TDD), you get those as well. I'll add another article on some possible downsides of this approach. There are many articles about the benefits of TDD and they apply to BDD. I've pointed out some here and copied some of the main points from each article, but there are many more, including the book The Art of Unit Testing by Roy Osherove.

    http://geekswithblogs.net/leesblog/archive/2008/04/30/the-benefits-of-test-driven-development.aspx (Lee Brandt)
      - Stability
      - Accountability
      - Design Ability
      - Separated Concerns
      - Progress Indicator

    http://tddftw.com/benefits-of-tdd/
      - Help maintainers understand the intention behind the code
      - Bring validation and proper data handling concerns to the forefront
      - Writing the tests first is fun
      - Better APIs come from writing testable code
      - TDD will make you a better developer

    http://www.slideshare.net/dhelper/benefit-from-unit-testing-in-the-real-world (from Typemock). Take a look at the slides, especially the extra time required for TDD (slide 10) and the next one on the bugs avoided using TDD (slide 11).
      - Less bugs (slide 11)
      - ... about testing and development (13)
      - Increase confidence in code (14)
      - Fearlessly change your code (14)
      - Document requirements (14); also see http://visualstudiomagazine.com/articles/2013/06/01/roc-rocks.aspx
      - Discover usability issues early (14)

    All these points and articles are great and there are many more. The following are my additions to the benefits of BDD, from using it in real projects for my company.

      - July 2013 on MSDN - Behavior-Driven Design with SpecFlow
      - Scott Allen did a very informative TDD and MVC module, but to me he is doing BDD
      - Compile and Execute Requirements in Microsoft .NET ~ Video from TechEd 2012

    Communication
    I was working through a complicated task where the decision tree kept growing. After writing out the Given, When, Then of the scenario, I was able to tell QA what I had worked through for their initial test cases. They were able to add from there. It is also useful to use this language with other developers, managers, or clients to help make informed decisions on whether it meets the requirements or whether it can be simplified to save time (money).

    Thinking through solutions before starting to code
    This was the biggest benefit to me. I like to jump into coding to figure out the problem. Many times I don't understand my path well enough and have to do some parts over. A past supervisor told me several times during reviews that I need to get better at seeing "the forest for the trees". When I sit down and write out the behavior that I need to implement, I force myself to think things out further and catch scenarios before they get to QA. A co-worker who is new to BDD (we've been using it in our new project for the last 6 months) said "It really clarifies things". It took him a while to understand it all, but now he's seeing the value of this approach (yes, there are some downsides, but that is a different issue).

    Developers' Confidence
    This is huge for me. With tests in place, my confidence grows that I won't break code that I'm not directly changing. In the past, I've worked on projects without tests and we would frequently find regression bugs (or worse, the users would find them). That isn't fun. We don't catch all problems with the tests, but when QA catches one, I can write a test to make sure it doesn't happen again. It's also good for releasing code, telling your manager that it's good to go. As time goes on and the code gets older, how confident are you that checking in code won't break something somewhere else?

    Merging code - pre-release confidence
    If you're merging code a lot, it's nice to have the tests to help ensure you didn't merge incorrectly.

    Interrupted work
    I had a task that I started and planned out, then was interrupted for a month because of different priorities. When I started it up again and un-shelved my changes, I had the BDD specs and they helped me remember what I had figured out and what was left to do. It would have been much more difficult without the specs and tests.

    Testing and verifying complicated scenarios
    Sometimes in the UI there are scenarios that get tricky, because there are a lot of steps involved (click here to open the dialog, enter the information, make sure it's valid, when I click cancel it should do {x}, when I click ok it should close and do {y}, then do this, etc.). With BDD I can avoid some of the mouse clicking, define the scenarios and have them re-run quickly, without using a mouse. UI testing is still needed, but this helps a bunch. The same can be true for tricky server logic.

    Documentation of Assumptions and Specifications
    The BDD spec tests (Jasmine or SpecFlow or another tool) also work as documentation and show what the original developer was trying to accomplish. It's not a separate Word document, so developers will keep it up to date instead of letting it become obsolete. What happens if you leave the project (consulting, new job, etc.) with no specs or at least good comments in the code? Sometimes I think of a new scenario, so I add a failing spec and continue in the same stream of thought (rather than forgetting it on a piece of paper or in a notepad). Then later I can come back and handle it and have it documented.

    Jasmine tests and JavaScript -> help deal with the non-typed system
    I like JavaScript, but I also dislike working with JavaScript. I miss C# telling me at build time if a property doesn't actually exist. I like the idea of TypeScript and hope to use it more in the future. I also use KnockoutJs, which has observables that need to be called with a trailing (), since the observable is a function. It's hard to remember when to use () or not, and the Jasmine specs/tests help ensure the correct usage.

    This should give you an idea of the benefits that I see in using the BDD approach. I'm sure there are more. It takes a lot of practice, investment and experimentation to figure out how to approach this and to get comfortable with it. I agree with Scott Allen in the video I linked above: "Remember that TDD can take some practice. So if you're not doing test-driven design right now? You can start and practice and get better. And you'll reach a point where you'll never want to get back."
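
    For readers who haven't seen the Given/When/Then style mentioned under "Communication", here is a minimal, hypothetical SpecFlow-flavoured sketch (the scenario, step wording and discount rule are invented for illustration, not taken from the post): the plain-text scenario is the shared language, and the C# binding underneath automates it.

        using TechTalk.SpecFlow;
        using Xunit;

        // Feature file (plain text, readable by QA and the business):
        //   Scenario: Discount applied to large orders
        //     Given a new order
        //     When I add an item costing 120.00
        //     Then the amount due is 108.00

        [Binding]
        public class OrderSteps
        {
            private decimal _amountDue;

            [Given(@"a new order")]
            public void GivenANewOrder()
            {
                _amountDue = 0m;
            }

            [When(@"I add an item costing (.*)")]
            public void WhenIAddAnItemCosting(decimal price)
            {
                // Hypothetical rule: 10% discount on orders of 100 or more.
                _amountDue = price >= 100m ? price * 0.9m : price;
            }

            [Then(@"the amount due is (.*)")]
            public void ThenTheAmountDueIs(decimal expected)
            {
                Assert.Equal(expected, _amountDue);
            }
        }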

    Read the article

  • Reading HttpURLConnection InputStream - manual buffer or BufferedInputStream?

    - by stormin986
    When reading the InputStream of an HttpURLConnection, is there any reason to use one of the following over the other? I've seen both used in examples.

    Manual buffer:

        while ((length = inputStream.read(buffer)) > 0) {
            os.write(buffer, 0, length);
        }

    BufferedInputStream:

        is = http.getInputStream();
        bis = new BufferedInputStream(is);
        ByteArrayBuffer baf = new ByteArrayBuffer(50);
        int current = 0;
        while ((current = bis.read()) != -1) {
            baf.append(current);
        }

    EDIT: I'm still new to HTTP in general, but one consideration that comes to mind is that if I am using a persistent HTTP connection, I can't just read until the input stream is empty, right? In that case, wouldn't I need to read the message length and just read the input stream for that length? And similarly, if NOT using a persistent connection, is the code I included 100% good to go in terms of reading the stream properly?

    Read the article

  • Is IE Collection reliable tool for testing with various versions of Internet Explorer?

    - by rsturim
    On my Windows machine -- I typically test different versions of Internet Explorer using an array of Virtual Machine instances (which obviously requires a fair amount of investment in time and money). In a pinch I have also used IETester -- which at times can be a little unreliable. However, I just discovered IE Collection and was wondering if people have used it -- and can I rely on it for web page testing purposes? Would love to know what you think.

    Read the article

  • Codeigniter: Combining activeRecord with manual queries?

    - by Industrial
    Hi everybody, I've thought a bit about ActiveRecord vs. manual queries in CodeIgniter. ActiveRecord is awesome when it's all about standard queries, and it keeps development time really low. However, when there's a need to add some complexity to the queries, ActiveRecord gets quite complicated to work with. Subqueries or complex joins give me, at least, a lot of headaches. Since the current $this->db->query() call immediately executes the given query, it can't be combined with normal ActiveRecord calls. So, what can I do to combine the two approaches? Thanks!

    Read the article

  • What libraries are available to record a user browsing your website for usability testing?

    - by John
    I remember seeing a JavaScript library a long time ago that offered the ability to record where users clicked and moved their mouse on your website, in order to do usability testing. I can't seem to find it anymore. Are there any libraries out there that do something like this? What I'm looking for is something like http://clixpy.com/, where you can include some javascript on a page and get videos of what users do.

    Read the article

  • .NET remoting manual configuration

    - by Quandary
    Question: In .NET remoting, I configure the client like this from a config file:

        RemotingConfiguration.Configure("AsyncRemoteAPIclient.exe.config", False)
        Dim obj As New RemoteAPI.ServiceClass()

    (with this AsyncRemoteAPIclient.exe.config)

        <system.runtime.remoting>
          <application>
            <client>
              <wellknown type="RemoteAPI.ServiceClass, ServiceClass"
                         url="http://localhost:8080/ServiceClass.rem" />
            </client>
            <channels>
              <channel ref="http" port="0" />
            </channels>
          </application>
        </system.runtime.remoting>

    But when I replace the config file with the manual configuration below, it stops working and starts complaining "ServiceClass or dependency not found".

        Dim chan As New System.Runtime.Remoting.Channels.Http.HttpChannel(0)
        System.Runtime.Remoting.Channels.ChannelServices.RegisterChannel(chan, False)

        ' Create an instance of the remote object
        Dim obj As RemoteAPI.ServiceClass
        obj = CType(Activator.GetObject(GetType(RemoteAPI.ServiceClass), "http://localhost:8080/ServiceClass.rem"), RemoteAPI.ServiceClass)

    What's wrong?

    Read the article

  • Visual Editor vs Manual code

    - by Albinoswordfish
    I'm not sure how it is with other frameworks, but this question is strictly about Java Swing. Is it better to use a visual editor to place objects, or to manually code the placement of the objects onto the frame (layout managers or null layouts)? From my experience, I've had a lot of trouble using visual editors when it comes to different screen resolutions or changing the window size. Using manual code to place objects, I've found that my GUIs behave a lot better with regard to screen size. However, when I want to change a small part of my GUI it takes a lot more work compared to using a visual editor. Just wondering what people's thoughts are on this?

    Read the article

  • Can I use Visual Studio's testing facilities in native code?

    - by Billy ONeal
    Is it possible to use Visual Studio's testing system with native code? I have no objection to recompiling the code itself under C++/CLI if it's possible the code can be recompiled without changes -- but the production code shipped has to be native code. The Premium Edition comes with code coverage support which I might be able to get cheaply from my University -- but I can get the Professional Edition for free from DreamSpark -- and that's the only thing I can see that I'd use. (But I'd use it a LOT)

    Read the article

  • How to do manual DI with deep object graphs and many dependencies properly

    - by Fabian
    I believe this question has been asked in one way or another, but I'm not getting it yet. We are doing a GWT project and my project leader has disallowed GIN/Guice as a DI framework (new programmers are not going to understand it, he argued), so I'm trying to do the DI manually. Now I have a problem with deep object graphs. The object hierarchy from the UI looks like this:

        AppPresenter -> DashboardPresenter -> GadgetPresenter -> GadgetConfigPresenter

    The GadgetConfigPresenter, way down the object hierarchy tree, has a few dependencies like CustomerRepository, ProjectRepository, MandatorRepository, etc. So the GadgetPresenter which creates the GadgetConfigPresenter also has these dependencies, and so on, up to the entry point of the app which creates the AppPresenter. Is this the way manual DI is supposed to work? Doesn't this mean that I create all dependencies at boot time even if I don't need them? Would a DI framework like GIN/Guice help me here?
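
    To make the described cascade concrete, here is a minimal sketch of what such a hand-wired graph can look like. It is written in C# purely for illustration (the presenter and repository names come from the question, the stub bodies are invented), and the structure translates directly to Java/GWT: the entry point acts as a hand-rolled composition root, and every ancestor carries the leaf's dependencies just to pass them down.

        // Stub repositories, invented for illustration.
        public class CustomerRepository { }
        public class ProjectRepository { }
        public class MandatorRepository { }

        // Leaf presenter: the only class that really needs the repositories.
        public class GadgetConfigPresenter
        {
            public GadgetConfigPresenter(CustomerRepository customers, ProjectRepository projects, MandatorRepository mandators) { }
        }

        // Every ancestor ends up holding the same dependencies just to pass them down.
        public class GadgetPresenter
        {
            private readonly CustomerRepository _customers;
            private readonly ProjectRepository _projects;
            private readonly MandatorRepository _mandators;

            public GadgetPresenter(CustomerRepository customers, ProjectRepository projects, MandatorRepository mandators)
            {
                _customers = customers;
                _projects = projects;
                _mandators = mandators;
            }

            // The child is only constructed when it is actually needed.
            public GadgetConfigPresenter CreateConfigPresenter()
            {
                return new GadgetConfigPresenter(_customers, _projects, _mandators);
            }
        }

        public class DashboardPresenter
        {
            public DashboardPresenter(GadgetPresenter gadget) { }
        }

        public class AppPresenter
        {
            public AppPresenter(DashboardPresenter dashboard) { }
        }

        // The entry point acts as a hand-rolled composition root: the graph is wired once, up front.
        public static class EntryPoint
        {
            public static void Main()
            {
                var customers = new CustomerRepository();
                var projects = new ProjectRepository();
                var mandators = new MandatorRepository();

                var app = new AppPresenter(
                    new DashboardPresenter(
                        new GadgetPresenter(customers, projects, mandators)));
            }
        }

    Whether that up-front wiring is acceptable, or whether factories/providers should defer creation of the expensive pieces, is exactly the trade-off the question is asking about.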

    Read the article

  • How is "clean" testing done on the Macintosh without virtualization?

    - by Schnapple
    One of the things I've run across on Windows is when a web browser plugin or program you're developing makes an assumption that something is installed that, by default, isn't always present on Windows. A perfect example would be .NET - a whole lot of people running Windows XP have never installed any versions of .NET and so the installer needs to detect and remedy this if necessary. The way I've been testing this in Windows is to have a virtual machine with a snapshot of a clean, patched, but otherwise untouched install of XP or Vista or 7 or whatever. When I'm done testing I just discard any changes since the snapshot. Works great. I'm now developing something for the Macintosh, a platform which is very new to me, and I'm seeing that virtualization does not appear to be an option. It's explicitly forbidden in the EULA of Mac OS X, it's only allowed from Mac OS X Server, which seeing as how I'm targeting an end product is of no use to me, and the one program I see which can virtualize it - VirtualBox - only supports the server and actively nukes any discussion of running the consumer/client version of Mac OS X. And the only instructions I find anywhere on the topic seem to involve the use of "hacking" programs which is very much incompatible with the full-time gig I'm trying to do this for. So it looks like virtualization is out, but at various points I'm going to want or need to simulate what it's like to install and run this software on a "clean" Macintosh. How do people usually do this? Just buy multiple Macintoshes and use Time Machine? Am I thinking about this all wrong and everything Just Works? To be clear I'm not trying to run Mac OS X on a Windows machine. I have a Macintosh, I'm fine with virtualizing Mac OS X on Apple hardware, I'm just not seeing a route to making the non-Server version do this. I'm aware that Mac OS X Server can be virtualized but that's not what I'm going for. I'm aware that there are unsanctioned/unsupported methods of making Mac OS X run in virtualization programs like VirtualBox but for legal reasons I am not interested in those. My question is not "how can I do this?" but rather "so this thing I do on Windows seems to not be possible, generally, on the Macintosh, so what do people do to achieve what I'm going for?"

    Read the article
