Search Results

Search found 8185 results on 328 pages for 'technical tests'.


  • How to read a technical book to remember most of it?

    - by LikeToCode
    Technical books keep getting thicker and thicker, and the pressure from the technical community to read them and remember the many concepts they describe keeps growing. But it's so hard to do. I have only a few hours a week to read, and by the time I reach the middle of a book I have forgotten most of what I read at the beginning, because so much time has passed. If I go back, I just re-read the same material over and over and never get past 50% of the book. So even after finishing a thick technical book I have forgotten most of it. How do you remember all that material? I attended a talk at my university on this; the advice was to use cards with the key points so that a glance at a card brings back the details. Do you use such cards, or do you have other ideas for how to do it?

    Read the article

  • How to make a technical training session useful and successful for the trainee?

    - by metal-gear-solid
    Are these good suggestions to give for a successful training session? Practice time should always be given immediately after technical training. Usually, after a technical session on something new, we go back to our routine work. If we don't practice right after the training, then later, when we finally do work related to it, we feel we need the training all over again. So if we receive training today but won't use it for some time (15, 30, or 60 days), the training is largely wasted because it came at the wrong time and we will have forgotten most of it. Are there any other suggestions I should give? I'm the trainee, not the trainer. What suggestions should I give to the trainer/organizer?

    Read the article

  • Job interviews: which questions and tests made the strongest impression on you? And which ones were you asked most often?

    Job interviews: which questions and tests made the strongest impression on you? And which ones were you asked most often? An American C/C++ developer has tried to compile all the tests and questions he was given over the course of 9 job interviews at 5 companies of different sizes. The resulting list is surprising in how heavily it focuses on practice and on solving concrete, often low-level problems: string manipulation, lists, arrays, and so on. Even the theoretical and design questions would surprise more than a few people. For example: to design a chess game, which object...
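
    As an aside, for the chess-design question mentioned above, interviewers are mostly probing how a candidate carves the domain into objects. Purely as an illustration (these class names are invented, not taken from the article), a first sketch in Python might look like this:

        from enum import Enum

        class Color(Enum):
            WHITE = 0
            BLACK = 1

        class Piece:
            """Base class; each concrete piece overrides legal_moves with its move rule."""
            def __init__(self, color):
                self.color = color

            def legal_moves(self, board, square):
                raise NotImplementedError

        class Knight(Piece):
            def legal_moves(self, board, square):
                # Ignores occupancy for brevity; just the geometric jumps.
                row, col = square
                jumps = [(1, 2), (2, 1), (-1, 2), (-2, 1),
                         (1, -2), (2, -1), (-1, -2), (-2, -1)]
                return [(row + r, col + c) for r, c in jumps
                        if 0 <= row + r < 8 and 0 <= col + c < 8]

        class Board:
            def __init__(self):
                self.squares = [[None] * 8 for _ in range(8)]  # None means an empty square

        class Game:
            def __init__(self):
                self.board = Board()
                self.to_move = Color.WHITE
                self.history = []  # past moves, useful for undo and draw rules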

    Read the article

  • Too much memory consumed during TFS automated build

    - by Bernard Chen
    We're running TFS 2010 Standard Edition, and we've set up an automated build to run whenever someone checks in code. We run all of the automated tests (built with MSTest) as part of the build. We've configured the build to run the tests as a 64-bit process, but the QTAgent.exe that runs the tests grows in memory while the tests are running. It currently reaches 8GB for the ~650 tests we have, and the process slowed significantly when we went from 450 tests to 650. When we run all of the tests in the local development environment, memory seems to be freed at least with each TestClass and never exceeds a certain level, and the total time to run all tests has not increased significantly there. Is there a way to configure the build service to free up memory with each Test or each TestClass? With the way things currently run, the build gets very slow once we start to run out of memory on the machine. Edit: I found the MSTest invocation in the build log, ran it manually, and saw the same runaway memory. I removed the /publish, /publishbuild, /teamproject, /platform, and /flavor parameters from the MSTest invocation, in case the test runner was holding onto results until the end, but the behavior didn't change. I ran the same command line on a dev box, separate from the build server, and memory was freed up frequently. There must be something wrong or different about the build server that makes it behave differently, but I'm stumped as to where to look. I've looked at qtagent.exe.config, mstest.exe.config, and the versions of both executables. What else might affect this?

    Read the article

  • Unit testing a WCF RIA DomainService: the Repository pattern, a tutorial by Kyle McClellan, translated by Deepin Prayag

    Quote: Since WCF RIA Services uses a "pipeline pattern" to invoke your DomainService operations, it is not always obvious how to test them. In this series of articles we will look at a small DomainService and how to test it. Among other things, we will see how to implement a custom IDomainServiceFactory, how to implement the Repository pattern, and how to use the DomainServiceTestHos...
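
    The tutorial itself targets C# and WCF RIA Services; as a rough, language-agnostic illustration of why the Repository pattern helps with testing (the names below are invented, not from the tutorial), a service that receives its repository through the constructor can be unit tested against an in-memory fake instead of a real database:

        class EmployeeRepository:
            """Abstract data access; the production version would hit the database."""
            def get_all(self):
                raise NotImplementedError

        class InMemoryEmployeeRepository(EmployeeRepository):
            """Test double that keeps its data in a plain list."""
            def __init__(self, employees):
                self._employees = list(employees)

            def get_all(self):
                return list(self._employees)

        class EmployeeDomainService:
            """The domain service only ever sees the repository abstraction."""
            def __init__(self, repository):
                self._repository = repository

            def count_employees(self):
                return len(self._repository.get_all())

        # A unit test can now exercise the service without a database or pipeline:
        service = EmployeeDomainService(InMemoryEmployeeRepository(["Ada", "Linus"]))
        assert service.count_employees() == 2

    The production wiring would inject a repository backed by the real data access layer; the tests never have to touch it.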

    Read the article

  • Should testers approve releases, or just report on tests?

    - by Ernest Friedman-Hill
    Does it make sense to give sign-off authority to testers? Should a test team just test features, issues, etc., and simply report on a pass/fail basis, leaving it up to others to act on those results, or should it have the authority to hold up releases itself based on those results? In other words, should testers be required to actually sign off on releases? The testing team I'm working with feels that they should, and we're having an issue with this because of "testing scope creep": the refusal to approve releases is sometimes based on issues explicitly not addressed by the release in question.

    Read the article

  • Do we set the bar too high by requiring that coding-test submissions be free of buffer overflows?

    - by brice
    We are currently recruiting for a junior developer position working mainly in C on Linux. As part of the process, we require candidates to complete a coding test in C at their leisure. So far we have rejected two candidates on the basis that their code, although readable and in one case rather idiomatic, suffered from buffer overflow errors due to unbounded buffer writes. Are buffer overflows acceptable from a graduate developer? Are we setting the bar too high? What is the expected capability of graduate/junior engineers? [Edit]: We explicitly ask for error-checked, production-quality code, and we provide a test and build framework for the candidates.

    Read the article

  • CI tests to enforce specific development rules - good practice?

    - by KeithS
    The following is all purely hypothetical and any particular portion of it may or may not accurately describe real persons or situations, whether living, dead or just pretending. Let's say I'm a senior dev or architect in charge of a dev team working on a project. This project includes a security library for user authentication/authorization of the application under development. The library must be available for developers to edit; however, I wish to "trust but verify" that coders are not doing things that could compromise the security of the finished system, and because this isn't my only responsibility I want it to be done in an automated way. As one example, let's say I have an interface that represents a user which has been authenticated by the system's security library. The interface exposes basic user info and a list of things the user is authorized to do (so that the client app doesn't have to keep asking the server "can I do this?"), all in an immutable fashion of course. There is only one implementation of this interface in production code, and for the purposes of this post we can say that all appropriate measures have been taken to ensure that this implementation can only be used by the one part of our code that needs to be able to create concretions of the interface. The coders have been instructed that this interface and its implementation are sacrosanct and any changes must go through me. However, those are just words; the security library's source is open for editing by necessity. Any of my devs could decide that this secured, private, hash-checked implementation needs to be public so that they could do X, or alternately they could create their own implementation of this public interface in a different library, exposing the hashing algorithm that provides the secure checksum, in order to do Y. I may not be made aware of these changes so that I can beat the developer over the head for it. An attacker could then find these little nuggets in an unobfuscated library of the compiled product, and exploit it to provide fake users and/or falsely-elevated administrative permissions, bypassing the entire security system. This possibility keeps me awake for a couple of nights, and then I create an automated test that reflectively checks the codebase for types deriving from the interface, and fails if it finds any that are not exactly what and where I expect them to be. I compile this test into a project under a separate folder of the VCS that only I have rights to commit to, have CI compile it as an external library of the main project, and set it up to run as part of the CI test suite for user commits. Now, I have an automated test under my complete control that will tell me (and everyone else) if the number of implementations increases without my involvement, or an implementation that I did know about has anything new added or has its modifiers or those of its members changed. I can then investigate further, and regain the opportunity to beat developers over the head as necessary. Is this considered "reasonable" to want to do in situations like this? Am I going to be seen in a negative light for going behind my devs' backs to ensure they aren't doing something they shouldn't?
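
    The post describes building this as a reflection-based test in a separate .NET project that only the architect can commit to; purely as an illustration of the same idea in Python (the security_lib package and IAuthenticatedUser interface are hypothetical stand-ins), such a test walks the package, collects every class deriving from the sensitive interface, and fails unless the set matches an explicit whitelist:

        import importlib
        import inspect
        import pkgutil
        import unittest

        import security_lib  # hypothetical package containing the IAuthenticatedUser interface

        # The only implementation we expect to exist, and the module where we expect it.
        EXPECTED = {("security_lib.internal", "AuthenticatedUser")}

        def find_implementations(package, interface):
            found = set()
            for info in pkgutil.walk_packages(package.__path__, package.__name__ + "."):
                module = importlib.import_module(info.name)
                for name, cls in inspect.getmembers(module, inspect.isclass):
                    # Count only classes defined in this module, not re-exports.
                    if issubclass(cls, interface) and cls is not interface \
                            and cls.__module__ == info.name:
                        found.add((cls.__module__, name))
            return found

        class InterfaceLockdownTest(unittest.TestCase):
            def test_only_whitelisted_implementations_exist(self):
                actual = find_implementations(security_lib, security_lib.IAuthenticatedUser)
                self.assertEqual(EXPECTED, actual)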

    Read the article

  • Did Samsung cheat on the benchmarks for the Galaxy S4? The Exynos 5's performance called into question after testing

    Samsung: the Galaxy S4 posts high scores in benchmarks, but lower performance in other applications. It all started with a post by a disgruntled user on the Beyond3D site after he ran benchmarks on his Galaxy S4's GPU. During the tests, the user noticed that his smartphone's GPU was running at 532 MHz. In every other application, however, including games (at the most demanding settings), it would only run at 480 MHz. "Oh Samsung, shame on you!" he wrote. Brian Klug and Anand Lal Shimpi of the AnandTech website then repeated the user's experiment with their own smartphones to confirm...

    Read the article

  • Do you do custom tests/work for a potential employer before the interview?

    - by Chuck Stephanski
    Every shop I've worked at has followed what I thought was a standard hiring sequence: 1. solicit resumes; 2. phone-screen the applicants we are interested in; 3. in-person interview; 4. in some shops, a second in-person interview (just with the CEO, for example). Most places I apply to follow something along these lines. But some shops want to give me a test or ask me to build something right after I send in my resume. They usually say, "Congratulations, you've made it to stage 2 of our hiring process!" But for all I know they are asking every applicant who sends a resume to do the same thing. This annoys me because it doesn't scale: if I'm applying for a lot of positions, I can't spend all my time doing work that can potentially land me only one job. Plus, there's no shared commitment to the process. If we do a 45-minute phone screen, that's 45 minutes the company commits to me and 45 minutes I commit to the company in hopes of a potential match. Do you have a policy regarding these sorts of requests?

    Read the article

  • Is doing A/B tests using site redirection a bad practice?

    - by user40358
    I develop hotel websites here in Brazil. When a new site is done, we run an A/B test against the old version to measure conversion and show the hotel owner how well our site performs. Because I cannot host the old site inside the new one as a subresource (newone.com/old), I'm currently doing those A/B tests as follows: 1) I create two Google Analytics accounts, one for each site (old and new); 2) I put the GA tags in the old website's pages (changing its possibly existing GA ID to the newly created one); 3) I add a JavaScript snippet that redirects the user to the old website (on a different URL and a different domain) with 50% probability. I then compare all the metrics, events and goals between those two GA accounts. How bad is this? How might Google interpret the fact that visitors are sometimes redirected and sometimes not? The experiment usually runs for two weeks. Is there any other alternative for doing this in a better way?
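
    For what it's worth, the 50/50 split does not have to happen in client-side JavaScript after the new page has already begun loading; a server-side coin flip with an HTTP redirect is one alternative. A minimal sketch (the domain name and port are placeholders, and a real setup would pin each visitor to one variant with a cookie):

        import random
        from wsgiref.simple_server import make_server

        OLD_SITE = "http://old.example-hotel.com"  # placeholder for the old site's domain

        def ab_router(environ, start_response):
            # Flip a coin per request; a production setup would set a cookie so the
            # same visitor keeps seeing the same variant for the whole experiment.
            if random.random() < 0.5:
                location = OLD_SITE + environ.get("PATH_INFO", "/")
                start_response("302 Found", [("Location", location)])
                return [b""]
            start_response("200 OK", [("Content-Type", "text/html")])
            return [b"<html><body>New site goes here</body></html>"]

        if __name__ == "__main__":
            make_server("", 8000, ab_router).serve_forever()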

    Read the article

  • Do you use unit tests at work? What benefits do you get from them?

    - by Anonymous
    I had planned to study unit testing and apply it to my code, but after talking with my colleagues, some of them suggested that it's not necessary and has very little benefit. They also claim that only a few companies actually unit test production software. I am curious how people have applied unit testing at work and what benefits they are getting from it, e.g., better code quality, reduced development time in the long term, etc.

    Read the article

  • Given a typical Rails 3 environment, why am I unable to execute any tests?

    - by Tom
    I'm working on writing simple unit tests for a Rails 3 project, but I'm unable to actually execute any tests. Case in point, attempting to run the test auto-generated by Rails fails:

        require 'test_helper'

        class UserTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end
        end

    Results in the following error:

        <internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- test_helper (LoadError)
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from user_test.rb:1:in `<main>'

    Commenting out the require 'test_helper' line and attempting to run the test results in this error:

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    The Action Pack gems appear to be properly installed and up to date:

        actionmailer (3.0.3, 2.3.5)
        actionpack (3.0.3, 2.3.5)
        activemodel (3.0.3)
        activerecord (3.0.3, 2.3.5)
        activeresource (3.0.3, 2.3.5)
        activesupport (3.0.3, 2.3.5)

    Ruby is at 1.9.2p0 and Rails is at 3.0.3. The layout of my test directory is as follows:

        /fixtures
        /functional
        /integration
        /performance
        /unit
        -- /helpers
        -- user_helper_test.rb
        -- user_test.rb
        test_helper.rb

    I've never seen this problem before - I've run the typical rake tasks for preparing the test environment. I have nothing out of the ordinary in my application or environment configuration files, nor have I installed any unusual gems that would interfere with the test environment. Edit: Xavier Holt's suggestion of explicitly specifying the path to test_helper worked; however, this revealed an issue with ActiveSupport. Now when I attempt to run the test, I receive the following error message (as also listed above):

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    But as you can see above, Action Pack is all installed and up to date.

    Read the article

  • How do you tell that your unit tests are correct?

    - by Jacob Adams
    I've only done minor unit testing at various points in my career. Whenever I start diving into it again, it always troubles me how to prove that my tests themselves are correct. How can I tell that there isn't a bug in my unit test? Usually I end up running the app, convincing myself it works, and then using the unit tests as a sort of regression suite. What is the recommended approach, and/or what approach do you take to this problem? Edit: I also realize that you could write small, granular unit tests that would be easy to understand. However, if you assume that small, granular code is flawless and bulletproof, you could just write small, granular programs and not need unit testing at all. Edit 2: To the arguments "unit testing is for making sure your changes don't break anything" and "this will only happen if the test has the exact same flaw as the code": what if the test overfits? It's possible for both good and bad code to pass a bad test. My main question is: what good is unit testing if, since your tests can be flawed, you can't really improve your confidence in your code, can't really prove your refactoring worked, and can't really prove that you met the specification?
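
    One concrete way the "same flaw in test and code" problem shows up is a test whose expected value is derived with the same reasoning as the implementation. A small illustrative sketch (the average function here is made up for the example):

        import unittest

        def average(values):
            # Deliberately buggy for the example: integer division truncates.
            return sum(values) // len(values)

        class OverfittedTest(unittest.TestCase):
            def test_mirrors_the_implementation(self):
                values = [1, 2]
                # The expectation repeats the implementation's logic, so the
                # truncation bug slips through and this test passes.
                self.assertEqual(sum(values) // len(values), average(values))

        class HandComputedTest(unittest.TestCase):
            def test_expectation_worked_out_by_hand(self):
                # (1 + 2) / 2 is 1.5; this test fails and exposes the bug.
                self.assertEqual(1.5, average([1, 2]))

        if __name__ == "__main__":
            unittest.main()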

    Read the article

  • How do I structure my tests with Python's unittest module?

    - by persepolis
    I'm trying to build a test framework for automated web testing in Selenium and unittest, and I want to structure my tests into distinct scripts. So I've organised it as follows: base.py - This will contain, for now, the base Selenium test case class for setting up a session.

        import unittest
        from selenium import webdriver

        # Base Selenium Test class from which all test cases inherit.
        class BaseSeleniumTest(unittest.TestCase):
            def setUp(self):
                self.browser = webdriver.Firefox()

            def tearDown(self):
                self.browser.close()

    main.py - I want this to be the overall test suite from which all the individual tests are run.

        import unittest
        import test_example

        if __name__ == "__main__":
            SeTestSuite = test_example.TitleSpelling()
            unittest.TextTestRunner(verbosity=2).run(SeTestSuite)

    test_example.py - An example test case; it might be nice to make these run on their own too.

        from base import BaseSeleniumTest

        # Test the spelling of the title
        class TitleSpelling(BaseSeleniumTest):
            def test_a(self):
                self.assertTrue(False)

            def test_b(self):
                self.assertTrue(True)

    The problem is that when I run main.py I get the following error:

        Traceback (most recent call last):
          File "H:\Python\testframework\main.py", line 5, in <module>
            SeTestSuite = test_example.TitleSpelling()
          File "C:\Python27\lib\unittest\case.py", line 191, in __init__
            (self.__class__, methodName))
        ValueError: no such test method in <class 'test_example.TitleSpelling'>: runTest

    I suspect this is due to the very particular way in which unittest runs, and I must have missed a trick in how the docs expect me to structure my tests. Any pointers?
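
    The error comes from instantiating the TestCase directly: TitleSpelling() with no method name makes unittest look for a default runTest method, which doesn't exist. A sketch of main.py that lets a TestLoader collect the test_* methods instead (assuming the file layout described above):

        import unittest

        import test_example

        if __name__ == "__main__":
            # Let the loader discover every test_* method rather than
            # constructing the TestCase by hand (which expects runTest).
            loader = unittest.TestLoader()
            suite = unittest.TestSuite()
            suite.addTests(loader.loadTestsFromTestCase(test_example.TitleSpelling))
            unittest.TextTestRunner(verbosity=2).run(suite)

    Adding an "if __name__ == '__main__': unittest.main()" guard to test_example.py would also let each test file run on its own.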

    Read the article

  • Ways to support manually executed tests? (that can be used on a Mac)

    - by Rinzwind
    Are there any tools that can be used on a Mac to support manually executed tests? I have a number of tests that I'm executing manually and which I'm currently documenting using merely a plain text file. "Tools" can be interpreted rather loosely here; anything that's a step up from the plain text file would be useful: a template for some suitable application, supporting AppleScript scripts, a web-based system, a full-blown application... Some things that would be great to have better support for (see also the example below):
    - Checking off each step while you're manually executing the test.
    - Showing the next step(s) in a small window that is always kept in front of all other windows.
    - Automatically updating the 'last tested' and 'using svn revision' info.
    - Keeping a record of all previous testing rounds (not just the last one).
    - ...
    Any suggestions for any such "tools" that can be used on a Mac? An example (faked) entry from the plain text file to give you a better idea of what I'm looking for:

        - Check that exported web pages render properly in Safari.
          Last tested: 2010-03-24
          Using SVN revision: 1000
          Steps:
          - Open a new document.
          - Add some items to the document.
          - Export the document to a web page "Test.html" in a new folder "Export Test" on the Desktop.
          - Open the web page in Safari, script:
                tell application "Finder"
                    open file "Test.html" of folder "Export Test" of desktop
                end tell
          Expected results:
          - The web page should appear properly with all items shown.
          Clean up steps:
          - Remove the folder "Export Test" from the Desktop.

    (Note: for those unaware, the snippet of AppleScript in the above can be executed from most text editing applications through the Services menu by selecting the snippet and using: the application menu > Services > Script Editor > Run as AppleScript. This is quite useful to automate some steps for tests that are difficult to automate as a whole.)
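
    As one possible step up from the plain text file, even a small throwaway script that walks the steps interactively and stamps the record with the date and Subversion revision covers a few of the wishes above. A minimal sketch (the hard-coded steps are just the example test from this question):

        import datetime
        import subprocess

        STEPS = [
            "Open a new document.",
            "Add some items to the document.",
            'Export the document to a web page "Test.html" in a new folder "Export Test" on the Desktop.',
            "Open the web page in Safari and check that all items are shown.",
            'Clean up: remove the folder "Export Test" from the Desktop.',
        ]

        def run_checklist(name, steps):
            print("Test:", name)
            passed = []
            for number, step in enumerate(steps, 1):
                answer = input("  Step %d: %s  ok? [y/n] " % (number, step))
                passed.append(answer.strip().lower() == "y")
            try:
                revision = subprocess.run(["svnversion"], capture_output=True,
                                          text=True).stdout.strip()
            except OSError:
                revision = "unknown"
            print("Last tested:", datetime.date.today(), "- using SVN revision:", revision)
            print("Result:", "PASSED" if all(passed) else "FAILED")

        if __name__ == "__main__":
            run_checklist("Exported web pages render properly in Safari", STEPS)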

    Read the article

  • File sharing for small, distributed, non-technical, non-profit organization?

    - by mnmldave
    Problem: I've started volunteering for a small non-profit with fewer than five non-technical Windows users who need to share 20-30GB of files (Office documents, images, PDFs, etc.) amongst themselves online. Background: The users are accustomed to a Windows network share on a machine that backed up their data locally. An on-site "disaster" has forced them to work from their homes for a while and to re-evaluate their file-sharing needs (the office was located in an old building with obvious electrical issues, etc.). Getting time from volunteers with IT experience seems to be difficult. Demonstrably minimizing energy consumption is a nice-to-have. I'm currently considering Jungle Disk (a Desktop account shared amongst the handful of employees, since their TOS and my inquiries to their help desk seem to indicate this is permissible). It appears easy to use, inexpensive and secure, has backup functionality, and can scale to accommodate more data when needed. I've not used it myself, though (I've only used Dropbox for personal use), and systems isn't my area of expertise, so I'm worried I might be jumping on a bandwagon. That said, any suggestions, thoughts or similar experiences would be really appreciated.

    Read the article

  • What web-based tool would allow a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below) We have a number of groups of developers that change frequently, and a security policy requiring individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding id_dsa.pub to each user's authorized_keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage user passwords is not designed for key-based auth, and hence is not easy to use for non-technical users. The developers log in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or another centralised authentication system, but that seems overblown when Webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using Webmin's clustered users and groups. However, looking at the currently installed Webmin, it is not as easy to create the authorized_keys entries as it is to create user accounts and passwords (it's possible, but it's not easy; some functionality is in the Usermin module, or would require some tedious steps). Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to a user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.
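
    Whatever front end is chosen, the core operation it has to perform is small: validate a pasted public key and append it to the right user's authorized_keys with sane ownership and permissions. A rough sketch of that step in Python (paths assume a standard Linux layout; this is an illustration, not a hardened tool):

        import os
        import pwd

        def add_authorized_key(username, public_key):
            """Append one OpenSSH public key line to the user's authorized_keys file."""
            public_key = public_key.strip()
            # The question mentions RSA and DSA keys; other key types could be added here.
            if not public_key.startswith(("ssh-rsa ", "ssh-dss ")):
                raise ValueError("does not look like an OpenSSH public key")
            user = pwd.getpwnam(username)
            ssh_dir = os.path.join(user.pw_dir, ".ssh")
            os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
            path = os.path.join(ssh_dir, "authorized_keys")
            with open(path, "a") as keys_file:
                keys_file.write(public_key + "\n")
            os.chmod(path, 0o600)
            os.chown(ssh_dir, user.pw_uid, user.pw_gid)
            os.chown(path, user.pw_uid, user.pw_gid)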

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspxIn a comment on my article on what I call Informed TDD (ITDD) reader gustav asked how this approach would apply to the kata “To Roman Numerals”. And whether ITDD wasn´t a violation of TDD´s principle of leaving out “advanced topics like mocks”. I like to respond with this article to his questions. There´s more to say than fits into a commentary. Mocks and TDD I don´t see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about pocess, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access you can´t avoid using mocks. Because you don´t want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That´s not what you usually see in TDD demonstrations. However, there´s a reason for that as I tried to explain. I don´t use mocks as proxies for “expensive” resource. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced” or if you don´t want to use a tool like JustMock, then you don´t need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata. ITDD for “To Roman Numerals” gustav asked for the kata “To Roman Numerals”. I won´t explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is, how I would do this kata differently. 1. Analyse A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals” like written in this “requirement document” is not enough to start software development. Coding should only begin, if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that´s you fellow developer. Designing the interface is a task of it´s own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that´s a violation of the Single Responsibility Principle (SRP) which not only should hold for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article. 
They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I´ll spare you that. The domain is trivial and is explain in almost all kata descriptions. How roman numerals are built is not difficult to understand. What´s more difficult, though, might be to find an efficient solution to convert into them automatically. 2. Solve The usual TDD demonstration skips a solution finding phase. Like the interface exploration it´s mixed in with the implementation. But I don´t think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They´re simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you better have a plan what to code. This does not mean you have to do “Big Design Up-Front”. It just means: Have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that´s limited your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution. Here´s my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to be single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here´s the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M} Since I take IV, IX etc. as “digits” translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1) then translation is a two phase process: Find all the factors Translate the factors found Compile the roman representation Translation is just a look-up. 
Finding, though, needs some calculation: Find the highest remaining factor fitting in the value Remember and subtract it from the value Repeat with remaining value and remaining factors Please note: This is just an algorithm. It´s not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are two aspects to test: Test the translation Test the compilation Test finding the factors Testing the translation primarily means to check if the map of factors and digits is comprehensive. That´s simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check, if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check, if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check, if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process. 3. Implement First I write a test for the acceptance test cases. It´s red because there´s no implementation even of the API. That´s in conformance with “TDD lore”, I´d say: Next I implement the API: The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution: My hypothesis is that those three functions in conjunction produce correct results on the API-level. I just have to implement them correctly. That´s what I´m trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition: As you can see I dare to test a private method. Yes. That´s a white box test. But as you´ll see it won´t make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here´s the implementation to satisfy the test: It´s as simple as possible. Right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It´a pattern: if you need to do something repeatedly separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science: Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low hanging fruit: compilation. It´s even simpler than translation. A single test is enough, I guess. 
And normally I would not even have bothered to write that one, because the implementation is so simple. I don´t need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions: Again, I´m faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That´s left for a drill down with a test of the fake function: There are two main equivalence partitions, I guess: either the first factor is appropriate or some next. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there´s always a matching factor. Which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied: Great, I can move on. Now for more than a single factor: Interestingly not just one test becomes green now, but all of them. Great! You might say, then I must have done not the simplest thing possible. And I would reply: I don´t care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion of which I had thought briefly during the problem solving phase. And by the way: Also the acceptance tests went green: Mission accomplished. At least functionality wise. Now I´ve to tidy up things a bit. TDD calls for refactoring. Not uch refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren´t perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That´s why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant. Which leads to a small conversion in Find_factors(): And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class: And this is the final production code: Test coverage as reported by NCrunch is 100%: Reflexion Is this the smallest possible code base for this kata? Sure not. You´ll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as it is often the case. That´s why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution. That´s the leaves of the functional decomposition tree. 
Where things became fuzzy, since the design did not cover any more details as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them. I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find what, what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatic. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus almost is inevitable – and not left to a refactoring step at the end which is skipped often for different reasons.   PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won´t skip them because of that. And there are no shortcuts during implementation because of that.
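
    The code listings in the original post were screenshots and are not reproduced above. Purely as a sketch of the algorithm the article describes (find the factors, translate them, compile the result), and not the author's actual C# code, the conversion could look like this in Python:

        # Roman "digits" taken at face value, including IV, IX, etc., highest first.
        FACTORS = [
            (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
            (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
            (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
        ]

        def find_factors(arabic):
            """Greedily pick the highest remaining factor that still fits."""
            factors = []
            for value, _ in FACTORS:
                while arabic >= value:
                    factors.append(value)
                    arabic -= value
            return factors

        def translate(factors):
            """Look up the roman 'digit' for each factor."""
            lookup = dict(FACTORS)
            return [lookup[f] for f in factors]

        def to_roman(arabic):
            """Compile the translated digits into the final string."""
            return "".join(translate(find_factors(arabic)))

        assert to_roman(1954) == "MCMLIV"
        assert to_roman(2000) == "MM"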

    Read the article

  • hudson.util.ProcessTreeTest test error

    - by senzacionale
    error:

        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.011 sec
        Running hudson.util.ProcessTreeTest
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.181 sec <<< FAILURE!
        Running hudson.model.LoadStatisticsTest
        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.089 sec
        Running hudson.util.ArgumentListBuilderTest
        Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.053 sec
        Running hudson.util.RobustReflectionConverterTest
        Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 sec
        Running hudson.util.VersionNumberTest
        Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.074 sec
        Running hudson.util.CyclicGraphDetectorTest
        Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.038 sec

        Results :

        Tests in error:
          testRemoting(hudson.util.ProcessTreeTest)

        Tests run: 102, Failures: 0, Errors: 1, Skipped: 0

        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO] There are test failures. Please refer to D:\PROJEKTI\Maven\hudson\main\core\target\surefire-reports for the individual test results.
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 17 minutes 58 seconds
        [INFO] Finished at: Fri Jun 11 21:04:46 CEST 2010
        [INFO] Final Memory: 85M/152M
        [INFO] ------------------------------------------------------------------------

    error log:

        -------------------------------------------------------------------------------
        Test set: hudson.util.ProcessTreeTest
        -------------------------------------------------------------------------------
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.181 sec <<< FAILURE!
        testRemoting(hudson.util.ProcessTreeTest)  Time elapsed: 0.169 sec  <<< ERROR!
        org.jvnet.winp.WinpException: Failed to read environment variable table error=299 at .\envvar-cmdline.cpp:114
            at org.jvnet.winp.Native.getCmdLineAndEnvVars(Native Method)
            at org.jvnet.winp.WinProcess.parseCmdLineAndEnvVars(WinProcess.java:114)
            at org.jvnet.winp.WinProcess.getEnvironmentVariables(WinProcess.java:109)
            at hudson.util.ProcessTree$Windows$1.getEnvironmentVariables(ProcessTree.java:419)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:274)
            at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:255)
            at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:215)
            at hudson.remoting.UserRequest.perform(UserRequest.java:114)
            at hudson.remoting.UserRequest.perform(UserRequest.java:48)
            at hudson.remoting.Request$2.run(Request.java:270)
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
            at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
            at java.util.concurrent.FutureTask.run(FutureTask.java:138)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
            at java.lang.Thread.run(Thread.java:619)

    Does anyone have any idea what might be wrong with this test? Regards

    Read the article

  • Excluding directories in Exuberant CTags

    - by DeepYellow
    I'm working with a very large code base, and I find it useful to be selective about which directories are included for use with Exuberant Ctags. The --exclude option works well to eliminate individual file and directory names (with globbing wildcards), but I can't figure out how to get it to exclude path patterns containing more than one directory. For example, I may want to exclude a directory tests, but only when processing thirdparty\tests (under Windows). The problem is that if I just use --exclude=tests I exclude too many directories, including a test directory in the code I'm actively working on. Here are some things I've tried:

        --exclude=thirdparty\tests
        --exclude=thirdparty\\tests
        --exclude=*\thirdparty\tests
        --exclude=*\\thirdparty\\tests
        --exclude=thirdparty/tests

    Ctags silently ignores all of these, as evidenced by an examination of the tags file. How can I exclude a directory only when it is preceded by a given parent directory?

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8, with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_tests - it's generally a bad idea not to catch when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming no no_tests or done_testing() is used? Details: My unit tests usually take the form of:

        use Test::More;

        my @test_set = ( [ "Test #1", $param1, $param2, ... ]
                        ,[ "Test #1", $param1, $param2, ... ]
                        # ,...
                       );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN {plan tests => 23} does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = ();   # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set;   # Must do!!! Since first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) {   # Same
            run_test($test);
        }

        sub run_test {}   # Same

    I feel this can be done more idiomatically, but I'm not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN{} and after it.

    Read the article

  • Getting Started with Employee Info Starter Kit (v4.0.0)

    - by joycsharp
    The new release of Employee Info Starter Kit contains lots of exciting features available in Visual Studio 2010 and .NET 4.0. To get started with the new version, you will need less than 5 minutes.
    Minimum System Requirements: Before getting started, please make sure you have Visual Studio 2010 RC (or higher) and SQL Server 2005 Express Edition (or higher) installed on your machine.
    Running the Starter Kit for the First Time:
    1. Download the starter kit 4.0.0 version from here and extract it.
    2. Go to <extraction folder>\Source\Eisk.Solution and click the solution file.
    3. From the Solution Explorer, right-click the “Eisk.Web” web site project node, select “Set as Startup Project” and hit Ctrl + F5.
    4. You will be prompted to install the database; just follow the instructions.
    That's it! You are ready to use this starter kit.
    Running the Tests: Employee Info Starter Kit contains an infrastructure for integration and unit testing, utilizing the cool test tools in Visual Studio 2010. Once you have completed the steps mentioned above, take a minute to run the test cases on the fly.
    1. From the Solution Explorer, go to “Solution Items\e-i-s-k-2010.vsmdi” and click it. You will see the available tests in the Visual Studio Test Lists. Select all except the “Load Tests” node (since Load Tests take a bit of time).
    2. Click the “Run Checked Tests” control in the upper left corner. You will see the tests running and, finally, the status of the tests, which indicates the current health of your application from different scenarios.
    Technorati Tags: asp.net,architecture,starter kit,employee info starter kit,visual studio 2010,.net 4.0,entity framework

    Read the article
