Search Results

Search found 23949 results on 958 pages for 'test'.


  • Java test framework for Selenium RC

    - by sebstein.hpfsc.de
    I'm going to use Selenium RC to replay some tests for a website. I want to kick off those tests from a Java test framework so that I get nice reports on how many tests failed, etc. Which Java test framework should I use? Is JUnit the preferred framework for this purpose?

    Read the article

  • How to test chrome extensions?

    - by swampsjohn
    Is there a good way to do this? I'm writing an extension that interacts with a website as a content script and saves data using localStorage. Are there any tools, frameworks, etc. that I can use to test this behavior? I realize there are some generic tools for testing JavaScript, but are those sufficiently powerful to test an extension? Unit testing is most important, but I'm also interested in other types of testing (such as integration testing).

    Read the article

  • how to test rails js and ajax without a browser

    - by user1679052
    When I use RSpec with Capybara to test my Rails JS page, I get the following error: "Selenium::WebDriver::Error::WebDriverError: Could not find Firefox binary (os=linux)." My Rails scripts are all written on the Linux server, where there is no browser installed and no desktop software is supported (since no X11 is installed). How can I test JS in this situation? Or is there a browser that works without X11, like wget? Thanks.

    Read the article

  • Does Test Driven Development (TDD) improve Quality and Correctness? (Part 1)

    - by David V. Corbin
    Since the dawn of the computer age, various methodologies have been introduced to improve quality and reduce cost. In this posting, I will be sharing my experiences with Test Driven Development; both its benefits and limitations.

    To start this topic, we need to agree on what TDD is. The first step is to define each of the three words as used in this context. Test - an item or action which measures something in some quantifiable form. Driven - the primary motivation or focus of a series of activities (process). Development - all phases of a software project/product from concept through delivery. The above are very simple definitions that result in the following: "TDD is a process where the primary focus is on measuring and quantifying all aspects of the creation of a (software) product."

    There are many places where TDD is used outside of software development, even though it is not known by this name. Consider the (conventional) education process that most of us grew up on. The focus was to get the best grades as measured by different tests. Many of these tests measured rote memorization and not understanding of the subject matter. The result of this is that many people graduated with high scores but without "quality and correctness" in their ability to utilize the subject matter (of course, the flip side is true where certain people DID understand the material but were not very good at taking this type of test).

    Returning to software development, let us look at some common scenarios. While these items are generally applicable regardless of platform, language, and tools, the remainder of this post will utilize Microsoft Visual Studio and Team Foundation Server (TFS) for examples. It should be realized that everyone does at least some aspect of TDD. At the most rudimentary level, getting a program to compile involves a "pass/fail" measurement (is the syntax valid) that drives their ability to proceed further (run the program). Other developers may create "Unit Tests" in the belief that having a test for every method/property of a class and good code coverage is the goal of TDD. These items may be helpful and even important, but really only address a small aspect of the overall effort.

    To see TDD in a bigger view, let's identify the various activities that are part of the Software Development LifeCycle. These are going to be presented in a Waterfall style for simplicity, but each item also occurs within iterative methodologies such as Agile/Scrum. The key ones here are: Requirements Gathering, Architecture, Design, Implementation, and Quality Assurance. Can each of these items be subjected to a process which establishes metrics (quantified metrics) that reflect both the quality and correctness of each item? It should be clear that conventional Unit Tests do not apply to all of these items; at best they can verify that a local aspect (e.g. a Class/Method) of the implementation matches the (test writer's perspective of the) appropriate design document.

    So what can we do? For each area, the goal is to create tests that are quantifiable and durable. The ability to quantify the measurements (beyond a simple pass/fail) is critical to tracking progress (eventually measuring the level of success that has been achieved) and to providing clear information on what items need to be addressed (along with the appropriate time to address them - in varying levels of detail). Durability is important so that the test can be reapplied (ideally in an automated fashion) over the entire cycle.
    Returning for a moment back to our "education example", one must also be careful of how the tests are organized and how the measurements are taken. If a test is in a multiple choice format, there is a significant statistical probability that a correct answer might be the result of a random guess. Also, in many situations, having the student simply provide a final answer can obscure many important elements. For example, on a math test, having the student simply provide a numeric answer (rather than showing the methodology) may result in a complete mismatch between the process and the result. It is hard to determine which is worse: the student who makes a simple arithmetic error at one step of a long process (resulting in a wrong answer), or the student who (without providing the "workflow") uses a completely invalid approach, yet still comes up with the right number.

    The "Wrong Process"/"Right Answer" is probably the single biggest problem in software development. Even very simple items can suffer from this. As an example, consider the following code for a "straight line" calculation. Is it correct (for integral points)?

        int Solve(int m, int b, int x) { return m * x + b; }

    Most people would respond "Yes". But let's take the question one step further: is it correct for all possible values of m, b, x? (No fair if you cheated by being focused on the bolded text!) Without additional information regarding constraints on "the possible values of m, b, x", the answer must be NO; there is the risk of overflow/wraparound that will produce an incorrect result! To properly answer this question (i.e. test the code), one MUST be able to backtrack from the implementation through the design and architecture all the way back to the requirements. And the requirement itself must be tested against the stakeholder(s). It is only when the bounding conditions are defined that it is possible to determine if the code is "Correct" and has "Quality". Yet, how many of us (myself included) have written such code without even thinking about it. In many cases we (think we) "know" what the bounds are, and that the code will be correct. As we all know, requirements change, "code reuse" causes implementations to be applied to different scenarios, etc. This leads directly to the types of system failures that plague so many projects.

    This approach to TDD is much more holistic than ones which start by focusing on the details. The fundamental concepts still apply: each item should be tested, and the test should be defined/implemented before (or concurrent with) the definition/implementation of the actual item. We also add concepts that expand the scope and alter the style by recognizing that there are many things besides "lines of code" that benefit from testing (measuring/evaluating in a formal way), and that correctness and quality cannot be solely measured by "correct results". In future parts, we will examine in greater detail some of the techniques that can be applied to each of these areas.
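    To make the overflow point concrete, here is a minimal sketch of how a test might pin down those bounding conditions once they have been agreed with the stakeholders. It assumes MSTest and the C# checked keyword; the class and test names are made up for illustration only.

        using System;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public static class StraightLine
        {
            public static int Solve(int m, int b, int x)
            {
                // 'checked' turns silent wraparound into an OverflowException,
                // so a violated bounding condition becomes visible instead of
                // producing a plausible-looking wrong number.
                checked
                {
                    return m * x + b;
                }
            }
        }

        [TestClass]
        public class StraightLineTests
        {
            [TestMethod]
            public void Solve_WithinAgreedBounds_ReturnsExpectedValue()
            {
                Assert.AreEqual(23, StraightLine.Solve(2, 3, 10));
            }

            [TestMethod]
            [ExpectedException(typeof(OverflowException))]
            public void Solve_OutsideIntRange_FailsLoudlyInsteadOfWrappingAround()
            {
                StraightLine.Solve(int.MaxValue, 1, 2);
            }
        }

    Whether an exception, a clamped value, or a wider type is the right behaviour is exactly the kind of question that has to be traced back to the requirements and the stakeholders.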

    Read the article

  • Python bindings for a vala library

    - by celil
    I am trying to create Python bindings to a Vala library, using the following IBM tutorial as a reference. My initial directory has the following two files:

        test.vala
        using GLib;
        namespace Test {
            public class Test : Object {
                public int sum(int x, int y) {
                    return x + y;
                }
            }
        }

        test.override
        %%
        headers
        #include <Python.h>
        #include "pygobject.h"
        #include "test.h"
        %%
        modulename test
        %%
        import gobject.GObject as PyGObject_Type
        %%
        ignore-glob
        *_get_type
        %%

    and I try to build the Python module source test_wrap.c using the following script:

        build.sh
        #!/usr/bin/env bash
        valac test.vala -CH test.h
        python /usr/share/pygobject/2.0/codegen/h2def.py test.h > test.defs
        pygobject-codegen-2.0 -o test.override -p test test.defs > test_wrap.c

    However, the last command fails with an error:

        $ ./build.sh
        Traceback (most recent call last):
          File "/usr/share/pygobject/2.0/codegen/codegen.py", line 1720, in <module>
            sys.exit(main(sys.argv))
          File "/usr/share/pygobject/2.0/codegen/codegen.py", line 1672, in main
            o = override.Overrides(arg)
          File "/usr/share/pygobject/2.0/codegen/override.py", line 52, in __init__
            self.handle_file(filename)
          File "/usr/share/pygobject/2.0/codegen/override.py", line 84, in handle_file
            self.__parse_override(buf, startline, filename)
          File "/usr/share/pygobject/2.0/codegen/override.py", line 96, in __parse_override
            command = words[0]
        IndexError: list index out of range

    Is this a bug in pygobject, or is something wrong with my setup? What is the best way to call code written in Vala from Python?

    EDIT: Removing the extra line fixed the current problem, but now as I proceed to build the Python module, I am facing another problem. Adding the following C file to the existing two in the directory:

        test_module.c
        #include <Python.h>

        void test_register_classes (PyObject *d);
        extern PyMethodDef test_functions[];

        DL_EXPORT(void) inittest(void)
        {
            PyObject *m, *d;
            init_pygobject();
            m = Py_InitModule("test", test_functions);
            d = PyModule_GetDict(m);
            test_register_classes(d);
            if (PyErr_Occurred ()) {
                Py_FatalError ("can't initialise module test");
            }
        }

    and building with the following script:

        build.sh
        #!/usr/bin/env bash
        valac test.vala -CH test.h
        python /usr/share/pygobject/2.0/codegen/h2def.py test.h > test.defs
        pygobject-codegen-2.0 -o test.override -p test test.defs > test_wrap.c
        CFLAGS="`pkg-config --cflags pygobject-2.0` -I/usr/include/python2.6/ -I."
        LDFLAGS="`pkg-config --libs pygobject-2.0`"
        gcc $CFLAGS -fPIC -c test.c
        gcc $CFLAGS -fPIC -c test_wrap.c
        gcc $CFLAGS -fPIC -c test_module.c
        gcc $LDFLAGS -shared test.o test_wrap.o test_module.o -o test.so
        python -c 'import test; exit()'

    results in an error:

        $ ./build.sh
        ***INFO*** The coverage of global functions is 100.00% (1/1)
        ***INFO*** The coverage of methods is 100.00% (1/1)
        ***INFO*** There are no declared virtual proxies.
        ***INFO*** There are no declared virtual accessors.
        ***INFO*** There are no declared interface proxies.
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
        ImportError: ./test.so: undefined symbol: init_pygobject

    Where is the init_pygobject symbol defined? What have I missed linking to?

    Read the article

  • udp expected behaviour not responding to test result

    - by ernst
    I have a local network topology that is structured as follows: three hosts and a switch in the middle. The switch supports 10/100/1000 Mbit/s full/half duplex connections. I have configured the hosts with static IPs 172.16.0.1-2-3/25. This is the output of ifconfig:

        eth0  Link encap: Ethernet  HWaddr *****
              inet addr:172.16.0.3  Bcast:172.16.0.127  Mask:255.255.255.128
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Interrupt:16

    The output on H1 and H2 matches this one exactly. The hosts are mutually reachable; I have tested the network with ping. I have forced the Ethernet interface to work at 10M with ethtool -s eth0 speed 10 duplex full autoneg on. This is the output of ethtool eth0:

        Supported ports: [ TP ]
        Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Half 1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes: 10baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Speed: 10Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: g
        Wake-on: d
        Current message level: 0x000000ff (255)
        drv probe link timer ifdown ifup rx_err tx_err
        Link detected: yes

    I am doing an experimental test using nttcp to calculate the goodput in the case where H1 and H2 send data to H3 at the same time. Since the three links are forced to the same capability and the arriving data rate is 10M from H1 + 10M from H2 = 20M towards H3, a bottleneck effect would be expected and, due to the unreliable nature of UDP, some packet loss. But this doesn't happen: the output of the nttcp application shows the same number of bytes sent and received. This is the output of nttcp on H3:

        nttcp -T -r -u 172.16.0.2 & nttcp -T -r -u 172.16.0.1
        [1] 4071
             Bytes  Real s  CPU s  Real-MBit/s  CPU-MBit/s  Calls  Real-C/s  CPU-C/s
        l  8388608   13.74   0.05       4.8848   1398.0140   2049    149.14  42684.8
             Bytes  Real s  CPU s  Real-MBit/s  CPU-MBit/s  Calls  Real-C/s  CPU-C/s
        l  8388608   14.02   0.05       4.7872   1398.0140   2049    146.17  42684.8
        1  8388608   13.56   0.06       4.9500   1118.4065   2051    151.28  34181.1
        1  8388608   13.89   0.06       4.8310   1198.3084   2051    147.65  36623.0

    How is this possible? Am I missing something? Any help will be gratefully appreciated. Best regards

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I like to respond with this article to his questions. There's more to say than fits into a commentary.

    Mocks and TDD

    I don't see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for "expensive" resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin if the interface between the "system under development" and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which not only should hold for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just "invent" some acceptance test cases by picking roman numerals from a Wikipedia article.
    They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you better have a plan of what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman "encoding" is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman "encoding" thus is simpler than the arabic. Each "digit" can be taken at face value. No multiplication with a base required.

    But what about IV, which looks like a contradiction to the above rule? It is not – if you accept roman "digits" not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some "digits" get subtracted. Here's the list of roman "digits" with their values:

        {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an arabic number becomes trivial. I just need to find the values of the roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three phase process: find all the factors, translate the factors found, and compile the roman representation. Translation is just a look-up.
    Finding, though, needs some calculation: find the highest remaining factor fitting in the value, remember it and subtract it from the value, then repeat with the remaining value and the remaining factors. Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: test the translation, test the compilation, and test finding the factors. Testing the translation primarily means to check if the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one.

    I start with a simple "detail function": Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible. Just how TDD wants me to do it: KISS.

    Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it in two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency. Some "see" the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose…

    Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one is. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on.

    Now for more than a single factor: interestingly not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion, of which I had thought briefly during the problem solving phase. And by the way: the acceptance tests also went green. Mission accomplished. At least functionality wise.

    Now I have to tidy up things a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I rather call it "cleanup".

    First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single "digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman "digits". This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So called "elegant" code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution. That's the leaves of the functional decomposition tree.
    Where things became fuzzy, since the design did not cover any more details (as with Find_factors()), I repeated the process in the small, so to speak: fake some top level, endure red high level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: encapsulation of the implementation details was not compromised – private methods could naturally stay private, and I did not need to make them internal or public just to be able to test them – and I was able to write focused tests for small aspects of the solution, with no need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for different reasons.

    PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
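    The code screenshots from the original post do not survive in this text-only copy, so here is a rough C# sketch of what the production class described above could look like. It is an illustration of the described solution (factors taken as "digits", highest to lowest), not the author's exact code, and it collapses the three steps (find the factors, translate them, compile the result) into one loop for brevity:

        using System;
        using System.Text;

        public class ArabicRomanConversions
        {
            // Roman "digits" and their values, ordered from the highest factor to the lowest.
            private static readonly Tuple<int, string>[] Factors =
            {
                Tuple.Create(1000, "M"), Tuple.Create(900, "CM"), Tuple.Create(500, "D"),
                Tuple.Create(400, "CD"), Tuple.Create(100, "C"), Tuple.Create(90, "XC"),
                Tuple.Create(50, "L"),   Tuple.Create(40, "XL"), Tuple.Create(10, "X"),
                Tuple.Create(9, "IX"),   Tuple.Create(5, "V"),   Tuple.Create(4, "IV"),
                Tuple.Create(1, "I")
            };

            public static string ToRoman(int arabic)
            {
                var roman = new StringBuilder();
                foreach (var factor in Factors)
                {
                    // Take the highest remaining factor as long as it still fits,
                    // translate it via the map, and subtract it from the value.
                    while (arabic >= factor.Item1)
                    {
                        roman.Append(factor.Item2);
                        arabic -= factor.Item1;
                    }
                }
                return roman.ToString();
            }
        }

    With this sketch, ToRoman(1954) yields "MCMLIV", matching the M+CM+L+IV decomposition used as the example above.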

    Read the article

  • MSTest VS2010 - DeploymentItem copying files to different locations on different machines

    - by Jack
    I have found that DeploymentItem

        [TestClass(), DeploymentItem(@"TestData\")]

    is not copying my test data files to the same location when tests are built and run on different machines. The test data files are copied to the "bin\debug" directory in the test project on my machine, but on my friend's machine they are copied to "TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out". The bin\debug directory on my machine can be obtained with the code:

        string appDirectory = Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location);

    and the same code will return "TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out" on my friend's PC. This, however, isn't really the problem. The problem is that the test data files I have made have a folder structure, and this folder structure is only maintained on my machine when copied to bin\debug, whereas on my friend's machine only the files are added to the "TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out" directory. This means that tests will pass on my machine and fail on his! Is there a way to ensure that DeploymentItem always copies to the bin\debug folder? Or a way to ensure that the folder structure will be retained when DeploymentItem copies the files to the "TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out" folder?
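    One thing worth trying (a sketch based on the two-argument form of DeploymentItem, not a guaranteed fix) is to name the output directory explicitly and then resolve paths against TestContext.DeploymentDirectory instead of the assembly location, so the folder structure travels with whichever deployment directory the run uses. The file and folder names below are made up:

        using System.IO;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        [DeploymentItem(@"TestData\", "TestData")]   // second argument = output folder name
        public class MyDataDrivenTests
        {
            public TestContext TestContext { get; set; }

            [TestMethod]
            public void Reads_File_From_Deployed_Folder()
            {
                // DeploymentDirectory is bin\debug or TestResults\...\Out, depending on
                // whether deployment is enabled in the active test settings.
                string path = Path.Combine(TestContext.DeploymentDirectory, @"TestData\SubFolder\input.xml");
                Assert.IsTrue(File.Exists(path));
            }
        }

    Whether files end up in bin\debug or in TestResults\...\Out is governed by the active test settings (deployment on or off), so aligning that setting between the two machines is usually the other half of the answer.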

    Read the article

  • Workflow for academic research projects, one-step builds, and the Joel Test

    - by Steve
    Working alone on academic research sometimes breeds bad habits. With no one else reading my code, I would write a lot of throw-away code, and I would lose track of intermediate results which, weeks or months later, I wish I had retained. My recent attempts to make my personal workflow conform to the Joel Test raised interesting questions. Academic research has inherently different goals than industrial software development, and therefore some aspects of the Joel Test become less valid. Nevertheless, I still find these steps valuable for academic research: Do you use source control? Can you make a build in one step? Do you have an up-to-date schedule? Do you have a spec? Of particular use is the one-step build. I find myself more organized now that I have implemented the following "one-step build": a single script, build.py, that accepts Python code, data, and TeX as inputs. The outputs are results, figures, and a paper with all the results filled in. (Yes, I know "build" is probably not accurate in this context, but you get the idea.) By consolidating many small steps into one, I am not backtracking as much as I used to... but I'm sure there is still room for improvement. Question: For research projects, which steps of the Joel Test do you still value? Do you have a one-step build? If so, what does yours consist of, i.e., what inputs does it accept, and what output does it generate?

    Read the article

  • Postfix/ClamAV not stopping viruses under Virtualmin

    - by Josh
    I am using Virtualmin and have it set up to have Postfix scan incoming emails with ClamAV (using clamdscan) and delete any emails which contain a virus. However, when I email myself the EICAR test string, it comes through just fine. I know ClamAV will report the EICAR string as a virus. How can I troubleshoot this / what could be wrong?

    Read the article

  • How to test an HTTP 301 redirect?

    - by NoozNooz42
    How can one easily test HTTP return codes, like, say, a 301 redirect? For example, if I want to "see what's going on", I can use telnet to do something like this:

        $ telnet nytimes.com 80
        Trying 199.239.136.200...
        Connected to nytimes.com.
        Escape character is '^]'.
        GET / HTTP/1.0
        (enter)
        (enter)
        HTTP/1.1 200 OK
        Server: Sun-ONE-Web-Server/6.1
        Date: Mon, 14 Jun 2010 12:18:04 GMT
        Content-type: text/html
        Set-cookie: RMID=007af83f42dd4c161dfcce7d; expires=Tuesday, 14-Jun-2011 12:18:04 GMT; path=/; domain=.nytimes.com
        Set-cookie: adxcs=-; path=/; domain=.nytimes.com
        Set-cookie: adxcs=-; path=/; domain=.nytimes.com
        Set-cookie: adxcs=-; path=/; domain=.nytimes.com
        Expires: Thu, 01 Dec 1994 16:00:00 GMT
        Cache-control: no-cache
        Pragma: no-cache
        Connection: close

        <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
        ...

    which is an easy way to see quite a lot of information. But now I want to test that a 301 redirect is indeed a 301 redirect. How can I do so? Basically, instead of getting an HTTP/1.1 200 OK, I'd like to know how I can get the 301. I know that I can enter the URL in a browser and "see" that I'm redirected, but I'd like to know what tool(s) can be used to actually really "see" the 301 redirect. By the way, I did test with telnet, but when I enter www.example.org, which I redirected to example.org (without the www), all I can see is a "200 OK"; I don't get to see the 301.
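    Apart from telnet, any client that does not follow redirects automatically will show the raw 301; for example, curl -I http://www.example.org/ prints the status line and the Location header. As a small illustration in C# (the URL is a placeholder), the same check could look like this:

        using System;
        using System.Net;

        class RedirectCheck
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://www.example.org/");
                request.AllowAutoRedirect = false;   // keep the 301 instead of silently following it
                request.Method = "HEAD";             // the headers are all we need for this test

                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine((int)response.StatusCode);       // e.g. 301
                    Console.WriteLine(response.Headers["Location"]);   // where it redirects to
                }
            }
        }

    One likely reason the telnet test showed a 200: with a bare GET / HTTP/1.0 and no Host: header, the server cannot tell the www and non-www virtual hosts apart, so it may serve a default site instead of the redirect. Adding a line such as Host: www.example.org before the blank lines usually makes the 301 appear.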

    Read the article

  • Creating Dependencies Only to be able to Unit Test

    - by arin
    I just created a Manager that deals with a SuperClass that is extended all over the code base and registered with some sort of SuperClassManager (SCM). Now I would like to test my Manager, which is aware of only the SuperClass. I tried to create a concrete SCM; however, it depends on a third-party library, and therefore I failed to do that in my JUnit test. So the option is to mock all instances of this SCM. All is good until now; however, when my Manager deals with the SCM, it returns children of the SuperClass that my Manager does not know or care about. Nevertheless, the identities of these children are vital for my tests (for equality, etc.). Since I cannot use the concrete SCM, I have to mock the results of calls to the appropriate functions of the SCM; however, this means that my tests, and therefore my Manager, need to know and care about the children of the SuperClass. Checking the code base, there does not seem to be a more appropriate location for my test (one that already maintains the appropriate real dependencies). Is it worth it to introduce unnecessary dependencies for the sake of unit testing?

    Read the article

  • Using Moq callbacks correctly according to AAA

    - by Hadi Eskandari
    I've created a unit test that tests interactions on my ViewModel class in a Silverlight application. To be able to do this test, I'm mocking the service interface injected into the ViewModel. I'm using the Moq framework to do the mocking. To verify that the bound object in the ViewModel is converted properly, I've used a callback:

        [Test]
        public void SaveProposal_Will_Map_Proposal_To_WebService_Parameter()
        {
            var vm = CreateNewCampaignViewModel();
            var proposal = CreateNewProposal(1, "New Proposal");

            Services.Setup(x => x.SaveProposalAsync(It.IsAny<saveProposalParam>())).Callback((saveProposalParam p) =>
            {
                Assert.That(p.plainProposal, Is.Not.Null);
                Assert.That(p.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
                Assert.That(p.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));
            });

            proposal.State = ObjectStates.Added;
            vm.CurrentProposal = proposal;
            vm.Save();
        }

    It is working fine, but if you've noticed, using this mechanism the Assert and Act parts of the unit test have switched places (Assert comes before Act). Is there a better way to do this, while preserving the correct AAA order?
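    One way to keep the AAA order (a sketch of a commonly used pattern, not the only option) is to use the callback purely to capture the argument, and to assert on the captured object after the Act step:

        [Test]
        public void SaveProposal_Will_Map_Proposal_To_WebService_Parameter()
        {
            // Arrange
            var vm = CreateNewCampaignViewModel();
            var proposal = CreateNewProposal(1, "New Proposal");
            saveProposalParam captured = null;
            Services.Setup(x => x.SaveProposalAsync(It.IsAny<saveProposalParam>()))
                    .Callback<saveProposalParam>(p => captured = p);

            // Act
            proposal.State = ObjectStates.Added;
            vm.CurrentProposal = proposal;
            vm.Save();

            // Assert
            Assert.That(captured, Is.Not.Null);
            Assert.That(captured.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
            Assert.That(captured.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));
        }

    This also has the side effect that a run in which SaveProposalAsync is never called fails on the null check, instead of silently passing because the callback asserts never executed.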

    Read the article

  • How to unit test this simple ASP.NET MVC controller

    - by Frank Schwieterman
    Let's say I have a simple controller for ASP.NET MVC that I want to test. I want to test that a controller action (Foo, in this case) simply returns a link to another action (Bar, in this case). How would you test this (either the first or second link)? My implementation has the same link twice. One passes the URL through ViewData[]. This seems more testable to me, as I can check the ViewData collection returned from Foo(). Even this way, though, I don't know how to validate the URL itself without taking dependencies on routing.

    The controller:

        public class TestController : Controller
        {
            public ActionResult Foo()
            {
                ViewData["Link2"] = Url.Action("Bar");
                return View("Foo");
            }

            public ActionResult Bar()
            {
                return View("Bar");
            }
        }

    The "Foo" view:

        <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage" MasterPageFile="~/Views/Shared/Site.Master"%>
        <asp:Content ContentPlaceHolderID="MainContent" runat="server">
            <%= Html.ActionLink("link 1", "Bar") %>
            <a href="<%= ViewData["Link2"]%>">link 2</a>
        </asp:Content>
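    One possible way to test this is sketched below; the exact HttpContext members that need stubbing vary between MVC versions, and Moq is used here purely as an example mocking library. The idea is to register the routes inside the test, hand the controller a UrlHelper built on a stubbed context, and assert on the returned ViewResult:

        using System.Collections.Specialized;
        using System.Web;
        using System.Web.Mvc;
        using System.Web.Routing;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using Moq;

        [TestClass]
        public class TestControllerTests
        {
            [TestMethod]
            public void Foo_Puts_A_Link_To_Bar_Into_ViewData()
            {
                // Route table equivalent to the default project template route (assumption).
                var routes = new RouteCollection();
                routes.MapRoute("Default", "{controller}/{action}/{id}",
                                new { controller = "Home", action = "Index", id = UrlParameter.Optional });

                // Minimal HttpContext stubs needed for Url.Action to build a path.
                var request = new Mock<HttpRequestBase>();
                request.SetupGet(r => r.ApplicationPath).Returns("/");
                request.SetupGet(r => r.ServerVariables).Returns(new NameValueCollection());
                var response = new Mock<HttpResponseBase>();
                response.Setup(r => r.ApplyAppPathModifier(It.IsAny<string>())).Returns<string>(p => p);
                var httpContext = new Mock<HttpContextBase>();
                httpContext.SetupGet(c => c.Request).Returns(request.Object);
                httpContext.SetupGet(c => c.Response).Returns(response.Object);
                httpContext.SetupGet(c => c.Items).Returns(new System.Collections.Hashtable());

                // Ambient route data: pretend the request hit the Test controller.
                var routeData = new RouteData();
                routeData.Values.Add("controller", "Test");

                var controller = new TestController();
                controller.Url = new UrlHelper(new RequestContext(httpContext.Object, routeData), routes);

                var result = (ViewResult)controller.Foo();

                Assert.AreEqual("Foo", result.ViewName);
                Assert.AreEqual("/Test/Bar", result.ViewData["Link2"]);
            }
        }

    The dependency on routing does not disappear, but it becomes explicit and local to the test; if pinning the exact URL feels too brittle, asserting only that ViewData["Link2"] is non-empty is a softer alternative.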

    Read the article

  • Test multiple domains using ASP.NET development server

    - by Pete Lunenfeld
    I am developing a single web application that will dynamically change its content depending on which domain name is used to reach the site. Multiple domains will point to the same application. I wish to use the following code (or something close) to detect the domain name and perform the customizations:

        string theDomainName = Request.Url.Host;
        switch (theDomainName)
        {
            case "www.clientone.com":
                // do stuff
                break;
            case "www.clienttwo.com":
                // do other stuff
                break;
        }

    I would like to test the functionality of the above using the ASP.NET development server. I created mappings in the local HOSTS file to map www.clientone.com to 127.0.0.1 and www.clienttwo.com to 127.0.0.1. I then browse to the application using www.clientone.com (etc.). When I try to test this code using the ASP.NET development server, the URL always says localhost. It does NOT capture the host entered in the browser, only localhost. Is there a way to test the URL detection functionality using the development server? Thanks.
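    Independent of the HOSTS-file question, one way to make the branching itself unit-testable (a design sketch with made-up names, not part of the original code) is to keep all host-specific decisions in a plain class that merely receives Request.Url.Host as a string:

        // Hypothetical helper: the page calls SiteContentSelector.SelectClient(Request.Url.Host),
        // and unit tests can exercise the switch without any web server at all.
        public static class SiteContentSelector
        {
            public static string SelectClient(string host)
            {
                switch (host.ToLowerInvariant())
                {
                    case "www.clientone.com":
                        return "ClientOne";
                    case "www.clienttwo.com":
                        return "ClientTwo";
                    default:
                        return "Default";
                }
            }
        }

        // In a test:
        // Assert.AreEqual("ClientOne", SiteContentSelector.SelectClient("www.clientone.com"));

    As for the observed behaviour: the Visual Studio development server is designed to serve only local requests addressed to localhost, which is consistent with what you are seeing, so exercising real host names end to end generally means hosting the site in IIS (or, with later tooling, IIS Express) with bindings for each domain.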

    Read the article

  • How should I unit test a code-generator?

    - by jkp
    This is a difficult and open-ended question, I know, but I thought I'd throw it to the floor and see if anyone had any interesting suggestions. I have developed a code generator that takes our Python interface to our C++ code (generated via SWIG) and generates the code needed to expose this as WebServices. When I developed this code I did it using TDD, but I've found my tests to be brittle as hell. Because each test essentially wanted to verify that for a given bit of input code (which happens to be a C++ header) I'd get a given bit of outputted code, I wrote a small engine that reads test definitions from XML input files and generates test cases from these expectations. The problem is I dread going in to modify the code at all. That, and the fact that the unit tests themselves are (a) complex and (b) brittle. So I'm trying to think of alternative approaches to this problem, and it strikes me I'm perhaps tackling it the wrong way. Maybe I need to focus more on the outcome, i.e.: does the code I generate actually run and do what I want it to, rather than: does the code look the way I want it to? Has anyone got any experience of something similar to this they would care to share?
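    One way to act on that outcome-focused idea, sketched here in C# with made-up names since the point is the pattern rather than the poster's Python/SWIG toolchain, is to compile the generated source inside the test and assert on its behaviour instead of its exact text:

        using System.CodeDom.Compiler;
        using Microsoft.CSharp;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class GeneratedCodeBehaviourTests
        {
            [TestMethod]
            public void Generated_adder_compiles_and_actually_adds()
            {
                // Stand-in for the real generator output; in practice this string
                // would come from the code generator under test.
                string source =
                    "public static class Adder { public static int Add(int a, int b) { return a + b; } }";

                var results = new CSharpCodeProvider().CompileAssemblyFromSource(
                    new CompilerParameters { GenerateInMemory = true }, source);

                Assert.IsFalse(results.Errors.HasErrors, "generated code must compile");

                var adder = results.CompiledAssembly.GetType("Adder");
                object sum = adder.GetMethod("Add").Invoke(null, new object[] { 2, 3 });

                Assert.AreEqual(5, sum);
            }
        }

    Tests written this way keep passing when the generator's formatting or naming details change, which is exactly the brittleness being complained about above.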

    Read the article

  • Maven test dependency in multi module project

    - by user209947
    I use Maven to build a multi-module project. My module 2 depends on module 1's sources at compile scope and module 1's tests at test scope.

    Module 2:

        <dependency>
            <groupId>blah</groupId>
            <artifactId>MODULE1</artifactId>
            <version>blah</version>
            <classifier>tests</classifier>
            <scope>test</scope>
        </dependency>

    This works fine. Say my module 3 depends on module 1's sources and tests at compile time.

    Module 3:

        <dependency>
            <groupId>blah</groupId>
            <artifactId>MODULE1</artifactId>
            <version>blah</version>
            <classifier>tests</classifier>
            <scope>compile</scope>
        </dependency>

    When I run mvn clean install, my build runs up to module 3 and fails there, as it couldn't resolve the module 1 test dependency. Then I do a mvn install on module 3 alone, go back, and run mvn install on my parent pom to make it build. How can I fix this?

    Read the article

  • Unit testing with Mocks. Test behaviour not implementation

    - by Kenny Eliasson
    Hi. I always had a problem when unit testing classes that call other classes. For example, I have a class that creates a new user from a phone number, then saves it to the database and sends an SMS to the number provided, like the code provided below.

        public class UserRegistrationProcess : IUserRegistration
        {
            private readonly IRepository _repository;
            private readonly ISmsService _smsService;

            public UserRegistrationProcess(IRepository repository, ISmsService smsService)
            {
                _repository = repository;
                _smsService = smsService;
            }

            public void Register(string phone)
            {
                var user = new User(phone);
                _repository.Save(user);
                _smsService.Send(phone, "Welcome", "Message!");
            }
        }

    It is a really simple class, but how would you go about testing it? At the moment I'm using mocks, but I don't really like it:

        [Test]
        public void WhenRegistreringANewUser_TheNewUserIsSavedToTheDatabase()
        {
            var repository = new Mock<IRepository>();
            var smsService = new Mock<ISmsService>();
            var userRegistration = new UserRegistrationProcess(repository.Object, smsService.Object);
            var phone = "0768524440";

            userRegistration.Register(phone);

            repository.Verify(x => x.Save(It.Is<User>(user => user.Phone == phone)), Times.Once());
        }

        [Test]
        public void WhenRegistreringANewUser_ItWillSendANewSms()
        {
            var repository = new Mock<IRepository>();
            var smsService = new Mock<ISmsService>();
            var userRegistration = new UserRegistrationProcess(repository.Object, smsService.Object);
            var phone = "0768524440";

            userRegistration.Register(phone);

            smsService.Verify(x => x.Send(phone, It.IsAny<string>(), It.IsAny<string>()), Times.Once());
        }

    It feels like I am testing the wrong thing here. Any thoughts on how to make this better?
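    For what it is worth, the two tests above already verify behaviour (the observable interactions with the repository and the SMS service) rather than implementation details. One small possible tidy-up, sketched with the question's own types in the same NUnit/Moq style, is to pull the duplicated arrange code into a setup method so each test states only the interaction it cares about:

        [TestFixture]
        public class UserRegistrationProcessTests
        {
            private Mock<IRepository> _repository;
            private Mock<ISmsService> _smsService;
            private UserRegistrationProcess _userRegistration;
            private const string Phone = "0768524440";

            [SetUp]
            public void CreateProcessWithMockedCollaborators()
            {
                _repository = new Mock<IRepository>();
                _smsService = new Mock<ISmsService>();
                _userRegistration = new UserRegistrationProcess(_repository.Object, _smsService.Object);
            }

            [Test]
            public void Register_SavesAUserWithTheGivenPhoneNumber()
            {
                _userRegistration.Register(Phone);

                _repository.Verify(x => x.Save(It.Is<User>(u => u.Phone == Phone)), Times.Once());
            }

            [Test]
            public void Register_SendsAWelcomeSmsToTheGivenNumber()
            {
                _userRegistration.Register(Phone);

                _smsService.Verify(x => x.Send(Phone, It.IsAny<string>(), It.IsAny<string>()), Times.Once());
            }
        }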

    Read the article

  • Why PHPUnit test doesn't fail?

    - by JohnM2
    I have a test method which looks like this:

        $row = $this->GetRowFromUserTable($id);
        $this->assertLessThan(time(), time($row['last_update']));

    When $row is null, access to $row['last_update'] should trigger a NOTICE and this test should fail, but it doesn't. This code fails on the first assert, so I know $row is null (the fixture is the same):

        $row = $this->GetRowFromUserTable($id);
        $this->assertNotNull($row);
        $this->assertLessThan(time(), time($row['last_update']));

    When I write this:

        $row = $this->GetRowFromUserTable($id);
        $this->assertEquals(E_ALL, error_reporting());
        $this->assertLessThan(time(), time($row['last_update']));

    it succeeds, so I am also sure that error_reporting is right, and this situation must have something to do with PHPUnit. I use PHPUnit 3.5.6. I read this question: "Can I make PHPUnit fail if the code throws a notice?" but the answer suggests using a newer version of PHPUnit, and that answer is from 2009, so that's not it.

    EDIT: I use NetBeans IDE 6.9.1 to run my test.

    Read the article

  • Rails rake test returns an error message

    - by eakkas
    I am a Rails newbie and receive the following message when I run rake test. This is an application based on Rails Community Engine. I tried creating a test application just to make sure that my gems etc. are fine, and I am able to run rake test successfully in that application. It would be great if someone could shed some light on what is going wrong...

        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/whiny_nil.rb:52:in `method_missing': undefined method `merge' for nil:NilClass (NoMethodError)
        from /home/eakkas/NetBeansProjects/hello_ce/vendor/plugins/community_engine/app/controllers/users_controller.rb:17
        from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require_without_desert'
        from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/ruby/object.rb:8:in `require'
        from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/ruby/object.rb:32:in `__each_matching_file'
        from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/ruby/object.rb:7:in `require'
        from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:265:in `require_or_load'
        from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/rails/dependencies.rb:27:in `depend_on'
        from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/rails/dependencies.rb:26:in `each'
        from /usr/lib/ruby/gems/1.8/gems/desert-0.5.3/lib/desert/rails/dependencies.rb:26:in `depend_on'
        from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:136:in `require_dependency'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:414:in `load_application_classes'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:413:in `each'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:413:in `load_application_classes'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:411:in `each'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:411:in `load_application_classes'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:197:in `process'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `send'
        from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `run'

    Read the article

  • How to test if a doctrine records has any relations that are used

    - by murze
    Hi, I'm using a Doctrine table that has several optional relations (of types Doctrine_Relation_Association and Doctrine_Relation_ForeignKey) with other tables. How can I test whether a record from that table has connections with records from the related tables? Here is an example to make my question more clear. Assume that you have a User, a User has a many-to-many relation with Usergroups, and a User can have one Userrole. How can I test whether a given user is part of any Usergroups or has a role? The solution starts, I believe, with:

        $relations = Doctrine_Core::getTable('User')->getRelations();
        $user = Doctrine_Core::getTable('User')->findOne(1);
        foreach ($relations as $relation) {
            // here should go a test if the user has a related record for this relation
            if ($relation instanceof Doctrine_Relation_Association) {
                // here the related table probably has more than one foreign key (ex. user_id and group_id)
            }
            if ($relation instanceof Doctrine_Relation_ForeignKey) {
                // here the related table probably has the primary key of this table (id) as a foreign key (user_id)
            }
        }
        // true or false
        echo $result;

    I'm looking for a general solution that will work no matter how many relations there are between User and other tables. Thanks!

    Read the article
