Search Results

Search found 32072 results on 1283 pages for 'catch unit test'.

Page 103/1283

  • Unittest and mock

    - by user1410756
    I'm testing with unittest in Python and it works fine. Now I have introduced mock and I need to resolve a question. This is my code:

        from mock import Mock
        import unittest

        class Matematica(object):
            def __init__(self, op1, op2):
                self.op1 = op1
                self.op2 = op2
            def adder(self):
                return self.op1 + self.op2
            def subs(self):
                return abs(self.op1 - self.op2)
            def molt(self):
                return self.op1 * self.op2
            def divid(self):
                return self.op1 / self.op2

        class TestMatematica(unittest.TestCase):
            """Test della classe Matematica"""
            def testing(self):
                """Somma"""
                mat = Matematica(10, 20)
                self.assertEqual(mat.adder(), 30)
                """Sottrazione"""
                self.assertEqual(mat.subs(), 10)

        class test_mock(object):
            def __init__(self, matematica):
                self.matematica = matematica
            def execute(self):
                self.matematica.adder()
                self.matematica.adder()
                self.matematica.subs()

        if __name__ == "__main__":
            result = unittest.TextTestRunner(verbosity=2).run(TestMatematica('testing'))
            a = Matematica(10, 20)
            b = test_mock(a)
            b.execute()
            mock_foo = Mock(b.execute)  # return_value = 'rafa')
            mock_foo()
            print mock_foo.called
            print mock_foo.call_count
            print mock_foo.method_calls

    This code runs, and the result of the prints is: True, 1, []. Now I need to count how many times self.matematica.adder() and self.matematica.subs() are called. Thanks!
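    One way to get those counts is to spy on the Matematica instance rather than on execute(). A minimal sketch, assuming the mock library's wraps argument, which delegates calls to the real object while recording them:

        from mock import Mock

        a = Matematica(10, 20)
        spy = Mock(wraps=a)            # calls pass through to the real methods
        b = test_mock(spy)
        b.execute()

        print spy.adder.call_count     # -> 2
        print spy.subs.call_count      # -> 1
        print spy.method_calls         # -> records adder twice, then subs once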

    Read the article

  • Best way to do TDD in Express versions of Visual Studio (e.g. VB Express)

    - by Nathan W
    I have been looking into doing some test-driven development for one of the applications I'm currently writing (an OLE wrapper for an OLE object). The only problem is that I am using the Express versions of Visual Studio (for now); at the moment I am using VB Express, but sometimes I use C# Express. Is it possible to do TDD in the Express versions? If so, what are the best ways to go about it? Cheers. EDIT: By the looks of things I will have to buy the full Visual Studio so that I can do integrated TDD; hopefully there is money in the budget to buy a copy :). For now I think I will use NUnit like everyone is saying.

    Read the article

  • Where is the "Create Instance" menu in Visual Studio 2010?

    - by austin powers
    Hi, in Visual Studio 2008 there is a sub-menu called "Create Instance" which resides in the Class Designer. Today I opened VS 2010, went into the Class Designer and created my class there, but when I wanted to test my class with the help of the "Create Instance" option, no such option was available. I googled about it a little bit but found no answer at all, so I decided to ask about it here. Where can I find this menu in VS 2010? Regards.

    Read the article

  • Hibernate Unit tests - Reset schema

    - by Marco
    Hi, I'm testing the CRUD operations of my DAOs in JUnit tests. When I execute a single test, Hibernate always resets the schema and populates the DB to a known state. But when I execute multiple tests in a row, Hibernate resets the schema only once, and the data then accumulates during the execution of the tests. This is unexpected behavior, so I'd like to add to the @Before method of the tests a function that explicitly resets the schema, to avoid the persistence of stale data created by previous tests during the execution chain. Any tips? Thanks

    Read the article

  • Unable to locate element using find_element_by_link_text

    - by First Rock
    Newbie in testing here. I generated a test case using Selenium and then exported it as a Python script. Now, when I try to run it in a terminal, I get the following error:

        raise exception_class(message, screen, stacktrace)
        NoSuchElementException: Message: u'Unable to locate element: {"method":"link text","selector":"delete"}'

    I am using the command generated by Selenium, i.e.

        driver.find_element_by_link_text("delete").click()

    The reason for the error, I believe, is that the "delete" link in my web page is only visible after I click on the particular line to be deleted, so Selenium is unable to locate the link. Please suggest an alternative way I could locate and click on the "delete" link. Thanks in advance :)
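    If the link only appears after the row is clicked, one approach is to click the row first and then wait explicitly for the link to become clickable. A minimal sketch, assuming Selenium's WebDriverWait (the row locator here is hypothetical; 'driver' is the WebDriver instance from the exported script):

        from selenium.webdriver.common.by import By
        from selenium.webdriver.support.ui import WebDriverWait
        from selenium.webdriver.support import expected_conditions as EC

        # click the row that reveals the "delete" link (locator is made up)
        driver.find_element_by_css_selector("tr.item").click()

        # wait up to 10 seconds for the link to be clickable, then click it
        delete = WebDriverWait(driver, 10).until(
            EC.element_to_be_clickable((By.LINK_TEXT, "delete")))
        delete.click()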

    Read the article

  • JUnit Theories: Why can't I use Lists (instead of arrays) as DataPoints?

    - by MatrixFrog
    I've started using the new(ish) JUnit Theories feature for parameterizing tests. If your theory is set up to take, for example, an Integer argument, the Theories test runner picks up any Integer marked with @DataPoint:

        @DataPoint public static Integer number = 0;

    as well as any Integers in arrays:

        @DataPoints public static Integer[] numbers = {1, 2, 3};

    or even methods that return arrays, like:

        @DataPoints public static Integer[] moreNumbers() { return new Integer[] {4, 5, 6}; }

    but not in Lists. The following does not work:

        @DataPoints public static List<Integer> numberList = Arrays.asList(7, 8, 9);

    Am I doing something wrong, or do Lists really not work? Was it a conscious design choice not to allow the use of Lists as data points, or is that just a feature that hasn't been implemented yet? Are there plans to implement it in a future version of JUnit?

    Read the article

  • Does Visual Studio run Tests with a less privileged process?

    - by Filip Ekberg
    I have an application that is supposed to read from the registry. When executing a console application, my registry access works perfectly. However, when I move it over to a test, this returns null:

        var masterKey = Registry.LocalMachine.OpenSubKey("path_to_my_key");

    So my question is: does Visual Studio run tests with a less privileged process? I tested to see what user this gave me:

        var x = WindowsIdentity.GetCurrent().Name;

    and it gives me the same as in the console application. So I am a bit confused here.

    Read the article

  • Keeping an iPhone app private after App Store approval, for beta testing

    - by Stack
    I have messed around with ad-hoc distribution quite a bit and got it working too. The problem I am facing is that all the people I want to use as beta testers are "normal people" who never even sync their iPhone to iTunes on a computer. So you can understand how technically challenged these people are, which is fine with me, because that is the audience I want to use for testing. All these guys can do for me is: if I give them an App Store link, they will download the app on their iPhone and test it for me. So, basically, ad-hoc distribution (UDIDs, the mobileprovision file and all that) is out of the question for me. My question is: after the App Store approves my app, is there a way for me to stay under the radar so that the general public cannot download the app until I am ready? From past experience I know that the moment you put an app out there, you get hundreds of downloads in the first week, and I don't want that to happen until my beta testing is finished.

    Read the article

  • JUnit - assertSame

    - by Michael
    Can someone tell me why assertSame() fails when I use values above 127?

        import static org.junit.Assert.*;
        ...
        @Test
        public void StationTest1() {
            ...
            assertSame(4, 4);     // OK
            assertSame(10, 10);   // OK
            assertSame(100, 100); // OK
            assertSame(127, 127); // OK
            assertSame(128, 128); // raises a junit.framework.AssertionFailedError!
            assertSame(((int) 128), ((int) 128)); // also an AssertionFailedError!
        }

    I'm using JUnit 4.8.1.
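    The cause: assertSame compares references with ==, and the int arguments are autoboxed to Integer via Integer.valueOf(), which is only guaranteed to cache values from -128 to 127; 128 boxes to two distinct objects, so assertEquals is the right assertion here. CPython has an analogous small-integer cache, so the same identity-versus-equality trap can be sketched in Python:

        # CPython caches small ints (roughly -5..256); 'is' checks identity,
        # '==' checks value, so identity "works" only for cached values.
        a = int("127"); b = int("127")
        print(a is b, a == b)    # True True   (shared cached object)
        a = int("1000"); b = int("1000")
        print(a is b, a == b)    # False True  (distinct but equal objects)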

    Read the article

  • UDP: expected behaviour not matching test results

    - by ernst
    I have a local network topology structured as follows: three hosts and a switch in the middle. I am using a switch that supports 10/100/1000 Mbit/s full/half duplex connections. I have configured the hosts with static IPs 172.16.0.1-2-3/25. This is the output of ifconfig:

        eth0  Link encap: Ethernet  HWaddr *****
              inet addr: 172.16.0.3  Bcast: 172.16.0.127  Mask: 255.255.255.128
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Interrupt:16

    The output on H1 and H2 matches this exactly. The hosts are mutually reachable, since I have tested the network with ping. I have forced the Ethernet interface to work at 10M with:

        ethtool -s eth0 speed 10 duplex full autoneg on

    This is the output of ethtool eth0:

        Supported ports: [ TP ]
        Supported link modes: 10baseT/Half 10baseT/Full
                              100baseT/Half 100baseT/Full
                              1000baseT/Half 1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes: 10baseT/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: Yes
        Speed: 10Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: g
        Wake-on: d
        Current message level: 0x000000ff (255)
                               drv probe link timer ifdown ifup rx_err tx_err
        Link detected: yes

    I am running an experimental test using nttcp to calculate the goodput in the case where H1 and H2 send data to H3 at the same time. Since the three links are forced to the same capacity, and the amount of arriving data would be 10M from H1 + 10M from H2 = 20M towards H3, a bottleneck effect would be expected and, due to the unreliable nature of UDP, packet loss. But this doesn't happen: the output of the nttcp application shows the same number of bytes sent and received. This is the output of nttcp on H3:

        nttcp -T -r -u 172.16.0.2 & nttcp -T -r -u 172.16.0.1
        [1] 4071
             Bytes   Real s  CPU s  Real-MBit/s  CPU-MBit/s  Calls  Real-C/s  CPU-C/s
        l  8388608   13.74   0.05      4.8848    1398.0140   2049    149.14   42684.8
        l  8388608   14.02   0.05      4.7872    1398.0140   2049    146.17   42684.8
        1  8388608   13.56   0.06      4.9500    1118.4065   2051    151.28   34181.1
        1  8388608   13.89   0.06      4.8310    1198.3084   2051    147.65   36623.0

    How is this possible? Am I missing something? Any help will be gratefully appreciated. Best regards
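    Worth noting from the table itself: each sender's Real-MBit/s is only about 4.8-4.9, so the combined offered load (roughly 9.7 Mbit/s) never exceeds the 10 Mbit/s link to H3, which would explain the absence of loss. To cross-check independently of nttcp, a minimal UDP sender/receiver pair that counts bytes on both ends can be used; a sketch (port and sizes are arbitrary):

        import socket, sys, time

        PORT = 5005  # arbitrary

        def receiver():
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.bind(("", PORT))
            s.settimeout(5.0)
            got = 0
            try:
                while True:
                    data, _ = s.recvfrom(2048)
                    got += len(data)
            except socket.timeout:
                print("received %d bytes" % got)

        def sender(host, nbytes=8388608, chunk=1024):
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sent, t0 = 0, time.time()
            while sent < nbytes:
                s.sendto(b"x" * chunk, (host, PORT))
                sent += chunk
            print("sent %d bytes in %.2f s" % (sent, time.time() - t0))

        if __name__ == "__main__":
            receiver() if sys.argv[1] == "recv" else sender(sys.argv[2])

    Comparing the two byte counts shows directly whether datagrams were dropped.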

    Read the article

  • How do I make the manifest available during a Maven/Surefire unit test run ("mvn test")?

    - by Ernst de Haan
    How do I make the manifest available during a Maven/Surefire unit test run ("mvn test")? I have an open-source project that I am converting from Ant to Maven, including its unit tests. Here's the project source repository with the Maven project: http://github.com/znerd/logdoc My question pertains to the primary module, called "base". This module has a unit test that tests the behaviour of the static method getVersion() in the class org.znerd.logdoc.Library. This method returns:

        Library.class.getPackage().getImplementationVersion()

    The getImplementationVersion() method returns the value of a setting in the manifest file. So far, so good. I have tested this in the past and it works well, as long as the manifest is indeed available on the classpath at the path META-INF/MANIFEST.MF (either on the file system or inside a JAR file). Now my challenge is that the manifest file is not available when I run the unit tests with mvn test. Surefire runs the unit tests, but my unit test fails with a message indicating that Library.getVersion() returned null. When I check for the JAR, I find that it has not even been generated: Maven/Surefire runs the unit tests against the classes, before the resources are added to the classpath. So can I either run the unit tests against the JAR (implicitly requiring the JAR to be generated first), or make sure the resources (including the manifest file) are generated/copied under target/classes before the unit tests are run? Note that I use Maven 2.2.0 and Java 1.6.0_17 on Mac OS X 10.6.2, with JUnit 4.8.1.

    Read the article

  • Count number of queries executed by NHibernate in a unit test

    - by Bittercoder
    In some unit/integration tests of our code we wish to check that the second level cache is being used correctly. Based on the code presented by Ayende here: http://ayende.com/Blog/archive/2006/09/07/MeasuringNHibernatesQueriesPerPage.aspx I wrote a simple class for doing just that:

        public class QueryCounter : IDisposable
        {
            CountToContextItemsAppender _appender;

            public int QueryCount
            {
                get { return _appender.Count; }
            }

            public void Dispose()
            {
                var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger;
                logger.RemoveAppender(_appender);
            }

            public static QueryCounter Start()
            {
                var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger;
                lock (logger)
                {
                    foreach (IAppender existingAppender in logger.Appenders)
                    {
                        if (existingAppender is CountToContextItemsAppender)
                        {
                            var countAppender = (CountToContextItemsAppender) existingAppender;
                            countAppender.Reset();
                            return new QueryCounter { _appender = countAppender };
                        }
                    }
                    var newAppender = new CountToContextItemsAppender();
                    logger.AddAppender(newAppender);
                    logger.Level = Level.Debug;
                    logger.Additivity = false;
                    return new QueryCounter { _appender = newAppender };
                }
            }

            public class CountToContextItemsAppender : IAppender
            {
                int _count;

                public int Count
                {
                    get { return _count; }
                }

                public void Close() { }

                public void DoAppend(LoggingEvent loggingEvent)
                {
                    if (string.Empty.Equals(loggingEvent.MessageObject)) return;
                    _count++;
                }

                public string Name { get; set; }

                public void Reset() { _count = 0; }
            }
        }

    With the intended usage:

        using (var counter = QueryCounter.Start())
        {
            // ... do something
            Assert.Equal(1, counter.QueryCount); // check the query count matches our expectations
        }

    But it always returns 0 for the query count. No SQL statements are being logged. However, if I make use of NHibernate Profiler and invoke this in my test case:

        NHibernateProfiler.Initialize();

    where NHProf uses a similar approach to capture logging output from NHibernate for analysis via log4net etc., then my QueryCounter starts working. It looks like I'm missing something in my code to get log4net configured correctly for logging NHibernate SQL. Does anyone have any pointers on what else I need to do to get SQL logging output from NHibernate?

    Read the article

  • Dependency Injection and Unit of Work pattern

    - by sunwukung
    I have a dilemma. I've used DI (read: factory) to provide core components for a homebrew ORM. The container provides database connections, DAOs, mappers and their resultant domain objects on request. Here's a basic outline of the Mapper and Domain Object classes:

        class Mapper {
            public function __construct($DAO) {
                $this->DAO = $DAO;
            }
            public function load($id) {
                if (isset(Monitor::members[$id])) {
                    return Monitor::members[$id];
                }
                $values = $this->DAO->selectStmt($id);
                // field mapping process omitted for brevity
                $Object = new Object($values);
                return $Object;
            }
        }

        class User {
            public function setName($string) {
                $this->name = $string;
                // mark modified by means fair or foul
            }
        }

    The ORM also contains a class (Monitor) based on the Unit of Work pattern, i.e.

        class Monitor {
            private static $modified = array();
            private static $dirty = array();
            public function markClean($class);
            public function markModified($class);
        }

    The ORM class itself simply coordinates resources extracted from the DI container. So, to instantiate a new User object:

        $Container = new DI_Container;
        $ORM = new ORM($Container);
        $User = $ORM->load('user', 1);
        // at this point the container instantiates a mapper class
        // and passes a database connection to it via the constructor;
        // the mapper then takes the second argument and loads the user with that id
        $User->setName('Rumpelstiltskin');
        // at this point, User must mark itself as "modified"

    My question is this: at the point when a user sets values on a Domain Object class, I need to mark the class as "dirty" in the Monitor class. As I see it, I have one of three options:

    1. Pass an instance of the Monitor class to the Domain Object. I noticed this gets marked as recursive in FirePHP - i.e. $this->Monitor->markModified($this)
    2. Instantiate the Monitor directly in the Domain Object - does this break DI?
    3. Make the Monitor methods static and call them from inside the Domain Object - this breaks DI too, doesn't it?

    What would be your recommended course of action (other than using an existing ORM - I'm doing this for fun...)?
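    For what it's worth, option 1 keeps the dependency explicit and test-friendly; a minimal sketch of that shape (in Python for brevity, with names mirroring the question):

        class Monitor(object):
            def __init__(self):
                self.modified = set()

            def mark_modified(self, obj):
                self.modified.add(obj)

        class User(object):
            def __init__(self, monitor):
                self._monitor = monitor  # injected, not global: DI preserved
                self.name = None

            def set_name(self, name):
                self.name = name
                self._monitor.mark_modified(self)

        monitor = Monitor()
        user = User(monitor)
        user.set_name("Rumpelstiltskin")
        assert user in monitor.modified

    The back-reference from the domain object to the Monitor is what FirePHP flags as recursive, but it is just a cyclic object graph, not infinite recursion.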

    Read the article

  • Mock Object and Interface

    - by tunl
    I'm a newbie at unit testing with mock objects. I use EasyMock. I'm trying to understand this example:

        import java.io.IOException;

        public interface ExchangeRate {
            double getRate(String inputCurrency, String outputCurrency) throws IOException;
        }

        import java.io.IOException;

        public class Currency {
            private String units;
            private long amount;
            private int cents;

            public Currency(double amount, String code) {
                this.units = code;
                setAmount(amount);
            }

            private void setAmount(double amount) {
                this.amount = new Double(amount).longValue();
                this.cents = (int) ((amount * 100.0) % 100);
            }

            public Currency toEuros(ExchangeRate converter) {
                if ("EUR".equals(units)) return this;
                else {
                    double input = amount + cents / 100.0;
                    double rate;
                    try {
                        rate = converter.getRate(units, "EUR");
                        double output = input * rate;
                        return new Currency(output, "EUR");
                    } catch (IOException ex) {
                        return null;
                    }
                }
            }

            public boolean equals(Object o) {
                if (o instanceof Currency) {
                    Currency other = (Currency) o;
                    return this.units.equals(other.units)
                        && this.amount == other.amount
                        && this.cents == other.cents;
                }
                return false;
            }

            public String toString() {
                return amount + "." + Math.abs(cents) + " " + units;
            }
        }

        import junit.framework.TestCase;
        import org.easymock.EasyMock;
        import java.io.IOException;

        public class CurrencyTest extends TestCase {
            public void testToEuros() throws IOException {
                Currency testObject = new Currency(2.50, "USD");
                Currency expected = new Currency(3.75, "EUR");
                ExchangeRate mock = EasyMock.createMock(ExchangeRate.class);
                EasyMock.expect(mock.getRate("USD", "EUR")).andReturn(1.5);
                EasyMock.replay(mock);
                Currency actual = testObject.toEuros(mock);
                assertEquals(expected, actual);
            }
        }

    So I wonder how Currency can use ExchangeRate in the toEuros(..) method:

        rate = converter.getRate(units, "EUR");

    The behavior of getRate(..) is not specified anywhere, because ExchangeRate is an interface.

    Read the article

  • Is using a Singleton to pass credentials in a multi-tenant application a code smell?

    - by Hans Gruber
    I'm currently working on a multi-tenant application that employs a Shared DB/Shared Schema approach. In other words, we enforce tenant data segregation by defining a TenantID column on all tables. By convention, all SQL reads/writes must include a Where TenantID = '?' clause. Not an ideal solution, but hindsight is 20/20. Anyway, since virtually every page/workflow in our app must display tenant-specific data, I made the (poor) decision at the project's outset to employ a singleton to encapsulate the current user credentials (i.e. TenantID and UserID). My thinking at the time was that I didn't want to add a TenantID parameter to each and every method signature in my data layer. Here's what the basic pseudo-code looks like:

        public class UserIdentity
        {
            public UserIdentity(int tenantID, int userID)
            {
                TenantID = tenantID;
                UserID = userID;
            }

            public int TenantID { get; private set; }
            public int UserID { get; private set; }
        }

        public class AuthenticationModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                context.AuthenticateRequest += new EventHandler(context_AuthenticateRequest);
            }

            private void context_AuthenticateRequest(object sender, EventArgs e)
            {
                var userIdentity = _authenticationService.AuthenticateUser(sender);
                if (userIdentity == null)
                {
                    // authentication failed, so redirect to login page, etc.
                }
                else
                {
                    // put the userIdentity into the HttpContext object so that
                    // it's only valid for the lifetime of a single request
                    HttpContext.Current.Items["UserIdentity"] = userIdentity;
                }
            }
        }

        public static class CurrentUser
        {
            public static UserIdentity Instance
            {
                get { return HttpContext.Current.Items["UserIdentity"]; }
            }
        }

        public class WidgetRepository : IWidgetRepository
        {
            public IEnumerable<Widget> ListWidgets()
            {
                var tenantId = CurrentUser.Instance.TenantID;
                // call sproc with tenantId parameter
            }
        }

    As you can see, there are several code smells here. This is a singleton, so it's already not unit-test friendly. On top of that, there is very tight coupling between CurrentUser and the HttpContext object. By extension, this also means that I have a reference to System.Web in my data layer (shudder). I want to pay down some technical debt this sprint by getting rid of this singleton for the reasons mentioned above. I have a few thoughts on what a better implementation might be, but if anyone has any guidance or lessons learned they could share, I would be much obliged.

    Read the article

  • Any suggestions for improvement on this style for BDD/TDD?

    - by Sean B
    I was tinkering with the setups in our unit test specifications, which read like:

        Specification for SUT when behaviour X happens in scenario Y
            Given that this thing
            And also this other thing
            When I do X...
            Then It should do ...
            And It should also do ...

    I wrapped each of the steps of the GivenThat in Actions. Any feedback on whether separating the steps with Actions is good / bad / or a better way to make the GivenThat clear?

        /// <summary>
        /// Given a product is setup for injection
        /// And Product Image Factory Is Stubbed();
        /// And Product Size Is Stubbed();
        /// And Drawing Scale Is Stubbed();
        /// And Product Type Is Stubbed();
        /// </summary>
        protected override void GivenThat()
        {
            base.GivenThat();

            Action givenThatAProductIsSetupforInjection = () =>
            {
                var randomGenerator = new RandomGenerator();
                this.Position = randomGenerator.Generate<Point>();
                this.Product = new Diffuser
                {
                    Size = new RectangularProductSize(2.Inches()),
                    Position = this.Position,
                    ProductType = Dep<IProductType>()
                };
            };

            Action andProductImageFactoryIsStubbed = () =>
                Dep<IProductBitmapImageFactory>()
                    .Stub(f => f.GetInstance(Dep<IProductType>()))
                    .Return(ExpectedBitmapImage);

            Action andProductSizeIsStubbed = () =>
            {
                Stub<IDisplacementProduct, IProductSize>(p => p.Size);
                var productBounds = new ProductBounds(Width.Feet(), Height.Feet());
                Dep<IProductSize>().Stub(s => s.Bounds).Return(productBounds);
            };

            Action andDrawingScaleIsStubbed = () =>
                Dep<IDrawingScale>().Stub(s => s.PixelsPerFoot).Return(PixelsPerFoot);

            Action andProductTypeIsStubbed = () =>
                Stub<IDisplacementProduct, IProductType>(p => p.ProductType);

            givenThatAProductIsSetupforInjection();
            andProductImageFactoryIsStubbed();
            andProductSizeIsStubbed();
            andDrawingScaleIsStubbed();
            andProductTypeIsStubbed();
        }

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”, and whether ITDD wasn't a violation of TDD's principle of leaving out “advanced topics like mocks”. I like to respond to his questions with this article. There's more to say than fits into a commentary.

    Mocks and TDD

    I don't see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access, you can't avoid using mocks. Because you don't want to constantly run all your tests against the real resource.

    True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced”, or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for “To Roman Numerals”

    gustav asked for the kata “To Roman Numerals”. I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like the one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear.

    If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which not only should hold for software functional units but also for tasks or activities.

    In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions.

    Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking Roman numerals from a Wikipedia article. They are supposed to be just “typical examples” without special meaning.

    Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How Roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody.

    Before you code, you better have a plan for what to code. This does not mean you have to do “Big Design Up-Front”. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required.

    But what about IV, which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult, because here some “digits” get subtracted. Here's the list of roman “digits” with their values:

        {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a two-phase process: first find all the factors, then translate the factors found and compile the roman representation. Translation is just a look-up. Finding, though, needs some calculation:

    1. Find the highest remaining factor fitting in the value
    2. Remember it and subtract it from the value
    3. Repeat with the remaining value and the remaining factors

    Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand, I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are a few aspects to test:

    - Test the translation
    - Test the compilation
    - Test finding the factors

    Testing the translation primarily means checking that the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps:

    1. First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M).
    2. Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly.
    3. Then check if a number consisting of the same factor twice (e.g. 2000=[M,M]) is processed correctly.
    4. Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly.

    I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    (The original post illustrates each of the following steps with code screenshots.) First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with “TDD lore”, I'd say. Next I implement the API. The acceptance test now is formally correct, but still red, of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one.

    I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible – right how TDD wants me to do it: KISS.

    Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important.

    Now for the next low hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose…

    Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one is.

    The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor: interestingly not just one test becomes green now, but all of them. Great! You might say, then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion, of which I had thought briefly during the problem solving phase. And by the way: the acceptance tests also went green. Mission accomplished – at least functionality-wise.

    Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I rather call it “cleanup”.

    First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class and my final production code; test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case.

    That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution – the leaves of the functional decomposition tree. Where things became fuzzy, because the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high-level tests, while first solving a simpler problem.

    Using scaffolding tests (to be thrown away at the end) brought two advantages:

    - Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them.
    - I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice, but it should not be taken as dogma. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus almost is inevitable – and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
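    The algorithm the article describes - greedily take the largest "digit" that still fits, subtract, repeat, then concatenate the translations - is compact enough to sketch in a few lines (Python here; the post's own code is C#/.NET):

        # Greedy factor finding and translation in one pass.
        FACTORS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                   (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                   (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

        def to_roman(arabic):
            roman = []
            for value, digit in FACTORS:
                while arabic >= value:   # a factor may repeat, e.g. 2000 = MM
                    roman.append(digit)
                    arabic -= value
            return "".join(roman)

        assert to_roman(1954) == "MCMLIV"
        assert to_roman(2014) == "MMXIV"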

    Read the article

  • MSTest VS2010 - DeploymentItem copying files to different locations on different machines

    - by Jack
    I have found that DeploymentItem

        [TestClass(), DeploymentItem(@"TestData\")]

    is not copying my test data files to the same location when tests are built and run on different machines. The test data files are copied to the bin\debug directory in the test project on my machine, but on my friend's machine they are copied to TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out. The bin\debug directory on my machine can be obtained with the code:

        string appDirectory = Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location);

    and the same code will return TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out on my friend's PC. This however isn't really the problem. The problem is that my test data files have a folder structure, and this folder structure is only maintained on my machine when copied to bin\debug, whereas on my friend's machine only the files are added to the TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out directory. This means that tests will pass on my machine and fail on his! Is there a way to ensure that DeploymentItem always copies to the bin\debug folder? Or a way to ensure that the folder structure will be retained when DeploymentItem copies the files to the TestResults\*name_machine YY-MM-DD HH_MM_SS*\Out folder?

    Read the article

  • Why am I getting "ArgumentError: wrong number of arguments (1 for 0)" when running my Rails functional tests?

    - by Hisham
    I'm stumped on what's causing this. I get this error and stack trace in all my functional tests where I call 'post'. Here is the full stack trace:

        7) Error:
        test_should_validate(UsersControllerTest):
        ArgumentError: wrong number of arguments (1 for 0)
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:48:in `to_query'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:48:in `build_query_string'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:46:in `each'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:46:in `build_query_string'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:233:in `append_query_string'
        generated code (/Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:154):3:in `generate'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route_set.rb:365:in `__send__'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route_set.rb:365:in `generate'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route_set.rb:364:in `each'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route_set.rb:364:in `generate'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/url_rewriter.rb:208:in `rewrite_path'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/url_rewriter.rb:187:in `rewrite_url'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/url_rewriter.rb:165:in `rewrite'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/test_process.rb:450:in `build_request_uri'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/test_process.rb:406:in `process'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/test_process.rb:376:in `post'
        functional/users_controller_test.rb:57:in `test_should_validate'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/activesupport/lib/active_support/testing/setup_and_teardown.rb:60:in `__send__'
        /Users/hisham/src/rails/ftuBackend/vendor/rails/activesupport/lib/active_support/testing/setup_and_teardown.rb:60:in `run'

    This is the test I'm running:

        def test_should_validate
          post :validate, :user => { :email => '[email protected]',
                                     :password => 'quire',
                                     :password_confirmation => 'quire',
                                     :agreed_to_terms => "true" }
          assert assigns(:user).errors.empty?
          assert_response :success
        end

    Read the article

  • Workflow for academic research projects, one-step builds, and the Joel Test

    - by Steve
    Working alone on academic research sometimes breeds bad habits. With no one else reading my code, I would write a lot of throw-away code, and I would lose track of intermediate results which, weeks or months later, I wished I had retained. My recent attempts to make my personal workflow conform to the Joel Test raised interesting questions. Academic research has inherently different goals than industrial software development, and therefore some aspects of the Joel Test become less valid. Nevertheless, I find these steps still valuable for academic research:

    1. Do you use source control?
    2. Can you make a build in one step?
    3. Do you have an up-to-date schedule?
    4. Do you have a spec?

    Of particular use is the one-step build. I find myself more organized now that I have implemented the following "one-step build": I have a single script, build.py, that accepts Python code, data, and TeX as inputs (a sketch of such a script follows below). The outputs are results, figures, and a paper with all the results filled in. (Yes, I know "build" is probably not accurate in this context, but you get the idea.) By consolidating many small steps into one, I am not backtracking as much as I used to... but I'm sure there is still room for improvement.

    Question: For research projects, which steps of the Joel Test do you still value? Do you have a one-step build? If so, what does yours consist of, i.e., what inputs does it accept, and what output does it generate?
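    A minimal shape for such a build.py, assuming hypothetical experiments.py and figures.py stages and a pdflatex toolchain:

        # build.py - one-step build: results -> figures -> paper
        import subprocess, sys

        def run(cmd):
            print(">>", " ".join(cmd))
            subprocess.check_call(cmd)  # abort the build on the first failure

        def main():
            run([sys.executable, "experiments.py"])  # compute results
            run([sys.executable, "figures.py"])      # render figures
            # run LaTeX twice so cross-references resolve
            for _ in range(2):
                run(["pdflatex", "-interaction=nonstopmode", "paper.tex"])

        if __name__ == "__main__":
            main()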

    Read the article

  • How to set EnqueueCallback with my generic callback

    - by CrazyJoe
        using System;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Documents;
        using System.Windows.Ink;
        using System.Windows.Input;
        using System.Windows.Media;
        using System.Windows.Media.Animation;
        using System.Windows.Shapes;
        using Microsistec.Domain;
        using Microsistec.Client;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using System.Collections.Generic;
        using Microsistec.Tools;
        using System.Json;
        using Microsistec.SystemConfig;
        using System.Threading;
        using Microsoft.Silverlight.Testing;

        namespace Test
        {
            [TestClass]
            public class SampleTest : SilverlightTest
            {
                [TestMethod, Asynchronous]
                public void login()
                {
                    List<PostData> data = new List<PostData>();
                    data.Add(new PostData("email", "xxx"));
                    data.Add(new PostData("password", MD5.GetHashString("xxx")));
                    WebClient.sendData(Config.DataServerURL + "/user/login", data, LoginCallBack);
                    EnqueueCallback(?????????);
                    EnqueueTestComplete();
                }

                [Asynchronous]
                public void LoginCallBack(object sender, System.Net.UploadStringCompletedEventArgs e)
                {
                    string json = Microsistec.Client.WebClient.ProcessResult(e);
                    var result = JsonArray.Parse(json);
                    Assert.Equals("1", result["value"].ToString());
                    TestComplete();
                }
            }
        }

    I'm trying to fill in the ????????? value, but my callback is generic; it is set up in my WebClient.sendData call. How do I implement my EnqueueCallback so that it uses my existing LoginCallBack function?

    Read the article

  • Postfix/ClamAV not stopping viruses under Virtualmin

    - by Josh
    I am using Virtualmin and have it set up so that Postfix scans incoming emails with ClamAV (using clamdscan) and deletes any emails which contain a virus. However, when I email myself the EICAR test string, it comes through just fine. I know ClamAV will report this file as a virus. How can I troubleshoot this? What could be wrong?
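    One way to narrow it down is to inject the EICAR string locally through port 25 while watching the mail log, which confirms whether the message actually traverses the scanner at all. A sketch, assuming Postfix is listening on localhost (addresses are placeholders):

        # send the standard EICAR test string through the local Postfix
        import smtplib
        from email.mime.text import MIMEText

        EICAR = (r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
                 r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")

        msg = MIMEText(EICAR)
        msg["Subject"] = "EICAR filter test"
        msg["From"] = "test@example.com"
        msg["To"] = "you@example.com"

        s = smtplib.SMTP("localhost", 25)
        s.sendmail(msg["From"], [msg["To"]], msg.as_string())
        s.quit()

    If the message is delivered untouched, the next place to look is whether Postfix actually hands mail to the scanner (e.g. the content_filter setting) rather than ClamAV itself.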

    Read the article

  • VFP Unit Matrix Multiply problem on the iPhone

    - by Ian Copland
    Hi. I'm trying to write a 3x3 matrix multiply using the Vector Floating Point unit on the iPhone, but I'm encountering some problems. This is my first attempt at writing any ARM assembly, so it could be a fairly simple solution that I'm not seeing. I've currently got a small application running using a maths library that I've written. I'm investigating the benefits that using the Vector Floating Point unit would provide, so I've taken my matrix multiply and converted it to asm. Previously the application would run without a problem, but now my objects all randomly disappear. This seems to be caused by the results from my matrix multiply becoming NaN at some point. Here's the code:

        IMatrix3x3 operator*(IMatrix3x3 & _A, IMatrix3x3 & _B)
        {
            IMatrix3x3 C;

            //C++ code for the simulator
        #if TARGET_IPHONE_SIMULATOR == true
            C.A0 = _A.A0 * _B.A0 + _A.A1 * _B.B0 + _A.A2 * _B.C0;
            C.A1 = _A.A0 * _B.A1 + _A.A1 * _B.B1 + _A.A2 * _B.C1;
            C.A2 = _A.A0 * _B.A2 + _A.A1 * _B.B2 + _A.A2 * _B.C2;

            C.B0 = _A.B0 * _B.A0 + _A.B1 * _B.B0 + _A.B2 * _B.C0;
            C.B1 = _A.B0 * _B.A1 + _A.B1 * _B.B1 + _A.B2 * _B.C1;
            C.B2 = _A.B0 * _B.A2 + _A.B1 * _B.B2 + _A.B2 * _B.C2;

            C.C0 = _A.C0 * _B.A0 + _A.C1 * _B.B0 + _A.C2 * _B.C0;
            C.C1 = _A.C0 * _B.A1 + _A.C1 * _B.B1 + _A.C2 * _B.C1;
            C.C2 = _A.C0 * _B.A2 + _A.C1 * _B.B2 + _A.C2 * _B.C2;

            //VFP ARM asm for the device
        #else
            //create pointers to the matrices
            IMatrix3x3 * pA = &_A;
            IMatrix3x3 * pB = &_B;
            IMatrix3x3 * pC = &C;

            //asm code
            asm volatile(
                //turn on a vector depth of 3
                "fmrx r0, fpscr            \n\t"
                "bic r0, r0, #0x00370000   \n\t"
                "orr r0, r0, #0x00020000   \n\t"
                "fmxr fpscr, r0            \n\t"

                //load matrix B into the vector bank
                "fldmias %1, {s8-s16}      \n\t"

                //load the first row of A into the scalar bank
                "fldmias %0!, {s0-s2}      \n\t"

                //calculate C.A0, C.A1 and C.A2
                "fmuls s17, s8, s0         \n\t"
                "fmacs s17, s11, s1        \n\t"
                "fmacs s17, s14, s2        \n\t"

                //save this into the output
                "fstmias %2!, {s17-s19}    \n\t"

                //load the second row of A into the scalar bank
                "fldmias %0!, {s0-s2}      \n\t"

                //calculate C.B0, C.B1 and C.B2
                "fmuls s17, s8, s0         \n\t"
                "fmacs s17, s11, s1        \n\t"
                "fmacs s17, s14, s2        \n\t"

                //save this into the output
                "fstmias %2!, {s17-s19}    \n\t"

                //load the third row of A into the scalar bank
                "fldmias %0!, {s0-s2}      \n\t"

                //calculate C.C0, C.C1 and C.C2
                "fmuls s17, s8, s0         \n\t"
                "fmacs s17, s11, s1        \n\t"
                "fmacs s17, s14, s2        \n\t"

                //save this into the output
                "fstmias %2!, {s17-s19}    \n\t"

                //set the vector depth back to 1
                "fmrx r0, fpscr            \n\t"
                "bic r0, r0, #0x00370000   \n\t"
                "orr r0, r0, #0x00000000   \n\t"
                "fmxr fpscr, r0            \n\t"

                //pass the inputs and set the clobber list
                : "+r"(pA), "+r"(pB), "+r"(pC)
                :
                : "cc", "memory", "s0", "s1", "s2", "s8", "s9", "s10", "s11", "s12",
                  "s13", "s14", "s15", "s16", "s17", "s18", "s19"
            );
        #endif
            return C;
        }

    As far as I can see, that makes sense. While debugging, I've noticed that if I assign _A = C prior to the return and after the asm block, _A will not necessarily be equal to C, which has only increased my confusion. I had thought it was possibly due to the pointers I'm giving to the VFP being incremented by lines such as

        "fldmias %0!, {s0-s2}      \n\t"

    but my understanding of asm is not good enough to properly understand the problem, nor to see an alternative approach to that line of code. Anyway, I was hoping someone with a greater understanding than me would be able to see a solution, and any help would be greatly appreciated, thank you :-)

    Read the article

  • Externalizing JUnit stub objects

    - by Ajay
    Hi! In my project we created stub files for JUnit testing in Java itself (factories). However, we now have to externalize these stubs. After looking at a number of serializers/deserializers, we settled on using XStream to serialize and deserialize these stub objects. XStream works like a charm; it's pretty good at what it claims to do. Previously, we had a single factory class, say AFactory, which produced all the stubs needed for testing different test cases. For example:

        public final class AFactory {
            public static A createStub1() { /* Code here */ }
            public static A createStub2() { /* Code here */ }
            public static A createStub3() { /* Code here */ }
        }

    Now, when externalizing the stubs, we hit a roadblock: we had to create one XML file for each stub produced by the factory (A-stub1.xml, A-stub2.xml and A-stub3.xml). The problem with this approach is that it leads to a proliferation of XML stub files. I was thinking, how about keeping all the stubs related to a single bean class in a single XML file, like this:

        <?xml version="1.0"?>
        <stubs class="A">
            <stub id="stub1">
                <!-- Here comes the externalized xml stub representation -->
            </stub>
            <stub id="stub2">
            </stub>
        </stubs>

    Is there a framework which allows you to keep all the stubs, in XML representation, in a single XML file as above? Or what do you suggest the right approach would be?

    Read the article

  • Any tool to make git build every commit to a branch in a separate repository?

    - by Wayne
    A git tool that meets the specs below is needed. Does one already exists? If not, I will create a script and make it available on GitHub for others to use or contribute. Is there a completely different and better way to solve the need to build/test every commit to a branch in a git repository? Not just to the latest but each one back to a certain staring point. Background: Our development environment uses a separate continuous integration server which is wonderful. However, it is still necessary to do full builds locally on each developer's PC to make sure the commit won't "break the build" when pushed to the CI server. Unfortunately, with auto unit tests, those build force the developer to wait 10 or 15 minutes for a build every time. To solve this we have setup a "mirror" git repository on each developer PC. So we develop in the main repository but anytime a local full build is needed. We run a couple commands in a in the mirror repository to fetch, checkout the commit we want to build, and build. It's works extremely lovely so we can continue working in the main one with the build going in parallel. There's only one main concern now. We want to make sure every single commit builds and tests fine. But we often get busy and neglect to build several fresh commits. Then if it the build fails you have to do a bisect or manually figure build each interim commit to figure out which one broke. Requirements for this tool. The tool will look at another repo, origin by default, fetch and compare all commits that are in branches to 2 lists of commits. One list must hold successfully built commits and the other lists commits that failed. It identifies any commit or commits not yet in either list and begins to build them in a loop in the order that they were committed. It stops on the first one that fails. The tool appropriately adds each commit to either the successful or failed list after it as attempted to build each one. The tool will ignore any "legacy" commits which are prior to the oldest commit in the success list. This logic makes the starting point possible in the next point. Starting Point. The tool building a specific commit so that, if successful it gets added to the success list. If it is the earliest commit in the success list, it becomes the "starting point" so that none of the commits prior to that are examined for builds. Only linear tree support? Much like bisect, this tool works best on a commit tree which is, at least from it's starting point, linear without any merges. That is, it should be a tree which was built and updated entirely via rebase and fast forward commits. If it fails on one commit in a branch it will stop without building the rest that followed after that one. Instead if will just move on to another branch, if any. The tool must do these steps once by default but allow a parameter to loop with an option to set how many seconds between loops. Other tools like Hudson or CruiseControl could do more fancy scheduling options. The tool must have good defaults but allow optional control. Which repo? origin by default. Which branches? all of them by default. What tool? by default an executable file to be provided by the user named "buildtest", "buildtest.sh" "buildtest.cmd", or buildtest.exe" in the root folder of the repository. Loop delay? run once by default with option to loop after a number of seconds between iterations.

    Read the article
