Search Results

Search found 32072 results on 1283 pages for 'catch unit test'.

Page 35 of 1283

  • Should the Joel Test be essential for every software company? [closed]

    - by Mahbubur R Aaman
    The Joel Test has 12 steps for better code. They are: Do you use source control? Can you make a build in one step? Do you make daily builds? Do you have a bug database? Do you fix bugs before writing new code? Do you have an up-to-date schedule? Do you have a spec? Do programmers have quiet working conditions? Do you use the best tools money can buy? Do you have testers? Do new candidates write code during their interview? Do you do hallway usability testing? Should these steps be mandatory for every software company? And when being recruited, should programmers ask the company whether it follows the Joel Test?

    Read the article

  • Can an agile shop ever really score 12 on the Joel Test?

    - by Simon
    I really like the Joel Test, use it myself, and encourage my staff and interviewees to consider it carefully. However, I don't think I can ever score more than 9, because a few points seem to contradict the Agile Manifesto, XP and TDD, which are the bedrocks of my world. Specifically, the questions about schedule, specs, testers and quiet working conditions run counter to what we are trying to create and the values we have adopted in being genuinely agile. So my question is: can a true Agile shop really score 12?

    Read the article

  • Running JUnit test classes from another JUnit test class

    - by Dr. Monkey
    I have two classes that I am testing (let's call them ClassA and ClassB). Each has its own JUnit test class (testClassA and testClassB respectively). ClassA relies on ClassB for its normal functioning, so I want to make sure ClassB passes its tests before running testClassA (otherwise the results from testClassA would be meaningless). What is the best way to do this? In this case it is for an assignment, so I need to keep it to the two specified test classes if possible. Can/should I throw an exception from testClassA if testClassB's tests don't all pass? This would require testClassB to run invisibly and report its success/failure to testClassA, rather than to the GUI (via JUnit). I am using Eclipse and JUnit 4.8.1.
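
    One hedged sketch of a workaround (class names adapted from the question): run TestClassB programmatically with JUnit's JUnitCore inside a @Before method, and use Assume so TestClassA's tests are skipped rather than failed when TestClassB reports failures. JUnitCore and Assume are part of JUnit 4 and available in 4.8.1.

        import org.junit.Assume;
        import org.junit.Before;
        import org.junit.Test;
        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;

        public class TestClassA {

            @Before
            public void classBMustPassFirst() {
                // Runs ClassB's tests invisibly (no GUI); Result carries pass/fail.
                Result result = JUnitCore.runClasses(TestClassB.class);
                // A violated assumption marks ClassA's tests as skipped instead
                // of failed, so they are not reported as meaningless failures.
                Assume.assumeTrue(result.wasSuccessful());
            }

            @Test
            public void someTestOfClassA() {
                // ... normal ClassA tests here ...
            }
        }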

    Read the article

  • First TDD, Simple 2-tier C# Project - what do I unit test?

    - by Joel
    This is probably a stupid question, but my googling isn't finding a satisfactory answer. I'm starting a small project in C#, with just a business layer and a data access layer - strangely, the UI will come later, and I have very little (read: no) concept of or control over what it will look like. I would like to try TDD for this project. I'm using Visual Studio 2008 (soon to be 2010), and I have ReSharper 5 and NUnit. Again, I want to do Test-Driven Development, but not necessarily the entire XP system. My question is: when and where do I write the first unit test? Do I only test logic before I write it, or do I test everything? It seems counter-productive to test things that have no reason to fail (auto-properties, empty constructors)... but it seems like the "no new code without a failing test" maxim requires this. Links or references are fine (but preferably to online resources, not books - I would like to get started ASAP). Thanks in advance for any guidance!
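
    Purely as an illustrative sketch (OrderService and its behaviour are hypothetical, not from the question): the usual shape of a first test is one that exercises real business-layer behaviour and therefore can fail, rather than an auto-property or empty constructor. With NUnit:

        using NUnit.Framework;

        [TestFixture]
        public class OrderServiceTests
        {
            [Test]
            public void GetTotal_Sums_All_Line_Amounts()
            {
                // This fails until OrderService.GetTotal exists and works --
                // that failure is what drives the first piece of real logic.
                var service = new OrderService();

                decimal total = service.GetTotal(new[] { 10.0m, 2.5m });

                Assert.AreEqual(12.5m, total);
            }
        }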

    Read the article

  • Is unit testing the definition of an interface necessary?

    - by HackedByChinese
    I have occasionally heard or read about people asserting their interfaces in a unit test. I don't mean mocking an interface for use in another type's test, but specifically creating a test to accompany the interface. Consider this ultra-lame and off-the-cuff example:

        public interface IDoSomething
        {
            string DoSomething();
        }

    and the test:

        [TestFixture]
        public class IDoSomethingTests
        {
            [Test]
            public void DoSomething_Should_Return_Value()
            {
                var mock = new Mock<IDoSomething>();
                mock.Expect(m => m.DoSomething()).Returns("value");

                var actualValue = mock.Object.DoSomething();

                mock.Verify(m => m.DoSomething());
                Assert.AreEqual("value", actualValue);
            }
        }

    I suppose the idea is to use the test to drive the design of the interface, and also to provide guidance for implementors on what's expected so they can write good tests of their own. Is this a common (recommended) practice?

    Read the article

  • Why does Visual Studio not enforce try-catch-block implementation?

    - by Pedro
    Coming from Eclipse/Java, I noticed that in Visual Studio/C# it is not mandatory to deal with exceptions. While Eclipse forces the user to implement a try-catch block or add a throws declaration, Visual Studio does not. Why doesn't Visual Studio warn about unhandled exceptions? Can I configure Visual Studio to force me to implement try-catch blocks, or at least emit a compiler warning?
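
    For illustration, a minimal C# fragment of the behaviour the question describes: this compiles without any try/catch even though File.ReadAllText can throw IOException, because C# (unlike Java) has no checked exceptions for the compiler to enforce.

        using System;
        using System.IO;

        class UncheckedExceptionDemo
        {
            static void Main()
            {
                // In Java, a checked exception like IOException would force a
                // try/catch or a throws clause here. The C# compiler accepts
                // this call as-is; any IOException surfaces only at runtime.
                string text = File.ReadAllText("settings.txt");
                Console.WriteLine(text.Length);
            }
        }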

    Read the article

  • htaccess; /search/?q=test to /test

    - by Matthew Haworth
    I have a situation similar to the one described in the title. All I need to do is map every request of the form /search/?q=test to /test. This is because we are changing the way our search works to make it more user friendly, but we still want backward compatibility (i.e. for anyone who may have the old links bookmarked). So far I have this:

        RedirectMatch 301 /search/?q=(.*) /$1

    and that doesn't work, but:

        RedirectMatch 301 /search/(.*) /$1

    does... Any idea why? Cheers.
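
    One likely explanation, offered as an assumption to verify: RedirectMatch matches only the URL path, never the query string, so ?q=(.*) can never match (and the ? is also a regex metacharacter there, making the preceding slash optional). A mod_rewrite sketch that inspects the query string explicitly might look like this in .htaccess:

        RewriteEngine On
        # Capture the q= parameter from the query string...
        RewriteCond %{QUERY_STRING} ^q=(.+)$
        # ...and 301 to /<that value>; the trailing "?" drops the old query string.
        RewriteRule ^search/?$ /%1? [R=301,L]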

    Read the article

  • Test Strategy for Addins

    - by Amit
    I was wondering if someone on the forum can help me understand the type of testing that should be performed - I mean a test strategy - when we need to test things like add-ins. I would also like to know how I can reply to the person who has answered my question. I know it's a bit of an immature question, but I'd appreciate it if someone could explain. Regards, Amit

    Read the article

  • FIFO semaphore test

    - by L4N0
    Hello everyone, I have implemented FIFO semaphores, but now I need a way to test/prove that they are working properly. A simple test would be to create some threads that wait on a semaphore and then print a message with a number: if the numbers come out in order, the semaphore should be FIFO. But this is not good enough to prove it, because that order could have occurred by chance. So I need a better way of testing it. If necessary, locks or condition variables can be used too. Thanks
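
    One deterministic sketch, in Java (FifoSemaphore is the hypothetical class under test, assumed to block by parking/waiting inside acquire()): park the threads on the semaphore one at a time so the queueing order is known exactly, then release permits one by one and compare the wakeup order against it. Chance ordering is ruled out because only one thread is ever eligible to wake at a time.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.concurrent.Semaphore;

        public class FifoSemaphoreTest {

            private static final int THREADS = 10;

            public static void main(String[] args) throws InterruptedException {
                final FifoSemaphore sem = new FifoSemaphore(0); // no permits: all callers queue
                final List<Integer> wakeups =
                        Collections.synchronizedList(new ArrayList<Integer>());
                final Semaphore woke = new Semaphore(0);        // paces the releases below
                List<Thread> threads = new ArrayList<Thread>();

                for (int i = 0; i < THREADS; i++) {
                    final int id = i;
                    Thread t = new Thread(new Runnable() {
                        public void run() {
                            try {
                                sem.acquire();
                                wakeups.add(id);   // record who woke up, in order
                                woke.release();    // tell the main thread we are done
                            } catch (InterruptedException ignored) { }
                        }
                    });
                    t.start();
                    // Only start the next thread once this one is parked inside
                    // acquire(), so the arrival order is exactly 0,1,2,...
                    // (assumes acquire() blocks rather than spins).
                    while (t.getState() != Thread.State.WAITING
                            && t.getState() != Thread.State.BLOCKED
                            && t.getState() != Thread.State.TIMED_WAITING) {
                        Thread.sleep(1);
                    }
                    threads.add(t);
                }

                for (int i = 0; i < THREADS; i++) {
                    sem.release();   // wake exactly one waiter...
                    woke.acquire();  // ...and wait until it has recorded itself
                }
                for (Thread t : threads) {
                    t.join();
                }

                // FIFO means the wakeup order equals the arrival order.
                for (int i = 0; i < THREADS; i++) {
                    if (wakeups.get(i) != i) {
                        throw new AssertionError("Not FIFO: " + wakeups);
                    }
                }
                System.out.println("FIFO confirmed: " + wakeups);
            }
        }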

    Read the article

  • Using Django.test.client to check template vars

    - by scott
    I've got a view that I'm trying to test with the Client object. Can I get to the variables I injected into the render_to_response of my view? Example view:

        def myView(request):
            if request.method == "POST":
                # do the search
                return render_to_response('search.html', {'results': results},
                                          context_instance=RequestContext(request))
            else:
                return render_to_response('search.html',
                                          context_instance=RequestContext(request))

    Test:

        c = Client()
        response = c.post('/school/search/', {'keyword': 'beagles'})
        # how do I get to the 'results' variable??
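
    Django's test client captures the context each template was rendered with, so variables passed to render_to_response are reachable via response.context. A minimal sketch (the URL and variable name follow the question):

        from django.test import TestCase

        class SearchViewTest(TestCase):
            def test_results_are_in_template_context(self):
                response = self.client.post('/school/search/', {'keyword': 'beagles'})
                # Dictionary access looks the variable up across all contexts
                # recorded during rendering.
                results = response.context['results']
                self.assertTrue(results is not None)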

    Read the article

  • How to deal with the test data in Junit?

    - by user351637
    In a TDD (Test-Driven Development) process, how should I deal with test data? Suppose, as a scenario, that I parse a log file to get the needed column. To test this properly, how do I prepare the test data? And is it appropriate to locate such files next to the test class files?
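
    One common sketch: keep a small, hand-crafted sample log on the test classpath (e.g. next to the test class, or under src/test/resources in a Maven layout) and load it with getResourceAsStream. LogParser and its column() method are hypothetical stand-ins for the class under test:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class LogParserTest {

            @Test
            public void extractsTheNeededColumn() throws Exception {
                // "sample.log" sits in the same package as this test class.
                BufferedReader reader = new BufferedReader(new InputStreamReader(
                        getClass().getResourceAsStream("sample.log"), "UTF-8"));
                try {
                    LogParser parser = new LogParser();
                    // Hypothetical API: pull column 2 out of the first log line.
                    assertEquals("expected-value", parser.column(reader.readLine(), 2));
                } finally {
                    reader.close();
                }
            }
        }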

    Read the article

  • Catch Me If You Can

    - by Knut Vatsendvik
    Suppose you have a proxy-based web service using Oracle Service Bus. In a stage in the request pipeline, you are using a Publish action to publish the incoming message to a JMS queue via a Business Service. What if the outbound transport provider throws an exception (outside of your pipeline)? Is your pipeline able to catch the error with an error handler? This situation could occur because of a faulty connection, a suspended queue, or some other reason.

    Here is the request pipeline in our simple test case, with an Error Handler containing a simple Log action added to the message flow. By default, the Publish action invokes the service in a fire-and-forget fashion, so any exception that occurs in the outbound transport goes unnoticed, as the resulting Invocation Trace shows.

    So what now? In a message flow, you can apply a Routing Options action to modify any or all of the following properties of the outbound request: URI, Quality of Service, Mode, Retry parameters, and Message Priority. Add the Routing Options action to the Request Action, click it to display its properties in the Properties View, select the QoS option to set the Quality of Service element, select Exactly Once to override the default setting, and republish the project. The invocation will now block until the message is completely processed. Running the same test case as before produces an Invocation Trace showing that the Error Handler is now triggered.

    Read the article

  • Speed up loading of test results from builds in Visual Studio

    - by Jakob Ehn
    I still see people complaining about the long time it takes to load test results from a TFS build in Visual Studio. They have a valid point: it does take a very long time, even for a small number of tests. The reason is that the test results contain not just the outcome of the test run but also all the binaries that were part of it, which often means the debug symbols (*.pdb) will be downloaded to your local machine as well. The point of this behaviour is that it lets you re-run the tests locally. Most of the time, however, that is not what developers do; they just want to know which tests failed and why, so they can fix the tests and re-run them locally. It turns out there is a way to load only the test results, which is much faster. The only tricky bit is finding the location of the .trx file that is generated during the build, particularly in TFS 2010, where you often have multiple build agents, which of course results in different paths to the .trx file. Note: to use this, you must have read permission on the build folder of the build agent where the build was executed.

    1. Open the build result for the build.
    2. Click View Log.
    3. Locate the part where MSTest is invoked. (You can search in the log window: press Ctrl+F and you will get a little search box at the bottom. Nice!)
    4. On the MSTest command line, locate the /resultsfileroot parameter, which points to the folder where the test results are stored. Note that this path is local to the build server, so replace the drive letter with the server name: D:\Builds\Project\TestResults becomes \\<BuildServer>\Project\TestResults.
    5. Double-click the .trx file, and you will notice that it loads much faster compared to opening it from the build log window.

    Read the article

  • How to define implementation details?

    - by woni
    In our project, one assembly combines the logic for the IoC container, the project internals, and the communication layer. The current version has evolved to have only internal classes in the add-in assemblies. My main problem with this approach is that the entry point is only available through the IoC container; it is not possible to use anything other than reflection to initialize the assembly. Everything behind the IoC interface is defined as an implementation detail and therefore not intended for use from outside. It is well known that you should not test implementation details (such as private and internal methods), because they should be tested through the public interface. It is also well known that your tests should not use the IoC container to set up the SUTs, because that would introduce too many dependencies. So we are using the InternalsVisibleTo attribute to make internals visible to our test assemblies and test the so-called implementation details. I recognize that one problem could be the mix-up of different concerns in that assembly; changing that would make this discussion moot, because the classes would have to be defined public. Ignoring my concerns about this: isn't the need to test a class reason enough to make it public? This usage of InternalsVisibleTo seems unintended and a little bit "hacky". The approach of testing only against the publicly available IoC container is too costly and would result in integration-style tests. The advantage of using internals is that their usages are well known, and they do not have to be implemented the way a public method would (documentation, completeness, versioning, ...). Is there a solution that avoids testing against internals but keeps their advantages over public classes, or do we have to redefine what an implementation detail is?
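
    For reference, the attribute in question is a one-liner in the production assembly (the assembly name below is hypothetical; a strong-named assembly would also need the test assembly's public key in the string):

        // e.g. in Properties/AssemblyInfo.cs of the production assembly
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("MyProduct.Tests")]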

    Read the article

  • Please recommend the best tools to build a test plan management tool

    - by fzkl
    I have mostly worked on hardware testing in my professional career and would like to move to the software development side. I thought that working on a practically usable project would help motivate me and help me acquire some skills, so I have decided to build a test plan management tool for the QA team I work in (we use Excel sheets!). The tool should be browser based and should support the following: there are many test plans; each test plan has test sets; test sets have test cases; and each test case has instructions, attachments, a pass/fail status, and bug info in case of failure. It should also have an export-to-Excel option. I have a visual picture of the tool I want to build, but I don't have enough experience to figure out where to start. My current programming skills are limited to C and shell programming, and I want to pick up Python. What tools (programming language, database, and anything else?) would you recommend for getting this done? Also, which key concepts in the recommended programming language should I focus on to build a browser-based tool like this?
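
    Since Python is the stated goal, here is one hedged sketch of the described hierarchy as Django models (Django being a common choice for browser-based tools; every name below is hypothetical, not prescribed):

        from django.db import models

        class TestPlan(models.Model):
            name = models.CharField(max_length=200)

        class TestSet(models.Model):
            plan = models.ForeignKey(TestPlan, on_delete=models.CASCADE,
                                     related_name='test_sets')
            name = models.CharField(max_length=200)

        class TestCase(models.Model):
            STATUS_CHOICES = [('pass', 'Pass'), ('fail', 'Fail')]
            test_set = models.ForeignKey(TestSet, on_delete=models.CASCADE,
                                         related_name='test_cases')
            instructions = models.TextField()
            attachment = models.FileField(upload_to='attachments/', blank=True)
            status = models.CharField(max_length=4, choices=STATUS_CHOICES,
                                      blank=True)
            bug_info = models.TextField(blank=True)  # filled in on failure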

    Read the article

  • Log information inside a JUnit Suite

    - by Alex Marinescu
    I'm currently trying to write the total number of failed tests from a JUnit suite into a log file. My test suite is defined as follows:

        @RunWith(Suite.class)
        @SuiteClasses({Class1.class, Class2.class, ...})
        public class SimpleTestSuite {}

    I tried to define a rule that increases the total number of errors when a test fails, but apparently my rule is never called:

        @Rule
        public MethodRule logWatchRule = new TestWatchman() {
            public void failed(Throwable e, FrameworkMethod method) {
                errors += 1;
            }

            public void succeeded(FrameworkMethod method) {
            }
        };

    Any ideas on what I should do to achieve this behaviour?
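
    One caveat worth stating as an assumption: a @Rule declared on the suite class itself is never consulted, because rules are picked up per test class by each class's own runner. A sketch that sidesteps this by running the suite programmatically and reading the failures off the Result:

        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;
        import org.junit.runner.notification.Failure;

        public class SuiteLogger {
            public static void main(String[] args) {
                Result result = JUnitCore.runClasses(SimpleTestSuite.class);
                // Total failed tests across every class in the suite.
                System.out.println("Failed tests: " + result.getFailureCount());
                for (Failure failure : result.getFailures()) {
                    System.out.println(failure.getTestHeader() + ": "
                            + failure.getMessage());
                }
            }
        }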

    Read the article

  • Can you catch exceeded allocated memory error before it kills the script?

    - by kristovaher
    The thing is that I want to catch memory problems before they happen. I have a system that fetches rows from a database and assigns the returned associative array to a variable, but I never know how big the database result is or how much memory it will take once the request is made. This means my software can fail simply because the memory limit is exceeded, and I want to avoid that somehow. One obvious way is to make smaller database requests, but what if that is not possible, or what if I do not know the size of the data that will be returned? Is it possible to 'catch' situations in PHP where memory use is about to be exceeded? Something like this:

        $requestOk = memory_test(function(){
            return doSomething();
        });
        if($requestOk){
            // Memory seems fine
            // $requestOk now has the value from the memory_test() function
        } else {
            // Function would have exceeded memory
        }

    I just find it problematic that my script can die at any moment because of memory issues. From what I know, try-catch cannot be used here because running out of memory is a fatal error. Any help would be appreciated!
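
    Exhausting memory_limit is indeed a fatal error that try-catch cannot intercept. A hedged sketch of the "check headroom first" workaround, where the expected size is an estimate the caller must supply (so this is a guard, not a guarantee):

        <?php
        // Rough headroom check before a large fetch.
        function has_memory_headroom($needed_bytes) {
            $limit = ini_get('memory_limit');
            if ($limit == '-1') return true;            // no limit configured
            $bytes = (int)$limit;
            switch (strtoupper(substr($limit, -1))) {   // expand K/M/G suffixes
                case 'G': $bytes *= 1024;               // deliberate fall-through
                case 'M': $bytes *= 1024;
                case 'K': $bytes *= 1024;
            }
            return memory_get_usage(true) + $needed_bytes < $bytes;
        }

        if (has_memory_headroom(50 * 1024 * 1024)) {    // expect up to ~50 MB
            $result = doSomething();                    // the risky database fetch
        } else {
            // fall back: fetch the rows in smaller chunks
        }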

    Read the article

  • Perform Unit Conversions with the Windows 7 Calculator

    - by Matthew Guay
    Want to easily convert area, volume, temperature, and many other units? With the Calculator in Windows 7, it's easy to convert almost any unit into another.

    The New Calculator in Windows 7

    Calculator received a visual overhaul in Windows 7, but at first glance it doesn't seem to have any new functionality. Looks can be deceiving, though: Windows 7's Calculator has lots of exciting new features. To try them out, simply type Calculator into the Start menu search. To uncover the new features, click the View menu. Here you can select many different modes, including the Unit Conversion mode we will look at. When you select Unit Conversion mode, the Calculator expands with a form on the left side. This conversions pane has three drop-down menus. In the top one, select the type of unit you want to convert; in the other two, select the units you wish to convert from and to. For instance, here we selected Temperature in the first menu, Degrees Fahrenheit in the second menu, and Degrees Celsius in the third menu. Enter the value you wish to convert in the From box, and the conversion automatically appears in the bottom box. The Calculator contains dozens of conversion values, including some uncommon ones, so if you've ever wanted to know how many US gallons are in a UK gallon, or how many knots a supersonic jet travels in an hour, this is a great tool for you!

    Conclusion

    Windows 7 is filled with little changes that give you an all-around better experience and help you work more efficiently and productively. With the new features in the Calculator, you just might feel a little smarter, too!

    Read the article

  • Achieving decoupling in Model classes

    - by Guven
    I am trying to test-drive (or at least write unit tests for) my model classes, but I noticed that my classes end up being too coupled, and since I can't break this coupling, writing unit tests is becoming harder and harder. To be more specific:

    Model classes: these are the classes that hold the data in my application. They pretty much resemble POJOs (plain old Java objects), but they also have some methods. The application is not too big, so I have around 15 model classes.

    Coupling: to give an example, think of a simple case of Order Header - Order Item. The header knows the item, and the item knows the header (it needs some information from the header to perform certain operations). Then, say, there is a relationship between Order Item and Item Report: the item report needs the item as well. At this point, imagine writing tests for Item Report; you need an Order Header to carry out the tests. This is a simple case with 3 classes; things get more complicated with more classes. I can come up with decoupled classes when I design algorithms, persistence layers, UI interactions, etc., but with model classes I can't think of a way to separate them. They currently sit as one big chunk of classes that depend on each other. Here are some workarounds I can think of:

    Data generators: I have a package that generates sample data for my model classes. For example, the OrderHeaderGenerator class creates OrderHeaders with some basic data in them. I use the OrderHeaderGenerator from my ItemReport unit tests to get an instance of the OrderHeader class. The problem is that these generators get complicated pretty fast, and then I also need to test the generators themselves, defeating the purpose a little bit.

    Interfaces instead of dependencies: I can come up with interfaces to get rid of the hard dependencies. For example, the OrderItem class would depend on an IOrderHeader interface, so in my unit tests I could easily mock the behaviour of an OrderHeader with a FakeOrderHeader class that implements IOrderHeader (a sketch of this follows below). The problem with this approach is the complexity that the model classes would end up having.

    Would you have other ideas on how to break this coupling in the model classes? Or how to make it easier to unit-test them?
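
    A minimal sketch of the "interfaces instead of dependencies" workaround, using the question's own names (the getCurrency/describePrice methods are invented purely for illustration):

        // OrderItem depends only on the narrow interface, not the concrete header.
        public interface IOrderHeader {
            String getCurrency();
        }

        public class OrderItem {
            private final IOrderHeader header;

            public OrderItem(IOrderHeader header) {
                this.header = header;
            }

            public String describePrice(double amount) {
                return amount + " " + header.getCurrency();
            }
        }

        // In the ItemReport tests, a trivial fake replaces the real OrderHeader,
        // so no Order Header object graph needs to be built.
        class FakeOrderHeader implements IOrderHeader {
            public String getCurrency() {
                return "USD";
            }
        }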

    Read the article

  • Does Python doctest remove the need for unit-tests?

    - by daniel
    Hi all, A fellow developer on a project I am on believes that doctests are as good as unit-tests, and that if a piece of code is doctested, it does not need to be unit-tested. I do not believe this to be the case. Can anyone provide some solid, ideally cited, examples either for or against the argument that doctests do not replace the need for unit-tests? Thank you -Daniel
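
    For reference, a minimal doctest of the kind under discussion - the expected output lives in the docstring, and doctest.testmod() checks it:

        def add(a, b):
            """Return the sum of a and b.

            >>> add(2, 3)
            5
            >>> add(-1, 1)
            0
            """
            return a + b

        if __name__ == "__main__":
            import doctest
            doctest.testmod()   # silent on success; pass verbose=True to see each check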

    Read the article
