Search Results

Search found 8185 results on 328 pages for 'technical tests'.


  • Professional Developers, may I join you?

    - by Ben
    I currently work in technical support for a software/hardware company, and for the most part it's a good job, but it's feeling more and more like I'm getting 'stuck' here. No raises in the five years I've been here, and lately there seems to be more hiring from the outside than promotion from within. The work I do is more technical than end-user support, as we deal primarily with our field technicians, who have a little more technical skill than the general user base. As a result I get into much more technical support issues... often tracking down bugs in our software, finding performance bottlenecks in our database schema, etc.

    The work I'm most proud of are the development projects I've come up with on my own and worked on during lunch breaks and slow periods in Support. Over the years I've written a number of useful utilities for the company: diagnostic-type applications that several departments use and appreciate. These include apps that simulate our various hardware devices, log file analysis, time-saving utilities for our work processes, etc. My best projects have been the hardware simulation programs, which are the type of thing we probably wouldn't have put a full-time developer on had anyone thought to do it, but they've ended up being popular and useful enough to be used by development, QA, R&D, and Support. They allow us to interface our software with simulated hardware, rather than clutter up our work areas with bulky, hard-to-acquire equipment.

    Since starting here my life has moved forward (married, kid, one more on the way), but it feels like my career has not. I still earn what I earned walking in the door my first day. The company budget is tight, bonuses have gone down, and there have been no raises or cost-of-living / inflation adjustments either. As the sole source of income for my family I feel I need to do more, and I'd like to have a more active role in creating something at work, not just cleaning up other people's mistakes. I enjoy technical work, and I think development is the next logical step in my career. I'd like to bring some "legitimacy" to my part-time development work, and make myself a more skilled and valuable employee. Ultimately, if this can help me better support my family, that would be ideal.

    Can I make the jump to professional developer? I have an engineering degree, but no formal education in computer science. I write WinForms apps using the .NET Framework, do some freelance web development, have volunteered to write software for a nonprofit, and have started experimenting with programming microcontrollers. I enjoy learning new things in the limited free time I have available. I think I have the aptitude to take on a development role, even in an 'apprentice' capacity if such an option is possible.

    Have any of you moved into development like this? Do any of you developers have any advice or cautionary tales? Are there better career options I haven't thought of? I welcome any and all related comments and thank you in advance for posting them.

    Read the article

  • Standards Corner: Preventing Pervasive Monitoring

    - by independentid
    Phil Hunt is an active member of multiple industry standards groups and committees and has spearheaded discussion, creation, and ratification of industry standards, including the Kantara Identity Governance Framework, among others. As an active voice in the industry standards development world, we have invited him to share his discussions, thoughts, news and updates, and to discuss use cases and implementation success stories (and even failures) around industry standards in this monthly column.

    Author: Phil Hunt

    On Wednesday night, I watched NBC's interview of Edward Snowden. The past year has been a tumultuous one in the IT security industry. There have been some amazing revelations about the activities of governments around the world, and we have had several instances of major security bugs in key security libraries: Apple's 'gotofail' bug, the OpenSSL Heartbleed bug, not to mention Java's zero-day bug, and others. Snowden's information showed the IT industry has been underestimating the need for security, and highlighted a general trend of lax use of TLS and poorly implemented security on the Internet.

    This did not go unnoticed in the standards community, and in particular the IETF. Last November, the IETF (Internet Engineering Task Force) met in Vancouver, Canada, where the issue of "Internet hardening" was discussed in a plenary session. Presentations were given by Bruce Schneier, Brian Carpenter, and Stephen Farrell describing the problem, the work done so far, and potential IETF activities to address the problem of pervasive monitoring. At the end of the presentation, the IETF called for consensus on the issue. If you know engineers, you know that it takes a while for a large group to arrive at a consensus, and this group numbered approximately 3000. When asked whether the IETF should respond to pervasive surveillance attacks, there was an overwhelming response for 'Yes'. When it came to 'No', the room echoed in silence. This was just the first of several consensus questions, each overwhelmingly in favour of a response. This is the equivalent of a unanimous opinion for the IETF.

    Since the meeting, the IETF has followed through with the recent publication of a new "best practices" document on Pervasive Monitoring (RFC 7258). This document is extremely careful in its approach and separates the politics of monitoring from the technical issues:

        Pervasive Monitoring (PM) is widespread (and often covert) surveillance through intrusive gathering of protocol artefacts, including application content, or protocol metadata such as headers. Active or passive wiretaps and traffic analysis (e.g., correlation, timing or measuring packet sizes), or subverting the cryptographic keys used to secure protocols can also be used as part of pervasive monitoring. PM is distinguished by being indiscriminate and very large scale, rather than by introducing new types of technical compromise.

        The IETF community's technical assessment is that PM is an attack on the privacy of Internet users and organisations. The IETF community has expressed strong agreement that PM is an attack that needs to be mitigated where possible, via the design of protocols that make PM significantly more expensive or infeasible. Pervasive monitoring was discussed at the technical plenary of the November 2013 IETF meeting [IETF88Plenary] and then through extensive exchanges on IETF mailing lists. This document records the IETF community's consensus and establishes the technical nature of PM.
    The draft goes on to further qualify what it means by "attack", clarifying that:

        The term is used here to refer to behavior that subverts the intent of communicating parties without the agreement of those parties. An attack may change the content of the communication, record the content or external characteristics of the communication, or through correlation with other communication events, reveal information the parties did not intend to be revealed. It may also have other effects that similarly subvert the intent of a communicator.

    The past year has shown that Internet specification authors need to put more emphasis on information security and integrity. The year also showed that specifications alone are not good enough: implementations of security and protocol specifications have to be of high quality and thoroughly tested. I'm proud to say Oracle has been a strong proponent of this, having already established its own secure coding practices.

    Read the article

  • Documentation and Test Assertions in Databases

    - by Phil Factor
    When I first worked with Sybase/SQL Server, we thought our databases were impressively large, but they were, by today's standards, pathetically small. We had one script to build the whole database. Every script I ever read was richly annotated; it was more like reading a document. Every table had a comment block, and every line would be commented too. At the end of each routine (e.g. procedure) was a quick integration test, or series of test assertions, to check that nothing in the build was broken. We simply ran the build script, stored in the version control system, and it pulled everything together in a logical sequence that not only created the database objects but pulled in the static data. This worked fine at the scale we had.

    The advantage was that one could, by reading the source code, reach a rapid understanding of how the database worked and how one could interface with it. The problem was that it was a system in which only one developer at a time could work on the database. It was very easy for a developer to accidentally execute the entire build script rather than the selected section on which he or she was working, thereby cleansing the database of everyone else's work-in-progress and data.

    It soon became the fashion to work at the object level, so that programmers could check out individual views, tables, functions, constraints and rules and work on them independently. It was then that I noticed the trend to generate the source for the VCS retrospectively from the development server. Tables were worst affected. You can, of course, add or delete a table's columns and constraints retrospectively, which means that the existing source no longer represents the current object. If, after your development work, you generate the source from the live table, then you get no block or line comments, and the source script is sprinkled with silly square brackets and other confetti, thereby rendering it visually indigestible. Routines, too, were affected. In our system, every routine had a directly attached string of unit tests. A retro-generated routine has no unit tests or test assertions. Yes, one can still commit our test code to the VCS, but it's a separate module, and teams end up running the whole suite of tests for every individual change, rather than just the tests for that routine, which doesn't scale for database testing.

    With extended properties, one can get the best of both worlds, and even use them to put blame, praise or annotations into your VCS. It requires a lot of work, though, particularly the script to generate the table. The problem is that there are no conventional names beyond 'MS_Description' for the special use of extended properties. This makes it difficult to do splendid things such as ensuring the integrity of the build by running a suite of tests that are actually stored in extended properties within the database, and therefore in the VCS.

    We have lost the readability of database source code over the years, and largely jettisoned the use of test assertions as part of the database build. This is not unexpected in view of the increasing complexity of the structure of databases and the number of programmers working on them. There must, surely, be a way of getting them back, but I sometimes wonder if I'm one of very few who miss them.
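    As a rough illustration of the build-then-assert pattern described above — not Phil's actual tooling, just a minimal sketch using Python's sqlite3 as a stand-in for the database build script — the point is that the assertions live right next to the object they check and run as part of the build:

        import sqlite3

        conn = sqlite3.connect(':memory:')

        # Build step: create the object and its static data, as the
        # annotated build script would.
        conn.execute('CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER NOT NULL)')
        conn.execute('INSERT INTO orders (qty) VALUES (3)')

        # Test assertion step, attached directly to the build: verify
        # immediately that nothing in the build is broken.
        (count,) = conn.execute('SELECT COUNT(*) FROM orders WHERE qty > 0').fetchone()
        assert count == 1, 'orders build check failed'
        print('build and assertions passed')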

    Read the article

  • Representing Mauritius in the 2013 Bench Games

    Only by chance did I come across an interesting option for professionals and enthusiasts in IT, and quite honestly I can't even remember where Brainbench and their 2013 Bench Games event first caught my attention. But having access to 600+ free exams in a friendly international intellectual competition isn't something that happens every day. So it was actually a no-brainer to sign up and browse through the various categories. Most interestingly, Brainbench is not only IT-related. They offer a vast variety of fields in their Test Center, like Languages and Communication, Office Skills, Management, Aptitude, etc., and it can be a little bit messy how things are organised. Anyway, while browsing through their test offers I added a couple of exams to 'My Plan' which I would give a shot afterwards.

    Self-assessments

    Actually, I took the tests based on two major aspects: 'fun factor' and 'how good would I be in general'. Usually you have to pay for any kind of exam, so being given this unique chance by Brainbench simply to practise this kind of test was already worth the time. Frankly speaking, the tests are very close to the ones you would be asked to do at Prometric or Pearson Vue, i.e. Microsoft exams, etc.: go through a set of multiple-choice questions in a given time frame. Most of the tests I did during the Bench Games were based on 40 questions, each with a maximum of 3 minutes to answer. Ergo, one test in at most 2 hours - that sounds feasible, doesn't it?

    The Measure of Achievement

    While the 2013 Bench Games are considered a worldwide friendly competition of knowledge, I was really eager to get other Mauritians attracted. Using various social media networks and community activities, it all looked quite good at the beginning. Mauritius was listed at rank #19 of Most Certified Citizens and rank #10 of Most Master Level Certified Nation - not bad, not bad... Until... the next update of the Bench Games Leaderboard. The downward trend seemed to be unstoppable, and I couldn't understand why my results didn't show up on the Individual Leader Board. First of all, I had passed exams that were not even listed, and second, I had better results on some exams that were listed. After some further information from the organiser, it turned out that my test transcript hadn't been available to the public; only then are results considered and counted in the competition. During that time I actually managed to hold three test results on the Individuals board... Other participants were merciless, eh, more successful than me, and produced better test results than I did. But I still managed to stay on the final score board: an 'exotic' combination of exam, test result, country and person, representing Mauritius and the Visual FoxPro community in that fun event. And although I have mainly developed in Visual FoxPro 9.0 SP2 and C# using .NET Framework 2.0 to 4.5 for a couple of years now, I still managed to pass at Master Level. Hm, actually my Microsoft Certified Professional (MCP) exams date back to June 2004 - more than 9 years ago...

    Look who got lucky...

    As described above, I did a couple of exams as time allowed and without any preparation, but I still received the following mail notification:

        "Thank you for recently participating in our Bench Games event. We wanted to inform you that you obtained a top score on our test(s) during this event, and as a result, will receive a free annual Brainbench subscription. Your annual subscription will give you access to all our tests just like Bench Games, but for an entire year plus additional benefits!" -- Leader Board Notification from Brainbench

    Even fun activities get rewarded sometimes. Thanks to @Brainbench_com for the free annual subscription based on my passed 2013 Bench Games Master Level exam. It would be interesting to know the total figures, especially how many citizens of Mauritius took part in this year's Bench Games. Anyway, I'm looking forward to being able to participate in other challenges like this in the future.

    Read the article

  • Having problems building OpenCV 2.0 on CentOS 5?

    - by Hayri Ugur KOLTUK
    Hi all! I'd been trying to install the OpenCV library on my CentOS system; however, when I type make and hit enter after configuring with cmake, I get the following error:

        [100%] Building CXX object tests/cv/CMakeFiles/cvtest.dir/src/amoments.o
        [100%] Building CXX object tests/cv/CMakeFiles/cvtest.dir/src/affine3d_estimator.o
        [100%] Building CXX object tests/cv/CMakeFiles/cvtest.dir/src/acontours.o
        [100%] Building CXX object tests/cv/CMakeFiles/cvtest.dir/src/areprojectImageTo3D.o
        Linking CXX executable ../../bin/cvtest
        CMakeFiles/cvtest.dir/src/highguitest.o: In function `CV_HighGuiTest::run(int)':
        highguitest.cpp:(.text._ZN14CV_HighGuiTest3runEi+0x15): warning: the use of `tmpnam' is dangerous, better use `mkstemp'
        [100%] Built target cvtest
        make: *** [all] Error 2

    And interestingly, I once got this error:

        [ 99%] Built target mltest
        [ 99%] Generating generated0.i
        Traceback (most recent call last):
          File "/home/proje/OpenCV-2.1.0/interfaces/python/gen.py", line 43, in ?
            if True in has_init and not all(has_init[has_init.index(True):]):
        NameError: name 'all' is not defined
        make[2]: *** [interfaces/python/generated0.i] Error 1
        make[1]: *** [interfaces/python/CMakeFiles/cvpy.dir/all] Error 2
        make: *** [all] Error 2

    What could possibly be the cause of these errors? I need to install OpenCV immediately on this computer. Best regards, Hayri Ugur KOLTUK

    Read the article

  • The problem of outsourcing

    - by Dave
    I've been working on a project for about two years, but because all the programming was outsourced, I don't get a chance to do much programming or even technical work. I've become very demoralized: nowadays my job scope is all about delegating the work to others, so I can't learn much technical stuff. What should I do to let myself do more technical work, like programming, at my job? I've thought of building tools that aid my main application, or just resolving issues on my own, which I'm doing right now. Any other suggestions, anyone?

    Read the article

  • Cannot integrate Gallio MBUnit with Team City

    - by Bernard Larouche
    I have been trying to get my MBUnit test suite to work on Team City for many days now, without any success. My solution builds with no problem; the problem is with my tests. After googling for Gallio integration with Team City, I tried many ways to make this work, and I think I am close, but I need help. I have included the Gallio bin directory in my repository and also on my TC server.

    Here is my build runner setup in Team City:

        Build runner : MSBuild
        Build file path : Myproject.msbuild
        Targets : RebuildSolution RunTests

    Here is the Myproject.msbuild file I created and included in the source control trunk directory:

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <!-- This is needed by MSBuild to locate the Gallio task -->
          <UsingTask AssemblyFile="C:\Gallio\bin\Gallio.MSBuildTasks.dll" TaskName="Gallio" />
          <!-- Specify the tests assemblies -->
          <ItemGroup>
            <TestAssemblies Include="C:\_CBL\CBL\CoderForTraders\Source\trunk\UnitTest\DomainModel.Tests\bin\Debug\CBL.CoderForTraders.DomainModel.Tests.dll" />
          </ItemGroup>
          <Target Name="RunTests">
            <Gallio IgnoreFailures="false" Assemblies="@(TestAssemblies)"
                    RunnerExtensions="TeamCityExtension,Gallio.TeamCityIntegration">
              <!-- This tells MSBuild to store the output value of the task's ExitCode
                   property into the project's ExitCode property -->
              <Output TaskParameter="ExitCode" PropertyName="ExitCode"/>
            </Gallio>
            <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" />
          </Target>
          <Target Name="RebuildSolution">
            <Message Text="Starting to Build"/>
            <MSBuild Projects="CoderForTraders.sln" Properties="Configuration=Debug" Targets="Rebuild" />
          </Target>
        </Project>

    Here are the errors displayed by Team City:

        error MSB4064: The "Assemblies" parameter is not supported by the "Gallio" task. Verify the parameter exists on the task, and it is a settable public instance property
        error MSB4063: The "Gallio" task could not be initialized with its input parameters.

    Thanks for your help

    Read the article

  • Problems running unittest test suite OO

    - by chrissygormley
    Hello, I have a test suite to perform smoke tests. All of my scripts are stored in various classes, but when I try to run the test suite I can't seem to get it working if it is in a class. The code is below (a class to call the tests):

        from alltests import SmokeTests

        class CallTests(SmokeTests):
            def integration(self):
                self.suite()

        if __name__ == '__main__':
            run = CallTests()
            run.integration()

    And the test suite:

        class SmokeTests():
            def suite(self):
                # Function stores all the modules to be tested
                modules_to_test = ('external_sanity', 'internal_sanity')
                alltests = unittest.TestSuite()
                for module in map(__import__, modules_to_test):
                    alltests.addTest(unittest.findTestCases(module))
                return alltests

        unittest.main(defaultTest='suite')

    This outputs an error:

        AttributeError: 'module' object has no attribute 'suite'

    So I can see how to call a normal function that is defined, but I'm finding it difficult to call the suite. In one of the tests the suite is set up like so:

        class InternalSanityTestSuite(unittest.TestSuite):
            # Tests to be tested by test suite
            def makeInternalSanityTestSuite():
                suite = unittest.TestSuite()
                suite.addTest(TestInternalSanity("BasicInternalSanity"))
                suite.addTest(TestInternalSanity("VerifyInternalSanityTestFail"))
                return suite

        def suite():
            return unittest.makeSuite(TestInternalSanity)

    Can anyone help me with getting this running? Thanks for any help in advance.
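    For context, unittest.main(defaultTest='suite') looks the named attribute up on the module itself, so a method inside a class won't be found — hence the AttributeError above. A minimal working layout (a sketch assuming the same module names as in the question) keeps suite() at module level:

        import unittest

        def suite():
            # Collect the test cases from each named module into one suite.
            modules_to_test = ('external_sanity', 'internal_sanity')
            alltests = unittest.TestSuite()
            for module in map(__import__, modules_to_test):
                alltests.addTest(unittest.findTestCases(module))
            return alltests

        if __name__ == '__main__':
            unittest.main(defaultTest='suite')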

    Read the article

  • TDD vs. Unit testing

    - by Walter
    My company is fairly new to unit testing our code. I've been reading about TDD and unit testing for some time and am convinced of their value. I've attempted to convince our team that TDD is worth the effort of learning and changing our mindsets on how we program, but it is a struggle. Which brings me to my question(s): there are many in the TDD community who are very religious about writing the test and then the code (and I'm with them), but for a team that is struggling with TDD, does a compromise still bring added benefits? I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code), and my assumption is that there is still value in writing those unit tests. What's the best way to bring a struggling team into TDD? And failing that, is it still worth writing unit tests even if it is after the code is written?

    EDIT: What I've taken away from this is that it is important for us to start unit testing somewhere in the coding process. For those in the team who pick up the concept, start to move more towards TDD and testing first. Thanks for everyone's input.

    FOLLOW UP: We recently started a new small project, and a small portion of the team used TDD; the rest wrote unit tests after the code. After we wrapped up the coding portion of the project, those writing unit tests after the code were surprised to see the TDD coders already done and with more solid code. It was a good way to win over the skeptics. We still have a lot of growing pains ahead, but the battle of wills appears to be over. Thanks to everyone who offered advice!

    Read the article

  • Test Results window in VS2008 not showing results

    - by TimK
    I have an existing solution that has been working for a long time, containing around 600 tests in a couple of test projects. I recently moved to a new PC - it's Win7-x64, and I installed a fresh copy of VS2008. When I first opened the solution on the new machine, the Test List Editor was completely empty. Trying to create a new test list caused the editor to refresh, and now it shows my test lists, but they're acting funny. I can select tests in the lists, and run them, but the results window doesn't usually update automatically to show the results of the latest test. It has done this when running a single test a couple of times, but even that is not consistent. The only way I can view the results is by manually going to the Test Runs window and connecting to individual test runs. When I do that, the results show up in the results list, but I can't check them to re-run the failed tests - the check boxes are all disabled. I guess I should describe the way it used to work, in case that was unusual - I used to select some tests from the Test Lists window, tell it to run them, and the results window would clear itself, and then display the results from the current run. I could then check any tests that I wanted to re-run, and use the run/debug button in the results window to do so. Any ideas what's going on here?

    Read the article

  • NSInteger differences between CLI and GUI ?

    - by d11wtq
    I've been building a framework and writing unit tests in GHUnit. One of my framework's accessor methods returns an NSInteger. I assert the expected value in the tests like this:

        GHAssertEquals(1320, request.port, @"Port number should be 1320");

    When running my tests with an AppKit-based GUI front end, this assertion passes. However, when I run my tests on the command line, it fails with a type mismatch unless I cast my hard-coded 1320 to (NSInteger). What's causing the difference in the way the integer is being interpreted by the compiler? Is xcodebuild on the command line using a different data type for hard-coded integers?

    Read the article

  • Debug using MbUnit/Gallio 3.1

    - by user314096
    When I use the [Debug] button in Gallio, the breakpoints in my unit tests are not being hit. The unit tests are written with MbUnit/Gallio. I am using MbUnit/Gallio version 3.1 build 397 with Visual Studio 2010 Beta 2. The unit tests run to completion in Gallio Icarus, but they run past the breakpoints. I see the symbol tables loading in VS, but execution does not stop at the expected breakpoints, so I am unable to debug.

    Read the article

  • Getting black images with selenium.captureScreenshot

    - by Lidia
    I'm executing Selenium tests with TestNG, started on a remote system with Selenium RC via Hudson (over an SSH connection). The remote system is Windows XP with MKS Toolkit installed, hence SSH. The tests are NOT executed as a Windows service. I've tried using both the captureScreenshot and captureEntirePageScreenshot methods. The first one always produces a black image. The second one creates the correct screenshot, but it only works in Firefox, and our tests usually pass in Firefox and fail in other browsers, so it is crucial to capture screenshots for the other browsers (mainly IE and Safari). The tests are run in parallel, with many browser windows open at the same time. I'm not certain if this is what's causing the problem. Any thoughts will be appreciated.

    Read the article

  • How to test UI interaction of Silverlight dialogs?

    - by Bernard Vander Beken
    I am using Silverlight 3.0 unit testing from the Silverlight Toolkit (November 2009 release). Apart from unit tests, it allows you to do UI interaction tests, typically using AutomationPeer subclasses (e.g. ButtonAutomationPeer to interact with a Button). Are there AutomationPeer classes to test the interaction with the following: OpenFileDialog, SaveFileDialog, MessageBox? In unit tests it would be possible to stub these, but for integration and browser testing it would be great to have this testable.

    Read the article

  • Surprising results with .NET multi-threading algorithm

    - by Myles J
    Hi, I recently wrote a C# console timetabling algorithm that is based on a combination of a genetic algorithm with a few brute-force routines thrown in. The initial results were promising, but I figured I could improve the performance by splitting the brute-force routines up to run in parallel on multiprocessor architectures. To do this I used the well-documented producer/consumer model (as documented in this fantastic article: http://www.albahari.com/threading/part2.aspx#_ProducerConsumerQWaitHandle). I changed my code to create one thread per logical processor during the brute-force routines. The performance gains on my workstation were very pleasing.

    I am running Windows XP on the following hardware: Intel Core 2 Quad CPU 2.33 GHz, 3.49 GB RAM. Initial tests indicated average performance gains of approx 40% when using 4 threads.

    The next step was to deploy the new multi-threading version of the algorithm to our higher-spec UAT server: Windows 2003 Server R2 Enterprise x64, 8 quad-core AMD Opteron CPUs at 2.70 GHz, 255 GB RAM.

    After running the first round of tests, we were all extremely surprised to find that the algorithm actually runs slower on the high-spec Windows 2003 server than on my local XP workstation! In fact, the tests seem to indicate that it doesn't matter how many threads are generated (tests were run with the app spawning between 2 and 32 threads); the algorithm always runs significantly slower on the UAT Windows 2003 server. How could this be? Surely the app should run faster on eight quad-core CPUs than on my single Core 2 Quad workstation? Why are we seeing no performance gains with the multi-threading on the Windows 2003 server, while the XP workstation tests show gains of up to 40%? Any help or pointers would be appreciated. Regards, Myles
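    For readers unfamiliar with the pattern referenced above, here is a minimal producer/consumer sketch — in Python rather than C#, and purely illustrative of the queue-plus-worker-threads structure (note that in CPython the GIL limits CPU-bound threading, so this shape pays off mainly on runtimes like .NET/Java or for I/O-bound work):

        import queue
        import threading

        def worker(tasks, results):
            # Consumer: pull work items until it receives the None poison pill.
            while True:
                item = tasks.get()
                if item is None:
                    break
                results.put(item * item)  # stand-in for a brute-force routine
                tasks.task_done()

        tasks, results = queue.Queue(), queue.Queue()
        threads = [threading.Thread(target=worker, args=(tasks, results))
                   for _ in range(4)]     # e.g. one thread per logical processor
        for t in threads:
            t.start()
        for n in range(100):              # producer: enqueue the work items
            tasks.put(n)
        for _ in threads:
            tasks.put(None)               # one poison pill per consumer
        for t in threads:
            t.join()
        print(results.qsize(), 'results computed')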

    Read the article

  • How to pass variables using Unittest suite

    - by chrissygormley
    Hello, I have tests using unittest. I have a test suite, and I am trying to pass variables into each of the tests. The code below shows the test suite used:

        class suite():
            def suite(self):
                # Function stores all the modules to be tested
                modules_to_test = ('testmodule1', 'testmodule2')
                alltests = unittest.TestSuite()
                for module in map(__import__, modules_to_test):
                    alltests.addTest(unittest.findTestCases(module))
                return alltests

    It calls the tests; I would like to know how to pass variables into the tests from this class. An example test script is below:

        class TestThis(unittest.TestCase):
            def runTest(self):
                assertEqual('1', '1')

        class TestThisTestSuite(unittest.TestSuite):
            # Tests to be tested by test suite
            def makeTestThisTestSuite():
                suite = unittest.TestSuite()
                suite.addTest("TestThis")
                return suite

        def suite():
            return unittest.makeSuite(TestThis)

        if __name__ == '__main__':
            unittest.main()

    So from the suite() class I would like to pass in a value to change the value used in the assert, e.g. assertEqual(self.value, '1'). I have tried sys.argv with unittest and it doesn't seem to work. Thanks for any help.
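    One common pattern for this (a sketch, not the only way) is to give the TestCase a constructor that accepts the extra value, and to build the suite by hand so the value can be injected explicitly:

        import unittest

        class TestThis(unittest.TestCase):
            def __init__(self, methodName='runTest', value='1'):
                # methodName is the standard TestCase argument; value is ours.
                super(TestThis, self).__init__(methodName)
                self.value = value

            def runTest(self):
                self.assertEqual(self.value, '1')

        def suite(value):
            # Build the suite by hand, passing the value to each test case.
            s = unittest.TestSuite()
            s.addTest(TestThis('runTest', value=value))
            return s

        if __name__ == '__main__':
            unittest.TextTestRunner().run(suite('1'))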

    Read the article

  • Too Many Public Methods Forced by Test Driven Development

    - by RoryG
    A very specific question from a novice to TDD: I separate my tests and my app into different packages. Thus, most of my app methods have to be public for the tests to access them. As I progress, it becomes obvious that some methods could become private, but if I make that change, the tests that access them won't work. Am I missing a step, or doing something wrong, or is this just one downside of TDD?

    Read the article

  • TDD test data loading methods

    - by Dave Hanson
    I am a TDD newb and I would like to figure out how to test the following code. I am trying to write my tests first, but I am having trouble creating a test that touches my DataAccessor; I can't figure out how to fake it. I've done the "extend the Shipment class and override the Load() method" approach to continue testing the object, but I feel as though I end up unit testing my mock objects/stubs and not my real objects. I thought in TDD the unit tests were supposed to hit ALL of the methods on the object; however, I can never seem to test the real Load() code, only the overridden mock Load(). My tests were to verify an object that contains a list of orders based off of a shipment number. I have an object that loads itself from the database:

        public class Shipment
        {
            // member variables
            protected List<string> _listOfOrders = new List<string>();
            protected string _id = "";

            // public properties
            public List<string> ListOrders
            {
                get { return _listOfOrders; }
            }

            public Shipment(string id)
            {
                _id = id;
                Load();
            }

            // PROBLEM METHOD
            // Whenever I write code that needs this Shipment object, this method
            // tries to hit the DB and fubars my tests. The only way to get around
            // it is to have all my tests run on a fake Shipment object.
            protected void Load()
            {
                _listOfOrders = DataAccessor.GetOrders(_id);
            }
        }

    I create my fake Shipment class to test the rest of the class's methods. I can't ever test the real Load() method without having an actual DB connection:

        public class FakeShipment : Shipment
        {
            // Note: 'new' only hides the base method; the base constructor
            // still calls Shipment.Load(), not this one.
            protected new void Load()
            {
                _listOfOrders = new List<string>();
            }
        }

    Any thoughts? Please advise. Dave

    Read the article

  • How to get Eclipse + PyDev + App Engine + Unit testing to work?

    - by PEZ
    I want to run my unit tests for a Python Google App Engine project using Run As > Python unit-test. But when I try that, all my model tests bail with the error message:

        BadArgumentError: app must not be empty.

    Anyone got this to work? NB: the tests run fine using Nose --with-gae, but I want the PyDev integration, with hyperlinking of resources and such.
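    For what it's worth, that BadArgumentError typically means the datastore can't see an application id because the tests are running outside dev_appserver (which Nose's --with-gae normally arranges). A commonly cited bootstrap for the old Python SDK — treat the module paths, stub names, and the DatastoreFileStub arguments below as assumptions to check against your SDK version — registers the stubs by hand before the tests run:

        import os
        import unittest

        # Assumed old-style GAE SDK modules; verify against your SDK.
        from google.appengine.api import apiproxy_stub_map, datastore_file_stub

        class ModelTestBase(unittest.TestCase):
            def setUp(self):
                # The datastore reads the app id from this environment variable;
                # leaving it unset is what triggers 'app must not be empty'.
                os.environ['APPLICATION_ID'] = 'test-app'
                # Fresh datastore stub for each test; None file args keep it
                # in memory rather than persisting to disk.
                apiproxy_stub_map.apiproxy = apiproxy_stub_map.APIProxyStubMap()
                stub = datastore_file_stub.DatastoreFileStub('test-app', None, None)
                apiproxy_stub_map.apiproxy.RegisterStub('datastore_v3', stub)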

    Read the article

  • How can I specify JUnit test dependencies?

    - by Egon Willighagen
    Our toolkit has over 15000 JUnit tests, and many tests are known to fail if some other test fails. For example, if the method X.foo() uses functionality from Y.foo() and YTest.testFoo() fails, then XTest.testFoo() will fail too. Obviously, XTest.testFoo() can also fail because of problems specific to X.foo(). While this is fine and I still want both tests run, it would be nice if one could annotate a test dependency, with XTest.testFoo() pointing to YTest.testFoo(). This way, one could immediately see which functionality used by X.foo() is also failing, and which is not. Is there such an annotation available in JUnit or elsewhere? Something like:

        public class YTests {
            @Test
            @DependsOn(method = org.example.tests.YTest#testFoo)
            public void testFoo() {
                // Assert.something();
            }
        }

    Read the article

  • Is it possible to compile IronRuby code to a .NET assembly (EXE or DLL)

    - by Chris Ammerman
    My scenario consists of the following points:

    - I have a packaged software product I am developing in C#
    - Since it is a packaged product, the public interfaces of the assemblies need to be tightly controlled...
    - All assemblies are strong-named
    - Any classes that don't absolutely have to be "public" are "internal"
    - I want to write unit tests for those "internal" classes, since they are the bulk of the code
    - And finally... I want to try writing the unit tests in Ruby.

    Since the unit tests would be external to the assembly containing the code under test, the assemblies under test would each need to have an "InternalsVisibleTo" attribute specifying the name of the unit test assembly. Which of course would mean that the Ruby unit tests would have to compile down to a .NET assembly so they can be given access in this way. Can this be done? If so, how? All I can find on the web about "compiling IronRuby" is about building the actual IronRuby runtime from source.

    Read the article

  • Opposite of Bloom filter?

    - by abc
    Hi, I'm trying to optimize a piece of software which is basically running millions of tests. These tests are generated in such a way that there can be some repetitions. Of course, I don't want to spend time running tests which I've already run if I can avoid it efficiently. So, I'm thinking about using a Bloom filter to store the tests which have already been run. However, the Bloom filter errs on the unsafe side for me: it gives false positives. That is, it may report that I've run a test which I haven't. Although this could be acceptable in the scenario I'm working on, I was wondering if there's an equivalent to a Bloom filter that errs on the opposite side, that is, one giving only false negatives. I've skimmed through the literature without any luck.
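    One structure with exactly that error profile (false negatives only) is a fixed-size, direct-mapped cache of previously seen items: a hit means the item was definitely seen, while a collision silently evicts the old entry, so the worst case is re-running a test that was already run. A minimal Python sketch, with hypothetical names:

        class SeenCache:
            """Answers 'definitely seen before' or 'maybe not seen'.
            Never a false positive; may forget items on collisions."""

            def __init__(self, slots=1 << 20):
                self._table = [None] * slots

            def check_and_add(self, item):
                # Direct-mapped: each item hashes to exactly one slot.
                idx = hash(item) % len(self._table)
                if self._table[idx] == item:
                    return True          # the item itself is stored: certain hit
                self._table[idx] = item  # remember it, possibly evicting another
                return False

        cache = SeenCache()
        assert cache.check_and_add('test-42') is False  # first sighting
        assert cache.check_and_add('test-42') is True   # definitely seen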

    Read the article

  • Unit testing an iPhone static library with XCode 3

    - by teabot
    I am writing a number of static libraries for the iPhone and also wish to have suites of unit tests. Xcode 3 provides templates for both static libraries and unit tests, but I am wondering how they should fit together in a static library project. In my static library project I have created a target for unit testing, but I expect also to create an executable to kick off the unit tests and run them against the classes in the static library. What is the procedure for doing this?

    Read the article

  • Questions on about TDD or unit testing in ASP.NET MVC

    - by Diego
    I've been searching on how to do unit testing and have found that it's quite easy. But what I want to know is: in an ASP.NET MVC application, what is REALLY important to test, and which methods do you use? I just can't find a clear answer about WHAT TO REALLY TEST when writing unit tests. I just don't want to make unnecessary tests and lose development time doing overkill tests.

    Read the article

  • How do I test controllers and views?

    - by ryeguy
    I'm using rails for the first time, and I love how test-oriented it is and how it encourages you to write tests. I'm just having a hard time figuring out what I should be testing when I test controllers and views. I know that you should test redirects and authorization in the controller tests, but what else? And what should go in view tests? If I'm "following the rules" and only putting loops, conditionals, and echoes in my views, then what is there left to test?

    Read the article
