Search Results

Search found 13653 results on 547 pages for 'integration testing'.

Page 31/547 | < Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >

  • Join Us!! Live Webinar: Using UPK for Testing

    - by Di Seghposs
    Create Manual Test Scripts 50% Faster with Oracle User Productivity Kit. Thursday, March 29, 2012, 11:00 am – 12:00 pm ET. Click here to register now for this informative webinar. Oracle UPK enhances the testing phase of the implementation lifecycle by reducing test plan creation time, improving accuracy, and providing the foundation for reusable training documentation, application simulations, and end-user performance support—all critical assets to support an enterprise application implementation. With Oracle UPK: Reduce manual test plan development time - Accelerate the testing cycle by significantly reducing the time required to create the test plan. Improve test plan accuracy - Capture test steps automatically using Oracle UPK and import those steps directly into any of these testing suites, eliminating many of the errors that occur when writing manual tests. Create the foundation for reusable assets - Recorded simulations can be used for other lifecycle phases of the project, such as knowledge transfer for training and support. With its integration with Oracle Application Testing Suite, IBM Rational, and HP Quality Center, Oracle UPK allows you to deploy high-quality applications quickly and effectively by providing a consistent, repeatable process for gathering requirements, planning and scheduling tests, analyzing results, and managing issues. Join this live webinar and learn how to decrease your time to deployment and enhance your testing plans today!

    Read the article

  • How can I determine the breaking point of my web application using JMeter?

    - by Gopu Alakrishna
    How can I determine the breaking point of my web application using JMeter? I have executed a JMeter test plan with different concurrent user loads, e.g. 300 users (0% error), 400 users (7% error in one sample, 5% error in another), and 500 users (more than 10% error in 4 out of 6 samples). At what value of % error can I say the system has reached its breaking point? I used 300, 400, and 500 concurrent users against a PHP website. Should I consider any other parameters to determine the breaking point? What is the maximum number of concurrent users my application can support?

    Read the article

  • Any pre-rolled System.IO abstraction libraries out there for Unit Testing?

    - by Binary Worrier
    To test methods that use the file system, we basically need to put System.IO behind a set of interfaces that we can then mock; I do this with a DiskIO class and interface. As my DiskIO code gets larger (and the grumblings from the "we're unconvinced about this TDD thing" crowd here at work get louder), I went looking for a comprehensive open source library that already does this and found . . . nothing. I may be looking in the wrong place or have approached this problem in completely the wrong way. I can't be the only idiot in this position: do these libraries exist, and if so, where are they? Any you've used and would recommend? Thanks. P.S. I'm happy with my current approach, i.e. starting with what we need and adding only when the need arises. Unfortunately the "we're unconvinced about this TDD thing" crowd remain unconvinced, and think that I can't be right.
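
    For readers unfamiliar with the approach being described, here is a minimal sketch of that kind of wrapper (the IDiskIO name and its members are illustrative, not from any real library): expose only the System.IO calls the code under test actually uses behind an interface, with a trivial pass-through implementation for production and a mock or in-memory fake for tests. For what it's worth, the open source System.IO.Abstractions NuGet package has since grown to cover exactly this ground.

        using System.IO;

        // Illustrative interface: only the file operations the code under test needs.
        public interface IDiskIO
        {
            bool FileExists(string path);
            string ReadAllText(string path);
            void WriteAllText(string path, string contents);
            void DeleteFile(string path);
        }

        // Production implementation: a thin pass-through to the static System.IO.File API.
        public class DiskIO : IDiskIO
        {
            public bool FileExists(string path) { return File.Exists(path); }
            public string ReadAllText(string path) { return File.ReadAllText(path); }
            public void WriteAllText(string path, string contents) { File.WriteAllText(path, contents); }
            public void DeleteFile(string path) { File.Delete(path); }
        }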

    Read the article

  • Is application-specific data required for good unit testing?

    - by stinkycheeseman
    I am writing unit tests for a fairly simple function that depends on a fairly complicated set of data. Essentially, the object I am manipulating represents a graph, and this function determines whether to draw a line, bar, or pie chart based on the data that came back from the server. This is a simplified version, using jQuery: setDefaultChartType: function (graphObject) { var prop1 = graphObject.properties.key; var numCols = 0; $.each(graphObject.columns, function (colIndex, column) { numCols++; }); if ( numCols > 6 || ( prop1 > 1 && graphObject.data.length == 1) ) { graphObject.setChartType("line"); } else if ( numCols <=6 && prop1 == 1 ) { graphObject.setChartType("bar"); } else if ( numCols <=6 && prop1 > 1 ) { graphObject.setChartType("pie"); } } My question is: should I use mock data that is procured from the actual database, or can I just fabricate data that fits the different cases? I'm afraid that fabricating data will not expose bugs arising from changes in the database, but on the other hand, keeping the test data up to date would require a lot more effort, and I'm not sure that effort is necessary.

    Read the article

  • Why does my unit test take much longer than normal to run in VS 2010 Premium? [on hold]

    - by kombo
    I have only 4 projects in my solution. I am trying to run a unit test for one of the classes in one of the projects. I created the unit test by right-clicking on the class and choosing the Create Unit Tests option, then I followed the wizard to the end, which resulted in the test being created. I just pass in the parameter values and run the test, but the test keeps running. More surprisingly, it runs fine on other developers' PCs. NB: my class connects to the database and my application is an ASP.NET Web Forms app. I know this is not recommended, but I want to have my test running now. I have tried a lot of samples on the internet but my problem still persists. Could anyone tell me the cause of the extreme slowness (more than 30 minutes)?

    Read the article

  • Dashboard for collaborative science / data processing projects

    - by rescdsk
    Hi, Continuous Integration servers like Hudson are a pretty amazing addition to software development. I work in an academic research lab, and I'd love to apply similar principles to scientific data analysis. I want a dashboard-like view of which collections of data are fine, which ones are failing their tests (simple shell scripts, mostly), and so on. A lot like the Chromium dashboard (WARNING: page takes a long time to load). It takes work from at least 4 people, and maybe 10 or 12 hours of computer time, to bring our data (from behavioral studies) from its raw form to its final, easily-analyzed form. I've tried Hudson and buildbot, but neither is really appropriate to our workflow. We just want to run a bunch of tests on maybe fifty independent collections of subject data, and display the results nicely. SO! Does anyone have a recommendation of a way to generate this kind of report easily? Or, can you think of a good way to shoehorn this kind of workflow into a continuous integration server? Or, can you recommend a unit testing dashboard that could deal with tests that are little shell scripts rather than little functions? Thank you!

    Read the article

  • Regression Testing and Deployment Strategy

    - by user279516
    I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy? The reason I ask is that there seems to be a lot of waste (and risk) in using an agile approach of deploying changes monthly, if 90% of the applications don't change. What I mean by this is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH, when they didn't have any feature changes or hot fixes. They had to be tested simply because the business is rolling updates every month. And the risk that is involved... if a mission-critical application is deployed, that takes a few weeks, and multiple departments, to test, is it realistic to expect to have to constantly regression-test this application? One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and deploying this framework) means the mission-critical app can never just enjoy the same code base for a long time. These applications share the same database, hence the need for the constant testing. I'm aware of TDD and automated tests, but that doesn't exist at the moment. Any advice?

    Read the article

  • Create System.Data.Linq.Table in Code for Testing

    - by S. DePouw
    I have an adapter class for Linq-to-Sql: public interface IAdapter : IDisposable { Table<Data.User> Users { get; } } Data.User is an object defined by Linq-to-Sql pointing to the User table in persistence. The implementation for this is as follows: public class Adapter : IAdapter { private readonly SecretDataContext _context = new SecretDataContext(); public void Dispose() { _context.Dispose(); } public Table<Data.User> Users { get { return _context.Users; } } } This makes mocking the persistence layer easy in unit testing, as I can just return whatever collection of data I want for Users (Rhino.Mocks): Expect.Call(_adapter.Users).Return(users); The problem is that I cannot create the object 'users' since the constructors are not accessible and the class Table is sealed. One option I tried is to just make IAdapter return IEnumerable or IQueryable, but the problem there is that I then do not have access to the methods ITable provides (e.g. InsertOnSubmit()). Is there a way I can create the fake Table in the unit test scenario so that I may be a happy TDD developer?
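
    As an aside, a minimal sketch of one common workaround for the sealed Table<T> problem described above (the IUserTable name and its members are my own illustration, not code from the question; Data.User is the question's LINQ-to-SQL entity type): hand-roll a small interface exposing only the members the calling code needs, back it with Table<Data.User> in production, and use a List<T>-based fake in tests instead of trying to mock Table<T> itself.

        using System.Collections.Generic;
        using System.Linq;

        public interface IUserTable
        {
            IQueryable<Data.User> Query { get; }
            void InsertOnSubmit(Data.User entity);
        }

        // Production implementation wrapping the real LINQ-to-SQL table.
        public class LinqUserTable : IUserTable
        {
            private readonly System.Data.Linq.Table<Data.User> _table;
            public LinqUserTable(System.Data.Linq.Table<Data.User> table) { _table = table; }
            public IQueryable<Data.User> Query { get { return _table; } }
            public void InsertOnSubmit(Data.User entity) { _table.InsertOnSubmit(entity); }
        }

        // In-memory fake for unit tests -- no mocking of the sealed Table<T> required.
        public class FakeUserTable : IUserTable
        {
            private readonly List<Data.User> _users = new List<Data.User>();
            public IQueryable<Data.User> Query { get { return _users.AsQueryable(); } }
            public void InsertOnSubmit(Data.User entity) { _users.Add(entity); }
        }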

    Read the article

  • jQuery validator not working in unit testing

    - by Dbugger
    I have this small HTML file: <html> <head></head> <body> <form id='MyForm'> <input type='text' required /> <input type='submit' /> </form> <script src="/js/jquery-1.9.0.js"></script> <script src="/js/jquery.validate.js"></script> <script> var validator = $("#MyForm").validate(); alert(validator.form()); </script> </body> </html> This alerts me with "false", which is the expected behaviour. The problem comes when I go to unit testing, with js-test-driver: TestCase("MyTests", { setUp: function() { this.myform = "<form id='MyForm'><input type='text' required /><input type='submit' /></form>"; this.validator = $(this.myform).validate(); jstestdriver.console.log("Does the form validate? " + this.validator.form()); }, test_empty: function() { }, }); This code returns me the string "Does the form validate? true". This is a simplified version of my project, of course, but the point is that I don't seem to be able to unit test the validation module I'm developing, since the jQuery validate plugin doesn't seem to work. What am I missing?

    Read the article

  • Visual C++ Testing problem

    - by JamesMCCullum
    Hi there, I have installed VisualAssert and cFix. I have been using Visual Studio C++ and programming in CLI/C++. I have a working Chess Game Program that works perfectly by itself... and I have been studying testing and have many examples (with tutorials) I have found on the net, that compile and run in Visual Studio... But as soon as I try and implement those tests on my chess game, I get this problem. This is what it's telling me: 1>------ Build started: Project: ChessRound1, Configuration: Debug Win32 ------ 1>Compiling... 1>stdafx.cpp 1>C:\Program Files\VisualAssert\include\cfixpe.h(137) : error C3641: 'CfixpCrtInitEmbedding' : invalid calling convention '__cdecl ' for function compiled with /clr:pure or /clr:safe 1>C:\Program Files\VisualAssert\include\cfixpe.h(235) : error C4394: 'CfixpCrtInitEmbeddingRegistration' : per-appdomain symbol should not be marked with __declspec(allocate) 1>C:\Program Files\VisualAssert\include\cfixpe.h(235) : error C2393: 'CfixpCrtInitEmbeddingRegistration' : per-appdomain symbol cannot be allocated in segment '.CRT$XCX' 1>C:\Program Files\VisualAssert\include\cfixpe.h(244) : error C2440: 'initializing' : cannot convert from 'void (__cdecl *)(void)' to 'const CFIX_CRT_INIT_ROUTINE' 1> Address of a function yields __clrcall calling convention in /clr:pure and /clr:safe; consider using __clrcall in target type 1>C:\Program Files\VisualAssert\include\cfixpe.h(137) : error C3641: 'CfixpCrtInitEmbedding' : invalid calling convention '__cdecl ' for function compiled with /clr:pure or /clr:safe 1>Build log was saved at "file://c:\Users\james\Documents\Visual Studio 2008\Projects\ChessRound1\ChessRound1\Debug\BuildLog.htm" 1>ChessRound1 - 4 error(s), 0 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== Any ideas what I'm doing wrong? I'm working with Windows Forms and have a heap of cpp source files. Any help would be appreciated. Thanks

    Read the article

  • EJB3Unit testing no-tx-datasource

    - by justastefan
    Hello, I am doing tests on an EJB3 project using ejb3unit (http://ejb3unit.sourceforge.net/Session-Bean.html) for testing. All my services use @PersistenceContext (unitName=bla). I set up ejb3unit.properties like this: ejb3unit_jndi.1.isSessionBean=true ejb3unit_jndi.1.jndiName=ejb/MyServiceBean ejb3unit_jndi.1.className=com.company.project.MyServiceBean and everything works with the in-memory database. Now I additionally want to test another service bean with @PersistenceContext (unitName=noTxDatasource) that goes against a no-tx-datasource defined in my datasources.xml: <datasources> <local-tx-datasource> ... </local-tx-datasource> <no-tx-datasource> <jndi-name>noTxDatasource</jndi-name> <connection-url>...</connection-url> <driver-class>oracle.jdbc.OracleDriver</driver-class> <user-name>bla</user-name> <password>bla</password> </no-tx-datasource> </datasources> How do I tell ejb3unit to make this work: Object object = InitialContext.doLookup("java:/noTxDatasource"); if (object instanceof DataSource) { return ((DataSource) object).getConnection(); } else { return null; } Currently it fails saying: javax.NamingException: Cannot find the name (noTxDataSource) in the JNDI tree Current bindings: (ejb/MyServiceBean=com.company.project.MyServiceBean) How can I add this no-tx-datasource to the JNDI bindings?

    Read the article

  • Testing a patch to the Rails mysql adapter

    - by Sleepycat
    I wrote a little monkeypatch to the Rails MySQLAdapter and want to package it up to use it in my other projects. I am trying to write some tests for it but I am still new to testing and I am not sure how to test this. Can someone help get me started? Here is the code I want to test: unless RAILS_ENV == 'production' module ActiveRecord module ConnectionAdapters class MysqlAdapter < AbstractAdapter def select_with_explain(sql, name = nil) explanation = execute_with_disable_logging('EXPLAIN ' + sql) e = explanation.all_hashes.first exp = e.collect{|k,v| " | #{k}: #{v} "}.join log(exp, 'Explain') select_without_explain(sql, name) end def execute_with_disable_logging(sql, name = nil) #:nodoc: #Run a query without logging @connection.query(sql) rescue ActiveRecord::StatementInvalid => exception if exception.message.split(":").first =~ /Packets out of order/ raise ActiveRecord::StatementInvalid, "'Packets out of order' error was received from the database. Please update your mysql bindings (gem install mysql) and read http://dev.mysql.com/doc/mysql/en/password-hashing.html for more information. If you're on Windows, use the Instant Rails installer to get the updated mysql bindings." else raise end end alias_method_chain :select, :explain end end end end Thanks.

    Read the article

  • Boost Unit testing memory reuse causing tests that should fail to pass

    - by Knyphe
    We have started using the boost unit testing library for a large existing code base, and I have run into some trouble with unit tests incorrectly passing, seemingly due to the reuse of memory on the stack. Here is my situation: BOOST_AUTO_TEST_CASE(test_select_base_instantiation_default) { SelectBase selectBase(); BOOST_CHECK_EQUAL( selectBase.getSelectType(), false); BOOST_CHECK_EQUAL( selectBase.getTypeName(_T("")); BOOST_CHECK_EQUAL( selectBase.getEntityType(), -1); BOOST_CHECK_EQUAL( selectBase.getDataPos(), -1); } BOOST_AUTO_TEST_CASE(test_select_base_instantiation_default) { SelectBase selectBase(true, _T("abc")); BOOST_CHECK_EQUAL( selectBase.getSelectType(), false); BOOST_CHECK_EQUAL( selectBase.getTypeName(_T("abc")); BOOST_CHECK_EQUAL( selectBase.getEntityType(), -1); BOOST_CHECK_EQUAL( selectBase.getDataPos(), -1); } The first test passed correctly, initializing all the variables. The constructor in the second unit test did not correctly set EntityType or DataPosition, but the unit test passed. I was able to get it to fail by placing some variables on the stack in the second test, like so: BOOST_AUTO_TEST_CASE(test_select_base_instantiation_default) { int a, b; SelectBase selectBase(true, _T("abc")); BOOST_CHECK_EQUAL( selectBase.getSelectType(), false); BOOST_CHECK_EQUAL( selectBase.getTypeName(_T("abc")); BOOST_CHECK_EQUAL( selectBase.getEntityType(), -1); BOOST_CHECK_EQUAL( selectBase.getDataPos(), -1); } If there is only one int, only the dataPos CHECK_EQUAL fails, but if there are two, both EntityType and DataPos fail, so it seems pretty clear that this is an issue with the variables being created on the same stack memory or some such. Is there a good way to clear the memory between each unit test, or am I potentially using the library incorrectly or writing bad tests? Any help would be appreciated.

    Read the article

  • Unit testing that an event is raised in C#, using reflection

    - by Thomas
    I want to test that setting a certain property (or more generally, executing some code) raises a certain event on my object. In that respect my problem is similar to http://stackoverflow.com/questions/248989/unit-testing-that-an-event-is-raised-in-c, but I need a lot of these tests and I hate boilerplate. So I'm looking for a more general solution, using reflection. Ideally, I would like to do something like this: [TestMethod] public void TestWidth() { MyClass myObject = new MyClass(); AssertRaisesEvent(() => { myObject.Width = 42; }, myObject, "WidthChanged"); } For the implementation of the AssertRaisesEvent, I've come this far: private void AssertRaisesEvent(Action action, object obj, string eventName) { EventInfo eventInfo = obj.GetType().GetEvent(eventName); int raisedCount = 0; Action incrementer = () => { ++raisedCount; }; Delegate handler = /* what goes here? */; eventInfo.AddEventHandler(obj, handler); action.Invoke(); eventInfo.RemoveEventHandler(obj, handler); Assert.AreEqual(1, raisedCount); } As you can see, my problem lies in creating a Delegate of the appropriate type for this event. The delegate should do nothing except invoke incrementer. Because of all the syntactic syrup in C#, my notion of how delegates and events really work is a bit hazy. How to do this?
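
    For what it's worth, one way to fill that gap (a sketch of my own, not necessarily the author's eventual solution) is to compile a handler of the event's own delegate type with an expression tree, so the same helper works for EventHandler, EventHandler<T>, or any custom void-returning event delegate. The EventHandlerFactory name below is hypothetical.

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        static class EventHandlerFactory
        {
            // handlerType is eventInfo.EventHandlerType; incrementer is the counting action.
            public static Delegate CreateCountingHandler(Type handlerType, Action incrementer)
            {
                var invoke = handlerType.GetMethod("Invoke");

                // Declare parameters matching the delegate's signature (e.g. sender, args)...
                var parameters = invoke.GetParameters()
                    .Select(p => Expression.Parameter(p.ParameterType, p.Name))
                    .ToArray();

                // ...and a body that ignores them and simply calls incrementer().
                var body = Expression.Call(
                    Expression.Constant(incrementer),
                    typeof(Action).GetMethod("Invoke"));

                return Expression.Lambda(handlerType, body, parameters).Compile();
            }
        }

        // The missing line in AssertRaisesEvent would then read, roughly:
        //   Delegate handler = EventHandlerFactory.CreateCountingHandler(eventInfo.EventHandlerType, incrementer);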

    Read the article

  • Testing a Gui-heavy WPF application.

    - by Hamish Grubijan
    We (my colleagues and I) have a messy, 12-year-old but mature app that is GUI-based, and the current plan is to add new dialogs and other GUI in WPF, as well as to replace some of the older dialogs with WPF versions. At the same time we wish to be able to test that Monster with GUI automation in a maintainable way. Some challenges: The application is massive. It constantly gains new features. It is being changed around (bug fixes, patches). It has a back end, and a layer in-between. The state of it can get out of whack if you beat it to death. What we want is: some tool that can automate testing of WPF; auto-discovery of what the inputs and outputs of a dialog are. An old test should still work if you add a label that does nothing. It should fail, however, if you remove a necessary text field. It would be very nice if the test suite were easy to maintain, and if it ran and did not break most of the time. Every new dialog should be created with testability in mind. At this point I do not know exactly what I want, so I am marking this as a community wiki. If having to test a huge GUI-based app rings a bell (even if not in WPF), then please share your good, bad, and ugly experiences here.

    Read the article

  • Visual C++ overrides/mock objects for unit testing?

    - by Mark
    When I'm running unit tests, I want to be able to "stub out" or create a mock object, but I'm running into DLL Hell. For example: There are two DLL libraries built: A.dll and B.dll -- Classes in A.dll have calls to classes in B.dll, so when A.dll was built, the link line was using B.lib for the definitions. My test driver (Foo.exe) is testing classes in A.dll, so it links against A.lib. However, I want to "stub out" some of the calls A.dll makes to B.dll with simple versions (return basic value, no DB lookup, etc). I can't build an Override.dll that just overrides the needed methods (not entire classes) and replace B.dll because Foo.exe will A) complain that B.dll is missing if I just remove it and put Override.dll in its place or B) if I rename Override.dll to B.dll, Foo.exe complains that there are unresolved symbols because Override.dll is not a complete implementation of B.dll. Is there a way to do this? Is there a way to statically link Foo.exe with A.lib, B.lib and Override.lib such that it will work without having to completely rebuild A.lib and B.lib to remove the __declspec(dllexport)? Is there another option?

    Read the article

  • maven and unit testing - combining maven surefire plugin AND testNG eclipse plugin

    - by lisak
    Hey, could you please share your way of unit testing in Eclipse? Are you using the Surefire plugin, m2eclipse & Maven, or only the TestNG Eclipse plugin? Do you combine these alternatives? I'm using TestNG + the Maven Surefire plugin, and I had been using the TestNG Eclipse plugin a year ago so that I could see the results in the TestNG view. Then I started using Maven, but when I run the Maven test phase using m2eclipse, there is only console output and Surefire reports that I can check in a browser, and choosing which test suite, test, or test method to run can be set up only via testng.xml. On the other hand, if you use only the Surefire plugin and you rely on some specific settings regarding the classpath etc., then running tests via the TestNG Eclipse plugin doesn't have to be compatible with your code. Using the Surefire plugin, the classpath is different - target/test-classes and target/classes - than when using the TestNG plugin, which uses the project classpath. How do you go about this? Is it possible to synchronize "maven test" using m2eclipse and the Surefire plugin WITH the TestNG Eclipse plugin and view? EDITED: I'm also wondering why the Maven project ("Java build path") output folder is target/classes for both src/main and src/test, whereas the Surefire plugin uses two locations, target/test-classes and target/classes. Thank you very much for your opinions.

    Read the article

  • Testing When Correctness is Poorly Defined?

    - by dsimcha
    I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions. However, a lot of the code I write is data mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e. the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is near impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic. Usually, I do the best I can by using asserts for sanity checks and creating a small toy test case with a known pattern and informally seeing if the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?

    Read the article

  • Load Testing Java Web Application - find TPS / Avg transaction response time

    - by Steve
    I would like to build my own load testing tool in Java with the goal of being able to load test a web application I am building throughout the development cycle. The web application will be receiving server-to-server HTTP POST requests and I would like to find its starting transactions per second (TPS) capacity along with the average response time. The POST request and response messages will be in XML (I don't think that's really applicable though :) ). I have written a very simple Java app to send transactions and count how many transactions it was able to send in one second (1000 ms); however, I don't think this is the best way to load test. Really what I want is to send any number of transactions at exactly the same time - i.e. 10, 50, 100 etc. Any help would be appreciated! Oh and here is my current test app code: Thread[] t = new Thread[1]; for (int a = 0; a < t.length; a++) { t[a] = new Thread(new MessageLoop()); } startTime = System.currentTimeMillis(); System.out.println(startTime); for (int a = 0; a < t.length; a++) { t[a].start(); } while ((System.currentTimeMillis() - startTime) < 1000 ) { } if ((System.currentTimeMillis() - startTime) > 1000 ) { for (int a = 0; a < t.length; a++) { t[a].interrupt(); } } long endTime = System.currentTimeMillis(); System.out.println(endTime); System.out.println("Total time: " + (endTime - startTime)); System.out.println("Total transactions: " + count); private static class MessageLoop implements Runnable { public void run() { try { //Test Number of transactions while ((System.currentTimeMillis() - startTime) < 1000 ) { // SEND TRANSACTION HERE count++; } } catch (Exception e) { } } }
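
    The question is in Java, but here is a sketch of the "release N requests at exactly the same instant" idea using a start barrier, shown in C# for illustration (Java's CyclicBarrier plays the same role as Barrier). SendTransaction and the numbers used are hypothetical placeholders for the actual HTTP POST and load levels.

        using System;
        using System.Diagnostics;
        using System.Threading;
        using System.Threading.Tasks;

        class BurstLoadSketch
        {
            static void Main()
            {
                const int concurrent = 50;                      // e.g. 10, 50, 100 ...
                var latencies = new long[concurrent];

                // 'concurrent' worker tasks plus one coordinator all meet at the barrier.
                using (var startGate = new Barrier(concurrent + 1))
                {
                    var tasks = new Task[concurrent];
                    for (int i = 0; i < concurrent; i++)
                    {
                        int slot = i;
                        tasks[slot] = Task.Run(() =>
                        {
                            startGate.SignalAndWait();          // every worker blocks here...
                            var sw = Stopwatch.StartNew();
                            SendTransaction();                  // placeholder for the XML POST
                            latencies[slot] = sw.ElapsedMilliseconds;
                        });
                    }

                    var total = Stopwatch.StartNew();
                    startGate.SignalAndWait();                  // ...and all are released together
                    Task.WaitAll(tasks);
                    total.Stop();

                    Console.WriteLine("Sent {0} transactions in {1} ms", concurrent, total.ElapsedMilliseconds);
                }
            }

            static void SendTransaction() { /* hypothetical: the server-to-server HTTP POST goes here */ }
        }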

    Read the article

  • Testing ActionMailer's receive method (Rails)

    - by Brian Armstrong
    There is good documentation out there on testing ActionMailer send methods which deliver mail. But I'm unable to figure out how to test a receive method that is used to parse incoming mail. I want to do something like this: require 'test_helper' class ReceiverTest < ActionMailer::TestCase test "parse incoming mail" do email = TMail::Mail.parse(File.open("test/fixtures/emails/example1.txt",'r').read) assert_difference "ProcessedMail.count" do Receiver.receive email end end end But I get the following error on the line which calls Receiver.receive NoMethodError: undefined method `index' for #<TMail::Mail:0x102c4a6f0> /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/stringio.rb:128:in `gets' /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:392:in `parse_header' /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:139:in `initialize' /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/stringio.rb:43:in `open' /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/port.rb:340:in `ropen' /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:138:in `initialize' /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:123:in `new' /Library/Ruby/Gems/1.8/gems/tmail-1.2.7.1/lib/tmail/mail.rb:123:in `parse' /Library/Ruby/Gems/1.8/gems/actionmailer-2.3.4/lib/action_mailer/base.rb:417:in `receive' Tmail is parsing the test file I have correctly. So that's not it. Thanks!

    Read the article

  • Unit Testing the Use of TransactionScope

    - by Randolpho
    The preamble: I have designed a strongly interfaced and fully mockable data layer class that expects the business layer to create a TransactionScope when multiple calls should be included in a single transaction. The problem: I would like to unit test that my business layer makes use of a TransactionScope object when I expect it to. Unfortunately, the standard pattern for using TransactionScope is as follows: using(var scope = new TransactionScope()) { // transactional methods datalayer.InsertFoo(); datalayer.InsertBar(); scope.Complete(); } While this is a really great pattern in terms of usability for the programmer, testing that it's done seems... unpossible to me. I cannot detect that a transient object has been instantiated, let alone mock it to determine that a method was called on it. Yet my goal for coverage implies that I must. The Question: How can I go about building unit tests that ensure TransactionScope is used appropriately according to the standard pattern? Final Thoughts: I've considered a solution that would certainly provide the coverage I need, but have rejected it as overly complex and not conforming to the standard TransactionScope pattern. It involves adding a CreateTransactionScope method on my data layer object that returns an instance of TransactionScope. But because TransactionScope contains constructor logic and non-virtual methods and is therefore difficult if not impossible to mock, CreateTransactionScope would return an instance of DataLayerTransactionScope which would be a mockable facade into TransactionScope. While this might do the job, it's complex and I would prefer to use the standard pattern. Is there a better way?
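
    For context, a minimal sketch of the factory/facade idea mentioned in the final thoughts might look like the following (the ITransactionScopeFactory and ITransactionWrapper names are my own, hypothetical ones): the business layer asks an injected factory for a scope, so a test can mock the factory and assert that Begin and Complete were called, while the production implementation simply delegates to TransactionScope.

        using System;
        using System.Transactions;

        public interface ITransactionWrapper : IDisposable
        {
            void Complete();
        }

        public interface ITransactionScopeFactory
        {
            ITransactionWrapper Begin();
        }

        // Production implementation: a thin facade over the real TransactionScope.
        public class TransactionScopeFactory : ITransactionScopeFactory
        {
            public ITransactionWrapper Begin() { return new Wrapper(); }

            private class Wrapper : ITransactionWrapper
            {
                private readonly TransactionScope _scope = new TransactionScope();
                public void Complete() { _scope.Complete(); }
                public void Dispose() { _scope.Dispose(); }
            }
        }

        // Business-layer usage stays close to the standard pattern:
        //   using (var tx = _transactionFactory.Begin())
        //   {
        //       _datalayer.InsertFoo();
        //       _datalayer.InsertBar();
        //       tx.Complete();
        //   }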

    Read the article

  • Need advice on unit testing using mock objects

    - by Andree
    Hi there, I just recently read about "mocking objects" for unit testing and currently I'm having difficulties implementing this approach in my application. Please let me explain my problem. I have a User model class, which is dependent on 2 data sources (database and Facebook web service). The controller class simply uses this User model as an interface to access data, and it doesn't care about where the data came from. So far I have never done any unit testing on this User model because it is dependent on an external web service. But just a while ago, I read about object mocking and now I know that it is a common approach to unit test a class that depends on external resources (like in my case). Now I want to create a unit test for the User model, but then I encountered a design issue: In order for the User model to use a mocked Facebook SDK, I have to inject this mocked Facebook SDK into the User object (probably using a setter). Therefore I can't construct the Facebook SDK inside the User object. I have to construct it outside the User object, and inject the SDK into the User object. The real client of my User model is the application's controller. Therefore I have to construct the Facebook SDK inside the controller and inject it into the User object. Well, this is a problem because I want my controller to be as clean as possible. I want my controller to be ignorant about the application's data source. I'm not good at explaining something systematically, so you're probably sleeping before reading this last paragraph. But anyway, I want to ask if anyone here has ever encountered the same problem as mine? How do you solve this problem? Regards, Andree
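
    One common way out of the tension described above is to give the model a sensible default dependency while still allowing tests to inject a mock, so the controller never has to know about the Facebook SDK at all. Below is a small sketch of that idea, shown in C# purely for illustration; IFacebookClient, FacebookClient, and the method names are hypothetical stand-ins for the real SDK.

        public interface IFacebookClient
        {
            string GetProfileName(string userId);
        }

        // Real implementation: wraps the actual Facebook web-service call.
        public class FacebookClient : IFacebookClient
        {
            public string GetProfileName(string userId)
            {
                // the real SDK / HTTP call would live here
                return "real profile for " + userId;
            }
        }

        public class User
        {
            private readonly IFacebookClient _facebook;

            // Controllers call new User() and stay ignorant of the data source;
            // unit tests call new User(mockClient) to substitute a mock.
            public User(IFacebookClient facebook = null)
            {
                _facebook = facebook ?? new FacebookClient();
            }

            public string DisplayName(string userId)
            {
                return _facebook.GetProfileName(userId);
            }
        }

    A dependency injection container that resolves User and its IFacebookClient in a single composition root achieves the same separation without the default-argument shortcut.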

    Read the article

  • Unit Testing an Event Firing From a Thread

    - by Dougc
    I'm having a problem unit testing a class which fires events when a thread starts and finishes. A cut down version of the offending source is as follows: public class ThreadRunner { private bool keepRunning; public event EventHandler Started; public event EventHandler Finished; public void StartThreadTest() { this.keepRunning = true; var thread = new Thread(new ThreadStart(this.LongRunningMethod)); thread.Start(); } public void FinishThreadTest() { this.keepRunning = false; } protected void OnStarted() { if (this.Started != null) this.Started(this, new EventArgs()); } protected void OnFinished() { if (this.Finished != null) this.Finished(this, new EventArgs()); } private void LongRunningMethod() { this.OnStarted(); while (this.keepRunning) Thread.Sleep(100); this.OnFinished(); } } I then have a test to check that the Finished event fires after the LongRunningMethod has finished as follows: [TestClass] public class ThreadRunnerTests { [TestMethod] public void CheckFinishedEventFiresTest() { var threadTest = new ThreadRunner(); bool finished = false; object locker = new object(); threadTest.Finished += delegate(object sender, EventArgs e) { lock (locker) { finished = true; Monitor.Pulse(locker); } }; threadTest.StartThreadTest(); threadTest.FinishThreadTest(); lock (locker) { Monitor.Wait(locker, 1000); Assert.IsTrue(finished); } } } So the idea here being that the test will block for a maximum of one second - or until the Finish event is fired - before checking whether the finished flag is set. Clearly I've done something wrong as sometimes the test will pass, sometimes it won't. Debugging seems very difficult as well as the breakpoints I'd expect to be hit (the OnFinished method, for example) don't always seem to be. I'm assuming this is just my misunderstanding of the way threading works, so hopefully someone can enlighten me.
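
    A sketch of one way to make the wait deterministic (my own variation on the test above, not the author's code, and assuming the same ThreadRunner class from the question): signal a ManualResetEventSlim from the Finished handler and assert on the result of Wait, so a timeout shows up as an explicit failure rather than a race between Pulse and Wait.

        using System.Threading;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class ThreadRunnerFinishedEventTests
        {
            [TestMethod]
            public void CheckFinishedEventFiresTest()
            {
                var threadTest = new ThreadRunner();
                var finished = new ManualResetEventSlim(false);

                threadTest.Finished += (sender, e) => finished.Set();

                threadTest.StartThreadTest();
                threadTest.FinishThreadTest();

                // Wait returns true only if the Finished event fired within the timeout.
                Assert.IsTrue(finished.Wait(1000), "Finished event did not fire within 1 second.");
            }
        }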

    Read the article

  • Database continuous integration step by step

    - by David Atkinson
    This post will describe how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed.

    Downloading and Installing TeamCity
    TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it is a great no-risk tool you can use to experiment with.
    1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option (see screenshot below), which worked flawlessly.
    2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease.
    3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80 . If you need to change this value, for example if you've had to install the Server console to a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect.
    4. Open the TeamCity admin console on http://localhost , and specify your own designated username and password at first startup.

    Putting your database in source control using SQL Source Control
    5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control.
    6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which just creates a local Subversion repository for you behind the scenes.
    7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit.
    8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right-click on the text to the right of 'Linked to'; in the example below, it's the red Evaluation Repository text. Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes.
    Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes.
    9. In TeamCity Administration, select Create Project and give it a name, such as "My first database CI", and click Create.
    10. Click on Create Build Configuration, and name it something like "Integration build".
    11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor.
    12. In my case, since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion.
    13. In the URL field paste your repository location. In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment
    14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save.
    15. Click Add Build Step, and Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or PowerShell, you can opt for these, but for the sake of keeping it simple I will pick the simplest option.
    16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe
    17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing.
    18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2, so the parameters are: /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager, otherwise the SQL Compare command line will not be able to connect.
    19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save.
    20. Now return to SQL Server Management Studio and make a schema change (e.g. add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being good, within 60 seconds (a TeamCity default that can be changed) a build will be triggered.
    21. Click on Projects in TeamCity to get back to the overview screen. The build log will show you the console output, which is useful for troubleshooting any issues.
    That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post, or by emailing me directly using the email address below. Technorati Tags: SQL Server

    Read the article

  • Rx Reactive extensions: Unit testing with FromAsyncPattern

    - by Andrew Anderson
    The Reactive Extensions have a sexy little hook to simplify calling async methods: var func = Observable.FromAsyncPattern<InType, OutType>( myWcfService.BeginDoStuff, myWcfService.EndDoStuff); func(inData).ObserveOnDispatcher().Subscribe(x => Foo(x)); I am using this in a WPF project, and it works great at runtime. Unfortunately, when trying to unit test methods that use this technique I am experiencing random failures. ~3 out of every 5 executions of a test that contains this code fail. Here is a sample test (implemented using a Rhino/unity auto-mocking container): [TestMethod()] public void SomeTest() { // arrange var container = GetAutoMockingContainer(); container.Resolve<IMyWcfServiceClient>() .Expect(x => x.BeginDoStuff(null, null, null)) .IgnoreArguments() .Do( new Func<Specification, AsyncCallback, object, IAsyncResult>((inData, asyncCallback, state) => { return new CompletedAsyncResult(asyncCallback, state); })); container.Resolve<IRepositoryServiceClient>() .Expect(x => x.EndRetrieveAttributeDefinitionsForSorting(null)) .IgnoreArguments() .Do( new Func<IAsyncResult, OutData>((ar) => { return someMockData; })); // act var target = CreateTestSubject(container); target.DoMethodThatInvokesService(); // Run the dispatcher for everything over background priority Dispatcher.CurrentDispatcher.Invoke(DispatcherPriority.Background, new Action(() => { })); // assert Assert.IsTrue(my operation ran as expected); } The problem that I see is that the code that I specified to run when the async action completed (in this case, Foo(x)) is never called. I can verify this by setting breakpoints in Foo and observing that they are never reached. Further, I can force a long delay after calling DoMethodThatInvokesService (which kicks off the async call), and the code is still never run. I do know that the lines of code invoking the Rx framework were called. Other things I've tried: I have attempted to modify the second last line according to the suggestions here: Reactive Extensions Rx - unit testing something with ObserveOnDispatcher No love. I have added .Take(1) to the Rx code as follows: func(inData).ObserveOnDispatcher().Take(1).Subscribe(x => Foo(x)); This improved my failure rate to something like 1 in 5, but they still occurred. I have rewritten the Rx code to use the plain jane Async pattern. This works, however my developer ego really would love to use Rx instead of boring old begin/end. In the end I do have a workaround in hand (i.e. don't use Rx), however I feel that it is not ideal. If anyone has run into this problem in the past and found a solution, I'd dearly love to hear it.
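
    As an illustration of one commonly suggested way around this (my own sketch, not the author's eventual fix): inject the IScheduler instead of calling ObserveOnDispatcher() directly, so the application passes a dispatcher-based scheduler while tests pass Scheduler.Immediate (or Rx's TestScheduler) and never need a pumping Dispatcher. The stub types below stand in for the question's WCF contract and are assumptions for illustration only.

        using System;
        using System.Reactive.Concurrency;
        using System.Reactive.Linq;

        // Stand-ins for the question's service contract.
        public class InType { }
        public class OutType { }
        public interface IMyWcfServiceClient
        {
            IAsyncResult BeginDoStuff(InType inData, AsyncCallback callback, object state);
            OutType EndDoStuff(IAsyncResult result);
        }

        public class StuffCaller
        {
            private readonly IMyWcfServiceClient _service;
            private readonly IScheduler _observeOn;

            // Pass DispatcherScheduler.Current in the app, Scheduler.Immediate in unit tests.
            public StuffCaller(IMyWcfServiceClient service, IScheduler observeOn)
            {
                _service = service;
                _observeOn = observeOn;
            }

            public void DoMethodThatInvokesService(InType inData, Action<OutType> onResult)
            {
                var func = Observable.FromAsyncPattern<InType, OutType>(
                    _service.BeginDoStuff, _service.EndDoStuff);

                // ObserveOn(_observeOn) instead of ObserveOnDispatcher(): no Dispatcher needed in tests.
                func(inData).ObserveOn(_observeOn).Subscribe(onResult);
            }
        }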

    Read the article
