Search Results

Search found 8582 results on 344 pages for 'integration tests'.

Page 78/344 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • How do you revert a file to a revision within an integration in Perforce?

    - by tenpn
    I have two branches, let's call them mainline and dev1. I regularly integrate a file from mainline to dev1. The last-but-one time I integrated the file, it was at revision 3 in mainline. The last time, it was at revision 5. Now for mysterious reasons lost to the sands of time, I want to work in dev1 with revision 4 of the file from mainline. Is that possible? I can't integrate it across as P4V complains that all revisions have already been integrated. I've tried right-click-get this revision on the revision graph, but that only updates which version of the file I have in mainline, not in dev1.
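    One approach worth trying (a sketch only; the depot paths below are invented): p4 integrate accepts a revision range on the source file, and the -f flag forces revisions that Perforce already considers integrated:

        p4 integrate -f //depot/mainline/foo.c#4,#4 //depot/dev1/foo.c
        p4 resolve
        p4 submit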

    Read the article

  • Is there any way to validate the Open Graph protocol meta tags for Facebook integration?

    - by John
    Is there any way to validate the Facebook Open Graph protocol meta tags in the head section of my website? Code below.

        <meta property="og:title" content="my content" />
        <meta property="og:type" content="company" />
        <meta property="og:url" content="http://mycompany.com/" />
        <meta property="og:image" content="http://mycompany.com/image.png" />
        <meta property="og:site_name" content="my site name" />
        <meta property="fb:admins" content="my_id" />
        <meta property="og:description" content="my description" />

    Edit: I mean validating the HTML. Sorry for the confusion! Right now my site isn't valid because of these tags.
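    For the HTML-validity side, one commonly suggested approach (a sketch, not a guaranteed fix for every validator) is to declare the og: and fb: namespaces on the html element of an XHTML+RDFa flavored page, roughly:

        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:og="http://ogp.me/ns#"
              xmlns:fb="http://www.facebook.com/2008/fbml">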

    Read the article

  • What's the status of the HTML 5 <video> tag and webcam integration?

    - by JorenB
    Even though it seems to be in some kind of jeopardy, the open video standard is a great idea. I saw some demos on motion tracking with it - just proofs-of-concept, but interesting nonetheless. Now, I'd say that concepts like these would really be a gain, if there would be access to the user's webcam... Just imagine browsing through Flickr with your hands in mid-air. I have Googled a little, but I can't find any detailed discussion on the subject. It is mentioned in some places, but that doesn't get me very far. Does anybody know whether support for this is planned? If yes, any prognosis on the 'when'? ;-) Of course, I guess they'd have to dream up a pretty good security model for it...

    Read the article

  • How Can I Run a Regex that Tests Text for Characters in a Particular Alphabet or Script?

    - by Eli
    I'd like to make a regex in Perl that will test a string for characters from a particular script. This would be something like: $text =~ .*P{'Chinese'}.* Is there a simple way of doing this? For English it's pretty easy, since you can just test for [a-zA-Z], but for a script like Chinese, or one of the Japanese scripts, I can't figure out any way of doing this short of writing out every character explicitly, which would make for some very ugly code. Ideas? I can't be the first/only person that's wanted to do this.
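    For what it's worth, Perl's regex engine does expose Unicode script properties, which avoids listing characters by hand; a minimal sketch (script names follow Unicode, e.g. Han covers Chinese characters, Hiragana/Katakana the Japanese kana):

        print "has Han (Chinese) characters\n" if $text =~ /\p{Han}/;
        print "has Japanese kana\n"            if $text =~ /\p{Hiragana}|\p{Katakana}/;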

    Read the article

  • Test run errors with MSTest in VS2010

    - by Tomas Lycken
    When I run my unit tests, all tests pass, but instead of "Test run succeeded" (or whatever the success message is), I get "Test run error" in the little bar that tells me how many of my tests passed, even though all of them passed. When I click the text, I'm taken to a page that tells me the following two things happened:

    Warning: conflict during test run deployment: deployment item '[...]\Booking.Web.dll' directly or indirectly referenced by the test container [...]\Booking.Web.Tests.dll cannot be deployed to 'Booking.Web.dll' because otherwise the file '[...]\Booking.Web.dll' would override deployment item '[...]\Booking.Web.dll' directly or indirectly referenced by '[...]\Booking.Web.Tests.dll'

    Error: Cannot initialize the ASP.NET project 'Booking.Web' Exception was thrown: The website could not be configured correctly; getting ASP.NET process information failed. Requesting 'http://localhost:54131/VSEnterpriseHelper.axd' returned an error: The remote server returned an error: (500) Internal Server Error.

    I don't understand half of what it's complaining about. How do I get rid of these errors? (For reference: Booking.Web is an ASP.NET MVC 2 project, Booking.Web.Tests is a test project, and [...] is the full local path to the projects in my environment, in most cases the /bin/debug/ folder inside the Booking.Web project.)

    Read the article

  • How to change TestNG dataProvider order

    - by momad
    Hi, I am running hundreds of tests against a large publishing system and would like to parallelize the tests using TestNG. However, I cannot find any easy way of doing this. Each test case instantiates an instance of this publisher, sends some messages, waits for those messages to be published, then dumps out the contents of the publish queues and compares them against the expected outcome. Doing this with so many tests (even if I parallelize using threads) still takes a very long time to complete (1 day or more). We've found that in testing this sort of system, it's best to start up the system once, run all tests to send their messages, wait for publish to do its thing, dump all outputs, and then match outputs with tests and verify. For example, instead of the following:

        @Test
        public void testRule1() {
            Publisher pub = new Publisher();
            pub.sendRule(new Rule("test1-a"));
            sleep(10); // wait 10 seconds
            pub.dumpRules();
            verifyRule("test1-a");
        }

    We wanted to do something like the following:

        @Test
        public void testRule1(boolean sendMode) {
            if (sendMode) {
                this.pub.sendRule(new Rule("test1-a"));
            } else {
                verifyRule("test1-a");
            }
        }

    The idea is to have a dataProvider run through all the tests with sendMode = true, then perform dumpAllRules(), followed by running through all of the tests again with sendMode = false. The problem is, TestNG calls the same method twice in succession, once with sendMode = true and then with sendMode = false. Is there any way to accomplish this in TestNG? Thanks!
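    A sketch of one way to get that two-phase ordering without fighting the dataProvider (pub, Rule and verifyRule are the asker's own helpers; the group names are invented): TestNG group dependencies guarantee that every test in a depended-on group finishes before any dependent test starts, so the send pass, the dump, and the verify pass can live in separate methods:

        @Test(groups = "send")
        public void sendRule1() {
            pub.sendRule(new Rule("test1-a"));
        }

        @Test(groups = "dump", dependsOnGroups = "send")
        public void dumpAll() {
            pub.dumpRules();
        }

        @Test(groups = "verify", dependsOnGroups = "dump")
        public void verifyRule1() {
            verifyRule("test1-a");
        }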

    Read the article

  • How to implement or emulate an "abstract" OCUnit test class?

    - by Quinn Taylor
    I have a number of Objective-C classes organized in an inheritance hierarchy. They all share a common parent which implements all the behaviors shared among the children. Each child class defines a few methods that make it work, and the parent class raises an exception for the methods designed to be implemented/overridden by its children. This effectively makes the parent a pseudo-abstract class (since it's useless on its own) even though Objective-C doesn't explicitly support abstract classes. The crux of this problem is that I'm unit testing this class hierarchy using OCUnit, and the tests are structured similarly: one test class that exercises the common behavior, with a subclass corresponding to each of the child classes under test. However, running the test cases on the (effectively abstract) parent class is problematic, since the unit tests will fail in spectacular fashion without the key methods. (The alternative of repeating the common tests across 5 test classes is not really an acceptable option.) The non-ideal solution I've been using is to check (in each test method) whether the instance is the parent test class, and bail out if it is. This leads to repeated code in every test method, a problem that becomes increasingly annoying if one's unit tests are highly granular. In addition, all such tests are still executed and reported as successes, skewing the number of meaningful tests that were actually run. What I'd prefer is a way to signal to OCUnit "Don't run any tests in this class, only run them in its child classes." To my knowledge, there isn't (yet) a way to do that, something similar to a +(BOOL)isAbstractTest method I can implement/override. Any ideas on a better way to solve this problem with minimal repetition? Does OCUnit have any ability to flag a test class in this way, or is it time to file a Radar? Edit: Here's a link to the test code in question. Notice the frequent repetition of if (...) return; to start a method, including use of the NonConcreteClass() macro for brevity.
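    One hedged idea (not verified against every OCUnit version, and AbstractThingTests is a placeholder name): SenTestCase builds its test suite through +defaultTestSuite, so the abstract parent could override it to return an empty suite for itself while letting subclasses fall through to the default behaviour:

        + (id)defaultTestSuite {
            if (self == [AbstractThingTests class]) {
                // Don't schedule the abstract parent's tests; subclasses still get the full suite.
                return [SenTestSuite testSuiteWithName:@"Empty (abstract) suite"];
            }
            return [super defaultTestSuite];
        }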

    Read the article

  • What Scheme Does Ghuloum Use?

    - by Don Wakefield
    I'm trying to work my way through Compilers: Backend to Frontend (and Back to Front Again) by Abdulaziz Ghuloum. It seems abbreviated from what one would expect in a full course/seminar, so I'm trying to fill in the pieces myself. For instance, I have tried to use his testing framework in the R5RS flavor of DrScheme, but it doesn't seem to like the macro stuff:

        src/ghuloum/tests/tests-driver.scm:6:4: read: illegal use of open square bracket

    I've read his intro paper on the course, An Incremental Approach to Compiler Construction, which gives a great overview of the techniques used, and mentions a couple of Schemes with features one might want to implement for 'extra credit', but he doesn't mention the Scheme he uses in the course.

    Update: I'm still digging into the original question (investigating options such as Petit Scheme suggested by Eli below), but found an interesting link relating to Ghuloum's work, so I am including it here. Ikarus Scheme (http://en.wikipedia.org/wiki/Ikarus_(Scheme_implementation)) is the actual implementation of Ghuloum's ideas, and appears to have been part of his Ph.D. work. It's supposed to be one of the first implementations of R6RS. I'm trying to install Ikarus now, but the configure script doesn't want to recognize my system's install of libgmp.so, so my problems are still unresolved.

    Example: The following seems to work in PLT 2.4.2, running in DrEd using the Pretty Big language:

        (require lang/plt-pretty-big)
        (load "/Users/donaldwakefield/ghuloum/tests/tests-driver.scm")
        (load "/Users/donaldwakefield/ghuloum/tests/tests-1.1-req.scm")

        (define (emit-program x)
          (unless (integer? x) (error "---"))
          (emit " .text")
          (emit " .globl scheme_entry")
          (emit " .type scheme_entry, @function")
          (emit "scheme_entry:")
          (emit " movl $~s, %eax" x)
          (emit " ret"))

    Attempting to replace the require directive with #lang scheme results in the error message

        foo.scm:7:3: expand: unbound identifier in module in: emit

    which appears to be due to a failure to load tests-driver.scm. Attempting to use #lang r6rs disables the REPL, which I'd really like to use, so I'm going to try to continue with Pretty Big. My thanks to Eli Barzilay for his patient help.

    Read the article

  • How do I make the manifest available during a Maven/Surefire unittest run "mvn test" ?

    - by Ernst de Haan
    How do I make the manifest available during a Maven/Surefire unit test run ("mvn test")? I have an open-source project that I am converting from Ant to Maven, including its unit tests. Here's the source repository with the Maven project: http://github.com/znerd/logdoc My question pertains to the primary module, called "base". This module has a unit test that tests the behaviour of the static method getVersion() in the class org.znerd.logdoc.Library. This method returns:

        Library.class.getPackage().getImplementationVersion()

    The getImplementationVersion() method returns the value of a setting in the manifest file. So far, so good. I have tested this in the past and it works well, as long as the manifest is indeed available on the classpath at the path META-INF/MANIFEST.MF (either on the file system or inside a JAR file). Now my challenge is that the manifest file is not available when I run the unit tests with "mvn test". Surefire runs the unit tests, but my unit test fails with a message indicating that Library.getVersion() returned null. When I want to check the JAR, I find that it has not even been generated: Maven/Surefire runs the unit tests against the classes, before the resources are added to the classpath. So can I either run the unit tests against the JAR (implicitly requiring the JAR to be generated first), or can I make sure the resources (including the manifest file) are generated/copied under target/classes before the unit tests are run? Note that I use Maven 2.2.0 and Java 1.6.0_17 on Mac OS X 10.6.2, with JUnit 4.8.1.
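    For context, a sketch of where that manifest entry normally comes from (assuming the typical maven-jar-plugin setup): the Implementation-Version header is written into the JAR's MANIFEST.MF by the archiver during the package phase, which runs after test - one reason the value is missing while Surefire runs against target/classes:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <configuration>
            <archive>
              <manifest>
                <addDefaultImplementationEntries>true</addDefaultImplementationEntries>
              </manifest>
            </archive>
          </configuration>
        </plugin>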

    Read the article

  • Chicken-and-egg problem (restore database) when trying to write unit tests against SQL Server 2008

    - by Hamish Grubijan
    Ok, they are not unit tests but end-to-end tests. The setup is somewhat involved. The tests will use C# and an ODBC connection. Every test will try to clean up after itself, but every 20 tests or so (once per C# class) we need to do a full database restore. I do not think I can do that over an ODBC connection, according to this document: http://www.sql-server-performance.com/articles/dba/Obtain_Exclusive_Access_to_Restore_SQL_Server_p1.aspx

        Msg 6104, Level 16, State 1, Line 1
        Cannot use KILL to kill your own process.

    However, I would like to, so that 199 tests do not run amok because of a bad clean-up. Is there another way? Perhaps I can open a different "connection", such as using COM automation or something of that sort, and then kill all database connections from there? If so, how can I do that? Also, will the clients be able to re-connect automatically after a restore, or would I have to dismantle everything once every 20 tests or so? If you find this question confusing, please let me know what your questions are. Thanks!
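    For reference, a sketch of the usual "kick everyone off, then restore" T-SQL, issued from a separate maintenance connection (the database name and backup path are invented):

        ALTER DATABASE MyTestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        RESTORE DATABASE MyTestDb FROM DISK = 'C:\backups\MyTestDb.bak' WITH REPLACE;
        ALTER DATABASE MyTestDb SET MULTI_USER;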

    Read the article

  • Running Flash on a headless Solaris box

    - by Marty Pitt
    Our build server is a Solaris box, and I'm trying to run a suite of FlexUnit tests as part of the automated build process. This works by compiling a SWF movie with a suite of automated unit tests. The build script launches this movie, which automatically begins running the tests. Results of each test are sent back to the launching script across a port and written out to a local XML file. Once the tests are completed, the movie closes down and the build script interrogates the results to see if all the tests passed. The FlexUnit wiki provides information about how to achieve this on a Unix server, by using Xvnc to provide a virtual space for the Flash movie to run its tests in. I've provided this information to our sys admin team (along with the link to the article), and I've been told that because this is a Solaris box we can't use that approach - Xvnc isn't supported on Solaris. Unfortunately, I know very little about servers, *nix vs Solaris, or Xvnc. Can someone please provide some advice about how we can achieve the same outcome on a Solaris box?
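    If Xvnc is genuinely off the table, a similar trick (a sketch only; whether Xvfb is installed on this particular Solaris box is an assumption the admins would need to confirm) is to give the Flash player an Xvfb virtual display instead:

        Xvfb :99 -screen 0 1024x768x24 &
        DISPLAY=:99
        export DISPLAY
        # then launch the FlexUnit SWF / build step against this display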

    Read the article

  • Running RSpec on Google App Engine via JRuby

    - by Carl
    I'm trying to write some tests (RSpec) against the AppEngine and its datastore. I've tried to load the environment and tests via:

        appcfg.rb run -S spec app/tests/

    And I end up with the following error:

        spec:19: undefined method `bin_path' for Gem:Module (NoMethodError)

    I can run non-AppEngine specs just fine by running:

        spec app/tests/

    Any suggestions on how to get RSpec up and running with JRuby and Google App Engine would be greatly appreciated. Thank you.

    Read the article

  • Null reference to DataContext when testing an ASP.NET MVC app with NUnit

    - by user252160
    I have an ASP.NET MVC application with a separate project added for tests. I know the pluses and minuses of using a connection to the database when running unit tests, and I still want to use it. Yet every time I run the tests with the NUnit tool, they all fail because my DataContext is null. I heard something about having a separate config file for the test assembly, but I am not sure whether I did it properly, or whether that works at all.
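    On the separate-config-file idea: NUnit 2.x looks for a config file named after the test assembly (e.g. MyApp.Tests.dll.config - a made-up name) next to the DLL, so a sketch of a test-project App.config carrying the connection string the DataContext expects might look like:

        <?xml version="1.0"?>
        <configuration>
          <connectionStrings>
            <add name="MyAppConnectionString"
                 connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=MyAppDb;Integrated Security=True"
                 providerName="System.Data.SqlClient" />
          </connectionStrings>
        </configuration>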

    Read the article

  • Using Selenium IDE with random values

    - by Toby Hede
    Is it possible to create Selenium tests, using the Firefox plugin, that use randomly generated values to help with regression testing? The full story: I would like to help my clients do acceptance testing by providing them with a suite of tests that use some smarts to create random (or at least pseudo-random) values for the database. One of the issues with my Selenium IDE tests at the moment is that they have predefined values, which makes some types of testing problematic.
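    A sketch of the usual Selenium IDE approach (the locator and variable name are invented): storeEval runs a snippet of JavaScript and stores the result, which later commands can interpolate with ${...}:

        storeEval | Math.floor(Math.random() * 100000) | randomId
        type      | id=email                           | user_${randomId}@example.com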

    Read the article

  • Old operations master still thinks it is the "one"

    - by Doug
    Hi there, I have a domain with 3 AD servers; for now I'll just call them:

        AD01 (Win 2008 GC, operations master)
        AD02 (Win 2008 GC)
        AD03 (Win 2003 GC)

    A couple of months ago there were some hardware issues with AD01, so the operations master, PDC and Infrastructure Master roles were moved to AD02. All machines were on while this was happening:

        AD01 (Win 2008 GC)
        AD02 (Win 2008 GC, operations master)
        AD03 (Win 2003 GC)

    AD01 was then shut down for a month. Upon starting this machine up with replaced hardware (NIC and RAID card), I now have a weird problem: AD01 thinks it is still the operations master, according to AD on the local box, while AD02 and AD03 both think AD02 is the operations master. When running DCDIAG on AD01 I get a number of issues (listed below).

    When running "dcdiag /test:advertising" on AD01:

        Doing primary tests
        Testing server: Default-First-Site-Name\AD01
        Starting test: Advertising
        Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01.
        SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
        ......................... AD01 failed test Advertising
        Running partition tests on : ForestDnsZones
        Running partition tests on : DomainDnsZones
        Running partition tests on : Schema
        Running partition tests on : Configuration
        Running partition tests on : domain
        Running enterprise tests on : domain.local

    When running "dcdiag" on AD01 I get the following errors (excerpt of the final output):

        Testing server: Default-First-Site-Name\AD01
        Starting test: Advertising
        Warning: DsGetDcName returned information for \\ad02.domain.local, when we were trying to reach AD01.
        SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE.
        ......................... AD01 failed test Advertising
        Starting test: FrsEvent
        There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems.
        Starting test: NCSecDesc
        Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=domain,DC=local
        Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=domain,DC=local
        Starting test: Replications
        [Replications Check,Replications Check] Inbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_INBOUND_REPL"
        [Replications Check,AD01] Outbound replication is disabled. To correct, run "repadmin /options AD01 -DISABLE_OUTBOUND_REPL"

    So the problem appears to be that when I moved the operations master role, AD01 never got the memo, and now that it's started up, all the other AD servers don't think it's the boss anymore when it tries to replicate etc. So I really need to manually update AD01 so that it knows who the operations master, Infrastructure Master and PDC are - but I'm not having any luck. I've been googling for nearly a day and all solutions lead to "the cake is a lie". Your ninja skills will be greatly appreciated.

    Read the article

  • Active Directory Partition Error

    - by BLAKE
    Right now my Active Directory is failing a dcdiag test. I can find no info online about this error. When I run dcdiag /test:crossrefvalidation, I get the output:

        .... Doing primary tests
        Testing server: Default-First-Site-Name\ad01
        Running partition tests on : ForestDnsZones
        Starting test: CrossRefValidation
        ......................... ForestDnsZones passed test CrossRefValidation
        Running partition tests on : DomainDnsZones
        Starting test: CrossRefValidation
        ......................... DomainDnsZones passed test CrossRefValidation
        Running partition tests on : Schema
        Starting test: CrossRefValidation
        ......................... Schema passed test CrossRefValidation
        Running partition tests on : Configuration
        Starting test: CrossRefValidation
        ......................... Configuration passed test CrossRefValidation
        Running partition tests on : mydomain
        Starting test: CrossRefValidation
        ......................... mydomain passed test CrossRefValidation
        Running partition tests on : t
        Starting test: CrossRefValidation
        This cross-ref has a non-standard dNSRoot attribute.
        Cross-ref DN: CN=a3a24d3a-4782-460b-9148-86ac2d86b9ae,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
        nCName attribute (Partition name): DC=t
        Bad dNSRoot attribute: dc01.mydomain.com
        Check with your network administrator to make sure this dNSRoot attribute is correct, and if not please change the attribute to the value below.
        dNSRoot should be: t
        It appears this partition (DC=t) failed to get completely created.
        This cross-ref (CN=a3a24d3a-4782-460b-9148-86ac2d86b9ae,CN=Partitions,CN=Configuration,DC=mydomain,DC=com) is dead and should be removed from the Active Directory.
        ......................... t failed test CrossRefValidation
        ....

    I used LDP from the Windows Support Tools. I searched for the dnsRoot attribute in "cn=partitions,cn=configuration,dc=mydomain,dc=com", with the filter "(&(objectcategory=crossref)(systemFlags:1.2.840.113556.1.4.803:=5))". I got the result:

        ***Searching...
        ldap_search_s(ld, "cn=partitions,CN=Configuration,DC=mydomain,DC=com", 1, "(&(objectcategory=crossref)(systemFlags:1.2.840.113556.1.4.803:=5))", attrList, 0, &msg)
        Result <0>: (null)
        Matched DNs:
        Getting 3 entries:
        >> Dn: CN=65502be3-fc90-442a-83d8-4b3b91e82439,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
        1> dnsRoot: ForestDnsZones.mydomain.com;
        >> Dn: CN=a3a24d3a-4782-460b-9148-86ac2d86b9ae,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
        1> dnsRoot: ad01.mydomain.com;
        >> Dn: CN=f0ef5771-6225-4984-acd9-c08f582eb4e2,CN=Partitions,CN=Configuration,DC=mydomain,DC=com
        1> dnsRoot: DomainDnsZones.mydomain.com;

    It looks like the bad partition has the name of my first domain controller, ad01.mydomain.com. I have googled for a while and have not been able to find any help or documentation about application partitions in Active Directory. Does anyone have advice on how to clean up this partition (or what the partition is for)? Does anyone know the repercussions of deleting this partition?

    Read the article

  • Code Contracts: Unit testing contracted code

    - by DigiMortal
    Code contracts and unit tests are not replacements for each other. They each have a different purpose and a different nature. It does not matter whether you are using code contracts or not – you still have to write tests for your code. In this posting I will show you how to unit test code with contracts. In my previous posting about code contracts I showed how to avoid ContractExceptions that are defined in the code contracts runtime and are not accessible to us at design time. That was one step toward making my randomizer testable. In this posting I will complete the mission.

    Problems with the current code

    This is my current code.

        public class Randomizer
        {
            public static int GetRandomFromRangeContracted(int min, int max)
            {
                Contract.Requires<ArgumentOutOfRangeException>(
                    min < max,
                    "Min must be less than max"
                );

                Contract.Ensures(
                    Contract.Result<int>() >= min &&
                    Contract.Result<int>() <= max,
                    "Return value is out of range"
                );

                var rnd = new Random();
                return rnd.Next(min, max);
            }
        }

    As you can see, this code has some problems:

    - the Randomizer class is static and cannot be instantiated, and we cannot move this class between components if we need to;
    - GetRandomFromRangeContracted() is not fully testable because we cannot currently affect the random number generator's output, and therefore we cannot test the post-condition.

    Now let's solve these problems.

    Making the randomizer testable

    As a first step I made Randomizer a class that must be instantiated. This is a simple thing to do. Now let's solve the problem with the Random class. To make Randomizer testable I define an IRandomGenerator interface and a RandomGenerator class. The public constructor of Randomizer accepts an IRandomGenerator as an argument.

        public interface IRandomGenerator
        {
            int Next(int min, int max);
        }

        public class RandomGenerator : IRandomGenerator
        {
            private Random _random = new Random();

            public int Next(int min, int max)
            {
                return _random.Next(min, max);
            }
        }

    And here is our Randomizer after its total make-over.

        public class Randomizer
        {
            private IRandomGenerator _generator;

            private Randomizer()
            {
                _generator = new RandomGenerator();
            }

            public Randomizer(IRandomGenerator generator)
            {
                _generator = generator;
            }

            public int GetRandomFromRangeContracted(int min, int max)
            {
                Contract.Requires<ArgumentOutOfRangeException>(
                    min < max,
                    "Min must be less than max"
                );

                Contract.Ensures(
                    Contract.Result<int>() >= min &&
                    Contract.Result<int>() <= max,
                    "Return value is out of range"
                );

                return _generator.Next(min, max);
            }
        }

    It may seem inconvenient to instantiate Randomizer now, but you can always use DI/IoC containers and break compiled dependencies between the components of your system.

    Writing tests for the randomizer

    IRandomGenerator solved the problem of testing the post-condition. Now it is time to write tests for the Randomizer class. Writing tests for contracted code is not easy. The main problem is still the ContractException that we are not able to access, yet it is the main exception we get as soon as contracts fail. Although pre-conditions are able to throw exceptions of the type we want, we cannot do much when post-conditions fail. We have to use the Contract.ContractFailed event, and this event is called for every contract failure. This way we find ourselves in a situation where supporting the input interface well makes it impossible to support the output interface well, and vice versa. ContractFailed is a nasty hack and it works in a pretty weird way. Although the documentation says that ContractFailed is a good choice for testing contracts, it is still pretty painful. As a last resort I got the tests working almost normally when I wrapped them up. Can you remember a similar solution from the times of Visual Studio 2008 unit tests? I cannot understand how Microsoft was able to mess up testing again.

        [TestClass]
        public class RandomizerTest
        {
            private Mock<IRandomGenerator> _randomMock;
            private Randomizer _randomizer;
            private string _lastContractError;

            public TestContext TestContext { get; set; }

            public RandomizerTest()
            {
                Contract.ContractFailed += (sender, e) =>
                {
                    e.SetHandled();
                    e.SetUnwind();

                    throw new Exception(e.FailureKind + ": " + e.Message);
                };
            }

            [TestInitialize()]
            public void RandomizerTestInitialize()
            {
                _randomMock = new Mock<IRandomGenerator>();
                _randomizer = new Randomizer(_randomMock.Object);
                _lastContractError = string.Empty;
            }

            #region InputInterfaceTests
            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_not_less_than_max()
            {
                try
                {
                    _randomizer.GetRandomFromRangeContracted(100, 10);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }
            }

            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_equal_to_max()
            {
                try
                {
                    _randomizer.GetRandomFromRangeContracted(10, 10);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }
            }

            [TestMethod]
            public void GetRandomFromRangeContracted_should_work_when_min_is_less_than_max()
            {
                int minValue = 10;
                int maxValue = 100;
                int returnValue = 50;

                _randomMock.Setup(r => r.Next(minValue, maxValue))
                    .Returns(returnValue)
                    .Verifiable();

                var result = _randomizer.GetRandomFromRangeContracted(minValue, maxValue);

                _randomMock.Verify();
                Assert.AreEqual<int>(returnValue, result);
            }
            #endregion

            #region OutputInterfaceTests
            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_less_than_min()
            {
                int minValue = 10;
                int maxValue = 100;
                int returnValue = 7;

                _randomMock.Setup(r => r.Next(10, 100))
                    .Returns(returnValue)
                    .Verifiable();

                try
                {
                    _randomizer.GetRandomFromRangeContracted(minValue, maxValue);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }

                _randomMock.Verify();
            }

            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_more_than_max()
            {
                int minValue = 10;
                int maxValue = 100;
                int returnValue = 102;

                _randomMock.Setup(r => r.Next(10, 100))
                    .Returns(returnValue)
                    .Verifiable();

                try
                {
                    _randomizer.GetRandomFromRangeContracted(minValue, maxValue);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }

                _randomMock.Verify();
            }
            #endregion
        }

    Although these tests are pretty awful and contain hacks, we are at least now able to make sure that our code works as expected. Here is the test list after running these tests.

    Conclusion

    Code contracts are very new stuff in the Visual Studio world and, as a young technology, they have some problems – like all other new bits and bytes in the world. As you saw, making our contracted code testable is easy only as long as pre-conditions are considered. When we start dealing with post-conditions we end up with hacked tests. I hope that future versions of code contracts will solve the error handling issues so that testing contracted code becomes easier than it is right now.

    Read the article

< Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >