Search Results

Search found 648 results on 26 pages for 'nunit mocks'.

Page 24/26

  • Lightcore IoC is returning the same instance when it should give a new one

    - by Anthony
    I have the following code using the lightcore IoC container, but it fails with "NUnit.Framework.AssertionException: Contained objects are equal", which indicates that objects that should be transient are not. Is this a bug in lightcore, or am I doing it wrong?

        [Test]
        public void JellybeanDispenserHasNewInstanceEachTimeWithDefault()
        {
            var builder = new ContainerBuilder();
            builder.Register<IJellybeanDispenser, VanillaJellybeanDispenser>();
            builder.Register<SweetVendingMachine>().ControlledBy<TransientLifecycle>();
            builder.Register<SweetShop>();
            builder.DefaultControlledBy<TransientLifecycle>();

            IContainer container = builder.Build();

            SweetShop sweetShop = container.Resolve<SweetShop>();
            SweetShop sweetShop2 = container.Resolve<SweetShop>();

            Assert.IsFalse(ReferenceEquals(sweetShop, sweetShop2), "Root objects are equal");
            Assert.IsFalse(ReferenceEquals(sweetShop.SweetVendingMachine, sweetShop2.SweetVendingMachine), "Contained objects are equal");
            Assert.IsFalse(ReferenceEquals(sweetShop.SweetVendingMachine.JellybeanDispenser, sweetShop2.SweetVendingMachine.JellybeanDispenser), "services are equal");
        }

    PS: I would tag this question with "lightcore", but suddenly my reputation isn't good enough to make a new tag. Huh.

    Read the article

  • WPF progress bar slows serial port communications 10x... how could that be possible?

    - by D_Guidi
    I know this could look like a dumb question, but here's my problem. I have a worker dialog that "hides" a BackgroundWorker, so I do my job in a worker thread, report progress in the standard way, and then show the results in my WPF program. The dialog contains a simple animated gif and a standard WPF progress bar, and when progress is notified I set its Value property. All looks as usual, and it works well for any kind of job: web service calls, DB queries, background processing and so on. For my job we also use many "couplers" (card readers that read data from smart cards), which are managed with native C code that accesses the serial port directly (so I don't use the .NET SerialPort class). In my NUnit tests I read a sample card in 10 seconds, but in my actual program, running under the BackgroundWorker and showing my worker dialog, the SAME job takes 1 minute 30 seconds. I struggled with this for days until I decided to remove the worker dialog, and without the dialog I get the same performance as in the tests! So I investigated, and it's not the dialog, and not the animated gif, but the WPF progress bar! Simply having a progress bar shown (no animation, no Value set called, nothing at all) slows serial port communications. Sounds incredible? I've tested this behavior and it's exactly what happens.
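    For reference, here is a minimal sketch of the setup described above (the names ReadCardChunk and progressBar are illustrative, not from the original post): a BackgroundWorker does the blocking card-read work on a worker thread and reports progress to the WPF ProgressBar.

        // Minimal sketch of the described setup; assumes a WPF window
        // containing a ProgressBar named progressBar.
        var worker = new System.ComponentModel.BackgroundWorker { WorkerReportsProgress = true };

        worker.DoWork += (s, e) =>
        {
            for (int i = 0; i <= 100; i += 10)
            {
                ReadCardChunk();          // hypothetical blocking native serial-port call
                worker.ReportProgress(i); // notify the UI of progress
            }
        };

        worker.ProgressChanged += (s, e) =>
        {
            // BackgroundWorker marshals this handler back to the UI thread.
            progressBar.Value = e.ProgressPercentage;
        };

        worker.RunWorkerAsync();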

    Read the article

  • ASP.Net MVC TDD using Moq

    - by Nicholas Murray
    I am trying to learn TDD/BDD using NUnit and Moq. The design I have been following passes a DataService class to my controller to provide access to repositories. I would like to mock the DataService class to allow testing of the controllers. There are lots of examples of mocking a repository passed to a controller, but I can't work out how to mock a DataService class in this scenario. Could someone please explain how to implement this? Here's a sample of the relevant code:

        [Test]
        public void Can_View_A_Single_Page_Of_Lists()
        {
            var dataService = new Mock<DataService>();
            var controller = new ListsController(dataService);
            ...
        }

        namespace Services
        {
            public class DataService
            {
                private readonly IKeyedRepository<int, FavList> FavListRepository;
                private readonly IUnitOfWork unitOfWork;

                public FavListService FavLists { get; private set; }

                public DataService(IKeyedRepository<int, FavList> FavListRepository, IUnitOfWork unitOfWork)
                {
                    this.FavListRepository = FavListRepository;
                    this.unitOfWork = unitOfWork;
                    FavLists = new FavListService(FavListRepository);
                }

                public void Commit()
                {
                    unitOfWork.Commit();
                }
            }
        }

        namespace MyListsWebsite.Controllers
        {
            public class ListsController : Controller
            {
                private readonly DataService dataService;

                public ListsController(DataService dataService)
                {
                    this.dataService = dataService;
                }

                public ActionResult Index()
                {
                    var myLists = dataService.FavLists.All().ToList();
                    return View(myLists);
                }
            }
        }
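    One common way out (a sketch, not an answer from the thread): Moq can only override virtual members, so either mark DataService's members virtual or extract an interface and have the controller depend on that. Note also that the mock itself must be passed as dataService.Object, not as the Mock<T> wrapper. Assuming an extracted IDataService:

        public interface IDataService
        {
            FavListService FavLists { get; }
            void Commit();
        }

        // DataService implements IDataService; the controller depends on the interface.
        public class ListsController : Controller
        {
            private readonly IDataService dataService;

            public ListsController(IDataService dataService)
            {
                this.dataService = dataService;
            }
        }

        [Test]
        public void Can_View_A_Single_Page_Of_Lists()
        {
            // FavLists returns a concrete FavListService; to stub its All()
            // call it may need the same treatment (interface or virtual members).
            var dataService = new Mock<IDataService>();
            var controller = new ListsController(dataService.Object); // note .Object
            ...
        }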

    Read the article

  • C# error casting from double to int32

    - by orfix
        using NUF = NUnit.Framework;

        [NUF.Test]
        public void DifferentCastingTest()
        {
            NUF.Assert.That((int)0.499999D, NUF.Is.EqualTo(0));
            NUF.Assert.That((int)0.500000D, NUF.Is.EqualTo(0)); // !!! row 1
            NUF.Assert.That((int)1.499999D, NUF.Is.EqualTo(1));
            NUF.Assert.That((int)1.500000D, NUF.Is.EqualTo(1)); // !!! row 2

            NUF.Assert.That(System.Convert.ToInt32(0.499999D), NUF.Is.EqualTo(0));
            NUF.Assert.That(System.Convert.ToInt32(0.500000D), NUF.Is.EqualTo(0)); // !!!
            NUF.Assert.That(System.Convert.ToInt32(1.499999D), NUF.Is.EqualTo(1));
            NUF.Assert.That(System.Convert.ToInt32(1.500000D), NUF.Is.EqualTo(2)); // !!! row 3
        }

    The same double value (1.5D) is converted in different ways by the cast and by Convert.ToInt32 (see rows 2 and 3), and two doubles with the same fractional part (0.5 and 1.5) are rounded in different directions (see rows 1 and 2). Is it a bug?
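    A note on the semantics: this matches documented behavior rather than a bug. An explicit cast from double to int truncates toward zero, while Convert.ToInt32 rounds, and rounds exact midpoints to the nearest even integer (banker's rounding). A short sketch of the distinction:

        NUF.Assert.That((int)1.500000D, NUF.Is.EqualTo(1));                    // cast truncates toward zero
        NUF.Assert.That(System.Convert.ToInt32(1.500000D), NUF.Is.EqualTo(2)); // 1.5 rounds to nearest even -> 2
        NUF.Assert.That(System.Convert.ToInt32(0.500000D), NUF.Is.EqualTo(0)); // 0.5 rounds to nearest even -> 0
        NUF.Assert.That(System.Convert.ToInt32(2.500000D), NUF.Is.EqualTo(2)); // 2.5 rounds to nearest even -> 2

        // To get away-from-zero midpoint rounding, ask for it explicitly:
        NUF.Assert.That(System.Math.Round(1.5D, System.MidpointRounding.AwayFromZero), NUF.Is.EqualTo(2));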

    Read the article

  • Is the Subversion 'stack' a realistic alternative to Team Foundation Server?

    - by Robert S.
    I'm evaluating Microsoft Team Foundation Server for my customer, who currently uses Visual SourceSafe and nothing else. They have explicitly expressed a desire to implement a more rigid and process-driven environment, as their application is in production and they have future releases to consider. The particular areas I'm trying to cover are:

    - Configuration management (e.g., source control)
    - Change management (workflow and doco for change requests and tasks)
    - Release management (builds and deployments)
    - Incident and problem management (issues and bugs)
    - Document management (similar to source control, but available via web)
    - Code analysis constraints on check-ins
    - A testing framework
    - Reporting
    - Visual Studio 2008 integration

    TFS does all of these things quite well, but it's expensive and complex to maintain, and the inexpensive Workgroup edition doesn't scale. We don't get TFS as part of our MSDN subscription. Those problems can be overcome, but before I tell my customer to go the TFS route, which in itself isn't a terrible thing, I wanted to evaluate the alternatives. I know Subversion is often suggested for its configuration management/source control, but what about the other areas? Would a combination of Subversion/NUnit/Wiki/CruiseControl/NAnt/something else satisfy all of these requirements? What tools do I need to include in my evaluation? Or should I just bite the bullet and go with TFS since we're already invested in the Microsoft stack?

    Read the article

  • Test-Drive ASP.NET MVC Review

    - by Ben Griswold
    A few years back I started dallying with test-driven development, but I never fully committed to the practice. This wasn't because I didn't believe in the value of TDD; it was more a matter of not completely understanding how to incorporate "test first" into my everyday development. Back in my web forms days, I could point fingers at the framework for my ignorance and laziness. After all, web forms weren't exactly designed for testability, so who could blame me for not embracing TDD in those conditions, right? But when I switched to ASP.NET MVC, I quickly found myself fresh out of excuses, and it became instantly clear that it was time to get my head around red-green-refactor once and for all, or I would regretfully miss out on one of the biggest selling points the new framework had to offer. I have previously written about how I learned ASP.NET MVC. It was primarily hands-on learning, but I did read a couple of ASP.NET MVC books along the way. The books I read dedicated a chapter or two to TDD, and they certainly addressed the benefits of TDD and how MVC was designed with testability in mind, but TDD was merely an afterthought compared to, well, teaching one how to code the model, view and controller. This approach made some sense, and I learned a bunch about MVC from those books, but when it came to TDD the books were just a teaser and an opportunity missed. But then I got lucky: Jonathan McCracken contacted me and asked if I'd review his book, Test-Drive ASP.NET MVC, and it was just what I needed to get over the TDD hump. As the title suggests, Test-Drive ASP.NET MVC takes a different approach to learning MVC, as it focuses on testing right from the very start. McCracken wastes no time and swiftly familiarizes us with the framework by building out a trivial Quote-O-Matic application, and then dedicates the better part of his book to testing first: first by explaining TDD, and then by coding a full-featured Getting Organized application inspired by David Allen's popular book, Getting Things Done. If you are a learn-by-example kind of coder (like me), you will instantly appreciate and enjoy McCracken's style: it's fast-moving, pragmatic and focused on only the most relevant information required to get you going with ASP.NET MVC and TDD. The book continues with the test-first theme, but McCracken moves away from the sample application and incorporates other practical skills like persisting models with NHibernate, leveraging Inversion of Control with the IControllerFactory, and building a RESTful web service. What I most appreciated about this section was McCracken's use of and praise for open source libraries like Rhino Mocks, SQLite and StructureMap (to name just a few) and productivity tools like ReSharper, Web Platform Installer and the ASP.NET SQL Server Setup Wizard. McCracken's emphasis on real-world, pragmatic development is clearly demonstrated in every tool choice, straightforward code block and developer tip. Whether one is already familiar with the tools and tips or not, McCracken's thought process is easily understood and appreciated. The final section of the book walks the reader through security and deployment: everything from error handling and logging with ELMAH, to ASP.NET Health Monitoring, to using MSBuild with automated builds, to the deployment of ASP.NET MVC to various web environments. These chapters, like those prior, offer enough information and explanation to simply help you get the job done.
    Do I believe Test-Drive ASP.NET MVC will turn you into an expert MVC developer overnight? Well, no. I don't think any book can make that claim. If that were possible, I think book list prices would skyrocket! That said, Test-Drive ASP.NET MVC provides a solid foundation and a unique (and dare I say necessary) approach to learning ASP.NET MVC. Along the way McCracken shares loads of very practical software development tips and references numerous tools and libraries. The bottom line is that it's a great ASP.NET MVC primer; if you're new to ASP.NET MVC, it's just what you need to get started. Do I believe Test-Drive ASP.NET MVC will give you everything you need to start employing TDD in your everyday development? Well, I used to think that learning TDD required a lot of practice and, if you're lucky enough, the guidance of a mentor or coach. I used to think that one couldn't learn TDD from a book alone. Well, I'm still no pro, but I'm testing first now, and Jonathan McCracken and his book, Test-Drive ASP.NET MVC, played a big part in making this happen. If you are an MVC developer and a TDD newb, Test-Drive ASP.NET MVC is just the book for you.

    Read the article

  • TDD and WCF behavior

    - by Frederic Hautecoeur
    Some weeks ago I wanted to develop a WCF behavior using TDD. I lost some time trying to use mocks. After a while I decided to just use a host and a client. I don't like this approach, but so far I haven't found a good and fast solution for unit testing a WCF behavior. To implement my solution I had to:

    - Create a dummy service definition;
    - Create the dummy service implementation;
    - Create a host;
    - Create a client in my test;
    - Create and add the behavior.

    Dummy Service Definition

    This is just a simple service, composed of an interface and a simple implementation. The structure is aimed to be easily customizable for my future needs.

    Using clauses:

        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Channels;

    The DataContract:

        [DataContract()]
        public class MyMessage
        {
            [DataMember()]
            public string MessageString;
        }

    The request MessageContract:

        [MessageContract()]
        public class RequestMessage
        {
            [MessageHeader(Name = "MyHeader", Namespace = "http://dummyservice/header", Relay = true)]
            public string myHeader;

            [MessageBodyMember()]
            public MyMessage myRequest;
        }

    The response MessageContract:

        [MessageContract()]
        public class ResponseMessage
        {
            [MessageHeader(Name = "MyHeader", Namespace = "http://dummyservice/header", Relay = true)]
            public string myHeader;

            [MessageBodyMember()]
            public MyMessage myResponse;
        }

    The ServiceContract:

        [ServiceContract(Name = "DummyService", Namespace = "http://dummyservice", SessionMode = SessionMode.Allowed)]
        interface IDummyService
        {
            [OperationContract(Action = "Perform", IsOneWay = false, ProtectionLevel = System.Net.Security.ProtectionLevel.None)]
            ResponseMessage DoThis(RequestMessage request);
        }

    Dummy Service Implementation

        public class DummyService : IDummyService
        {
            #region IDummyService Members
            public ResponseMessage DoThis(RequestMessage request)
            {
                ResponseMessage response = new ResponseMessage();
                response.myHeader = "Response";
                response.myResponse = new MyMessage();
                response.myResponse.MessageString =
                    string.Format("Header:<{0}> and Request was <{1}>",
                        request.myHeader, request.myRequest.MessageString);
                return response;
            }
            #endregion
        }

    Host Creation

    The simplest host implementation, using a named pipe binding. The GetBinding method creates a binding for the host and can be used to create the same binding for the client.

        public static class TestHost
        {
            internal static string hostUri = "net.pipe://localhost/dummy";

            // Create Host method.
            internal static ServiceHost CreateHost()
            {
                ServiceHost host = new ServiceHost(typeof(DummyService));

                // Creating Endpoint
                Uri namedPipeAddress = new Uri(hostUri);
                host.AddServiceEndpoint(typeof(IDummyService), GetBinding(), namedPipeAddress);

                return host;
            }

            // Binding Creation method.
            internal static Binding GetBinding()
            {
                NamedPipeTransportBindingElement namedPipeTransport = new NamedPipeTransportBindingElement();
                TextMessageEncodingBindingElement textEncoding = new TextMessageEncodingBindingElement();

                return new CustomBinding(textEncoding, namedPipeTransport);
            }

            // Close Method.
            internal static void Close(ServiceHost host)
            {
                if (null != host)
                {
                    host.Close();
                    host = null;
                }
            }
        }

    Checking the service

    A simple test to check the plumbing:
        [TestMethod]
        public void TestService()
        {
            using (ServiceHost host = TestHost.CreateHost())
            {
                host.Open();

                using (ChannelFactory<IDummyService> channel =
                    new ChannelFactory<IDummyService>(TestHost.GetBinding(),
                        new EndpointAddress(TestHost.hostUri)))
                {
                    IDummyService svc = channel.CreateChannel();
                    try
                    {
                        RequestMessage request = new RequestMessage();
                        request.myHeader = Guid.NewGuid().ToString();
                        request.myRequest = new MyMessage();
                        request.myRequest.MessageString = "I want some beer.";

                        ResponseMessage response = svc.DoThis(request);
                    }
                    catch (Exception ex)
                    {
                        Assert.Fail(ex.Message);
                    }
                }
                host.Close();
            }
        }

    Running the test should show that the client and the host are working fine. So far so good.

    Adding the Behavior

    Add a reference to the behavior project and add the using entry in the test class. We just need to add the behavior to the service host:

        [TestMethod]
        public void TestService()
        {
            using (ServiceHost host = TestHost.CreateHost())
            {
                host.Description.Behaviors.Add(new MyBehavior());
                host.Open();
                ...

    If you set a breakpoint in your behavior and run the test in debug mode, you will hit the breakpoint. In this case I used a service behavior. To add an endpoint behavior, you have to add it to the endpoints:

        host.Description.Endpoints[0].Behaviors.Add(new MyEndpointBehavior());

    To add a contract or an operation behavior, a custom attribute should work on the service contract definition. I haven't tried that yet.

    All the code provided in this blog and in the following files is for sample use.

    Improvements

    I don't like instantiating a client and a service to test my behaviors, but so far I have not found an easier way to do it. Today I pass a type of endpoint to the host creator and it creates the right binding type. This allows me to easily switch between bindings at will. I have used the same approach to test MEX endpoints; another post should come later for this. Enjoy!

    Read the article

  • Liskov Substitution Principle and the Oft Forgot Third Wheel

    - by Stacy Vicknair
    Liskov Substitution Principle (LSP) is a principle of object-oriented programming that many might be familiar with from the SOLID principles mnemonic from Uncle Bob Martin. The principle highlights the relationship between a type and its subtypes, and, according to Wikipedia, is defined by Barbara Liskov and Jeannette Wing as the following principle:

        Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S where S is a subtype of T.

    Rectangles gonna rectangulate

    The iconic example of this principle is illustrated with the relationship between a rectangle and a square. Let's say we have a class named Rectangle that has a property to set its width and a property to set its height.

        Public Class Rectangle
            Overridable Property Width As Integer
            Overridable Property Height As Integer
        End Class

    We all hear at some point that inheritance models an "IS A" relationship, and by gosh we all know a square IS A rectangle. So let's make a Square class that inherits from Rectangle. However, squares maintain the same length on every side, so let's override and add that behavior.

        Public Class Square
            Inherits Rectangle

            Private _sideLength As Integer

            Public Overrides Property Width As Integer
                Get
                    Return _sideLength
                End Get
                Set(value As Integer)
                    _sideLength = value
                End Set
            End Property

            Public Overrides Property Height As Integer
                Get
                    Return _sideLength
                End Get
                Set(value As Integer)
                    _sideLength = value
                End Set
            End Property
        End Class

    Now, say we had the following test:

        Public Sub SetHeight_DoesNotAffectWidth(rectangle As Rectangle)
            'arrange
            Dim expectedWidth = 4
            rectangle.Width = 4

            'act
            rectangle.Height = 7

            'assert
            Assert.AreEqual(expectedWidth, rectangle.Width)
        End Sub

    If we pass in a rectangle, this test passes just fine. What if we pass in a square? This is where we see the violation of Liskov's principle! A square might "IS A" to a rectangle, but we have different expectations of how a rectangle should function than of how a square should!

    Great expectations

    Here's where we pat ourselves on the back, take a victory lap around the office, and tell everyone how we understand LSP like a boss. And all is good... until we start trying to apply it to our work. If I can't even change functionality on a simple setter without breaking the expectations on a parent class, what can I do with subtyping? Did Liskov just tell me to never touch subtyping again? The short answer: NO, SHE DIDN'T. When I first learned LSP (and the same goes for those I've talked with), I overlooked a very important but not appropriately stressed quality of the principle: our expectations. Our inclination is to want a logical catch-all, where we can easily apply this principle, wipe our hands, drop the mic and exit stage left. That's not the case, because in every different programming scenario our expectations of the parent class or type will be different. We have to set reasonable expectations on the behaviors that we expect out of the parent, then make sure that those expectations are met by the child. Any expectations not explicitly expected of the parent aren't expected of the child either, and don't register as a violation of LSP that prevents implementation.
    You can see the flexibility mentioned in the Wikipedia article itself:

        A typical example that violates LSP is a Square class that derives from a Rectangle class, assuming getter and setter methods exist for both width and height. The Square class always assumes that the width is equal with the height. If a Square object is used in a context where a Rectangle is expected, unexpected behavior may occur because the dimensions of a Square cannot (or rather should not) be modified independently. This problem cannot be easily fixed: if we can modify the setter methods in the Square class so that they preserve the Square invariant (i.e., keep the dimensions equal), then these methods will weaken (violate) the postconditions for the Rectangle setters, which state that dimensions can be modified independently. Violations of LSP, like this one, may or may not be a problem in practice, depending on the postconditions or invariants that are actually expected by the code that uses classes violating LSP. Mutability is a key issue here. If Square and Rectangle had only getter methods (i.e., they were immutable objects), then no violation of LSP could occur.

    What this means is that the above situation with a rectangle and a square can be acceptable if we do not have the expectation for width to leave height unaffected, or vice versa, in our application.

    Conclusion – the oft forgot third wheel

    Liskov Substitution Principle is meant to act as guidance and warn us against unexpected behaviors. Objects can be stateful, and as a result we can end up with unexpected situations if we don't code carefully. Specifically when subclassing, make sure that the subclass meets the expectations held of its parent. Don't let LSP make you think you cannot deviate from the behaviors of the parent; understand that LSP is meant to highlight the importance of not only the parent and the child class, but also of the expectations WE set for the parent class and the necessity of meeting those expectations in order to help prevent sticky situations.

    Code examples, in both VB and C#
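    The post links code samples in both VB and C#; for reference, here is a C# rendering of the same Rectangle/Square example (a sketch that directly mirrors the VB above):

        public class Rectangle
        {
            public virtual int Width { get; set; }
            public virtual int Height { get; set; }
        }

        public class Square : Rectangle
        {
            private int _sideLength;

            // Both dimensions share one backing field: the Square invariant.
            public override int Width
            {
                get { return _sideLength; }
                set { _sideLength = value; }
            }

            public override int Height
            {
                get { return _sideLength; }
                set { _sideLength = value; }
            }
        }

        // Passes for a Rectangle, fails for a Square: setting Height silently changes Width.
        public void SetHeight_DoesNotAffectWidth(Rectangle rectangle)
        {
            var expectedWidth = 4;
            rectangle.Width = 4;

            rectangle.Height = 7;

            Assert.AreEqual(expectedWidth, rectangle.Width);
        }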

    Read the article

  • Mocking property sets

    - by mehfuzh
    In this post, I will show how you can mock property sets with your expected values, or even actions, using JustMock. To begin, we have a sample interface:

        public interface IFoo
        {
            int Value { get; set; }
        }

    Now, we can create a mock that will throw on any call other than the ones expected (generally, a strict mock), and we can do it like this:

        bool expected = false;
        var foo = Mock.Create<IFoo>(BehaviorMode.Strict);
        Mock.ArrangeSet(() => { foo.Value = 1; }).DoInstead(() => expected = true);

        foo.Value = 1;

        Assert.True(expected);

    Here, the method for setting up our expectation for a set is Mock.ArrangeSet, where we can directly state our expectations, or can even use matchers, like:

        var foo = Mock.Create<IFoo>(BehaviorMode.Strict);

        Mock.ArrangeSet(() => foo.Value = Arg.Matches<int>(x => x > 3));

        foo.Value = 4;
        foo.Value = 5;

        Assert.Throws<MockException>(() => foo.Value = 3);

    In this example, any set of Value that doesn't satisfy the matcher expression will throw a MockException, since this is a strict mock. But what about loose mocks, where we also have to assert the set? Here, let's take an interface with an indexed property. Indexers are treated in the same way as properties, since basic indexers let you access your class as if it were an array.

        public interface IFooIndexed
        {
            string this[int key] { get; set; }
        }

    We want to set up a value for a particular index, then pass that mock to some implementer where it will actually be called. Once done, we want to assert that it has been invoked properly.

        var foo = Mock.Create<IFooIndexed>();

        Mock.ArrangeSet(() => foo[0] = "ping");

        foo[0] = "ping";

        Mock.AssertSet(() => foo[0] = "ping");

    In the above example, both values are user-defined. It might happen that we want to make it more dynamic. In this example, I set it up for a set with any value, and finally check whether it was set with the one I am looking for:

        var foo = Mock.Create<IFooIndexed>();

        Mock.ArrangeSet(() => foo[0] = Arg.Any<string>());

        foo[0] = "ping";

        Mock.AssertSet(() => foo[0] = Arg.Matches<string>(x => string.Compare("ping", x) == 0));

    This is more or less it for mocking user sets, but we can further have a particular set throw an exception, or run our own task, like:

        Mock.ArrangeSet(() => foo.MyProperty = 10).Throws(new ArgumentException());

    Or:

        bool expected = false;
        var foo = Mock.Create<IFoo>(BehaviorMode.Strict);
        Mock.ArrangeSet(() => { foo.Value = 1; }).DoInstead(() => expected = true);

        foo.Value = 1;

        Assert.True(expected);

    Or call the original setter; in this example it will throw a NotImplementedException:

        var foo = Mock.Create<FooAbstract>(BehaviorMode.Strict);
        Mock.ArrangeSet(() => { foo.Value = 1; }).CallOriginal();
        Assert.Throws<NotImplementedException>(() => { foo.Value = 1; });

    Finally, try all these, find issues, post them to the forum, and make it work for you :-). Hope that helps,

    Read the article

  • sqlite3-ruby can't make on rvm 1.8.7

    - by Josh Crews
    Upgrading to Rails 3 by starting with RVM 1.8.7. OSX 10.5.8. Output:

        josh-crewss-macbook:~ joshcrews$ gem install sqlite3-ruby
        Building native extensions. This could take a while...
        ERROR: Error installing sqlite3-ruby:
        ERROR: Failed to build gem native extension.
        /Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/bin/ruby extconf.rb
        checking for sqlite3.h... yes
        checking for sqlite3_libversion_number() in -lsqlite3... yes
        checking for rb_proc_arity()... no
        checking for sqlite3_column_database_name()... no
        checking for sqlite3_enable_load_extension()... no
        checking for sqlite3_load_extension()... no
        creating Makefile
        make
        gcc -I. -I. -I/Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/lib/ruby/1.8/i686-darwin9.8.0 -I. -I/usr/local/include -I/opt/local/include -I/usr/include -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -fno-common -g -O2 -fno-common -pipe -fno-common -O3 -Wall -Wcast-qual -Wwrite-strings -Wconversion -Wmissing-noreturn -Winline -c database.c
        database.c: In function ‘deallocate’:
        database.c:17: warning: implicit declaration of function ‘sqlite3_next_stmt’
        database.c:17: warning: assignment makes pointer from integer without a cast
        database.c: In function ‘initialize’:
        database.c:76: warning: implicit declaration of function ‘sqlite3_open_v2’
        database.c:79: error: ‘SQLITE_OPEN_READWRITE’ undeclared (first use in this function)
        database.c:79: error: (Each undeclared identifier is reported only once
        database.c:79: error: for each function it appears in.)
        database.c:79: error: ‘SQLITE_OPEN_CREATE’ undeclared (first use in this function)
        database.c: In function ‘set_sqlite3_func_result’:
        database.c:277: error: ‘sqlite3_int64’ undeclared (first use in this function)
        database.c: In function ‘rb_sqlite3_func’:
        database.c:311: warning: passing argument 1 of ‘ruby_xcalloc’ as signed due to prototype
        database.c: In function ‘rb_sqlite3_step’:
        database.c:378: warning: passing argument 1 of ‘ruby_xcalloc’ as signed due to prototype
        make: *** [database.o] Error 1

    Gem list (these are under RVM; under system I've got lots more gems, including the sqlite3-ruby that's worked for 1.5 years):

        josh-crewss-macbook:~ joshcrews$ gem list

        *** LOCAL GEMS ***

        abstract (1.0.0)
        actionmailer (3.0.0.beta3)
        actionpack (3.0.0.beta3)
        activemodel (3.0.0.beta3)
        activerecord (3.0.0.beta3)
        activeresource (3.0.0.beta3)
        activesupport (3.0.0.beta3, 2.3.8)
        arel (0.3.3)
        builder (2.1.2)
        bundler (0.9.25)
        capybara (0.3.8)
        configuration (1.1.0)
        cucumber (0.7.2)
        cucumber-rails (0.3.1)
        culerity (0.2.10)
        database_cleaner (0.5.2)
        diff-lcs (1.1.2)
        erubis (2.6.5)
        ffi (0.6.3)
        gherkin (1.0.30)
        i18n (0.4.0, 0.3.7)
        json_pure (1.4.3)
        launchy (0.3.5)
        mail (2.2.1)
        memcache-client (1.8.3)
        mime-types (1.16)
        nokogiri (1.4.2)
        polyglot (0.3.1)
        rack (1.1.0)
        rack-mount (0.6.3)
        rack-test (0.5.4)
        rails (3.0.0.beta3)
        railties (3.0.0.beta3)
        rake (0.8.7)
        rdoc (2.5.8)
        rspec (2.0.0.beta.10, 2.0.0.beta.8)
        rspec-core (2.0.0.beta.10, 2.0.0.beta.8)
        rspec-expectations (2.0.0.beta.10, 2.0.0.beta.8)
        rspec-mocks (2.0.0.beta.10, 2.0.0.beta.8)
        rspec-rails (2.0.0.beta.10, 2.0.0.beta.8)
        rubygems-update (1.3.7)
        selenium-webdriver (0.0.20)
        spork (0.8.3)
        term-ansicolor (1.0.5)
        text-format (1.0.0)
        text-hyphen (1.0.0)
        thor (0.13.6)
        treetop (1.4.8)
        trollop (1.16.2)
        tzinfo (0.3.22)
        webrat (0.7.1)

    Version of Xcode: 3.1.1

    My suspicion is that it has to do with "-I/Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/lib/ruby/1.8/i686-darwin9.8.0", because i686-darwin9.8.0 doesn't exist at that path.

    Read the article

  • Heroku- Could not find paperclip-3.1.3 in any of the sources

    - by otchkcom
    This morning when I tried to update my website, Heroku didn't let me push the app. Here's the message I got:

        Fetching gem metadata from http://rubygems.org/.......
        Fetching gem metadata from http://rubygems.org/..
        Fetching git://github.com/drhenner/nifty-generators.git
        Could not find paperclip-3.1.3 in any of the sources
        !
        ! Failed to install gems via Bundler.
        !
        ! Heroku push rejected, failed to compile Ruby/rails app
        ! [remote rejected] master -> master (pre-receive hook declined)

    I don't have paperclip-3.1.3 in my Gemfile. I'm not sure why it's looking for paperclip 3.1.3. Here's my Gemfile:

        source 'http://rubygems.org'

        gem 'rails', '~> 3.2.6'
        gem 'asset_sync'

        group :assets do
          gem 'uglifier', '>= 1.0.3'
        end

        gem 'sass-rails', " ~> 3.2.3"
        gem "activemerchant", '~> 1.17.0' #, :lib => 'active_merchant'
        gem 'authlogic', "3.0.3"
        gem 'bluecloth', '~> 2.1.0'
        gem 'cancan', '~> 1.6.7'
        gem 'compass', '~> 0.12.rc.0'
        gem 'compass-rails'
        gem 'dalli', '~> 1.1.5'
        gem "friendly_id", "~> 3.3"
        gem 'haml', ">= 3.0.13" #, ">= 3.0.4"#, "2.2.21"#,
        gem "jquery-rails"
        gem 'aws-sdk'

        group :production do
          gem 'pg'
          gem 'thin'
        end

        gem 'nested_set', '~> 1.6.3'
        gem 'nokogiri', '~> 1.5.0'
        gem 'paperclip', '~> 3.0'
        gem 'prawn', '~> 0.12.0'
        gem 'rails3-generators', '~> 0.17.0'
        gem 'rmagick', :require => 'RMagick'
        gem 'rake', '~> 0.9.2'
        gem 'state_machine', '~> 1.1.2'
        gem 'sunspot_solr'
        gem 'sunspot_rails', '~> 1.3.0rc'
        gem 'will_paginate', '~> 3.0.0'
        gem 'dynamic_form'

        group :development do
          gem 'sqlite3'
          gem "autotest-rails-pure"
          gem "rails-erd"
          gem "ruby-debug19"
        end

        group :test, :development do
          gem "rspec-rails", "~> 2.8.0"
          gem 'capybara', :git => 'git://github.com/jnicklas/capybara.git'
          gem 'launchy'
          gem 'database_cleaner'
        end

        group :test do
          gem 'factory_girl', "~> 3.3.0"
          gem 'factory_girl_rails', "~> 3.3.0"
          gem 'mocha', '~> 0.10.0', :require => false
          gem 'rspec-rails-mocha'
          gem "rspec", "~> 2.8.0"
          gem "rspec-core", "~> 2.8.0"
          gem "rspec-expectations", "~> 2.8.0"
          gem "rspec-mocks", "~> 2.8.0"
          gem 'email_spec'
          gem "faker"
          gem "autotest", '~> 4.4.6'
          gem "autotest-rails-pure"
          gem "autotest-growl"
          gem "ZenTest", '4.6.2'
        end

    Read the article

  • Mocking using boost::shared_ptr and AMOP

    - by Edison Gustavo Muenz
    Hi, I'm trying to write mocks using amop. I'm using Visual Studio 2008. I have this interface class:

        struct Interface
        {
            virtual void Activate() = 0;
        };

    and this other class, which receives pointers to this Interface, like this:

        struct UserOfInterface
        {
            void execute(Interface* iface)
            {
                iface->Activate();
            }
        };

    So I try to write some testing code like this:

        amop::TMockObject<Interface> mock;
        mock.Method(&Interface::Activate).Count(1);

        UserOfInterface user;
        user.execute((Interface*)mock);

        mock.Verifiy();

    It works! So far so good, but what I really want is a boost::shared_ptr in the execute() method, so I write this:

        struct UserOfInterface
        {
            void execute(boost::shared_ptr<Interface> iface)
            {
                iface->Activate();
            }
        };

    How should the test code look now? I tried some things, like:

        amop::TMockObject<Interface> mock;
        mock.Method(&Interface::Activate).Count(1);

        UserOfInterface user;
        boost::shared_ptr<Interface> mockAsPtr((Interface*)mock);
        user.execute(mockAsPtr);

        mock.Verifiy();

    It compiles, but obviously crashes, since at the end of the scope the 'mock' object gets destroyed twice (once via the stack variable 'mock' and once via the shared_ptr). I also tried to create the 'mock' variable on the heap:

        amop::TMockObject<Interface>* mock(new amop::TMockObject<Interface>);
        mock->Method(&Interface::Activate).Count(1);

        UserOfInterface user;
        boost::shared_ptr<Interface> mockAsPtr((Interface*)*mock);
        user.execute(mockAsPtr);

        mock->Verifiy();

    But that doesn't work either; somehow it enters an infinite loop. Before that, I had a problem with boost not finding the destructor for the mocked object when the shared_ptr tried to delete it. Has anyone used amop with boost::shared_ptr successfully?

    Read the article

  • Mock static method Activator.CreateInstance to return a mock of another class

    - by Jeep87c
    I have this factory class and I want to test it correctly. I have an abstract class and many child classes that inherit from it. As you can see in my Factory class's BuildChild method, I want to be able to create an instance of a child class at runtime. I must be able to create this instance at runtime because the type won't be known before runtime. And I can NOT use Unity for this project (if I could, I would not ask how to achieve this). Here's the Factory class that I want to test:

        public class Factory
        {
            public AnAbstractClass BuildChild(Type childType, object parameter)
            {
                AnAbstractClass child = (AnAbstractClass) Activator.CreateInstance(childType);
                child.Initialize(parameter);
                return child;
            }
        }

    To test this, I want to find a way to mock Activator.CreateInstance so that it returns my own mocked object of a child class. How can I achieve this? Or, if you have a better way to do this without using Activator.CreateInstance (and without Unity), I'm open to it if it's easier to test and mock! I'm currently using Moq to create my mocks, but since Activator.CreateInstance is a static method on a static class, I can't figure out how to do this (I already know that Moq can only create mock instances of objects). I took a look at Fakes from Microsoft, but without success (I had some difficulty understanding how it works and finding well-explained examples). Please help me!

    EDIT: I need to mock Activator.CreateInstance because I want to force this method to return another mocked object. Strictly speaking, I only want to stub this method (not mock it). So when I test BuildChild like this:

        [TestMethod]
        public void TestBuildChild()
        {
            var mockChildClass = new Mock<AChildClass>();

            // TODO: Stub/mock Activator.CreateInstance to return mockChildClass
            // when called with "type" and "parameter" as follows.

            var type = typeof(AChildClass);
            var parameter = "A parameter";
            var child = this._factory.BuildChild(type, parameter);
        }

    Activator.CreateInstance, called with type and parameter, will return my mocked object instead of creating a new instance of the real child class (not yet implemented).
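    One way to sidestep the static call entirely (a sketch, assuming AChildClass has a parameterless constructor and members Moq can override): inject the creation function itself, defaulting to Activator.CreateInstance, so a test can substitute a delegate that returns the mock.

        public class Factory
        {
            private readonly Func<Type, object> _create;

            // Production path: unchanged behavior via Activator.CreateInstance.
            public Factory() : this(Activator.CreateInstance) { }

            // Test seam: the creation strategy is injectable.
            public Factory(Func<Type, object> create)
            {
                _create = create;
            }

            public AnAbstractClass BuildChild(Type childType, object parameter)
            {
                var child = (AnAbstractClass)_create(childType);
                child.Initialize(parameter);
                return child;
            }
        }

        [TestMethod]
        public void TestBuildChild()
        {
            var mockChildClass = new Mock<AChildClass>();
            var factory = new Factory(t => mockChildClass.Object); // stubbed creation

            var child = factory.BuildChild(typeof(AChildClass), "A parameter");

            Assert.AreSame(mockChildClass.Object, child);
        }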

    Read the article

  • asp.net mvc How to test controllers correctly

    - by Simon G
    Hi, I'm having difficulty testing controllers. Originally, my controller setup for testing looked something like this:

        SomethingController CreateSomethingController()
        {
            var somethingData = FakeSomethingData.CreateFakeData();
            var fakeRepository = FakeRepository.Create();
            var controller = new SomethingController(fakeRepository);
            return controller;
        }

    This worked fine for the majority of testing, until I got to the Request.IsAjaxRequest() part of the code. Then I had to mock up HttpContext and HttpRequestBase, so my code changed to look like:

        public class FakeHttpContext : HttpContextBase
        {
            bool _isAjaxRequest;

            public FakeHttpContext(bool isAjaxRequest = false)
            {
                _isAjaxRequest = isAjaxRequest;
            }

            public override HttpRequestBase Request
            {
                get
                {
                    string ajaxRequestHeader = "";
                    if (_isAjaxRequest)
                        ajaxRequestHeader = "XMLHttpRequest";

                    var request = new Mock<HttpRequestBase>();
                    request.SetupGet(x => x.Headers).Returns(new WebHeaderCollection
                    {
                        { "X-Requested-With", ajaxRequestHeader }
                    });
                    request.SetupGet(x => x["X-Requested-With"]).Returns(ajaxRequestHeader);

                    return request.Object;
                }
            }

            private IPrincipal _user;

            public override IPrincipal User
            {
                get
                {
                    if (_user == null)
                    {
                        _user = new FakePrincipal();
                    }
                    return _user;
                }
                set { _user = value; }
            }
        }

        SomethingController CreateSomethingController()
        {
            var somethingData = FakeSomethingData.CreateFakeData();
            var fakeRepository = FakeRepository.Create();
            var controller = new SomethingController(fakeRepository);

            ControllerContext controllerContext = new ControllerContext(
                new FakeHttpContext(isAjaxRequest), new RouteData(), controller);
            controller.ControllerContext = controllerContext;

            return controller;
        }

    Now I've got to the stage in my controller where I call Url.Route, and Url is null. So it looks like I need to start mocking up routes for my controller. I seem to be spending more time googling how to fake/mock objects, and then debugging to make sure my fakes are correct, than actually writing the test code. Is there an easier way to test a controller? I've looked at the TestControllerBuilder from MvcContrib, which helps with some of the issues but doesn't seem to do everything. Is there anything else available that will do the job and let me concentrate on writing the tests rather than writing mocks? Thanks
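    For the Url == null problem specifically, one sketch (MvcApplication.RegisterRoutes stands in for wherever your application registers its routes): give the controller a UrlHelper built from the fake context and a populated RouteCollection, since Controller.Url has a public setter.

        var routes = new RouteCollection();
        MvcApplication.RegisterRoutes(routes); // hypothetical: your app's route registration

        // Reuse the FakeHttpContext from above to build a request context.
        var requestContext = new RequestContext(new FakeHttpContext(), new RouteData());
        controller.Url = new UrlHelper(requestContext, routes);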

    Read the article

  • SSAS Compare: an intern’s journey

    - by Red Gate Software BI Tools Team
    About a month ago, David mentioned an intern working in the BI Tools Team. That intern happens to be me! In five weeks' time, I'll start my second year of Computer Science at the University of Cambridge and be a full-time student again, but for the past eight weeks, I've been living a completely different life. As Jon mentioned before, the teams here at Red Gate are small and everyone (including the interns!) is responsible for the product as a whole. I've attended planning sessions, UX tests, daily meetings, and everything else a full-time member of the team would; I had as much say in where we would go next with the product as anyone; I was able to see that what I was doing was an important part of the product from the feedback we got in the UX tests. All these things almost made me forget that this is just an internship and not my full-time job.

    First steps at Red Gate

    Being based in Cambridge, Red Gate has many Cambridge university graduates working for them. They also hire some Cambridge undergraduates for internships each summer. With its popularity with university graduates and its great working environment, Red Gate has managed to build up a great reputation. When I thought of doing an internship here in Cambridge, Red Gate just seemed to be the obvious choice for my first real work experience. On my first day at Red Gate, David, the lead developer for SSAS Compare, helped me settle in and explained what I'd be doing. My task was to improve the user experience of displaying differences between MDX scripts by syntax highlighting, script formatting, and improving the difference identification in the first place. David suggested how I should approach the problem, but left all the details and design decisions to me. That was when I realised how much independence and responsibility I'd have.

    What I've done

    If you launch the latest version of SSAS Compare and drill down to an MDX script difference, you can see the changes that have been made. In earlier versions, you could only see the scripts in plain text on both sides – either in black or grey, depending on whether they were the same or not. However, you couldn't see exactly where the scripts were different, which was especially annoying when the two scripts were large – as they often are. Furthermore, if parts of the two scripts were formatted differently, they seemed to be different but were actually the same, which caused even more confusion and made it difficult to see where the differences were. All these issues have been fixed now. The two scripts are automatically formatted by the tool so that if two things are syntactically equivalent, they look the same – including case differences in keywords! The actual difference is highlighted in grey, which makes it easy to spot. The difference identification has been improved as well, so two scripts aren't identified as different if there's just a difference in meaningless whitespace characters, or when you have "select" on one side and "SELECT" on the other. We also have syntax highlighting, which makes the scripts easier to read.

    How I did it

    In order to do the formatting properly, we decided to parse the MDX scripts. After some investigation into parser builders, I decided to go with the GOLD Parser builder and the bsn-goldparser .NET engine. GOLD Parser builder provides a fairly nice GUI to write, build, and test grammar in. We also liked the idea of separating the grammar building from parsing a text.
    The bsn-goldparser is one of many .NET engines for GOLD, and although it doesn't support the newest features of GOLD Parser, it has "the ability to map semantic action classes to terminals or reduction rules, so that a completely functional semantic AST can be created directly without intermediate token AST representation, and without the need for glue code." That makes it much easier for us to change the implementation in our program when we change the grammar. As bsn-goldparser is open source, and I wanted some more features in it, I contributed two new features which have now been merged into the project. Unfortunately, there wasn't an MDX grammar written for GOLD already, so I had to write it myself. I was referencing MSDN to get the formal grammar specification, but the specification was scattered all over the place, so it wasn't easy to find, let alone implement. We're aware that we don't yet fully support all valid MDX, so sometimes you'll just see the MDX script difference displayed the old way. In that case, there is some grammar construct we don't yet recognise. If you come across something SSAS Compare doesn't recognise, we'd love to hear about it so we can add it to our grammar. When some MDX script gets parsed, a tree is produced. That tree can then be processed into a list of inlines, which deal with the correct formatting and can be output to the screen. Doing all this has led me to many new technologies and projects I haven't worked with before. This was my first experience with C# and Visual Studio, although I have done things in Java before. I have learnt how to unit test with NUnit, how to do dependency injection with Ninject, how to source-control code with SVN and Mercurial, how to build with TeamCity, how to use GOLD, and many other things.

    What's coming next

    Sadly, my internship comes to an end this week, so there will be less development on the MDX difference view for a while. But the team is going to work on marking the differences better and making it consistent with the difference indication in the top part of the comparison window, and will keep adding support for more MDX grammar so you can see the differences easily in every comparison you make. So long! And maybe I'll see you next summer!

    Read the article

  • Write your Tests in RSpec with IronRuby

    - by kazimanzurrashid
    [Note: This is not a continuation of my previous post; treat it as an experiment out in the wild.]

    Let's consider the following class, a fictitious fund transfer service:

        public class FundTransferService : IFundTransferService
        {
            private readonly ICurrencyConvertionService currencyConvertionService;

            public FundTransferService(ICurrencyConvertionService currencyConvertionService)
            {
                this.currencyConvertionService = currencyConvertionService;
            }

            public void Transfer(Account fromAccount, Account toAccount, decimal amount)
            {
                decimal convertionRate = currencyConvertionService.GetConvertionRate(fromAccount.Currency, toAccount.Currency);
                decimal convertedAmount = convertionRate * amount;

                fromAccount.Withdraw(amount);
                toAccount.Deposit(convertedAmount);
            }
        }

        public class Account
        {
            public Account(string currency, decimal balance)
            {
                Currency = currency;
                Balance = balance;
            }

            public string Currency { get; private set; }
            public decimal Balance { get; private set; }

            public void Deposit(decimal amount)
            {
                Balance += amount;
            }

            public void Withdraw(decimal amount)
            {
                Balance -= amount;
            }
        }

    We can write the spec with MSpec + Moq like the following:

        public class When_fund_is_transferred
        {
            const decimal ConvertionRate = 1.029m;
            const decimal TransferAmount = 10.0m;
            const decimal InitialBalance = 100.0m;

            static Account fromAccount;
            static Account toAccount;
            static FundTransferService fundTransferService;

            Establish context = () =>
            {
                fromAccount = new Account("USD", InitialBalance);
                toAccount = new Account("CAD", InitialBalance);

                var currencyConvertionService = new Moq.Mock<ICurrencyConvertionService>();
                currencyConvertionService.Setup(ccv => ccv.GetConvertionRate(Moq.It.IsAny<string>(), Moq.It.IsAny<string>())).Returns(ConvertionRate);

                fundTransferService = new FundTransferService(currencyConvertionService.Object);
            };

            Because of = () =>
            {
                fundTransferService.Transfer(fromAccount, toAccount, TransferAmount);
            };

            It should_decrease_from_account_balance = () =>
            {
                fromAccount.Balance.ShouldBeLessThan(InitialBalance);
            };

            It should_increase_to_account_balance = () =>
            {
                toAccount.Balance.ShouldBeGreaterThan(InitialBalance);
            };
        }

    And if you run the spec, it will give you a nice little output like the following:

        When fund is transferred
        » should decrease from account balance
        » should increase to account balance

        2 passed, 0 failed, 0 skipped, took 1.14 seconds (MSpec).

    Now, let's see how we can write the exact same spec in RSpec.
        require File.dirname(__FILE__) + "/../FundTransfer/bin/Debug/FundTransfer"
        require "spec"
        require "caricature"

        describe "When fund is transferred" do
          Convertion_Rate = 1.029
          Transfer_Amount = 10.0
          Initial_Balance = 100.0

          before(:all) do
            @from_account = FundTransfer::Account.new("USD", Initial_Balance)
            @to_account = FundTransfer::Account.new("CAD", Initial_Balance)

            currency_convertion_service = Caricature::Isolation.for(FundTransfer::ICurrencyConvertionService)
            currency_convertion_service.when_receiving(:get_convertion_rate).with(:any, :any).return(Convertion_Rate)

            fund_transfer_service = FundTransfer::FundTransferService.new(currency_convertion_service)
            fund_transfer_service.transfer(@from_account, @to_account, Transfer_Amount)
          end

          it "should decrease from account balance" do
            @from_account.balance.should be < Initial_Balance
          end

          it "should increase to account balance" do
            @to_account.balance.should be > Initial_Balance
          end
        end

    I think the above code is self-explanatory. Treat the require statements at the top as the "add reference" of our Visual Studio projects: we are adding all the required libraries with these statements. Next is describe, which is an RSpec keyword. before does exactly the same job as NUnit's Setup or MsTest's TestInitialize attribute, but above we are using before(:all), which acts like MsTest's ClassInitialize: it is executed only once, before all the test methods. In before(:all) we first instantiate the from and to accounts; this is the same as creating them with the full name (including namespace), like fromAccount = new FundTransfer.Account(.., ..). Next, we create a mock object of ICurrencyConvertionService. Note that for creating the mock we are not using Moq like in the MSpec version. This is a somewhat interesting issue of IronRuby, or maybe the DLR: it seems that it is not possible in IronRuby to use the lambda expressions that most mocking tools use in the arrange phase, like:

        currencyConvertionService.Setup(ccv => ccv.GetConvertionRate(Moq.It.IsAny<string>(), Moq.It.IsAny<string>())).Returns(ConvertionRate);

    But the good news is that there is already an excellent mocking tool called Caricature, written completely in IronRuby, which we can use to mock the .NET classes. Maybe all the mocking tool providers should give some thought to adding support for the DLR, so that we can use the tools we are already familiar with. I think the rest of the code is simple enough, so I am skipping the explanation.

    Now, the last thing: how are we going to run it with RSpec? Let's first install the required gems. Open your command prompt and type the following:

        igem sources -a http://gems.github.com

    This will add GitHub as a gem source. Next type:

        igem install uuidtools caricature rspec

    And at last we have to create a batch file so that we can execute it in Notepad++. Create a batch file in the IronRuby bin directory, like in my previous post, and put the following in it:

        @echo off
        cls
        call spec %1 --format specdoc
        pause

    Next, add a run menu and shortcut in Notepad++ like in my previous post. Now when we run it, it will show the following output:

        When fund is transferred
        - should decrease from account balance
        - should increase to account balance

        Finished in 0.332042 seconds

        2 examples, 0 failures

        Press any key to continue . . .

    You will find the complete code of this post at the bottom. That's it for today.

    Download: RSpecIntegration.zip

    Read the article

  • The illusion of Competence

    - by tony_lombardo
    Working as a contractor opened my eyes to the developer food chain. Even though I had similar experiences earlier in my career, the challenges seemed much more vivid this time through. I thought I'd share a couple of experiences with you, and the lessons that can be taken from them.

    Lesson 1: Beware of the "funnel" guy. The funnel guy is the one who wants you to funnel all thoughts, ideas and code changes through him. He may say it's because he wants to avoid conflicts in source control, but the real reason is likely that he wants to hide your contributions. Here's an example. When I finally got access to the code on one of my projects, I was told by the developer that I had to funnel all of my changes through him. There were 4 of us coding on the project, but only 2 of us working on the UI. The other 2 were working on a separate application, but part of the overall project. So I figured, I'll check it into SVN, he reviews and accepts, then merges in. Not even close. I didn't even have check-in rights to SVN; I had to email my changes to the developer so he could check those changes in.

    Lesson 2: If you point out flaws in code to someone supposedly 'higher' than you in the developer chain, they're going to get defensive. My first task on this project was to review the code and familiarize myself with it. So of course, that's what I did. And in familiarizing myself with it, I saw so many bad practices and code smells that I immediately started coming up with solutions to fix them. Of course, when I reviewed these changes with the developer (the guy who originally wrote the code), he smiled and nodded and said we can't make those changes now, it's too destabilizing. I recommended we create a new branch and start working on refactoring, but branching was a new concept for this guy and he was worried we would somehow break SVN.

    How about some concrete examples? I started out by recommending we remove the NUnit dependency and tests from the application project, and create a separate unit testing project. This was met with a little bit of resistance, because "How do I access the private methods?" As it turned out, there weren't really any private methods that weren't exposed by public methods, so I quickly calmed this fear. Win 1, Loss 0.

    Next, I recommended that all of the file IO access be wrapped in using clauses, or at least properly wrapped in try/catch/finally. This recommendation was accepted... but never implemented. Win 2, Loss 1.

    Next recommendation was to refactor the command pattern implementation. The command pattern was implemented, but it wasn't really necessary for the application. Moreover, the fact that we had 100 different command classes, each with its own specific command parameters class, made maintenance a huge hassle. The same code repeated over and over and over. This recommendation was declined; the code was too fragile and this change would destabilize it. I couldn't disagree, though it was the commands themselves in many cases that were fragile. Win 2, Loss 2.

    Next recommendation was to aid the performance (and responsiveness) of the application by using asynchronous service calls. This one was accepted. Win 2, Loss 3.

    If you're paying any attention, you're wondering why the async service calls were scored as a loss. Let me explain: the service call was made using the async pattern, followed by a Thread.Sleep. <facepalm>
One junior guy, working as hard as he can to build his first real world application, with little or no guidance from anyone else.  He had his pattern book and theory of programming to help him, but no real world experience.  He didn’t know how difficult it would be to trace the crashes to the coding issues above, but he will one day.  The part that amazed me was the management position that “this guy should be a team lead, because he’s worked so hard”.  I’m all for rewarding hard work, but when you reward someone by promoting them past the point of their competence, you’re setting yourself and them up for failure.  And that’s lesson 3.  Just because you’ve got a hard worker, doesn’t mean he should be leading a development project.  If you’re a junior guy busting your ass, keep at it.  I encourage you to try new things, but most importantly to learn from your mistakes.  And correct your mistakes.  And if someone else looks at your code and shows you a laundry list of things that should be done differently, don’t take it personally – they’re really trying to help you.  And if you’re a senior guy, working with a junior guy, it’s your duty to point out the flaws in the code.  Even if it does make you the bad guy.  And while I’ve used “guy” above, I mean both men and women.  And in some cases mutant dinosaurs. 

    Read the article

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    - by RoyOsherove
    One of the talks I did at QCON London was about a subject that I've come across fairly recently, when I was building SilverUnit – a "pure" unit test framework for silverlight objects that depend on the silverlight runtime to run. It is the concept of "cogs in the machine" – when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be:

    - your custom control running inside the silverlight runtime in the browser
    - your plug-in running inside an IDE
    - your activity running inside a windows workflow
    - your code running inside a java EE bean
    - your code inheriting from a COM+ (enterprise services) component
    - etc.

    Not all of these are necessarily testability problems. The main testability problem usually comes when your code actually inherits from something inside the system. For example, one of the biggest problems with testing objects like silverlight controls is the way they depend on the silverlight runtime – they don't implement some silverlight interface, they don't just call external static methods against the framework runtime that surrounds them – they actually inherit parts of the framework: they all inherit (in this case) from the silverlight DependencyObject.

    Wrapping it up?

    An inheritance dependency is uniquely challenging to bring under test, because "classic" methods such as wrapping the object under test with a framework wrapper will not work, and the only way to do it manually is to create parallel testable objects that get delegated all the possible actions from the dependencies. In silverlight's case, that would mean creating your own custom logic class that would be called directly from controls that inherit from silverlight, and would be tested independently of those controls (a small sketch of this idea appears below). The pro side is that you get the benefit of understanding the "contract" and the "roles" your system plays against your logic; unfortunately, more often than not, it can be very tedious to create, and may sometimes feel unnecessary or like code duplication.

    About perimeters

    A perimeter is that invisible line that you draw around your pieces of logic during a test, that separates the code under test from any dependencies that it uses. Most of the time, a test perimeter around an object will be the list of seams (dependencies that can be replaced, such as interfaces, virtual methods etc.) that are actually replaced for that test or for all the tests.

    Role-based perimeters

    In the case of creating a wrapper around an object, one really creates a "role-based" perimeter around the logic that is being tested: that wrapper takes on roles that are required by the code under test, and also communicates with the host system to implement those roles and provide any inputs to the logic under test. In the first image below, we have the code we want to test represented as a star; no perimeter is drawn yet (we haven't wrapped it up in anything yet). The second image shows what happens when you wrap your logic with a role-based wrapper: you get a role-based perimeter anywhere your code interacts with the system. There's another way to bring that code under test: using isolation frameworks like Typemock, Rhino Mocks and Moq (but if your code inherits from the system, Typemock might be the only way to isolate the code from the system interaction).
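    To make the "parallel testable logic class" idea from the "Wrapping it up?" section concrete, here is a minimal sketch (all names illustrative, not from the talk): the control keeps only thin delegation calls, while the logic lives in a plain class that runs without the Silverlight runtime.

        // Plain CLR class extracted from a hypothetical zooming control;
        // it has no dependency on the Silverlight runtime at all.
        public class ZoomLogic
        {
            public double NextZoom(double current, bool zoomIn)
            {
                return zoomIn ? current * 1.25 : current / 1.25;
            }
        }

        // The logic is now testable in a regular unit test, no browser needed.
        // The control inheriting from DependencyObject would only forward its
        // event handlers into ZoomLogic.
        [Test]
        public void NextZoom_ZoomingIn_IncreasesValue()
        {
            var logic = new ZoomLogic();

            Assert.Greater(logic.NextZoom(1.0, true), 1.0);
        }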
    Ad-hoc isolation perimeters
    The image below shows what I call an ad-hoc perimeter, which might be vastly different between different tests. This perimeter's surface is much smaller, because for that specific test, that is all the "change" that is required to the host system's behavior.

    The third way of isolating the code from the host system is the main "meat" of this post:

    Subterranean perimeters
    Subterranean perimeters are deep-rooted perimeters – "always on" seams that can lie very deep in the heart of the host system, where they are fully invisible even to the test itself, not just to the code under test. Because they lie deep inside a system you can't control, the only way I've found to control them is with runtime (not compile-time) interception of method calls on the system. One way to get such abilities is by using aspect-oriented frameworks – for example, in SilverUnit, I've used the CThru AOP framework, based on Typemock hooks and CLR profilers, to intercept such system-level method calls and effectively turn them into seams that lie deep down at the heart of the Silverlight runtime. The image below depicts an example of what such a perimeter could look like. As you can see, the actual seams can be very far away from the actual code under test, and as you'll discover, that's actually a very good thing. Here is only a partial list of examples of such deep-rooted seams:

    - disabling the constructor of a base class five levels below the code under test (this.base.base.base.base)
    - faking static methods of a type that's being called several levels down the stack: method x() calls y(), which calls z(), which calls SomeType.StaticMethod()
    - replacing an async mechanism with a synchronous one (for example, replacing all timers with your own timer behavior that always ticks immediately upon calls to start(), on the same caller thread)
    - replacing event mechanisms with your own event mechanism (to allow "firing" system events)
    - changing the way the system saves information with your own saving behavior (in SilverUnit, I replaced all dependency property sets and gets with calls to an in-memory value store instead of using the one built into Silverlight, which threw exceptions without a browser)

    Several questions jump out:

    - How do you know what to fake? (How do you discover the perimeter?)
    - How do you fake it?
    - Wouldn't it be problematic to fake something you don't own? It might change in the future.

    How do you discover the perimeter to fake?
    To discover a perimeter, all you have to do is start with a wishful invocation. A wishful invocation is the act of trying to invoke a method (or even just create an instance) of an object using "regular" test code. You invoke the thing that you'd like to do in a real unit test, to see what happens: Can I even create an instance of this object without getting an exception? Can I invoke this method on that instance without getting an exception? Can I verify that some call into the system happened? You make the invocation, get an exception (because there is a dependency), and look at the stack trace. Choose a location in the stack trace and disable it. Then try the invocation again. If you don't get an exception, the perimeter is good for that invocation, so you can move on to trying out other methods on that object. In a future post I will show the process using CThru, and how you end up with something close to a domain-specific test framework after you're done creating the perimeter you need.
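    As a concrete illustration of a wishful invocation, here is a minimal sketch assuming NUnit and a hypothetical MyCustomControl under test – both the control and the property are illustrative, not from SilverUnit:

        // A minimal sketch of a wishful invocation. The assertion is
        // almost beside the point; what matters is the stack trace of
        // whatever exception the host framework throws, which tells you
        // where the next seam must be carved.
        [Test]
        public void Wishful_invocation_probes_the_perimeter()
        {
            var control = new MyCustomControl();  // can we even construct it?
            control.Text = "hello";               // can we invoke members on it?
            Assert.AreEqual("hello", control.Text);
        }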

    Read the article

  • Testing Workflows – Test-First

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-first.aspx

    This is the second of two posts on some common strategies for approaching the job of writing tests. The previous post covered test-after workflows, whereas this one focuses on test-first. Each workflow presented is a method of attack for adding tests to a project. The more tools in your tool belt the better. So here is a partial list of some test-first methodologies.

    Ping Pong
    Ping Pong is a methodology commonly used in pair programming. One developer writes a new failing test. Then they hand the keyboard to their partner. The partner writes the production code to get the test passing. The partner then writes the next test before passing the keyboard back to the original developer. The reasoning behind this testing methodology is to facilitate pair programming. That is to say, this testing methodology shares all the benefits of pair programming, including ensuring multiple team members are familiar with the code base (i.e. a low bus number).

    Test Blazer
    Test Blazing, in some respects, is also a pairing strategy. The developers don't work side by side on the same task at the same time. Instead, one developer is dedicated to writing tests at their own desk. They write failing test after failing test, never touching the production code. With these tests they are defining the specification for the system. The developer most familiar with the specifications would be assigned this task. The next day, or later the same day, another developer fetches the latest test suite. Their job is to write the production code to get those tests passing. Once all the tests pass, they fetch the latest version of the test project from source control to get the newer tests. This methodology has some of the benefits of pair programming, namely lowering the bus number. It can be a good way of adding an extra developer to a project without slowing it down too much. The production coder isn't slowed down writing tests. The tests are in a separate project from the production code, so there shouldn't be any merge conflicts despite two developers working on the same solution. This methodology is also a good test for the tests: can another developer figure out what the system should do just by reading the tests? That question will be answered as the production coder works their way through the test blazer's tests.

    Test Driven Development (TDD)
    TDD is a highly disciplined practice that calls for a new test and new production code to be written every few minutes. There are strict rules for when you should be writing test or production code. You start by writing a failing test (red), then write the simplest production code possible to get the test passing (green), then you clean up the code (refactor). This is known as the red-green-refactor cycle; a minimal sketch appears at the end of this post. The goal of TDD isn't the creation of a suite of tests; however, that is an advantageous side effect. The real goal of TDD is to follow a practice that yields a better design. The practice is meant to push the design toward small, decoupled, modularized components, which is generally considered a better design than a large, highly coupled ball of mud. TDD accomplishes this through the refactoring cycle. Refactoring is only possible to do safely when tests are in place. In order to use TDD, developers must be trained in how to look for and repair code smells in the system. Through repairing these sections of smelly code (i.e. refactoring), the design of the system emerges. For further information on TDD, I highly recommend the series "Is TDD Dead?". It discusses its pros and cons and when it is best used.

    Acceptance Test Driven Development (ATDD)
    Whereas TDD focuses on small unit tests that concentrate on a small piece of the system, acceptance tests focus on the larger integrated environment. Acceptance tests usually correspond to user stories, which come directly from the customer. Unit tests focus on the inputs and outputs of smaller parts of the system, which are too low level to be of interest to the customer. ATDD generally uses the same tools as TDD; however, ATDD uses fewer mocks and test doubles than TDD. ATDD often complements TDD; they aren't competing methods. A full test suite will usually consist of a large number of unit tests (created via TDD) and a smaller number of acceptance tests.

    Behaviour Driven Development (BDD)
    BDD is more about audience than workflow. BDD pushes the testing realm out towards the client. Developers, managers and the client all work together to define the tests. Typically, different tooling is used for BDD than for acceptance and unit testing, because the audience is not just developers. Tools using the Gherkin family of languages allow test scenarios to be described in an English-like format. Other tools such as MSpec or FitNesse also strive for highly readable behaviour-driven test suites. Because these tests are public facing (viewable by people outside the development team), the terminology usually changes. You can't get away with the same technobabble you can with unit tests written in a programming language that only developers understand. For starters, they usually aren't called tests; they're called "examples", "behaviours", "scenarios", or "specifications". This may seem like a very subtle difference, but I've seen this small terminology change have a huge impact on the acceptance of the process. Many people have a bias that testing is something that comes at the end of a project. When you say you need to define the tests at the start of the project, many people will immediately give that a lower priority on the project schedule. But if you say you need to define the specification or behaviour of the system before you can start, you'll get more cooperation.

    Keep these test-first and test-after workflows in your tool belt. With them you'll be able to find new opportunities to apply them.
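    Here is the red-green-refactor sketch referenced above – a minimal pass with NUnit, assuming a hypothetical PriceCalculator as the code under test (both names are illustrative):

        // Red: written first, fails until PriceCalculator.Total exists.
        [TestFixture]
        public class PriceCalculatorTests
        {
            [Test]
            public void Orders_of_100_or_more_get_a_ten_percent_discount()
            {
                var calc = new PriceCalculator();
                Assert.AreEqual(90m, calc.Total(100m));
            }
        }

        // Green: the simplest code that passes. Refactor comes next,
        // once more tests pin the behavior down.
        public class PriceCalculator
        {
            public decimal Total(decimal subtotal)
            {
                return subtotal >= 100m ? subtotal * 0.9m : subtotal;
            }
        }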

    Read the article

  • Connection Error using NHibernate 3.0 with Oracle

    - by Olu Lawrence
    I'm new to NHibernate. My first attempt is to configure and establish a connection to Oracle 11.1g using ODP.NET. For this test I use a test fixture, but I get the following error:

    Inner exception: "Object reference not set to an instance of an object."
    Outer exception: Could not create the driver from NHibernate.Driver.OracleDataClientDriver.

    The test script is shown below:

        using IBCService.Models;
        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;
        using NUnit.Framework;

        namespace IBCService.Tests
        {
            [TestFixture]
            public class GenerateSchema_Fixture
            {
                [Test]
                public void Can_generate_schema()
                {
                    var cfg = new Configuration();
                    cfg.Configure();
                    cfg.AddAssembly(typeof(Product).Assembly);
                    var fac = new SchemaExport(cfg);
                    fac.Execute(false, true, false);
                }
            }
        }

    The exception occurs at the last line: fac.Execute(false, true, false);

    The NHibernate config is shown:

        <?xml version="1.0" encoding="utf-8"?>
        <!-- This config uses Oracle Data Provider (ODP.NET) -->
        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
          <session-factory name="IBCService.Tests">
            <property name="connection.driver_class">
              NHibernate.Driver.OracleDataClientDriver
            </property>
            <property name="connection.connection_string">
              User ID=TEST;Password=test;Data Source=//RAND23:1521/RAND.PREVALENT.COM
            </property>
            <property name="connection.provider">
              NHibernate.Connection.DriverConnectionProvider
            </property>
            <property name="show_sql">false</property>
            <property name="dialect">NHibernate.Dialect.Oracle10gDialect</property>
            <property name="query.substitutions">
              true 1, false 0, yes 'Y', no 'N'
            </property>
            <property name="proxyfactory.factory_class">
              NHibernate.ByteCode.LinFu.ProxyFactoryFactory, NHibernate.ByteCode.LinFu
            </property>
          </session-factory>
        </hibernate-configuration>

    Now, if I change NHibernate.Driver.OracleDataClientDriver to NHibernate.Driver.OracleClientDriver (the Microsoft provider for Oracle), the test succeeds. Once I switch back to the Oracle provider, whichever version, the test fails with the error stated earlier. I've spent three days already trying to figure out what is wrong, without success. I hope someone out there can tell me what I am doing wrong.
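    One avenue worth probing (an assumption based on the error text, not a confirmed diagnosis): OracleDataClientDriver is a reflection-based driver that loads the ODP.NET assembly by name at runtime, so "Could not create the driver" commonly means Oracle.DataAccess.dll is not reachable from the test project's output directory. A quick check:

        // A hedged diagnostic: if this throws, ODP.NET is not loadable
        // from the test bin directory, which would explain the driver
        // failure. Referencing Oracle.DataAccess.dll in the test project
        // (with Copy Local) or installing an ODP.NET build of matching
        // bitness usually resolves that situation.
        [Test]
        public void Odp_assembly_is_loadable()
        {
            var odp = System.Reflection.Assembly.Load("Oracle.DataAccess");
            Assert.IsNotNull(odp);
        }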

    Read the article

  • How to get NHProf reports in TeamCity running MSBUILD

    - by Jon Erickson
    I'm trying to get NHProf reports on my integration tests published as a report in TeamCity. I'm not sure how to get this set up correctly, and my first attempts have been unsuccessful. Let me know if there is any more information that would be helpful.

    I'm getting the following error when trying to generate HTML reports with MSBuild (which is being run by TeamCity):

    error MSB3073: The command "C:\CI\Tools\NHProf\NHProf.exe /CmdLineMode /File:"E:\CI\BuildServer\RMS-Winform\Group\dev\NHProfOutput.html" /ReportFormat:Html" exited with code -532459699

    I tell TeamCity to run MSBuild with the CIBuildWithNHProf target. The command-line parameters that I pass from TeamCity are:

    /property:NHProfExecutable=%system.NHProfExecutable%;NHProfFile=%system.teamcity.build.checkoutDir%\NHProfOutput.html;NHProfReportFormat=Html

    The portion of my MSBuild script that runs my tests is as follows:

        <UsingTask TaskName="NUnitTeamCity" AssemblyFile="$(teamcity_dotnet_nunitlauncher_msbuild_task)"/>

        <!-- Set Properties -->
        <PropertyGroup>
          <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
          <Platform Condition=" '$(Platform)' == '' ">x86</Platform>
          <NHProfExecutable></NHProfExecutable>
          <NHProfFile></NHProfFile>
          <NHProfReportFormat></NHProfReportFormat>
        </PropertyGroup>

        <!-- Test Database -->
        <Target Name="DeployDatabase">
          <!-- ... -->
        </Target>

        <!-- Database Used For Integration Tests -->
        <Target Name="DeployTestDatabase">
          <!-- ... -->
        </Target>

        <!-- Build All Projects -->
        <Target Name="BuildProjects">
          <MSBuild Projects="..\MySolutionFile.sln" Targets="Build"/>
        </Target>

        <!-- Stop NHProf -->
        <Target Name="NHProfStop">
          <Exec Command="$(NHProfExecutable) /Shutdown" />
        </Target>

        <!-- Run Unit/Integration Tests -->
        <Target Name="RunTests">
          <CreateItem Include="..\**\bin\debug\*Tests.dll">
            <Output TaskParameter="Include" ItemName="TestAssemblies" />
          </CreateItem>
          <NUnitTeamCity Assemblies="@(TestAssemblies)" NUnitVersion="NUnit-2.5.3"/>
        </Target>

        <!-- Start NHProf -->
        <Target Name="NHProfStart">
          <Exec Command="$(NHProfExecutable) /CmdLineMode /File:&quot;$(NHProfFile)&quot; /ReportFormat:$(NHProfReportFormat)" />
        </Target>

        <Target Name="CIBuildWithNHProf" DependsOnTargets="BuildProjects;DeployTestDatabase;NHProfStart;RunTests;NHProfStop;DeployDatabase">
        </Target>

    Read the article

  • Accessing different connection strings at runtime in ASP.NET MVC 1

    - by Neil T.
    I'm trying to implement integration testing in my ASP.NET MVC 1.0 solution. The technologies in use are LINQ to SQL, NUnit and WatiN. I recently discovered a pattern that allows me to create a testing version of the database on the fly without modifying the development version. I need this behavior in order to run my WatiN user interface tests, which may modify the database. The plan is to modify the connection string in the Web.config file, and pass that new connection string to the DataContext constructor. This way, I don't have to add routes or modify my URLs in order to perform the integration testing. I've set up the project so that the test setup can modify the connection string to point to the test database when the tests are running. The connection string is stored in Web.config.

    The problem I'm having is that when I try to run the tests, I get a NullReferenceException when trying to access the HttpContext. From everything that I have read so far, the HttpContext is only available within the context of a controller. Here is the code for the property that is supposed to give me the reference to the Web.config file:

        private System.Configuration.Configuration WebConfig
        {
            get
            {
                ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap();
                // NullReferenceException occurs on this line.
                fileMap.ExeConfigFilename = HttpContext.Current.Server.MapPath("~\\web.config");
                System.Configuration.Configuration config =
                    ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);
                return config;
            }
        }

    Is there something that I am missing in order to make this work? Is there a better way to accomplish what I'm trying to achieve?

    UPDATE: I decided to abandon the modification of Web.config in favor of a "request-scoped DataContext" pattern that I found here. From the looks of it, it should give me the results I'm looking for. However, during the TestFixtureSetUp, I try to create a new copy of the database for testing purposes, and it fails silently. When I get to the tests, the repository still uses the production database connection string to load data.
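    Since the NullReferenceException comes from HttpContext.Current being null outside a web request, one workaround is to resolve web.config from an explicit path instead of the HTTP context. This is a sketch under the assumption that the test host knows the web project's physical folder; the sitePath parameter is my own addition:

        // A hedged sketch: open web.config without touching HttpContext,
        // so the same code path works from an NUnit host. Requires a
        // reference to the System.Configuration assembly.
        private static System.Configuration.Configuration OpenWebConfig(string sitePath)
        {
            var fileMap = new ExeConfigurationFileMap();
            fileMap.ExeConfigFilename = System.IO.Path.Combine(sitePath, "web.config");
            return ConfigurationManager.OpenMappedExeConfiguration(
                fileMap, ConfigurationUserLevel.None);
        }

    Inside the web application the path can still come from Server.MapPath; the test setup would pass the checkout directory of the web project instead.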

    Read the article

  • NHibernate Tutorial Run-Time Error: HibernateException

    - by Kashif
    I'm a newbie at NHibernate so please go easy on me if I have asked a stupid question... I am following the tutorial for NHibernate posted here and am getting a run-time error of type "HibernateException" The code in question looks like this: using System; using System.Collections.Generic; using System.Linq; using System.Text; using FirstSolution; using NHibernate.Cfg; using NHibernate.Tool.hbm2ddl; using NUnit.Framework; namespace FirstSolution.Tests { [TestFixture] public class GenerateSchema_Fixture { [Test] public void Can_generate_schema() { var cfg = new Configuration(); cfg.Configure(); cfg.AddAssembly(typeof(Product).Assembly); new SchemaExport(cfg).Execute(false, true, false); } } } The line I am getting the error at is: cfg.AddAssembly(typeof(Product).Assembly); The inner-most exception is: The IDbCommand and IDbConnection implementation in the assembly System.Data.SqlServerCe could not be found And here's my stack trace: at NHibernate.Connection.ConnectionProvider.ConfigureDriver(IDictionary`2 settings) at NHibernate.Connection.ConnectionProvider.Configure(IDictionary`2 settings) at NHibernate.Connection.ConnectionProviderFactory.NewConnectionProvider(IDictionary`2 settings) at NHibernate.Tool.hbm2ddl.SchemaExport.Execute(Action`1 scriptAction, Boolean export, Boolean justDrop) at NHibernate.Tool.hbm2ddl.SchemaExport.Execute(Boolean script, Boolean export, Boolean justDrop) at FirstSolution.Tests.GenerateSchema_Fixture.Can_generate_schema() in C:\Users\Kash\Documents\Visual Studio 2010\Projects\FirstSolution\FirstSolution\GenerateSchema_Fixture.cs:line 23 at HibernateUnitTest.Form1.button1_Click(Object sender, EventArgs e) in C:\Users\Kash\Documents\Visual Studio 2010\Projects\FirstSolution\HibernateUnitTest\Form1.cs:line 23 at System.Windows.Forms.Control.OnClick(EventArgs e) at System.Windows.Forms.Button.OnClick(EventArgs e) at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ButtonBase.WndProc(Message& m) at System.Windows.Forms.Button.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(Form mainForm) at HibernateUnitTest.Program.Main() in C:\Users\Kash\Documents\Visual Studio 2010\Projects\FirstSolution\HibernateUnitTest\Program.cs:line 18 at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean 
ignoreSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() I've made sure that System.Data.SqlServerCe has been referenced and that its Copy Local property is set to True. The error persists, however. Your help would be appreciated. Thanks.
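    One thing worth checking (an assumption drawn from the inner exception, not a verified fix): NHibernate's SQL CE driver loads the System.Data.SqlServerCe assembly by name at runtime, so a copy must sit next to the binaries of whatever process is actually running, which for a test can differ from the project's own output folder. A quick probe:

        // A hedged diagnostic: if this throws, the SQL CE assembly is not
        // loadable from the running process's bin directory - the usual
        // cause of "The IDbCommand and IDbConnection implementation in
        // the assembly System.Data.SqlServerCe could not be found".
        [Test]
        public void SqlServerCe_assembly_is_loadable()
        {
            var ce = System.Reflection.Assembly.Load("System.Data.SqlServerCe");
            Assert.IsNotNull(ce);
        }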

    Read the article

  • How do I mock/fake/replace/stub a base class at unit-test time in C#?

    - by MatthewMartin
    UPDATE: I've changed the wording of the question. Previously it was a yes/no question about whether a base class could be changed at runtime.

    I may be working on mission impossible here, but I seem to be getting close. I want to extend an ASP.NET control, and I want my code to be unit testable. I'd also like to be able to fake the behaviors of a real Label (namely things like ID generation, etc.), which a real Label can't do in an NUnit host. Here is a working example that makes assertions on something that depends on a real base class and something that doesn't – in a more realistic unit test, the test would depend on both, i.e. an ID existing and some custom behavior. Anyhow, the code says it better than I can:

        public class LabelWrapper : Label //Runtime
        //public class LabelWrapper : FakeLabel //Unit Test time
        {
            private readonly LabelLogic logic = new LabelLogic();

            public override string Text
            {
                get { return logic.ProcessGetText(base.Text); }
                set { base.Text = logic.ProcessSetText(value); }
            }
        }

        //Ugh, now I have to test FakeLabelWrapper
        public class FakeLabelWrapper : FakeLabel //Unit Test time
        {
            private readonly LabelLogic logic = new LabelLogic();

            public override string Text
            {
                get { return logic.ProcessGetText(base.Text); }
                set { base.Text = logic.ProcessSetText(value); }
            }
        }

        [TestFixture]
        public class UnitTest
        {
            [Test]
            public void Test()
            {
                //Wish this was LabelWrapper label = new LabelWrapper(new FakeBase())
                LabelWrapper label = new LabelWrapper();
                //FakeLabelWrapper label = new FakeLabelWrapper();
                label.Text = "ToUpper";
                Assert.AreEqual("TOUPPER", label.Text);

                StringWriter stringWriter = new StringWriter();
                HtmlTextWriter writer = new HtmlTextWriter(stringWriter);
                label.RenderControl(writer);
                Assert.AreEqual(1, label.ID);
                Assert.AreEqual("<span>TOUPPER</span>", stringWriter.ToString());
            }
        }

        public class FakeLabel
        {
            virtual public string Text { get; set; }

            public void RenderControl(TextWriter writer)
            {
                writer.Write("<span>" + Text + "</span>");
            }
        }

        //System Under Test
        internal class LabelLogic
        {
            internal string ProcessGetText(string value)
            {
                return value.ToUpper();
            }

            internal string ProcessSetText(string value)
            {
                return value.ToUpper();
            }
        }
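    One pattern that fits this situation is "extract and override": route every touch of the real base class through a virtual member, then override that member in a test-only subclass, so the parallel FakeLabel hierarchy is not needed. This is a sketch, not a drop-in answer – the BaseText member is my own addition, and it only covers base-state access, not the full control lifecycle (ID generation would need the same treatment):

        // A hedged sketch of extract-and-override: all base-class access
        // goes through one virtual property that a test subclass replaces.
        public class LabelWrapper : Label
        {
            private readonly LabelLogic logic = new LabelLogic();

            protected virtual string BaseText
            {
                get { return base.Text; }
                set { base.Text = value; }
            }

            public override string Text
            {
                get { return logic.ProcessGetText(BaseText); }
                set { BaseText = logic.ProcessSetText(value); }
            }
        }

        // Lives in the test project: the real Label state is never touched.
        public class TestableLabelWrapper : LabelWrapper
        {
            private string stored = string.Empty;

            protected override string BaseText
            {
                get { return stored; }
                set { stored = value; }
            }
        }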

    Read the article

  • configuring uppercut for automated build

    - by deepasun
    This is my CC.NET config file (using the configuration preprocessor described at http://confluence.public.thoughtworks.org/display/CCNET/Configuration+Preprocessor):

        <!-- PROJECT STRUCTURE -->
        <cb:define name="WindowsFormsApplication1">
          <project name="$(projectName)">
            <workingDirectory>$(working_directory)\$(projectName)</workingDirectory>
            <artifactDirectory>$(drop_directory)\$(projectName)</artifactDirectory>
            <category>$(projectName)</category>
            <queuePriority>$(queuePriority)</queuePriority>
            <triggers>
              <intervalTrigger name="continuous" seconds="60" buildCondition="IfModificationExists" />
            </triggers>
            <sourcecontrol type="svn">
              <executable>c:\program files\subversion\bin\svn.exe</executable>
              <!--<trunkUrl>http://192.168.1.8/trainingrepos/deepasundari/WindowsFormsApplication1</trunkUrl>-->
              <trunkUrl>$(svnPath)</trunkUrl>
              <workingDirectory>$(working_directory)\$(projectName)</workingDirectory>
            </sourcecontrol>
            <tasks>
              <exec>
                <executable>$(working_directory)\$(projectName)\build.bat</executable>
              </exec>
            </tasks>
            <publishers>
              <merge>
                <files>
                  <file>$(working_directory)\$(projectName)\build_output\build_artifacts\*.xml</file>
                  <file>$(working_directory)\$(projectName)\build_output\build_artifacts\mbunit\*-results.xml</file>
                  <file>$(working_directory)\$(projectName)\build_output\build_artifacts\nunit\*-results.xml</file>
                  <file>$(working_directory)\$(projectName)\build_output\build_artifacts\ncover\*-results.xml</file>
                  <file>$(working_directory)\$(projectName)\build_output\build_artifacts\ndepend\*.xml</file>
                </files>
              </merge>
              <!--<email from="[email protected]" mailhost="smtp.somewhere.com" includeDetails="TRUE">
                <users>
                  <user name="YOUR NAME" group="BuildNotice" address="[email protected]" />
                </users>
                <groups>
                  <group name="BuildNotice" notification="change" />
                </groups>
              </email>-->
              <xmllogger/>
              <statistics>
                <statisticList>
                  <firstMatch name="Svn Revision" xpath="//modifications/modification/changeNumber" />
                  <firstMatch name="ILInstructions" xpath="//ApplicationMetrics/@NILInstruction" />
                  <firstMatch name="LinesOfCode" xpath="//ApplicationMetrics/@NbLinesOfCode" />
                  <firstMatch name="LinesOfComment" xpath="//ApplicationMetrics/@NbLinesOfComment" />
                </statisticList>
              </statistics>
              <modificationHistory onlyLogWhenChangesFound="true" />
              <rss/>
            </publishers>
          </project>
        </cb:define>

        <cb:WindowsFormsApplication1 projectname="WindowsFormsApplication1" queuepriority="80" svnpath="http://192.168.1.8/trainingrepos/deepasundari/WindowsFormsApplication1" />

    It is not producing the build directory in code_drop, but it is updating reports.xml with the new build. What is the problem?

    Read the article

< Previous Page | 20 21 22 23 24 25 26  | Next Page >