Search Results

Search found 1135 results on 46 pages for 'ioc di'.


  • PHP OOP: Avoid Singleton/Static Methods in Domain Model Pattern

    - by sunwukung
    I understand the importance of Dependency Injection and its role in unit testing, which is why the following issue is giving me pause. One area where I struggle not to use the Singleton is the Identity Map/Unit of Work pattern (which keeps tabs on Domain Object state).

        // Not actual code, but it should demonstrate the point
        class Monitor { // singleton construction omitted for brevity
            static $members = array(); // keeps record of all objects
            static $dirty = array();   // keeps record of all modified objects
            static $clean = array();   // keeps record of all clean objects
        }

        class Mapper { // queries database, maps values to object fields
            public function find($id) {
                if (isset(Monitor::$members[$id])) {
                    return Monitor::$members[$id];
                }
                $values = $this->selectStmt($id);
                // field mapping process omitted for brevity
                $Object = new Object($values);
                Monitor::$members[$id] = $Object;
                return $Object;
            }
        }

        $User = $UserMapper->find(1);   // domain object is registered in Id Map
        $User->changePropertyX();       // object is marked "dirty" in UoW

        // at this point, I can save by passing the Domain Object back to the Mapper
        $UserMapper->save($User);       // object is marked clean in UoW

        // but a nicer API would be something like this
        $User->save();

        // but if I want to do this - it has to make a call to the mapper/db somehow
        $User->getBlogPosts();

        // or else I have to generate specific collection/object graphing methods in the mapper
        $UserPosts = $UserMapper->getBlogPosts();
        $User->setPosts($UserPosts);

    Any advice on how you might handle this situation? I would be loath to pass/generate instances of the mapper/database access into the Domain Object itself to satisfy DI - at the same time, avoiding that results in lots of calls within the Domain Object to external static methods. Although I guess if I want "save" to be part of its behaviour, then a facility to do so is required in its construction. Perhaps it's a problem with responsibility: the Domain Object shouldn't be burdened with saving. It's just quite a neat feature of the Active Record pattern - it would be nice to implement it in some way.

    Read the article

  • Unit testing an MVC action method with a Cache dependency?

    - by Steve
    I'm relatively new to testing and MVC and came across a sticking point today. I'm attempting to test an action method that has a dependency on HttpContext.Current.Cache and wanted to know the best practice for achieving the "low coupling" to allow for easy testing. Here's what I've got so far...

        public class CacheHandler : ICacheHandler
        {
            public IList<Section3ListItem> StateList
            {
                get { return (List<Section3ListItem>)HttpContext.Current.Cache["StateList"]; }
                set { HttpContext.Current.Cache["StateList"] = value; }
            }
            ...

    I then access it like such... I'm using Castle for my IoC.

        public class ProfileController : ControllerBase
        {
            private readonly ISection3Repository _repository;
            private readonly ICacheHandler _cache;

            public ProfileController(ISection3Repository repository, ICacheHandler cacheHandler)
            {
                _repository = repository;
                _cache = cacheHandler;
            }

            [UserIdFilter]
            public ActionResult PersonalInfo(Guid userId)
            {
                if (_cache.StateList == null)
                    _cache.StateList = _repository.GetLookupValues((int)ELookupKey.States).ToList();
                ...

    Then in my unit tests I am able to mock up ICacheHandler. Would this be considered a 'best practice', and does anyone have any suggestions for other approaches? Thanks in advance. Cheers
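
    To make the testing side concrete, here is a minimal sketch (not from the original post) of a hand-rolled fake ICacheHandler plus an NUnit-style test; the ISection3Repository signature and the stub below are assumptions based on the snippet above.

        // A fake that keeps the "cache" in memory so the action can run without HttpContext.Current.
        public class FakeCacheHandler : ICacheHandler
        {
            public IList<Section3ListItem> StateList { get; set; }
        }

        // Hypothetical stub - the exact ISection3Repository signature is assumed here.
        public class StubSection3Repository : ISection3Repository
        {
            public IEnumerable<Section3ListItem> GetLookupValues(int lookupKey)
            {
                return new[] { new Section3ListItem() };
            }
        }

        [Test]
        public void PersonalInfo_fills_the_state_list_when_the_cache_is_empty()
        {
            var cache = new FakeCacheHandler();
            var controller = new ProfileController(new StubSection3Repository(), cache);

            controller.PersonalInfo(Guid.NewGuid());

            Assert.IsNotNull(cache.StateList);
        }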

    Read the article

  • Dependency injection in constructors

    - by andre
    Hello everyone. I'm starting a new project and setting up the base to work on. A few questions have arisen and I'll probably be asking quite a few in here; hopefully I'll find some answers.

    The first step is to handle dependencies for objects. I've decided to go with the dependency injection design pattern, to which I'm somewhat new, to handle all of this for the application. When actually coding it I came across a problem: if a class has multiple dependencies and you want to pass multiple dependencies via the constructor (so that they cannot be changed after you instantiate the object), how do you do it without passing an array of dependencies, or using call_user_func_array(), eval() or Reflection? This is what I'm looking for:

        <?php
        class DI
        {
            public function getClass($classname)
            {
                if (!$this->pool[$classname]) {
                    # Load dependencies
                    $deps = $this->loadDependencies($classname);

                    # Here is where the magic should happen
                    $instance = new $classname($dep1, $dep2, $dep3);

                    # Add to pool
                    $this->pool[$classname] = $instance;

                    return $instance;
                } else {
                    return $this->pool[$classname];
                }
            }
        }

    Again, I would like to avoid the most costly methods of calling the class. Any other suggestions?

    Read the article

  • anti-if campaign

    - by Andrew Siemer
    I recently ran across a very interesting site that expresses a very interesting idea - the anti-if campaign. You can see it at www.antiifcampaign.com. I have to agree that complex nested IF statements are an absolute pain in the rear. I am currently on a project that, up until very recently, had some crazy nested IFs that scrolled to the right for quite a way. We cured our issues in two ways: we used Windows Workflow Foundation to address routing (or workflow) concerns, and we are in the process of implementing all of our business rules using ILOG Rules for .NET (recently purchased by IBM!!). This has, for the most part, cured our nested-IF pains... but I find myself wondering how many people cure their pains in the manner that the good folks at the AntiIfCampaign suggest (see an example here), by creating numerous abstract classes to represent a given scenario that was originally covered by the nested IF. I wonder if another way to address the removal of this complexity might also be to use an IoC container such as StructureMap to move in and out of different bits of functionality. Either way...

    Question: given a scenario where I have a nested complex IF or SWITCH statement that is used to evaluate a given type of thing (say evaluating an enum) to determine how I want to handle the processing of that thing by enum type - what are some ways to do the same form of processing without using the IF or SWITCH hierarchical structure?

        public enum WidgetTypes
        {
            Type1,
            Type2,
            Type3,
            Type4
        }
        ...
        WidgetTypes _myType = WidgetTypes.Type1;
        ...
        switch (_myType)
        {
            case WidgetTypes.Type1:
                //do something
                break;
            case WidgetTypes.Type2:
                //do something
                break;
            //etc...
        }
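
    One container-friendly way to flatten such a switch - offered here as an illustrative sketch, not as the approach the author settled on, with the handler names invented - is to map each enum value to a handler and look the handler up instead of branching:

        using System.Collections.Generic;
        using System.Linq;

        public interface IWidgetHandler
        {
            WidgetTypes Handles { get; }
            void Process();
        }

        public class Type1Handler : IWidgetHandler
        {
            public WidgetTypes Handles { get { return WidgetTypes.Type1; } }
            public void Process() { /* do something */ }
        }

        // The dictionary replaces the switch; an IoC container such as StructureMap
        // could supply the IEnumerable<IWidgetHandler> by registering every implementation.
        public class WidgetProcessor
        {
            private readonly IDictionary<WidgetTypes, IWidgetHandler> _handlers;

            public WidgetProcessor(IEnumerable<IWidgetHandler> handlers)
            {
                _handlers = handlers.ToDictionary(h => h.Handles);
            }

            public void Process(WidgetTypes type)
            {
                _handlers[type].Process();   // no if/switch needed
            }
        }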

    Read the article

  • What is the difference between Inversion of Control and Dependency injection in C++?

    - by rlbond
    I've been reading recently about DI and IoC in C++. I am a little confused (even after reading related questions here on SO) and was hoping for some clarification. It seems to me that being familiar with the STL and Boost leads to use of dependency injection quite a bit. For example, let's say I made a function that found the mean of a range of numbers:

        template <typename Iter>
        double mean(Iter first, Iter last)
        {
            double sum = 0;
            size_t number = 0;
            while (first != last)
            {
                sum += *(first++);
                ++number;
            }
            return sum / number;
        }

    Is this dependency injection? Inversion of control? Neither? Let's look at another example. We have a class:

        class Dice
        {
        public:
            typedef boost::mt19937 Engine;

            Dice(int num_dice, Engine& rng) : n_(num_dice), eng_(rng) {}

            int roll()
            {
                int sum = 0;
                for (int i = 0; i < n_; ++i)
                    sum += boost::uniform_int<>(1, 6)(eng_);
                return sum;
            }

        private:
            Engine& eng_;
            int n_;
        };

    This seems like dependency injection. But is it inversion of control? Also, if I'm missing something, can someone help me out?

    Read the article

  • C#; On casting to the SAME class that came from another assembly

    - by G. Stoynev
    For complete separation/decoupling, I've implemented a DAL in an assembly that is simply being copied over via a post-build event to the website BIN folder. The website then, on Application Start, loads that assembly via System.Reflection.Assembly.LoadFile. Again using reflection, I construct a couple of instances from classes in that assembly. I then store a reference to these instances in the session (HttpContext.Current.Items).

    Later, when I try to get the objects stored in the session, I am not able to cast them to their own types (I was trying interfaces initially, but for debugging tried to cast to THEIR OWN TYPES), getting this error:

        [A]DAL_QSYSCamper.NHibernateSessionBuilder cannot be cast to [B]DAL_QSYSCamper.NHibernateSessionBuilder.
        Type A originates from 'DAL_QSYSCamper, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' in the context 'Default' at location 'C:\Users\dull.anomal\AppData\Local\Temp\Temporary ASP.NET Files\root\ad6e8bff\70fa2384\assembly\dl3\aaf7a5b0\84f01b09_b10acb01\DAL_QSYSCamper.DLL'.
        Type B originates from 'DAL_QSYSCamper, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' in the context 'LoadNeither' at location 'C:\Users\dull.anomal\Documents\Projects\QSYS\Deleteme\UI\MVCClient\bin\DAL_QSYSCamper.DLL'.

    This is happening while debugging in VS - VS manages to step into the source DAL project even though I've loaded the assembly from file and the project is not referenced by the website project (they're both in the solution). I do understand the error, but I don't understand how and why the assembly is being used/loaded from two locations - I only load it once from the file and there's no reference to the project.

    I should mention that I also use Windsor for DI. The object that tries to extract the object from the session is A) from a class from that DAL assembly; B) injected into a website class by Windsor. I will work on adding some sample code to this question, but wanted to put it out in case it's obvious what I'm doing wrong.
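
    For illustration only (not code from the question): the 'Default' vs 'LoadNeither' contexts in that message are what you get when one copy of the DLL is resolved from the bin folder by normal probing and another copy is loaded with Assembly.LoadFile, which loads outside any binding context. A common workaround is to load with Assembly.LoadFrom instead, or to hand the already-loaded copy back from an AssemblyResolve handler - a sketch, assuming it runs in Application_Start and with the file path invented:

        // Prefer LoadFrom (which participates in binding) over LoadFile, or make the runtime
        // reuse the copy you loaded whenever it resolves the same assembly name again.
        var dalAssembly = System.Reflection.Assembly.LoadFrom(
            Server.MapPath("~/bin/DAL_QSYSCamper.dll"));   // path assumed for illustration

        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
            args.Name.StartsWith("DAL_QSYSCamper,", StringComparison.OrdinalIgnoreCase)
                ? dalAssembly
                : null;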

    Read the article

  • Stuck trying to get Log4Net to work with Dependency Injection

    - by Pure.Krome
    I've got a simple winform test app I'm using to try some Log4Net dependency injection stuff. I've made a simple interface in my Services project:

        public interface ILogging
        {
            void Debug(string message);
            // snip the others
        }

    Then my concrete type will be using Log4Net...

        public class Log4NetLogging : ILogging
        {
            private static ILog Log4Net
            {
                get { return LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType); }
            }

            public void Debug(string message)
            {
                if (Log4Net.IsDebugEnabled)
                {
                    Log4Net.Debug(message);
                }
            }
        }

    So far so good. Nothing too hard there. Now, in a different project (and therefore namespace), I try and use this...

        public partial class Form1 : Form
        {
            public Form1()
            {
                FileInfo fileInfo = new FileInfo("Log4Net.config");
                log4net.Config.XmlConfigurator.Configure(fileInfo);
            }

            private void Foo()
            {
                // This would be handled with DI, but I've not set it up
                // (on the constructor, in this code example).
                ILogging logging = new Log4NetLogging();
                logging.Debug("Test message");
            }
        }

    Ok... also pretty simple. I've hardcoded the ILogging instance, but that is usually dependency injected via the constructor. Anyway, when I check this line of code...

        return LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

    ...the DeclaringType value is of the Service namespace, not the type of the Form (i.e. X.Y.Z.Form1) which actually called the method. Without passing the type INTO the method as another argument, is there any way using reflection to figure out the real method that called it?
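
    One common technique (a sketch, not an answer pulled from the original thread) is to walk the call stack at log time and use the first frame whose declaring type is outside the logging class; note that this costs a stack capture on every call.

        using System;
        using System.Diagnostics;
        using log4net;

        public class Log4NetLogging : ILogging
        {
            public void Debug(string message)
            {
                // Walk up the stack until we leave this logging class;
                // that frame's declaring type is the real caller (e.g. X.Y.Z.Form1).
                Type callerType = typeof(Log4NetLogging);
                var frames = new StackTrace().GetFrames();
                if (frames != null)
                {
                    foreach (var frame in frames)
                    {
                        var declaringType = frame.GetMethod().DeclaringType;
                        if (declaringType != null && declaringType != typeof(Log4NetLogging))
                        {
                            callerType = declaringType;
                            break;
                        }
                    }
                }

                ILog log = LogManager.GetLogger(callerType);
                if (log.IsDebugEnabled)
                {
                    log.Debug(message);
                }
            }
        }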

    Read the article

  • Single website multiple connection strings using asp mvc 2 and nhibernate

    - by jjjjj
    Hi. In my website I use ASP.NET MVC 2 + Fluent NHibernate as the ORM and StructureMap as the IoC container. There are several databases with identical metadata (so the entities and mappings are the same). On the LogOn page the user fills in login, password and remember-me, and chooses his server from a dropdownlist (in fact he chooses a database). Web.config contains all connection strings, and we can assume they won't be changed at run time. I suppose that it is required to have one session factory per database.

    Before using multiple databases, I loaded classes into my StructureMap ObjectFactory in Application_Start:

        ObjectFactory.Initialize(init => init.AddRegistry<ObjectRegistry>());
        ObjectFactory.Configure(conf => conf.AddRegistry<NhibernateRegistry>());

    NhibernateRegistry class:

        public class NhibernateRegistry : Registry
        {
            public NhibernateRegistry()
            {
                var sessionFactory = NhibernateConfiguration.Configuration.BuildSessionFactory();

                For<Configuration>().Singleton().Use(NhibernateConfiguration.Configuration);
                For<ISessionFactory>().Singleton().Use(sessionFactory);
                For<ISession>().HybridHttpOrThreadLocalScoped().Use(
                    ctx => ctx.GetInstance<ISessionFactory>().GetCurrentSession());
            }
        }

    In Application_BeginRequest I bind the opened NHibernate session to the ASP.NET session (NHibernate session per request), and in EndRequest I unbind them:

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            CurrentSessionContext.Bind(ObjectFactory.GetInstance<ISessionFactory>().OpenSession());
        }

    Q1: How can I work out which session factory I should use according to the authenticated user? Is it something like UserData filled with the database name (I use simple FormsAuthentication)?

    For logging I use log4net, namely AdoNetAppender, which contains a connectionString (in XML, of course).

    Q2: How can I manage multiple connection strings for this database appender, so logs would be written to the current database? I have no idea how to do that except changing the XML all the time and resetting the XML configuration, but that's a really bad solution.
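
    For Q1, one possible direction (a sketch under assumptions, not an answer from the thread): build one ISessionFactory per connection string at startup, keep them in a dictionary keyed by the database name, store that name in the FormsAuthentication ticket's UserData at LogOn, and pick the factory per request. The DatabaseCatalog name and the configuration helper are invented for illustration.

        public static class DatabaseCatalog
        {
            private static readonly IDictionary<string, ISessionFactory> Factories =
                new Dictionary<string, ISessionFactory>();

            // Called once per connection string in Application_Start.
            public static void Register(string databaseName, string connectionString)
            {
                Factories[databaseName] = NhibernateConfiguration
                    .BuildConfigurationFor(connectionString)   // hypothetical helper building a per-database Configuration
                    .BuildSessionFactory();
            }

            // The chosen database name was stored in the ticket's UserData at LogOn.
            public static ISessionFactory For(HttpContextBase context)
            {
                var identity = (FormsIdentity)context.User.Identity;
                return Factories[identity.Ticket.UserData];
            }
        }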

    Read the article

  • C# Multiple constraints

    - by John
    I have an application with lots of generics and IoC. I have an interface like this:

        public interface IRepository<TType, TKeyType> : IRepo

    Then I have a bunch of tests for my different implementations of IRepository. Many of the objects have dependencies on other objects, so for the purpose of testing I want to just grab one that is valid. I can define a separate method for each of them:

        public static EmailType GetEmailType()
        {
            return ContainerManager.Container.Resolve<IEmailTypeRepository>().GetList().FirstOrDefault();
        }

    But I want to make this generic so it can be used to get any object from the repository it works with. I defined this:

        public static R GetItem<T, R>() where T : IRepository<R, int>
        {
            return ContainerManager.Container.Resolve<T>().GetList().FirstOrDefault();
        }

    This works fine for the implementations that use an integer for the key. But I also have repositories that use string. So, I do this now:

        public static R GetItem<T, R, W>() where T : IRepository<R, W>

    This works fine. But I'd like to restrict 'W' to either int or string. Is there a way to do that? The shortest question is: can I constrain a generic parameter to one of multiple types?

    Read the article

  • new Stateful session bean instance without calling lookup

    - by kislo_metal
    Scenario: I have a @Singleton UserFactory (it could also be @Stateless); its createSession() method produces a @Stateful UserSession bean by manual lookup. If I inject it by DI with @EJB, I will get the same instance during calls to the fromFactory() method (as it should be). What I want is to get a new instance of UserSession without performing a lookup.

    Q1: How could I obtain a new instance of a @Stateful session bean?

    Code:

        @Singleton
        @Startup
        @LocalBean
        public class UserFactory {

            @EJB
            private UserSession session;

            public UserFactory() {
            }

            @Schedule(second = "*/1", minute = "*", hour = "*")
            public void creatingInstances() {
                try {
                    InitialContext ctx = new InitialContext();
                    UserSession session2 = (UserSession) ctx.lookup("java:global/inferno/lic/UserSession");
                    System.out.println("in singleton UUID " + session2.getSessionUUID());
                } catch (NamingException e) {
                    e.printStackTrace();
                }
            }

            @Schedule(second = "*/1", minute = "*", hour = "*")
            public void fromFactory() {
                System.out.println("in singleton UUID " + session.getSessionUUID());
            }

            public UserSession creatSession() {
                UserSession session2 = null;
                try {
                    InitialContext ctx = new InitialContext();
                    session2 = (UserSession) ctx.lookup("java:global/inferno/lic/UserSession");
                    System.out.println("in singleton UUID " + session2.getSessionUUID());
                } catch (NamingException e) {
                    e.printStackTrace();
                }
                return session2;
            }
        }

    As I understand it, calling session.getClass().newInstance() is not the best idea.

    Q2: Is that true?

    I am using GlassFish v3, EJB 3.1.

    Read the article

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that I was having problems with my project as a result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder - yes, silly me. After a while, I managed to get all the .jar files where they belong, and it comes to a whopping 12.5MB at that, and there are more than 30 of them! Which concerns me, but it probably and hopefully shouldn't. How does Java operate in terms of these JAR files? They do take up quite a bit of hard drive space, taking into account that it's compressed and compiled source code. So that can really quickly populate a lot of RAM, and in an instant.

    My questions are:

    • Does Java load an entire .jar file into memory when, say, a class in that .jar is instantiated? What about stuff that's in the .jar that never gets used?
    • Do .jars get cached somehow, for optimized application performance?
    • When a single .jar is loaded, I understand that the thing sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance running), unlike PHP where objects are created on the fly with each request - is this assumption correct?
    • When using Spring, I'm thinking: I had to include all those fiddly .jars, wouldn't I just be better off using native Java, with at least an ORM solution like Hibernate? So far, Spring just took extra time configuring, extra hard drive space, extra memory and CPU consumption, so I'm concerned that the framework is going to cost too much application performance just to get, for example, IoC implemented with my BlazeDS server. There still has to come an ORM, a unit testing framework and bits and pieces here and there.

    It's just so easy to bloat up a project quickly and irresponsibly easily. Where do I draw the line?

    Read the article

  • StructureMap Configuration Per Thread/Request for the Full Dependency Chain

    - by Phil Sandler
    I've been using StructureMap for a few months now, and it has worked great on some smaller (greenfield) projects. Most of the configurations I have had to set up involve a single implementation per interface. In the cases where I needed to choose a specific implementation at runtime, I created a factory class that used ObjectFactory.GetNamedInstance<T>(). In the smaller projects, there were few enough of these cases that I was comfortable with the references to ObjectFactory. My understanding is that you want to limit these references as much as possible, and ideally only reference the ObjectFactory once.

    I am working to refactor a larger codebase to use IoC/StructureMap, and am finding that I may need many of these factory classes with ObjectFactory references to get what I need. Essentially, I am creating a "root service" with the ObjectFactory, so that everything in the dependency chain is managed by the container. The root service is created by name (i.e. "BuildCar", "BuildTruck"), and the services needed deeper in the dependency chain could also be constructed using the same name - so the "IAttachWheels" service could vary based on whether a car or truck is being built. Since the class that depends on IAttachWheels is the same in both configurations, I don't think I can use ConstructedBy in the registry to choose the implementation. Also, to be clear, the IAttachWheels implementations need to be managed by the container as well, because the dependency chain runs fairly deep.

    I looked briefly at Profiles as an option, but read (here on Stack Overflow) that changing profiles essentially changes implementations for all threads. Is there a feature similar to profiles that is thread/request specific? Is the factory class that references ObjectFactory the right way to go? Any thoughts would be appreciated.
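
    To make the "root service by name" idea concrete, here is a rough sketch (type and key names invented, using the GetNamedInstance call mentioned above) of keeping the ObjectFactory reference behind a single factory and passing the chosen key in per request, rather than switching a thread-wide profile:

        // The one place in the codebase that talks to ObjectFactory.
        public interface IRootServiceFactory
        {
            IBuildVehicle CreateRootService(string name);   // e.g. "BuildCar", "BuildTruck"
        }

        public class StructureMapRootServiceFactory : IRootServiceFactory
        {
            public IBuildVehicle CreateRootService(string name)
            {
                // Each named instance is registered with its own dependency chain,
                // so the container still builds everything below the root.
                return ObjectFactory.GetNamedInstance<IBuildVehicle>(name);
            }
        }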

    Read the article

  • Ninject: How do I inject into a class library ?

    - by DennyDotNet
    To start, I'm using Ninject 1.5. I have two projects: a web project and a class library. My DI configuration is within the web project. Within my class library I have the following defined:

        public interface ICacheService<T>
        {
            string Identifier { get; }
            T Get();
            void Set(T objectToCache, TimeSpan timeSpan);
            bool Exists();
        }

    And then a concrete class called CategoryCacheService. In my web project I bind the two:

        Bind(typeof(ICacheService<List<Category>>)).To(typeof(CategoryCacheService)).Using<SingletonBehavior>();

    In my class library I have extension methods for the HtmlHelper class, for example:

        public static class Category
        {
            [Inject]
            public static ICacheService<List<Category>> Categories { get; set; }

            public static string RenderCategories(this HtmlHelper htmlHelper)
            {
                var c = Categories.Get();
                return string.Join(", ", c.Select(s => s.Name).ToArray());
            }
        }

    I've been told that you cannot inject into static properties, and that instead I should use Kernel.Get<T>() - however, since the code above is in a class library I don't have access to the Kernel. How can I get the Kernel from this point, or is there a better way of doing this? Thanks!
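
    One workaround (sketched here under assumptions, not taken from the thread) is to expose a tiny resolver hook in the class library and have the web project, which owns the kernel, plug it in at startup; the extension method then asks the hook instead of a Kernel it cannot see. The ServiceResolver name is invented, and the Ninject wiring line at the end is only indicative.

        // In the class library - no reference to Ninject at all.
        public static class ServiceResolver
        {
            // The web project assigns this once at application start.
            public static Func<Type, object> Resolve { get; set; }

            public static T Get<T>()
            {
                return (T)Resolve(typeof(T));
            }
        }

        public static class CategoryHtmlExtensions
        {
            public static string RenderCategories(this HtmlHelper htmlHelper)
            {
                var cache = ServiceResolver.Get<ICacheService<List<Category>>>();
                return string.Join(", ", cache.Get().Select(s => s.Name).ToArray());
            }
        }

        // In the web project, after the kernel is created (exact Ninject 1.5 call assumed):
        // ServiceResolver.Resolve = type => kernel.Get(type);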

    Read the article

  • Adding a column to a model at runtime (without additional tables) in rails

    - by Marek
    I'm trying to give admins of my web application the ability to add some new fields to a model. The model is called Artwork and I would like to add, for instance, a test_column column at runtime. I'm just testing, so I added a simple link to do it; it will of course be parametric. I managed to do it through migrations:

        def test_migration_create
          Artwork.add_column :test_column, :integer
          flash[:notice] = "Added Column test_column to artworks"
          redirect_to :action => 'index'
        end

        def test_migration_delete
          Artwork.remove_column :test_column
          flash[:notice] = "Removed column test_column from artworks"
          redirect_to :action => 'index'
        end

    It works: the column gets added/removed to/from the database without issues. I'm using active_scaffold at the moment, so I get the test_column field in the form without adding anything. When I submit a create or an update, however, test_column does not get updated and stays empty. Inspecting the parameters, I can see:

        Parameters: {"commit"=>"Update", "authenticity_token"=>"37Bo5pT2jeoXtyY1HgkEdIhglhz8iQL0i3XAx7vu9H4=", "id"=>"62", "record"=>{"number"=>"test_artwork", "author"=>"", "title"=>"Opera di Test", "test_column"=>"TEEST", "year"=>"", "description"=>""}}

    The test_column parameter is passed correctly. So why does ActiveRecord keep ignoring it? I tried restarting the server too, without success. I'm using ruby 1.8.7, rails 2.3.5, and mongrel with an sqlite3 database. Thanks

    Read the article

  • Best practices for creating a logger library using log4net. Is

    - by VolleyBall Player
    My goal is to create a log4net library that can be shared across multiple projects. In my solution, which is in .NET 4.0, I created a class library called Logger and referenced it from the web project. I then created a logger.config in the class library, put all the configuration in that file, and used:

        [assembly: log4net.Config.XmlConfigurator(Watch = true, ConfigFile = "Logger.config")]

    When I run the web app, nothing gets logged. So I added this line to web.config, which gave me debugging info and error information:

        <add key="log4net.Internal.Debug" value="true"/>

    The error was: "Failed to find configuration section 'log4net' in the application's .config file. Check your .config file for the <log4net> and <configSections> elements."

    I moved the configuration from logger.config to web.config and everything seems to work fine. But I don't want the log4net configuration in web.config - having it in logger.config is the cleaner approach. The goal is to let other projects use this library and not have to worry about configuration in every project.

    Now the question is: how do I do this? What am I doing wrong? Any suggestion with a code example will be beneficial to everyone. FYI, I am using the StructureMap IoC container to resolve the logger before logging to it.
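
    One way to keep the configuration inside the library (a sketch of a common approach, not the accepted answer from the thread): ship Logger.config with the class library (Copy to Output Directory) and have the library configure log4net itself the first time a logger is requested, so consuming projects need no log4net section at all. The SharedLogger wrapper and the file location are assumptions.

        using System;
        using System.IO;
        using log4net;
        using log4net.Config;

        public static class SharedLogger
        {
            private static readonly object Sync = new object();
            private static bool _configured;

            public static ILog For(Type type)
            {
                lock (Sync)
                {
                    if (!_configured)
                    {
                        // In a web app the file may land in bin rather than the base directory - adjust as needed.
                        var configFile = new FileInfo(
                            Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Logger.config"));
                        XmlConfigurator.ConfigureAndWatch(configFile);
                        _configured = true;
                    }
                }
                return LogManager.GetLogger(type);
            }
        }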

    Read the article

  • Domain model for an online WYSIWYG webpage generator / runtime

    - by CharlieBrown
    Hi all, I'm using C#, MVC, NHibernate and StructureMap as my IoC container, and need some ideas regarding my domain model. The application I'm working on has two parts: an Authoring part and a Runtime part. The idea is to allow the user to create a webpage in Authoring (mostly a form, actually) by choosing from a set of predefined controls. That webpage will later be used as a form in a call center environment (the Runtime part), or may be used in an intranet portal, etc. Basically something similar to what a CMS would do. The difference is, of course, that the webpage/form the author generates will be used and filled in at runtime, and that authors should be able to freely create the webpage they want without limitations.

    I have a draft working model that allows a RunController to iterate over the ScriptPage (my class for the "generated webpage") Controls collection and use partial views to render each of them. It works kind of fine. Basically I have a common ScriptControl class, and then I can create, for example, a TextInputControl or a DropDownControl by inheriting from that base class. I can also figure out the Authoring part of the app, although that will surely be fun in itself. :)

    The biggest problem I have now is persistence. In order to be flexible, I want to be able to add more controls, and template controls (think of an Address composite control), in separate DLLs, so I think a relational model that handles every possible control is not the way to go. My current thinking is using a kind of ObjectStore: binary-serializing the ScriptPage object that contains the List collection and deserializing at Runtime, but I'm not sure how well it will work with NHibernate and how good the performance will be. Serializing a small "page" with 10 controls results in 7964 bytes, for example.

    Any ideas out there? Thanks in advance, excuse the length. ;)

    Read the article

  • DCI: How to implement Context with Dependency Injection?

    - by ciscoheat
    Most examples of a DCI Context are implemented as a Command pattern. When using Dependency Injection though, it's useful to have the dependencies injected in the constructor and send the parameters into the executing method. Compare the Command pattern class:

        public class SomeContext
        {
            private readonly SomeRole _someRole;
            private readonly IRepository<User> _userRepository;

            // Everything goes into the constructor for a true encapsulated command.
            public SomeContext(SomeRole someRole, IRepository<User> userRepository)
            {
                _someRole = someRole;
                _userRepository = userRepository;
            }

            public void Execute()
            {
                _someRole.DoStuff(_userRepository);
            }
        }

    With the dependency-injected class:

        public class SomeContext
        {
            private readonly IRepository<User> _userRepository;

            // Only what can be injected using the DI provider.
            public SomeContext(IRepository<User> userRepository)
            {
                _userRepository = userRepository;
            }

            // Parameters from the executing method
            public void Execute(SomeRole someRole)
            {
                someRole.DoStuff(_userRepository);
            }
        }

    The last one seems a bit nicer, but I've never seen it implemented like this, so I'm curious whether there are any things to consider.

    Read the article

  • Patterns and Libraries for working with raw UI values.

    - by ProfK
    By raw values, I mean the application-level values provided by UI controls, such as the Text property on a TextBox. Too often I find myself writing code to check and parse such values before they get used as a business-level value, e.g. PaymentTermsNumDays. I've mitigated a lot of the spade work with rough-and-ready extension methods like String.ToNullableInt, but we all know that just isn't right. We can't put the whole world on String's shoulders.

    Do I look at tasking my UI to provide business values, using a ruleset pushed out from the server app, or do I open my business objects up a bit to do the required sanitising etc. as they require it? Neither of these approaches sits quite right with me; the first seems closer to ideal, but quite a bit of work, while the latter doesn't show much respect for the business objects' single responsibility. The responsibilities of the UI are a closer match. Between these extremes, I could also just implement another DTO layer, an IoC container with sanitising and parsing services, derive enhanced UI controls, or stick to copy-and-paste inline drudgery.
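
    For reference, a rough-and-ready extension method of the kind mentioned above might look like this (a sketch of the idea, not the poster's actual implementation):

        public static class RawValueExtensions
        {
            // Parses raw UI text into a business-friendly value without throwing on bad input.
            public static int? ToNullableInt(this string rawValue)
            {
                int parsed;
                return int.TryParse(rawValue, out parsed) ? parsed : (int?)null;
            }
        }

        // Usage: int? paymentTermsNumDays = paymentTermsTextBox.Text.ToNullableInt();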

    Read the article

  • Integration testing - can it be done right?

    - by Max
    I used TDD as a development style on some projects in the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?

    What I am currently doing is writing a testcase per class (this is my rule of thumb: a "unit" is a class, and each class has one or more testcases). I try to resolve dependencies by using mocks and stubs, and this works really well as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And here I am stuck: how do I test whether the wiring was successful and the objects interact the way I want?

    An example: think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile. To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.

    What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then look in the filesystem to see if the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book I recently read about unit testing, it seemed to me that integration testing was more of an anti-pattern?
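
    To illustrate the wiring question, an integration test along these lines could resolve the composed object graph from the container and assert only on the observable outcome; the bootstrap method, controller API and StructureMap-style GetInstance call below are assumptions based on the example in the post, not code from it.

        [Test]
        public void Controller_writes_the_requested_records_to_the_outfile()
        {
            // Build the same container configuration the application uses,
            // so the test exercises the real wiring instead of mocks.
            var container = ApplicationBootstrapper.BuildContainer();   // hypothetical bootstrap method
            var controller = container.GetInstance<Controller>();
            var outfile = Path.Combine(Path.GetTempPath(), "records.txt");

            controller.Export(new[] { 1, 2, 3 }, outfile);              // hypothetical action signature

            Assert.IsTrue(File.Exists(outfile));
        }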

    Read the article

  • WCF RIA Services DomainContext Abstraction Strategies–Say That 10 Times!

    - by dwahlin
    The DomainContext available with WCF RIA Services provides a lot of functionality that can help track object state and handle making calls from a Silverlight client to a DomainService. One of the questions I get quite often in our Silverlight training classes (and see often in various forums and other areas) is how the DomainContext can be abstracted out of ViewModel classes when using the MVVM pattern in Silverlight applications. It's not something that's super obvious at first, especially if you don't work with delegates a lot, but it can definitely be done. There are various techniques and strategies that can be used, but I thought I'd share some of the core techniques I find useful.

    To start, let's assume you have the following ViewModel class (this is from my Silverlight Firestarter talk available to watch online here if you're interested in getting started with WCF RIA Services):

        public class AdminViewModel : ViewModelBase
        {
            BookClubContext _Context = new BookClubContext();

            public AdminViewModel()
            {
                if (!DesignerProperties.IsInDesignTool)
                {
                    LoadBooks();
                }
            }

            private void LoadBooks()
            {
                _Context.Load(_Context.GetBooksQuery(), LoadBooksCallback, null);
            }

            private void LoadBooksCallback(LoadOperation<Book> books)
            {
                Books = new ObservableCollection<Book>(books.Entities);
            }
        }

    Notice that BookClubContext is being used directly in the ViewModel class. There's nothing wrong with that of course, but if other ViewModel objects need to load books then code would be duplicated across classes. Plus, the ViewModel has direct knowledge of how to load data, and I like to make it more loosely coupled.

    To do this I create what I call a "Service Agent" class. This class is responsible for getting data from the DomainService and returning it to a ViewModel. It only knows how to get and return data, but doesn't know how data should be stored and isn't used with data binding operations. An example of a simple ServiceAgent class is shown next. Notice that I'm using the Action<T> delegate to handle callbacks from the ServiceAgent to the ViewModel object. Because LoadBooks accepts an Action<ObservableCollection<Book>>, the callback method in the ViewModel must accept ObservableCollection<Book> as a parameter. The callback is initiated by calling the Invoke method exposed by Action<T>:

        public class ServiceAgent
        {
            BookClubContext _Context = new BookClubContext();

            public void LoadBooks(Action<ObservableCollection<Book>> callback)
            {
                _Context.Load(_Context.GetBooksQuery(), LoadBooksCallback, callback);
            }

            public void LoadBooksCallback(LoadOperation<Book> lo)
            {
                //Check for errors of course...keeping this brief
                var books = new ObservableCollection<Book>(lo.Entities);
                var action = (Action<ObservableCollection<Book>>)lo.UserState;
                action.Invoke(books);
            }
        }

    This can be simplified by taking advantage of lambda expressions. Notice that in the following code I don't have a separate callback method and don't have to worry about passing or casting any user state (the user state is the 3rd parameter in the _Context.Load method call shown above).

        public class ServiceAgent
        {
            BookClubContext _Context = new BookClubContext();

            public void LoadBooks(Action<ObservableCollection<Book>> callback)
            {
                _Context.Load(_Context.GetBooksQuery(), (lo) =>
                {
                    var books = new ObservableCollection<Book>(lo.Entities);
                    callback.Invoke(books);
                }, null);
            }
        }

    A ViewModel class can then call into the ServiceAgent to retrieve books, yet never know anything about the DomainContext object or even know how data is loaded behind the scenes:

        public class AdminViewModel : ViewModelBase
        {
            ServiceAgent _ServiceAgent = new ServiceAgent();

            public AdminViewModel()
            {
                if (!DesignerProperties.IsInDesignTool)
                {
                    LoadBooks();
                }
            }

            private void LoadBooks()
            {
                _ServiceAgent.LoadBooks(LoadBooksCallback);
            }

            private void LoadBooksCallback(ObservableCollection<Book> books)
            {
                Books = books;
            }
        }

    You could also handle the LoadBooksCallback method using a lambda if you wanted to minimize code, just like I did earlier with the LoadBooks method in the ServiceAgent class. If you're into Dependency Injection (DI), you could create an interface for the ServiceAgent type, reference it in the ViewModel and then inject in the object to use at runtime. There are certainly other techniques and strategies that can be used, but the code shown here provides an introductory look at the topic that should help get you started abstracting the DomainContext out of your ViewModel classes when using WCF RIA Services in Silverlight applications.
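
    Following the DI suggestion at the end, a minimal sketch (not from the article itself) of what that could look like - an IServiceAgent interface extracted from the class above and injected into the ViewModel's constructor:

        public interface IServiceAgent
        {
            void LoadBooks(Action<ObservableCollection<Book>> callback);
        }

        public class ServiceAgent : IServiceAgent
        {
            private readonly BookClubContext _context = new BookClubContext();

            public void LoadBooks(Action<ObservableCollection<Book>> callback)
            {
                _context.Load(_context.GetBooksQuery(),
                    lo => callback(new ObservableCollection<Book>(lo.Entities)), null);
            }
        }

        public class AdminViewModel : ViewModelBase
        {
            private readonly IServiceAgent _serviceAgent;

            // The container (or a test) supplies the agent; the ViewModel never sees the DomainContext.
            public AdminViewModel(IServiceAgent serviceAgent)
            {
                _serviceAgent = serviceAgent;

                if (!DesignerProperties.IsInDesignTool)
                {
                    _serviceAgent.LoadBooks(books => Books = books);
                }
            }
        }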

    Read the article

  • MvcExtensions - PerRequestTask

    - by kazimanzurrashid
    In the previous post we saw the BootstrapperTask, which executes when the application starts and ends. Similarly, there are times when we need to execute some custom logic when a request starts and ends. Usually, for this kind of scenario, we create an HttpModule and hook the begin and end request events. There is nothing wrong with this approach, except that HttpModules are not at all IoC-container friendly, and defining the HttpModule execution order is a bit cumbersome: you either have to modify machine.config or clear the HttpModules and add them again in web.config. Instead, you can use the PerRequestTask, which is very container friendly and also supports execution order. Let's see a few examples of where it can be used.

    Remove www Subdomain

    Let's say we want to remove the www subdomain, so that if anybody types http://www.mydomain.com it automatically redirects to http://mydomain.com.

        public class RemoveWwwSubdomain : PerRequestTask
        {
            public RemoveWwwSubdomain()
            {
                Order = DefaultOrder - 1;
            }

            protected override TaskContinuation ExecuteCore(PerRequestExecutionContext executionContext)
            {
                const string Prefix = "http://www.";

                Check.Argument.IsNotNull(executionContext, "executionContext");

                HttpContextBase httpContext = executionContext.HttpContext;
                string url = httpContext.Request.Url.ToString();
                bool startsWith3W = url.StartsWith(Prefix, StringComparison.OrdinalIgnoreCase);
                bool shouldContinue = true;

                if (startsWith3W)
                {
                    string newUrl = "http://" + url.Substring(Prefix.Length);
                    HttpResponseBase response = httpContext.Response;

                    response.StatusCode = (int)HttpStatusCode.MovedPermanently;
                    response.Status = "301 Moved Permanently";
                    response.RedirectLocation = newUrl;
                    response.SuppressContent = true;

                    shouldContinue = false;
                }

                return shouldContinue ? TaskContinuation.Continue : TaskContinuation.Break;
            }
        }

    As you can see, first we set the order so that we do not have to execute the remaining tasks of the chain when we are redirecting. Next, in ExecuteCore, we check whether www is present; if it is, we send a moved-permanently HTTP status code and break the task execution chain, otherwise we continue with the chain.

    Blocking IP Address

    Let's take another scenario: your application is hosted in a shared hosting environment where you do not have permission to change the IIS settings, and you want to block certain IP addresses from visiting your application. Let's say you maintain a list of IP addresses in database/xml files which you want to block, and you have an IBannedIPAddressRepository service which is used to match banned IP addresses.

        public class BlockRestrictedIPAddress : PerRequestTask
        {
            protected override TaskContinuation ExecuteCore(PerRequestExecutionContext executionContext)
            {
                bool shouldContinue = true;
                HttpContextBase httpContext = executionContext.HttpContext;

                if (!httpContext.Request.IsLocal)
                {
                    string ipAddress = httpContext.Request.UserHostAddress;
                    HttpResponseBase httpResponse = httpContext.Response;

                    if (executionContext.ServiceLocator.GetInstance<IBannedIPAddressRepository>().IsMatching(ipAddress))
                    {
                        httpResponse.StatusCode = (int)HttpStatusCode.Forbidden;
                        httpResponse.StatusDescription = "IPAddress blocked.";
                        shouldContinue = false;
                    }
                }

                return shouldContinue ? TaskContinuation.Continue : TaskContinuation.Break;
            }
        }

    Managing Database Session

    Now, let's see how it can be used to manage an NHibernate session, assuming that NHibernate's ISessionFactory is already registered in our container.

        public class ManageNHibernateSession : PerRequestTask
        {
            private ISession session;

            protected override TaskContinuation ExecuteCore(PerRequestExecutionContext executionContext)
            {
                ISessionFactory factory = executionContext.ServiceLocator.GetInstance<ISessionFactory>();
                session = factory.OpenSession();

                return TaskContinuation.Continue;
            }

            protected override void DisposeCore()
            {
                session.Close();
                session.Dispose();
            }
        }

    As you can see, PerRequestTask can be used to execute small and precise tasks at the beginning/end of a request; of course, if you want to execute something outside begin/end request, there is no alternative to an HttpModule. That's it for today; in the next post we will discuss the Action Filters, so stay tuned.

    Read the article

  • Doing your first mock with JustMock

    - by mehfuzh
    In this post, I will start with a more traditional mocking example that includes a fund transfer scenario between two accounts in different currencies, using JustMock. Our target interface, which we will be mocking, looks similar to:

        public interface ICurrencyService
        {
            float GetConversionRate(string fromCurrency, string toCurrency);
        }

    Moving forward, the SUT (the class that consumes the service and is invoked by the user, provided that the ICurrencyService is passed in DI style) looks like:

        public class AccountService : IAccountService
        {
            private readonly ICurrencyService currencyService;

            public AccountService(ICurrencyService currencyService)
            {
                this.currencyService = currencyService;
            }

            #region IAccountService Members

            public void TransferFunds(Account from, Account to, float amount)
            {
                from.Withdraw(amount);
                float conversionRate = currencyService.GetConversionRate(from.Currency, to.Currency);
                float convertedAmount = amount * conversionRate;
                to.Deposit(convertedAmount);
            }

            #endregion
        }

    As we can see, the TransferFunds action implemented from IAccountService takes a source account, from which it withdraws some money, and a target account to which the transfer takes place using the provided conversion rate.

    Our first step is to create the mock. The syntax for creating your instance mocks is pretty much the same and is valid for all interfaces and non-sealed/sealed concrete instance classes. You can pass in additional options, like whether it is a strict mock or not; by default all mocks in JustMock are loose, so you can use them as default-valued objects or stubs as well.

        ICurrencyService currencyService = Mock.Create<ICurrencyService>();

    Using JustMock, setting up your expectations and asserting them always goes through Mock.Arrange|Assert, and this is pretty much the same syntax no matter what type of mocking you are doing. Therefore, in the above scenario we want to make sure that the conversion rate always returns 2.20f when converting from GBP to CAD. To do so we need to arrange it in the following way:

        Mock.Arrange(() => currencyService.GetConversionRate("GBP", "CAD")).Returns(2.20f).MustBeCalled();

    Here, I have additionally marked the mock call as a must. That means it should be invoked anywhere in the code before we do Mock.Assert; we can also assert mocks directly through lambda expressions, but the more general Mock.Assert(mocked) will assert only the setups that are marked as "MustBeCalled()".

    Now, coming back to the main topic: as we have set up the mock, it is time to act on it. First we create our account service class and create our from and to accounts respectively.

        var accountService = new AccountService(currencyService);

        var canadianAccount = new Account(0, "CAD");
        var britishAccount = new Account(0, "GBP");

    Next, we add some money to the GBP account:

        britishAccount.Deposit(100);

    Finally, we do our transfer with the following:

        accountService.TransferFunds(britishAccount, canadianAccount, 100);

    Once everything is completed, we need to make sure that things are as we expected, so it's time for assertions. Here, we first do the general assertions:

        Assert.Equal(0, britishAccount.Balance);
        Assert.Equal(220, canadianAccount.Balance);

    Following that, we do our mock assertion; as we have marked the call as "MustBeCalled", it will make sure that our mock was actually invoked. Moreover, we can add filters like how many times our expected mock call has occurred; that will be covered in coming posts.

        Mock.Assert(currencyService);

    So far, that actually concludes our first mock with JustMock; do stay tuned for more. Enjoy!!
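
    For completeness, the Account class used in this walkthrough isn't shown in the post; a minimal sketch that would make the example compile might look like this (the constructor and member shapes are inferred from the calls above, not taken from the original):

        public class Account
        {
            public Account(float balance, string currency)
            {
                Balance = balance;
                Currency = currency;
            }

            public float Balance { get; private set; }
            public string Currency { get; private set; }

            public void Deposit(float amount)  { Balance += amount; }
            public void Withdraw(float amount) { Balance -= amount; }
        }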

    Read the article

  • JustMock is here !!

    - by mehfuzh
    As announced earlier by Hristo Kosev at the Telerik blogs, we have started giving out JustMock builds from today. This is the first of the early builds before the official Q2 release, and we are pretty excited to get your feedback. It's pretty early to say anything on it; it actually depends on your feedback. To add a few words, with JustMock we tried to build a mocking tool with as simple and intuitive a syntax as possible, excluding more and more noise and avoiding any smell it might add to your code [we are still trying every day], and we want to make the tool even better with your help. JustMock can be used to mock virtually anything. Moreover, we left an option open so that it can be used to reduce / elevate the features through a single click. We tried to make a strong API and make things as fluent and guided as possible so that you never have the chance to get de-railed. Our syntax is AAA (Arrange - Act - Assert); we don't believe in the Record - Replay model, which some of the smarter mocking tools are planning to remove from their coming release, or even don't have [it's always fun to learn from each other]. Overall, more signals equals more complexity - reminds me of 37signals :-).

    Currently, here are the things you can do with JustMock (will cover more in-depth in coming days):

    Proxied mode
    • Mock interfaces and classes with virtuals
    • Mock properties, including indexers
    • Set raise event for specific calls
    • Use matchers to control mock arguments
    • Assert specific occurrences of mocked calls
    • Assert using matchers
    • Do recursive mocks
    • Do sequential mocking (same method with argument returns different values or performs different tasks)
    • Do strict mocking (by default, and I prefer loose, so that I can use it as stubs)

    Elevated mode
    • Mock static calls
    • Mock final classes
    • Mock sealed classes
    • Mock extension methods
    • Partially mock a class member directly using Mock.Arrange
    • Mock MsCorlib (we will support more and more members in coming days); currently we support FileInfo, File and DateTime

    These are a few; you need to take a look at the test project that is provided with the build to find more [along with the document]. Also, one of the features that I will be using for my next OS projects is the ability to run it separately in proxied mode, which makes it easy to redistribute and do some personal development in a more DI model, with the option to elevate as I go.

    I've surely forgotten tons of other features to mention that I will cover in time, but don't forget the URL: www.telerik.com/justmock

    Finally, a little mock code:

        var lvMock = Mock.Create<ILoveJustMock>();

        // set your goal
        Mock.Arrange(() => lvMock.Response(Arg.Any<string>())).Returns((int result) => result);

        // perform
        string ret = lvMock.Echo("Yes");

        Assert.Equal(ret, "Yes");

        // make sure everything is fine
        Mock.Assert(() => lvMock.Echo("Yes"), Occurs.Once());

    Hope that helps to get started, will cover if not :-).

    Read the article

  • problem with network-manager-pptp

    - by Riuzaki90
    I've a problema with the VPA CAble connection of my university... on the website of the university there's a .sh file that set all the variables of the connection in ETC/PPP/PEERS and another .sh file that call the connection...I'm on ubuntu 11.10 and when I run the setup.sh I have this error: impossible to find network-manager-pptp these are the two file that I had talk about: #!/bin/bash echo "Creazione della connessione in corso attendere........." apt-get update apt-get install pptp-linux network-manager-pptp echo -n "Digitare la propria Username: " read USERNAME echo -n "Digitare la propria Password: " read PASSWORD pptpsetup --create UNICAL_Campus_Access --server 160.97.73.253 --username $USERNAME --password $PASSWORD echo 'pty "pptp 160.97.73.253 --nolaunchpppd"' >/etc/ppp/peers/UNICAL_Campus_Access echo 'require-mppe-128' >>/etc/ppp/peers/UNICAL_Campus_Access echo 'file /etc/ppp/options.pptp'>>/etc/ppp/peers/UNICAL_Campus_Access echo 'name '$USERNAME''>>/etc/ppp/peers/UNICAL_Campus_Access echo 'remotename PPTP'>>/etc/ppp/peers/UNICAL_Campus_Access echo 'ipparam UNICAL_Campus_Access'>>/etc/ppp/peers/UNICAL_Campus_Access echo $USERNAME' PPTP '$PASSWORD' *'>>/etc/ppp/chap-secrets rm /etc/ppp/options.pptp echo '###############################################################################'>/etc/ppp/options.pptp echo '# $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $'>>/etc/ppp/options.pptp echo '#'>>/etc/ppp/options.pptp echo '# Sample PPTP PPP options file /etc/ppp/options.pptp'>>/etc/ppp/options.pptp echo '# Options used by PPP when a connection is made by a PPTP client.'>>/etc/ppp/options.pptp echo '# This file can be referred to by an /etc/ppp/peers file for the tunnel.'>>/etc/ppp/options.pptp echo '# Changes are effective on the next connection. See "man pppd".'>>/etc/ppp/options.pptp echo '#'>>/etc/ppp/options.pptp echo '# You are expected to change this file to suit your system. As'>>/etc/ppp/options.pptp echo '# packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/'>>/etc/ppp/options.pptp echo '# and the kernel MPPE module available from the CVS repository also on'>>/etc/ppp/options.pptp echo '# http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe.'>>/etc/ppp/options.pptp echo '###############################################################################'>>/etc/ppp/options.pptp echo '# Lock the port'>>/etc/ppp/options.pptp echo 'lock'>>/etc/ppp/options.pptp echo '# Authentication'>>/etc/ppp/options.pptp echo '# We do not need the tunnel server to authenticate itself'>>/etc/ppp/options.pptp echo 'noauth'>>/etc/ppp/options.pptp echo '#We won"t do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2'>>/etc/ppp/options.pptp echo '#(you may need to remove these refusals if the server is not using MPPE)'>>/etc/ppp/options.pptp echo 'refuse-pap'>>/etc/ppp/options.pptp echo 'refuse-eap'>>/etc/ppp/options.pptp echo 'refuse-chap'>>/etc/ppp/options.pptp echo 'refuse-mschap'>>/etc/ppp/options.pptp echo '# Compression Turn off compression protocols we know won"t be used'>>/etc/ppp/options.pptp echo 'nobsdcomp'>>/etc/ppp/options.pptp echo 'nodeflate'>>/etc/ppp/options.pptp echo '# Encryption'>>/etc/ppp/options.pptp echo '# (There have been multiple versions of PPP with encryption support,'>>/etc/ppp/options.pptp echo '# choose with of the following sections you will use. 
Note that MPPE'>>/etc/ppp/options.pptp echo '# requires the use of MSCHAP-V2 during authentication)'>>/etc/ppp/options.pptp echo '# http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras'>>/etc/ppp/options.pptp echo '# ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o'>>/etc/ppp/options.pptp echo '#{{{'>>/etc/ppp/options.pptp echo '# Require MPPE 128-bit encryption'>>/etc/ppp/options.pptp echo '#require-mppe-128'>>/etc/ppp/options.pptp echo '#}}}'>>/etc/ppp/options.pptp echo '# http://polbox.com/h/hs001/ fork from PPP project by Jan Dubiec'>>/etc/ppp/options.pptp echo '#ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o'>>/etc/ppp/options.pptp echo '#{{{'>>/etc/ppp/options.pptp echo '# Require MPPE 128-bit encryption'>>/etc/ppp/options.pptp echo '#mppe required,stateless'>>/etc/ppp/options.pptp echo '# }}}'>>/etc/ppp/options.pptp echo "setup di 'UNICAL Campus Access' terminato correttamente" echo "per connettersi eseguire lo script 'UNICAL_Campus_Access.sh' " and the second: #!/bin/bash echo "Connessione alla Rete del Centro Residenziale in corso attendere........." modprobe ppp_mppe pppd call UNICAL_Campus_Access sleep 30 tail -n 8 /var/log/messages echo "Connessione Stabilita" echo -n "Per terminare la connessione premere invio (in alternativa eseguire il commando 'killall pppd'):----> " read CONN killall pppd echo "Connessione terminata" I've correctly installed network-manager-pptp to the latest version...help?

    Read the article

  • First Day of Data Integration Track at Oracle OpenWorld 2012

    - by Irem Radzik
    OpenWorld started full speed for us today with a great set of sessions in the Data Integration track. After the exciting keynote session on Oracle Database 12c in the morning; Brad Adelberg, VP of Development for Data Integration products, presented Oracle’s data integration product strategy. His session highlighted the new requirements for data integration to achieve pervasive and continuous access to trusted data. The new requirements and product focus areas presented in this session are: Provide access to any data at any source On premise or on cloud Enable zero downtime operations and maximum performance Leverage real-time data for accurate business insights And ensure high quality data is used across the enterprise During the session Brad walked over how Oracle’s data integration products, Oracle Data Integrator, Oracle GoldenGate, Oracle Enterprise Data Quality, and Oracle Data Service Integrator, deliver on these requirements and how recent product releases build on this strategy. Soon after Brad’s session we heard from a panel of Oracle GoldenGate customers, St. Jude Medical, Equifax, and Bank of America, how they achieved zero downtime operations using Oracle GoldenGate. The panel presented different use cases of GoldenGate, from Active-Active replication to offloading reporting. Especially St. Jude Medical’s implementation, which involves the alert management system for patients that use their pacemakers, reminded me in some cases downtime of mission-critical systems can be a matter of life or death. It is very comforting to hear that GoldenGate delivers highly-reliable continuous availability for life-saving medical systems. In the afternoon, Nick Wagner from the Product Management team and I followed the customer panel with the review of Oracle GoldenGate 11gR2’s New Features.  Many questions we received from audience were about GoldenGate’s new Integrated Capture for Oracle Database and the enhanced Conflict Management features, as well as how GoldenGate compares to Oracle Streams. In addition to giving details on GoldenGate’s unique capability to capture changed data with a direct integration to the Oracle DBMS engine, we reminded the audience that enhancements to Oracle GoldenGate will continue, while Streams will be primarily maintained. Last but not least, Tim Garrod and Ryan Fonnett from Raymond James presented a unified real-time data integration solution using Oracle Data Integrator and GoldenGate for their operational data store (ODS). The ODS supports application services across the enterprise and providing timely data is a critical requirement. In this solution, Oracle GoldenGate does the log-based change data capture for Oracle Data Integrator’s near real-time data integration between heterogeneous systems. As Raymond James’ ODS supports mission-critical services for their advisors, the project team had to set up this integration environment to be highly available. During the session, Ryan and Tim explained how they use ODI to enable automated process execution and “always-on” integration processes. Their presentation included 2 demonstrations that focused on CDC patterns deployed with ODI and the automated multi-instance execution and monitoring. We are very grateful to Tim and Ryan for their very-well prepared presentation at OpenWorld this year. Day 2 (Tuesday) will be also a busy day in our track. 
    In addition to the Fusion Middleware Innovation Awards ceremony at 11:45am at Moscone West 3001, we have the following DI sessions:

    • Real-World Operational Reporting Customer Panel - 11:45am, Moscone West 3005
    • Oracle Data Integrator Product Update and Future Strategy - 1:15pm, Moscone West 3005
    • High-volume OLTP with Oracle GoldenGate: Best Practices from Comcast - 1:15pm, Moscone West 3005
    • Everything You Need to Know about Monitoring Oracle GoldenGate - 5pm, Moscone West 3005

    If you are at OpenWorld, please join us in these sessions. For a full review of the data integration track at OpenWorld, please see our Focus-On document.

    Read the article
