Search Results

Search found 3956 results on 159 pages for 'constructor overloading'.

Page 62/159 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • Dependency Injection in ASP.NET MVC NerdDinner App using Ninject

    - by shiju
    In this post, I am applying Dependency Injection to the NerdDinner application using Ninject. The controllers of the NerdDinner application have Dependency Injection enabled constructors, so we can apply Dependency Injection through the constructor without changing any existing code. A Dependency Injection framework injects the dependencies into a class when the dependencies are needed. Dependency Injection enables looser coupling between classes and their dependencies, provides better testability of an application, and removes the need for clients to know about their dependencies and how to create them. If you are not familiar with Dependency Injection and Inversion of Control (IoC), read Martin Fowler’s article Inversion of Control Containers and the Dependency Injection pattern.

The Open Source project NerdDinner is a great resource for learning ASP.NET MVC. A free eBook provides an end-to-end walkthrough of building the NerdDinner.com application. The free eBook and the Open Source NerdDinner application are extremely useful if anyone is trying to learn ASP.NET MVC. The first release of NerdDinner was a sample for the first chapter of Professional ASP.NET MVC 1.0. Currently the application is being updated to ASP.NET MVC 2 and you can get the latest source from the source code tab of NerdDinner at http://nerddinner.codeplex.com/SourceControl/list/changesets. I have taken the latest ASP.NET MVC 2 source code of the application and applied Dependency Injection using Ninject and the Ninject extension Ninject.Web.Mvc.

Ninject & Ninject.Web.Mvc

Ninject is available at http://github.com/enkari/ninject and Ninject.Web.Mvc is available at http://github.com/enkari/ninject.web.mvc. Ninject is a lightweight dependency injection framework for .NET and a great choice when building ASP.NET MVC applications. Ninject.Web.Mvc is an extension for Ninject which provides integration with ASP.NET MVC.

Controller constructors and dependencies of the NerdDinner application

Listing 1 – Constructor of DinnersController

public DinnersController(IDinnerRepository repository)
{
    dinnerRepository = repository;
}

Listing 2 – Constructor of AccountController

public AccountController(IFormsAuthentication formsAuth, IMembershipService service)
{
    FormsAuth = formsAuth ?? new FormsAuthenticationService();
    MembershipService = service ?? new AccountMembershipService();
}

Listing 3 – Constructor of AccountMembershipService – concrete class of IMembershipService

public AccountMembershipService(MembershipProvider provider)
{
    _provider = provider ?? Membership.Provider;
}

Dependencies of NerdDinner

DinnersController, RSVPController, SearchController and ServicesController have a dependency on IDinnerRepository. The concrete implementation of IDinnerRepository is DinnerRepository. AccountController has dependencies on IFormsAuthentication and IMembershipService. The concrete implementation of IFormsAuthentication is FormsAuthenticationService, and the concrete implementation of IMembershipService is AccountMembershipService. The AccountMembershipService has a dependency on the ASP.NET Membership Provider.

Dependency Injection in NerdDinner using Ninject

The steps below will configure Ninject to apply controller injection in the NerdDinner application.

Step 1 – Add references for Ninject

Open the NerdDinner application and add references to Ninject.dll and Ninject.Web.Mvc.dll. Both are available from http://github.com/enkari/ninject and http://github.com/enkari/ninject.web.mvc.

Step 2 – Extend HttpApplication with NinjectHttpApplication

The Ninject.Web.Mvc extension allows integration between Ninject and ASP.NET MVC. For this, you have to extend your HttpApplication with NinjectHttpApplication. Open Global.asax.cs and inherit your MVC application from NinjectHttpApplication instead of HttpApplication.

public class MvcApplication : NinjectHttpApplication

Then the Application_Start method should be replaced with the OnApplicationStarted method. Inside the OnApplicationStarted method, call the RegisterAllControllersIn() method.

protected override void OnApplicationStarted()
{
    AreaRegistration.RegisterAllAreas();
    RegisterRoutes(RouteTable.Routes);
    ViewEngines.Engines.Clear();
    ViewEngines.Engines.Add(new MobileCapableWebFormViewEngine());
    RegisterAllControllersIn(Assembly.GetExecutingAssembly());
}

The RegisterAllControllersIn method enables activating all controllers through Ninject in the assembly you have supplied. We are passing the current assembly as the parameter for the RegisterAllControllersIn() method. Now we can expose dependencies of controller constructors and properties to request injections.

Step 3 – Create Ninject modules

We can configure the dependency injection mapping information using Ninject modules. Modules just need to implement the INinjectModule interface, but most should extend the NinjectModule class for simplicity.

internal class ServiceModule : NinjectModule
{
    public override void Load()
    {
        Bind<IFormsAuthentication>().To<FormsAuthenticationService>();
        Bind<IMembershipService>().To<AccountMembershipService>();
        Bind<MembershipProvider>().ToConstant(Membership.Provider);
        Bind<IDinnerRepository>().To<DinnerRepository>();
    }
}

The binding information specified in the Load method tells the Ninject container to inject an instance of DinnerRepository when there is a request for IDinnerRepository, an instance of FormsAuthenticationService when there is a request for IFormsAuthentication, and an instance of AccountMembershipService when there is a request for IMembershipService. The AccountMembershipService class has a dependency on the ASP.NET Membership provider, so we configure Ninject to inject the instance of the Membership Provider. When configuring the binding information, you can specify the object scope in your application. There are four built-in scopes available in Ninject:

Transient - A new instance of the type will be created each time one is requested. (This is the default scope.) Binding method: .InTransientScope()
Singleton - Only a single instance of the type will be created, and the same instance will be returned for each subsequent request. Binding method: .InSingletonScope()
Thread - One instance of the type will be created per thread. Binding method: .InThreadScope()
Request - One instance of the type will be created per web request, and will be destroyed when the request ends. Binding method: .InRequestScope()

Step 4 – Configure the Ninject kernel

Once you create a NinjectModule, you load it into a container called the kernel. To request an instance of a type from Ninject, you call the Get() extension method. We can configure the kernel through the CreateKernel method in Global.asax.cs.

protected override IKernel CreateKernel()
{
    var modules = new INinjectModule[]
    {
        new ServiceModule()
    };

    return new StandardKernel(modules);
}

Here we are loading the Ninject module (the ServiceModule class created in step 3) into the container called the kernel for performing dependency injection.

Source Code

You can download the source code from http://nerddinneraddons.codeplex.com. I just put the modified source code onto the CodePlex repository. The repository will be updated with more add-ons for the NerdDinner application.
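To see what the kernel does with these bindings, here is a minimal sketch of resolving a dependency by hand. This snippet is my own illustration rather than NerdDinner code; Ninject.Web.Mvc performs the equivalent resolution automatically for controller constructors.

using Ninject;

// A sketch, not part of the NerdDinner source: with ServiceModule loaded,
// a request for IDinnerRepository yields the bound DinnerRepository.
IKernel kernel = new StandardKernel(new ServiceModule());
IDinnerRepository repository = kernel.Get<IDinnerRepository>();

// Constructor injection follows the same path: when a DinnersController is
// activated, Ninject sees the IDinnerRepository parameter and supplies it.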

    Read the article

  • Using Unity Application Block – from basics to generics

    - by nmarun
    I just wanted to have one place where I list all six Unity blogs I’ve written. Part 1: The very basics – Begin using Unity (code here) Part 2: Registering other types and resolving them (code here) Part 3: Lifetime Management (code here) Part 4: Constructor and Property or Setter Injection (code here) Part 5: Arrays (code here) Part 6: Generics (code here) Hope this helps someone (and this is the smallest blog I’ve posted till now).

    Read the article

  • The embarrassingly obvious about SQL Server CE

    - by Edward Boyle
    I have been working with SQL servers in one form or another for almost two decades now, but I am new to SQL Server Compact Edition. In the past weeks I have been working with SQL Server CE a lot. The SQL is not a problem, but the engine itself is very new to me. One of the issues I ran into was a simple SQL statement taking excessive amounts of time; by excessive, I mean over one second. I wrote a little code to time the method. Sometimes it took under one second, other times as long as three seconds. But it was a simple update statement! As embarrassing as it is, why it was slow eluded me. I posted my issue to MSDN and I got a reply from ErikEJ (MS MVP), who runs the blog “Everything SQL Server Compact”. I know little to nothing about SQL Server Compact; this guy is completely obsessed with, and very well versed in, CE. If you spend any time in the MSDN forums, it seems that he single-handedly has the answer for every CE question that comes up. Anyway, he said:

“Opening a connection to a SQL Server Compact database file is a costly operation, keep one connection open per thread (incl. your UI thread) in your app, the one on the UI thread should live for the duration of your app.”

It hit me: all databases have some connection overhead, and SQL Server CE is not a database engine running as a service, drinking Jolt Cola, waiting for someone to talk to him so he can spring into action and show off his quarter-mile sprint capabilities. Imagine if you had to start the SQL Server process every time you needed to make a database connection. Principally, that is what you are doing with SQL Server CE. For someone who has worked with Enterprise-level SQL Servers a lot, I had to come to the mental image that my open connection to SQL Server CE is basically starting a service, my own private service, and by closing the connection, I am shutting down my little private service. After making the changes in my code, I lost any reservations I had about using CE. At present, my Data Access Layer class has a constructor; in that constructor I open my connection. I also have OpenConnection and CloseConnection methods, and I implemented IDisposable and clean up any connections in Dispose(). I am still finalizing how this assembly will function, but that’s beside the point. All I’m trying to say is: “Opening a connection to a SQL Server Compact database file is a costly operation”.
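A minimal sketch of that arrangement (the class and method names here are mine, not from the post) might look like this:

using System;
using System.Data.SqlServerCe;

// A sketch of a data access layer that pays the costly connection open once,
// in the constructor, and keeps the connection alive until Dispose().
public class DataAccessLayer : IDisposable
{
    private readonly SqlCeConnection _connection;

    public DataAccessLayer(string connectionString)
    {
        _connection = new SqlCeConnection(connectionString);
        _connection.Open(); // the expensive part, done once per thread
    }

    public int ExecuteNonQuery(string sql)
    {
        // Commands are cheap; the connection open is what costs.
        using (var cmd = new SqlCeCommand(sql, _connection))
        {
            return cmd.ExecuteNonQuery();
        }
    }

    public void Dispose()
    {
        // Shutting down "my little private service".
        if (_connection != null)
        {
            _connection.Close();
        }
    }
}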

    Read the article

  • What’s New in Delphi XE6 Regular Expressions

    - by Jan Goyvaerts
    There’s not much new in the regular expression support in Delphi XE6. The big change that should be made, upgrading to PCRE 8.30 or later and switching to the pcre16 functions that use UTF-16, still hasn’t been made. XE6 still uses PCRE 7.9 and thus continues to require conversion from the UTF-16 strings that Delphi uses natively to the UTF-8 strings that older versions of PCRE require. Delphi XE6 does fix one important issue that has plagued TRegEx since it was introduced in Delphi XE. Previously, TRegEx could not find zero-length matches. So a regex like (?m)^ that should find a zero-length match at the start of each line would not find any matches at all with TRegEx. The reason for this is that TRegEx uses TPerlRegEx to do the heavy lifting. TPerlRegEx sets its State property to [preNotEmpty] in its constructor, which tells it to skip zero-length matches. This is not a problem with TPerlRegEx because users of this class can change the State property. But TRegEx does not provide a way to change this property. So in Delphi XE5 and prior, TRegEx cannot find zero-length matches. In Delphi XE6, TPerlRegEx’s constructor was changed to initialize State to the empty set. This means TRegEx is now able to find zero-length matches. TRegEx.Replace() using the regex (?m)^ now inserts the replacement at the start of each line, as you would expect. If you use TPerlRegEx directly, you’ll need to set State to [preNotEmpty] in your own code if you relied on its behavior to skip zero-length matches. You will need to check existing applications that use TRegEx for regular expressions that incorrectly allow zero-length matches. In XE5 and prior, TRegEx using \d* would match all numbers in a string. In XE6, the same regex still matches all numbers, but also finds a zero-length match at each position in the string. RegexBuddy 4 warns about zero-length matches on the Create panel if you set it to Detailed mode. At the bottom of the regex tree there will be a node saying either “your regular expression may find zero-length matches” or “zero-length matches will be skipped”, depending on whether your application allows zero-length matches (XE6 TRegEx) or not (XE–XE5 TRegEx).

    Read the article

  • A simple Dynamic Proxy

    - by Abhijeet Patel
    Frameworks such as EF4 and MOQ do what most developers consider "dark magic". For instance in EF4, when you use a POCO for an entity you can opt in to behaviors such as "lazy-loading" and "change tracking" at runtime merely by ensuring that your type has the following characteristics: The class must be public and not sealed. The class must have a public or protected parameter-less constructor. The class must have public or protected properties. Adhere to this and your type is magically endowed with these behaviors without any additional programming on your part. Behind the scenes the framework subclasses your type at runtime and creates a "dynamic proxy" which has these additional behaviors, and when you navigate properties of your POCO, the framework replaces the POCO type with derived type instances. The MOQ framework does similar magic. Let's say you have a simple interface:

public interface IFoo
{
    int GetNum();
}

We can verify that GetNum() was invoked on a mock like so:

var mock = new Mock<IFoo>(MockBehavior.Default);
mock.Setup(f => f.GetNum());
var num = mock.Object.GetNum();
mock.Verify(f => f.GetNum());

Behind the scenes the MOQ framework is generating a dynamic proxy by implementing IFoo at runtime. The call to mock.Object returns the dynamic proxy, on which we then call GetNum and then verify that this method was invoked. No dark magic at all, just clever programming is what's going on here, just not visible and hence it appears magical! Let's create a simple dynamic proxy generator which accepts an interface type and dynamically creates a proxy implementing the interface type specified at runtime.

// Requires: System, System.Linq, System.Reflection, System.Reflection.Emit
public static class DynamicProxyGenerator
{
    public static T GetInstanceFor<T>()
    {
        Type typeOfT = typeof(T);
        var methodInfos = typeOfT.GetMethods();
        AssemblyName assName = new AssemblyName("testAssembly");
        var assBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(assName, AssemblyBuilderAccess.RunAndSave);
        var moduleBuilder = assBuilder.DefineDynamicModule("testModule", "test.dll");
        var typeBuilder = moduleBuilder.DefineType(typeOfT.Name + "Proxy", TypeAttributes.Public);

        typeBuilder.AddInterfaceImplementation(typeOfT);
        var ctorBuilder = typeBuilder.DefineConstructor(
            MethodAttributes.Public,
            CallingConventions.Standard,
            new Type[] { });
        var ilGenerator = ctorBuilder.GetILGenerator();
        ilGenerator.EmitWriteLine("Creating Proxy instance");
        ilGenerator.Emit(OpCodes.Ret);

        foreach (var methodInfo in methodInfos)
        {
            var methodBuilder = typeBuilder.DefineMethod(
                methodInfo.Name,
                MethodAttributes.Public | MethodAttributes.Virtual,
                methodInfo.ReturnType,
                methodInfo.GetParameters().Select(p => p.ParameterType).ToArray());
            var methodILGen = methodBuilder.GetILGenerator();
            methodILGen.EmitWriteLine("I'm a proxy");

            if (methodInfo.ReturnType == typeof(void))
            {
                methodILGen.Emit(OpCodes.Ret);
            }
            else
            {
                if (methodInfo.ReturnType.IsValueType || methodInfo.ReturnType.IsEnum)
                {
                    MethodInfo getMethod = typeof(Activator).GetMethod("CreateInstance", new Type[] { typeof(Type) });
                    LocalBuilder lb = methodILGen.DeclareLocal(methodInfo.ReturnType);
                    methodILGen.Emit(OpCodes.Ldtoken, lb.LocalType);
                    methodILGen.Emit(OpCodes.Call, typeof(Type).GetMethod("GetTypeFromHandle"));
                    methodILGen.Emit(OpCodes.Call, getMethod); // CreateInstance is static, so Call rather than Callvirt
                    methodILGen.Emit(OpCodes.Unbox_Any, lb.LocalType);
                }
                else
                {
                    methodILGen.Emit(OpCodes.Ldnull);
                }
                methodILGen.Emit(OpCodes.Ret);
            }
            typeBuilder.DefineMethodOverride(methodBuilder, methodInfo);
        }

        Type constructedType = typeBuilder.CreateType();
        var instance = Activator.CreateInstance(constructedType);
        return (T)instance;
    }
}

Dynamic proxies are created by calling into the following main types: AssemblyBuilder, TypeBuilder, ModuleBuilder and ILGenerator. These types enable dynamically creating an assembly and emitting .NET modules and types in that assembly, all using IL instructions. Let's break down the code above a bit and examine it piece by piece.

Type typeOfT = typeof(T);
var methodInfos = typeOfT.GetMethods();
AssemblyName assName = new AssemblyName("testAssembly");
var assBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(assName, AssemblyBuilderAccess.RunAndSave);
var moduleBuilder = assBuilder.DefineDynamicModule("testModule", "test.dll");
var typeBuilder = moduleBuilder.DefineType(typeOfT.Name + "Proxy", TypeAttributes.Public);

We are instructing the runtime to create an assembly called "test.dll", and in this assembly we then emit a new module called "testModule". We then emit a new type definition named "typeName"Proxy into this new module. This is the definition for the "dynamic proxy" for type T.

typeBuilder.AddInterfaceImplementation(typeOfT);
var ctorBuilder = typeBuilder.DefineConstructor(
    MethodAttributes.Public,
    CallingConventions.Standard,
    new Type[] { });
var ilGenerator = ctorBuilder.GetILGenerator();
ilGenerator.EmitWriteLine("Creating Proxy instance");
ilGenerator.Emit(OpCodes.Ret);

The newly created type implements type T and defines a default parameterless constructor in which we emit a call to Console.WriteLine. This call is not necessary, but we do this so that we can see first hand when the proxy is constructed and when our default constructor is invoked.

var methodBuilder = typeBuilder.DefineMethod(
    methodInfo.Name,
    MethodAttributes.Public | MethodAttributes.Virtual,
    methodInfo.ReturnType,
    methodInfo.GetParameters().Select(p => p.ParameterType).ToArray());

We then iterate over each method declared on type T and add a method definition of the same name into our "dynamic proxy" definition.

if (methodInfo.ReturnType == typeof(void))
{
    methodILGen.Emit(OpCodes.Ret);
}

If the return type specified in the method declaration of T is void, we simply return.
if (methodInfo.ReturnType.IsValueType || methodInfo.ReturnType.IsEnum)
{
    MethodInfo getMethod = typeof(Activator).GetMethod("CreateInstance", new Type[] { typeof(Type) });
    LocalBuilder lb = methodILGen.DeclareLocal(methodInfo.ReturnType);
    methodILGen.Emit(OpCodes.Ldtoken, lb.LocalType);
    methodILGen.Emit(OpCodes.Call, typeof(Type).GetMethod("GetTypeFromHandle"));
    methodILGen.Emit(OpCodes.Call, getMethod); // CreateInstance is static, so Call rather than Callvirt
    methodILGen.Emit(OpCodes.Unbox_Any, lb.LocalType);
}

If the return type in the method declaration of T is either a value type or an enum, then we need to create an instance of the value type and return that instance to the caller. In order to accomplish that we need to do the following:

1) Get a handle to the Activator.CreateInstance method.
2) Declare a local variable of the return type specified on the method declaration of T (obtained from the MethodInfo) and push the Type object of that return type onto the evaluation stack. In reality a RuntimeTypeHandle is what is pushed onto the stack.
3) Invoke the "GetTypeFromHandle" method (a static method on the Type class), passing in the RuntimeTypeHandle pushed onto the stack previously as an argument; the result of this invocation is a Type object (representing the method's return type) which is pushed onto the top of the evaluation stack.
4) Invoke Activator.CreateInstance, passing in the Type object from step 3; the result of this invocation is an instance of the value type, boxed as a reference type and pushed onto the top of the evaluation stack.
5) Unbox the result into the local variable of the return type declared in step 2.

methodILGen.Emit(OpCodes.Ldnull);

If the return type is a reference type then we just load a null onto the evaluation stack.

methodILGen.Emit(OpCodes.Ret);

Emit a return statement to return whatever is on top of the evaluation stack (null or an instance of a value type) back to the caller.

Type constructedType = typeBuilder.CreateType();
var instance = Activator.CreateInstance(constructedType);
return (T)instance;

Now that we have a definition of the "dynamic proxy" implementing all the methods declared on T, we can create an instance of the proxy type and return it typed as T. The caller can now invoke the generator and request a dynamic proxy for any type T. In our example, when the client invokes GetNum() we get back "0". Let's add a new method on the interface called DayOfWeek GetDay():

public interface IFoo
{
    int GetNum();
    DayOfWeek GetDay();
}

When GetDay() is invoked, the "dynamic proxy" returns "Sunday", since that is the default value for the DayOfWeek enum. This is a very trivial example of dynamic proxies; frameworks like MOQ have a far more sophisticated implementation of this paradigm, wherein you can instruct the framework to create proxies which return specified values for a method implementation.
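To round this out, here is a short usage sketch of my own (not from the original post) showing the generator in action:

// The emitted constructor prints "Creating Proxy instance", and each
// proxied call prints "I'm a proxy" before returning a default value.
var foo = DynamicProxyGenerator.GetInstanceFor<IFoo>();
Console.WriteLine(foo.GetNum()); // 0 - the default for int
Console.WriteLine(foo.GetDay()); // Sunday - the default for the DayOfWeek enum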

    Read the article

  • Your interesting code tricks/conventions? [closed]

    - by Paul
    What interesting conventions, rules, tricks do you use in your code? Preferably some that are not so popular so that the rest of us would find them as novelties. :) Here's some of mine...

Input and output parameters

This applies to C++ and other languages that have both references and pointers. This is the convention: input parameters are always passed by value or const reference; output parameters are always passed by pointer. This way I'm able to see at a glance, directly from the function call, what parameters might get modified by the function. Inspiration: old C code.

int a = 6, b = 7, sum = 0;
calculateSum(a, b, &sum);

Ordering of headers

My typical source file begins like this (see code below). The reason I put the matching header first is because, in case that header is not self-sufficient (I forgot to include some necessary library, or forgot to forward declare some type or function), a compiler error will occur.

// Matching header
#include "example.h"
// Standard libraries
#include <string>
...

Setter functions

Sometimes I find that I need to set multiple properties of an object all at once (like when I just constructed it and I need to initialize it). To reduce the amount of typing and, in some cases, improve readability, I decided to make my setters chainable. Inspiration: Builder pattern.

class Employee {
public:
    Employee& name(const std::string& name);
    Employee& salary(double salary);
private:
    std::string name_;
    double salary_;
};

Employee bob;
bob.name("William Smith").salary(500.00);

Maybe in this particular case it could have been just as well done in the constructor. But for Real World™ applications, classes would have lots more fields that should be set to appropriate values and it becomes unmaintainable to do it in the constructor. So what about you? What personal tips and tricks would you like to share?

    Read the article

  • Backup Azure Tables with the Enzo Backup API

    - by Herve Roggero
    In case you missed it, you can now backup (and restore) Azure Tables and SQL Databases using an API directly. The features available through the API can be found here: http://www.bluesyntax.net/backup20api.aspx and the online help for the API is here: http://www.bluesyntax.net/EnzoCloudBackup20/APIIntro.aspx. Backing up Azure Tables can’t be any easier than with the Enzo Backup API. Here is a sample code that does the trick:

// Create the backup helper class. The constructor automatically sets the SourceStorageAccount property
StorageBackupHelper backup = new StorageBackupHelper(
    "storageaccountname", "storageaccountkey",
    "sourceStorageaccountname", "sourceStorageaccountkey",
    true, "apilicensekey");

// Now set some properties…
backup.UseCloudAgent = false;                       // backup locally
backup.DeviceURI = @"c:\TMP\azuretablebackup.bkp";  // to this file
backup.Override = true;
backup.Location = DeviceLocation.LocalFile;

// Set optional performance options
backup.PKTableStrategy.Mode = BSC.Backup.API.TableStrategyMode.GUID; // Set GUID strategy by default
backup.MaxRESTPerSec = 200; // Attempt to stay below 200 REST calls per second

// Start the backup now…
string taskId = backup.Backup();

// Use the Environment class to get the final status of the operation
EnvironmentHelper env = new EnvironmentHelper("storageaccountname", "storageaccountkey", "apilicensekey");
string status = env.GetOperationStatus(taskId);

As you can see above, the code is straightforward. You provide connection settings in the constructor, set a few options indicating where the backup device will be located, set optional performance parameters and start the backup. The performance options are designed to help you back up your Azure Tables quickly, while attempting to stay under a specific threshold to prevent Storage Account throttling. For example, the MaxRESTPerSec property will attempt to keep the overall backup operation under 200 REST calls per second. Another performance option is the Backup Strategy for Azure Tables. By default, all tables are simply scanned. While this works best for smaller Azure Tables, larger tables can use the GUID strategy, which will issue requests against an Azure Table in parallel assuming the PartitionKey stores GUID values. It doesn’t mean that your PartitionKey must have GUIDs for this strategy to work; the backup algorithm is simply tuned for that condition. Other options are available as well, such as filtering which columns, entities or tables are being backed up. Check out more on the Blue Syntax website at http://www.bluesyntax.net.

    Read the article

  • Mock RequireJS define dependencies with config.map

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/18/mock-requirejs-define-dependencies-with-config.map.aspx

I had a module dependency that I’m pulling down with RequireJS and that I needed to use and write tests against. In this case, I don’t care about the actual implementation of the module (it’s simple enough that I’m just avoiding some AJAX calls). EDIT: make sure you look at the bottom example after the edit before using the config.map approach. I found that there is an easier way. I did not want to change the constructor of the consumer, as I had a chain of changes that would have to be made and that would have been too invasive for this task. I found a question on StackOverflow with a short but helpful answer from “Artem Oboturov”. We can use the config.map from RequireJS to achieve this. Here is some code:

A module example (“usefulModule” in Common/Modules/usefulModule.js):

define([], function() {
    "use strict";

    var testMethod = function() {
        ...
    };

    // add more functionality of the module

    return {
        testMethod: testMethod
    };
});

A consumer of usefulModule example:

define([
    "Common/Modules/usefulModule"
], function(usefulModule) {
    "use strict";

    var consumerModule = function() {
        var self = this;
        // add functionality of the module
    };
});

Using config.map in the html of the test runner page (and in your Karma config -> I’m still trying to figure this out):

map: {
    '*': {
        // replace usefulModule with a mock
        'Common/Modules/usefulModule': '/Tests/Specs/Common/usefulModuleMock.js'
    }
}

With the new mapping, Require will load usefulModuleMock.js from Tests/Specs/Common instead of the real implementation. Some of the answers on StackOverflow mentioned Squire.js, which looked interesting, but I wasn’t ready to introduce a new library at this time. That’s all you need to be able to mock a dependency in RequireJS. However, there are many good cases when you should pass it in through the constructor instead of this approach.

EDIT: After all that, here’s another, probably better way:

The consumer class, updated:

define([
    "Common/Modules/usefulModule"
], function(UsefulModule) {
    "use strict";

    var consumerModule = function() {
        var self = this;
        self.usefulModule = new UsefulModule();
        // add functionality of the module
    };
});

Jasmine test:

define([
    "consumerModule",
    "/UnitTests/Specs/Common/Mocks/usefulModuleMock.js"
], function(consumerModule, UsefulModuleMock) {
    describe("when mocking out the module", function() {
        it("should probably just override the property", function() {
            var consumer = new consumerModule();
            consumer.usefulModule = new UsefulModuleMock();
        });
    });
});

Thanks for letting me think out loud :-).

    Read the article

  • libgdx arrays onTouch() method and delays for objects

    - by johnny-b
    I am trying to create random bullets but it is not working for some reason. Also, how can I make a delay so the bullets come every 30 seconds or 1 minute? Also, the onTouch method does not work and is not taking the bullet away. Shall I put the array in the GameRender class? Thanks.

public class GameWorld {
    public static Ball ball;
    private Bullet bullet1;
    private ScrollHandler scroller;
    private Array<Bullet> bullets = new Array<Bullet>();

    public GameWorld() {
        ball = new Ball(280, 273, 32, 32);
        bullet = new Bullet(-300, 200);
        scroller = new ScrollHandler(0);
        bullets.add(new Bullet(bullet.getX(), bullet.getY()));
        bullets = new Array<Bullet>();
        Bullet bullet = null;
        float bulletX = 0.0f;
        float bulletY = 0.0f;
        for (int i = 0; i < 10; i++) {
            bulletX = MathUtils.random(-10, 10);
            bulletY = MathUtils.random(-10, 10);
            bullet = new Bullet(bulletX, bulletY);
            bullets.add(bullet);
        }
    }

    public void update(float delta) {
        ball.update(delta);
        bullet.update(delta);
        scroller.update(delta);
    }

    public static Ball getBall() {
        return ball;
    }

    public ScrollHandler getScroller() {
        return scroller;
    }

    public Bullet getBullet1() {
        return bullet1;
    }
}

I also tried this and it is not working. I used this in the GameRender class:

Array<Bullet> enemies = new Array<Bullet>();

// in the constructor of the class
enemies.add(new Bullet(bullet.getX(), bullet.getY())); // this throws an exception for some reason

This is in the render method:

for (int i = 0; i < bullet.size; i++)
    bullet.get(i).draw(batcher);

This I am using in any method that will allow me, from the constructor to update to render:

for (int i = 0; i < bullet.size; i++)
    bullet.get(i).update(delta);

This is not taking the bullet out:

@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    for (int i = 0; i < bullet.size; i++)
        if (bullet.get(i).getBounds().contains(screenX, screenY))
            bullet.removeIndex(i--);
    return false;
}

Thanks for the help, anyone.

    Read the article

  • Using elapsed time for SlowMo in XNA

    - by Dave Voyles
    I'm trying to create a slow-mo effect in my pong game so that when a player presses a button, the paddles and ball will suddenly move at a far slower speed. I believe I understand the concepts of adjusting the timing in XNA, but I'm not sure how to incorporate it into my design exactly. The updates for my bats (paddles) are done in my Bat.cs class:

/// <summary>
/// Controls the bat moving up the screen
/// </summary>
public void MoveUp()
{
    SetPosition(Position + new Vector2(0, -moveSpeed));
}

/// <summary>
/// Controls the bat moving down the screen
/// </summary>
public void MoveDown()
{
    SetPosition(Position + new Vector2(0, moveSpeed));
}

/// <summary>
/// Updates the position of the AI bat, in order to track the ball
/// </summary>
/// <param name="ball"></param>
public virtual void UpdatePosition(Ball ball)
{
    size.X = (int)Position.X;
    size.Y = (int)Position.Y;
}

While the rest of my game updates are done in my GameplayScreen.cs class (I'm using the XNA game state management sample):

class GameplayScreen
{
    ...
    bool slow;
    ...

    public override void Update(GameTime gameTime, bool otherScreenHasFocus, bool coveredByOtherScreen)
    {
        base.Update(gameTime, otherScreenHasFocus, false);
        if (IsActive)
        {
            // SlowMo stuff
            Elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
            if (Slowmo)
                Elapsed *= .8f;
            MoveTimer += Elapsed;

            double elapsedTime = gameTime.ElapsedGameTime.TotalMilliseconds;
            if (Keyboard.GetState().IsKeyDown(Keys.Up))
                slow = true;
            else if (Keyboard.GetState().IsKeyDown(Keys.Down))
                slow = false;
            if (slow == true)
                elapsedTime *= .1f;

            // Updating bat position
            leftBat.UpdatePosition(ball);
            rightBat.UpdatePosition(ball);

            // Updating the ball position
            ball.UpdatePosition();

and finally my fixed time step is declared in the constructor of my Game1.cs class:

/// <summary>
/// The main game constructor.
/// </summary>
public Game1()
{
    IsFixedTimeStep = slow = false;
}

So my question is: where do I place the MoveTimer or elapsedTime so that my bat will slow down accordingly?
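One possible direction, sketched here with assumed names rather than as a definitive answer: treat moveSpeed as a units-per-second value and pass the scaled elapsed time down into the bat, so the slow-mo factor applies wherever movement happens.

// A sketch (names assumed, not from the post): scale each move by the
// possibly-slowed elapsed seconds so the paddles and ball slow together.
public void MoveUp(float scaledElapsed)
{
    SetPosition(Position + new Vector2(0, -moveSpeed * scaledElapsed));
}

// In GameplayScreen.Update, compute the scaled value once and pass it down:
float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
if (slow)
    elapsed *= .1f;
leftBat.MoveUp(elapsed);        // player movement scaled the same way
ball.UpdatePosition(elapsed);   // hypothetical overload taking elapsed time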

    Read the article

  • Rhythmbox is not launching

    - by Somogyi László
    I'm using Rhythmbox 2.99 in Ubuntu 13.10, and when I first launch the application it searches for music and works like a charm, but if I try to launch it later again nothing happens. When I run it in a terminal I get this:

$ rhythmbox

(rhythmbox:8860): GLib-GObject-CRITICAL **: Custom constructor for class SoupServer returned NULL (which is invalid). Unable to remove object from construction_objects list, so memory was probably just leaked. Please use GInitable instead.

Segmentation fault

Any ideas how to make it work again?

    Read the article

  • What is a good design model for my new class?

    - by user66662
    I am a beginning programmer who, after trying to manage over 2000 lines of procedural PHP code, has now discovered the value of OOP. I have read a few books to get me up to speed on the beginning theory, but would like some advice on practical application.

So, for example, let's say there are two types of content objects - an ad and a calendar event. What my application does is scan different websites (a predefined list), and, when it finds an ad or an event, it extracts the data and saves it to a database. All of my objects will share a $title and $description. However, the Ad object will have a $price and the Event object will have a $startDate. Should I have two separate classes, one for each object? Should I have a 'superclass' with the $title and $description, with two other Ad and Event classes with their own properties? The latter is at least the direction I am on now.

My second question about this design is how to handle the logic that extracts the data for $title, $description, $price, and $date. For each website in my predefined list, there is a specific regex that returns the desired value for each property. Currently, I have an extremely large switch statement in my constructor which determines what website I am on, sets the regex variables accordingly, and continues on. Not only that, but now I have to repeat the logic to determine what site I am on in the constructor of each class. This doesn't feel right. Should I create another class Algorithms and store the logic there for each site? Should the functions that handle that logic be in this class, or specific to the classes whose properties they set?

I want to take into account two things in my design: 1) I will add different content objects in the future that share $title and $description but will have their own properties, so I want to be able to easily grow these as needed. 2) I will add more websites constantly (each with its own algorithms for data extraction), so I would like to plan for efficiently managing and working with these now.

I thought about extending the Ad or Event class with a 'websiteX' class and storing its functions there. But this didn't feel right either, as now I have to manage hundreds of little website-specific class files.

Note: I didn't know if this was the correct site or if Stack Overflow was the better choice. If so, let me know and I'll post there.

    Read the article

  • Creating classes in JavaScript

    - by Renso
    Goal: Creating class instances in JavaScript is not built in, since you define "classes" in JS with object literals. The goal is to create classical classes like you would in C#, Ruby, Java, etc., with inheritance and instances. Rather than typical class definitions using object literals, JS has a constructor function and the NEW operator that will allow you to new-up a class and optionally provide initial properties to initialize the new object with. The new operator changes the function's context and the behavior of the return statement.

var Person = function(name) {
   this.name = name;
};

// Init the person
var dude = new Person("renso");
// Validate the instance
assert(dude instanceof Person);

When a constructor function is called with the new keyword, the context changes from the global window to a new and empty context specific to the instance; "this" will refer in this case to the "dude" context. Here is a class pattern that you will need to define your own CLASS emulation library:

var Class = function() {
   var _class = function() {
      this.init.apply(this, arguments);
   };
   _class.prototype.init = function(){};
   return _class;
};

var Person = new Class();
Person.prototype.init = function() {};
var person = new Person;

In order for the class emulator to support adding functions and properties to static classes as well as object instances of Person, change the emulator:

var Class = function() {
   var _class = function() {
      this.init.apply(this, arguments);
   };
   _class.prototype.init = function(){};
   _class.fn = _class.prototype;
   _class.fn.parent = _class;
   // adding class properties
   _class.extend = function(obj) {
      var extended = obj.extended;
      for (var i in obj) {
         _class[i] = obj[i];
      }
      if (extended) extended(_class);
   };
   // adding new instances
   _class.include = function(obj) {
      var included = obj.included;
      for (var i in obj) {
         _class.fn[i] = obj[i];
      }
      if (included) included(_class);
   };
   return _class;
};

Now you can use it to create and extend your own object instances:

// adding static functions to the class Person
var Person = new Class();
Person.extend({
   find: function(name) {/*....*/},
   delete: function(id) {/*....*/}
});
// calling static function find
var person = Person.find('renso');

// adding properties and functions to the class' prototype so that they are
// available on instances of the class Person
var Person = new Class;
Person.include({
   save: function(name) {/*....*/},
   delete: function(id) {/*....*/}
});
var dude = new Person;
// calling instance function
dude.save('renso');

    Read the article

  • How to deal with checked exceptions that cannot ever be thrown

    - by ammoQ
    Example:

foobar = new InputStreamReader(p.getInputStream(), "ISO-8859-1");

Since the encoding is hardcoded and correct, the constructor will never throw the UnsupportedEncodingException declared in the specification (unless the Java implementation is broken, in which case I'm lost anyway). Still, Java forces me to deal with that exception. Currently, it looks like this:

try {
    foobar = new InputStreamReader(p.getInputStream(), "ISO-8859-1");
} catch (UnsupportedEncodingException e) {
    /* won't ever happen */
}

Any ideas how to make it better?

    Read the article

  • Complex event system for DungeonKeeper like game

    - by paul424
    I am working on an open source GPL3 game, http://opendungeons.sourceforge.net/ ; new coders would be welcome. Now there's a design question regarding the Event System: we want to improve the game logic, that is, program a new event system. I will just repost what's settled already on http://forum.freegamedev.net/viewtopic.php?f=45&t=3033. From the discussion came the idea of the Publisher / Subscriber pattern + "domains":

My current idea is to use the subscribers / publishers model. It's similar to the Observer pattern, but instead one subscribes to Event types, not an Object's Events. For each Event I would like to have both a static and a dynamic type. The static type would be resolved by belonging to the proper class inherited from Event. That is, from Event we would have EventTile, EventCreature, EventMapLoader, EventGameMap etc. From those there are of course subtypes, like EventCreature would have EventKobold, EventKnight, EventTentacle etc. The listeners would collect the events from publishers and send them to subscribers, each of which would be a global singleton. The Listener type hierarchy would exactly mirror the type hierarchy of Events. In each constructor of an Event type, the created instance would notify the proper listeners. That is, when constructing an EventKnight, the proper ctor would notify the listeners EventListener, CreatureListener and KnightListener. The default action for a listener would be to notify all subscribers, but there would be some exceptions, like EventAttack, which would notify AttackListener, which would dispatch the event by the dynamic part (that is, the Creature pointer or hash). Any comments?

#include <vector>

class Subscriber;
class SubscriberAttack;

class Event {
private:
    int foo;
    int bar;
protected:
    // static std::vector<Publisher*> publishersList;
    static std::vector<Subscriber*> subscribersList;
    static std::vector<Event*> eventQueue;
public:
    Event() {
        eventQueue.push_back(this);
    }
    static int subscribe(Subscriber* ss);
    static int unsubscribe(Subscriber* ss);
    // static int reg_publisher(Publisher* pp);
    // static int unreg_publisher(Publisher* pp);
};

// class Publisher {
// };

class Subscriber {
public:
    int (*newEvent)(Event* ee);
    Subscriber() {
        Event::subscribe(this);
    }
    Subscriber(int (*fp)(Event* ee)) : newEvent(fp) {
        Subscriber();
    }
    ~Subscriber() {
        Event::unsubscribe(this);
    }
};

class EventAttack : Event {
private:
    int foo;
    int bar;
protected:
    // static std::vector<Publisher*> publishersList;
    static std::vector<SubscriberAttack*> subscribersList;
    static std::vector<EventAttack*> eventQueue;
public:
    EventAttack() {
        eventQueue.push_back(this);
    }
    static int subscribe(SubscriberAttack* ss);
    static int unsubscribe(SubscriberAttack* ss);
    // static int reg_publisher(Publisher* pp);
    // static int unreg_publisher(Publisher* pp);
};

class AttackSubscriber : Subscriber {
public:
    int (*newEvent)(EventAttack* ee);
    AttackSubscriber() {
        EventAttack::subscribe(this);
    }
    AttackSubscriber(int (*fp)(EventAttack* ee)) : newEvent(fp) {
        AttackSubscriber();
    }
    ~AttackSubscriber() {
        EventAttack::unsubscribe(this);
    }
};

From that point, others wanted the Subject-Observer pattern, that is, one would subscribe to all event types produced by a particular object. That way it came out to add the domain system:

Huh, to meet the ability to listen to a particular game object's events, I thought of introducing entity domains. Domains are trees, whose nodes are labeled by unique names for each level (like www addresses).

Each Entity wanting to participate in our event system (that is, be able to publish / produce events) should at least know its domain name. That would end up in Player1/Room/Treasury/#24 or Player1/Creature/Kobold/#3 producing events. The subscriber picks some part of a tree. For example, specifying a subtree with its root in one of the nodes, like Player1/Room/*, would subscribe us to all of Player1's room events, and Player1/Creature/Kobold/#3 would subscribe to Player1's third kobold's events. Does such an event system make sense to you? I have many implementation details to ask about as well, but first let's start some general discussion.

Note 1: Notice that in the case of a fight between two creatures, the creature being attacked would have to throw an event, because it is HE/SHE/IT who has its domain address. So that would be BeingAttackedEvent() etc. I will edit this post if some other reflections on this come up.

Note 2: The existing class hierarchy might be used to build the domain addresses in the constructors. In a ctor you would just add + ."className" to the domain address. If you are in a constructor of a leaf of the class hierarchy, one might use nextID, a hash or any other characteristic, just to make the addresses distinguishable.

Note 3: Subscribing to all of an entity's Events would require knowledge of all possible events produced by this entity. This could be done in one function call, but information on the Events produced would have to be handled for every Entity.

Smart Note 4: Finding the proper subscribers in a tree would be easy. One would start at a particular leaf, for example Player1/Creature/Kobold/#3, and go up one parent at a time, notifying each subscriber in a node, i.e. Player1/Creature/Kobold/*, Player1/Creature/*, Player1/*, etc., up to the root, that is /*.

Note 5: The Event system was needed to have some way of incorporating AngelScript code into the application. So the Event dispatcher was to be a gate to A-script functions. But it came out as this one.

    Read the article

  • How do I keep user input and rendering independent of the implementation environment?

    - by alex
    I'm writing a Tetris clone in JavaScript. I have a fair amount of experience in programming in general, but am rather new to game development. I want to separate the core game code from the code that would tie it to one environment, such as the browser. My quick thoughts led me to having the rendering and input functions external to my main game object. I could pass the current game state to the rendering method, which could render using canvas, elements, text, etc. I could also map input to certain game input events, such as move piece left, rotate piece clockwise, etc. I am having trouble designing how this should be implemented in my object. Should I pass references to functions that the main object will use to render and process user input? For example...

var TetrisClone = function(renderer, inputUpdate) {
    this.renderer = renderer || function() {};
    this.inputUpdate = inputUpdate || function() {};
    this.state = {};
};

TetrisClone.prototype = {
    update: function() {
        // Get user input via function passed to constructor.
        var inputEvents = this.inputUpdate();

        // Update game state.

        // Render the current game state via function passed to constructor.
        this.renderer(this.state);
    }
};

var renderer = function(state) {
    // Render blocks to browser page.
};

var inputEvents = {};
var charCodesToEvents = {
    37: "move-piece-left"
    /* ... */
};

document.addEventListener("keypress", function(event) {
    inputEvents[event.which] = true;
});

var inputUpdate = function() {
    var translatedEvents = [], event, translatedEvent;

    for (event in inputEvents) {
        if (inputEvents.hasOwnProperty(event)) {
            translatedEvent = charCodesToEvents[event];
            translatedEvents.push(translatedEvent);
        }
    }

    inputEvents = {};
    return translatedEvents;
};

var game = new TetrisClone(renderer, inputUpdate);

Is this a good game design? How would you modify this to suit best practice in regard to making a game as platform/input independent as possible?

    Read the article

  • Why shouldn't I be using public variables in my Java class?

    - by Omega
    In school, I've been told many times to stop using public for my variables. I haven't asked why yet. This question: Are Java's public fields just a tragic historical design flaw at this point? seems kinda related to this. However, they don't seem to discuss why it is "wrong", but instead focus on what can be used instead. Look at this (unfinished) class:

public class Reporte {
    public String rutaOriginal;
    public String rutaNueva;
    public int bytesOriginales;
    public int bytesFinales;
    public float ganancia;

    /**
     * Constructor para objetos de la clase Reporte
     */
    public Reporte() {
    }
}

No need to understand Spanish. All this class does is hold some statistics (those public fields) and then do some operations with them (later). I will also need to be modifying those variables often. But well, since I've been told not to use public, this is what I ended up doing:

public class Reporte {
    private String rutaOriginal;
    private String rutaNueva;
    private int bytesOriginales;
    private int bytesFinales;
    private float ganancia;

    /**
     * Constructor para objetos de la clase Reporte
     */
    public Reporte() {
    }

    public String getRutaOriginal() {
        return rutaOriginal;
    }

    public String getRutaNueva() {
        return rutaNueva;
    }

    public int getBytesOriginales() {
        return bytesOriginales;
    }

    public int getBytesFinales() {
        return bytesFinales;
    }

    public float getGanancia() {
        return ganancia;
    }

    public void setRutaOriginal(String rutaOriginal) {
        this.rutaOriginal = rutaOriginal;
    }

    public void setRutaNueva(String rutaNueva) {
        this.rutaNueva = rutaNueva;
    }

    public void setBytesOriginales(int bytesOriginales) {
        this.bytesOriginales = bytesOriginales;
    }

    public void setBytesFinales(int bytesFinales) {
        this.bytesFinales = bytesFinales;
    }

    public void setGanancia(float ganancia) {
        this.ganancia = ganancia;
    }
}

Looks kinda pretty, but seems like a waste of time. Google searches about "When to use public in Java" and "Why shouldn't I use public in Java" seem to discuss a concept of mutability, although I'm not really sure how to interpret such discussions. I do want my class to be mutable - all the time.

    Read the article

  • Dual monitors through 1 HDMI port

    - by Carlos
    I currently have a Dell Studio XPS 13 laptop connected to a 24" HP monitor (w2448hc). I'm thinking of getting a second one, however I am wondering what I need for the setup (hardware-wise). Also, I was wondering if there is any downside to it, or something I should be aware of - for example, image quality loss, GPU overloading, or anything important I should know. More than anything I'm interested in your advice. Also, the monitors do have built-in speakers (HDMI sound output); is the sound going to be reproduced by only one monitor or both?

Specs
Model: Dell Studio XPS 13
OS: Genuine Windows® 7 Home Premium 64-Bit
CPU: Intel® Core™ 2 Duo P8600 (2.4GHz, 3MB L2 Cache, 1067MHz FSB)
Chipset: NVIDIA® GeForce® MCP79MX
RAM: 4GB 1067MHz DDR3 SDRAM
Graphics: SLi NVIDIA® GeForce® 9500M - 256MB

Thanks for your advice. If there is anything additional I need to buy and you have a personal preference, pass along the brand name so I can check it out. Thanks!

    Read the article

  • Tomcat Custom MBean

    - by Darran
    Does anyone know how to deploy a custom MBean to Tomcat? So far I've found this: http://www.junlu.com/list/3/8871.html. I copied my jar with my MBean to the Tomcat lib directory so the custom class loader should pick it up. I then followed the instructions but I kept getting the exception below. My MBean definitely has a public constructor. If I remove the jar from the Tomcat lib directory I get the same message, which suggests it's not picking up my jar, or my jar is being loaded after the Apache MBean Modeler is running in Tomcat.

06-Aug-2010 12:14:23 org.apache.tomcat.util.modeler.modules.MbeansSource execute
SEVERE: Error creating mbean Bean:type=Bean
javax.management.NotCompliantMBeanException: MBean class must have public constructor
    at com.sun.jmx.mbeanserver.Introspector.testCreation(Introspector.java:127)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.createMBean(DefaultMBeanServerInterceptor.java:2
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.createMBean(DefaultMBeanServerInterceptor.java:1
    at com.sun.jmx.mbeanserver.JmxMBeanServer.createMBean(JmxMBeanServer.java:393)
    at org.apache.tomcat.util.modeler.modules.MbeansSource.execute(MbeansSource.java:207)
    at org.apache.tomcat.util.modeler.modules.MbeansSource.load(MbeansSource.java:137)
    at org.apache.catalina.core.StandardEngine.readEngineMbeans(StandardEngine.java:517)
    at org.apache.catalina.core.StandardEngine.init(StandardEngine.java:321)
    at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:411)
    at org.apache.catalina.core.StandardService.start(StandardService.java:519)
    at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)

    Read the article

  • Life Cycle Navigator?

    - by C.W.Holeman II
    In many environments the file system directory structure and naming conventions attempt to allow one to use a file manager to navigate the life cycle of a document. This overloading of functions makes it difficult for users to handle the complexity. A file browser is a tool that lets the user navigate among files located in a directory structure to find a specific file. Whereas, when given a specific file, a life cycle navigator is a tool that lets the user navigate its life cycle from source to published copy and across versions. Does a Life Cycle Navigator exist?

I see a user pointing at an object:

Left mouse button displays the document
Right mouse button has a Life Cycle Navigator (LCN)

The LCN displays a tree for a specific document within a file manager, for example:

Published
    3.2 Current
    3.1
    3.0
    +2.x
    +1.x
    +Archived
    +All
Source
    Draft
    3.2 Current
    3.1
    3.0
    +2.x
    +1.x
    +Archived
    +All
+Work Flow
+Properties

Or from a command line:

$ lcn x.pdf --open_source_document | my_favorite_editor
$ lcn x.pdf --show_published_version_info
$ lcn x.pdf --show_previous_publish_versions_info

See also, Life Cycle Navigator.

    Read the article

  • What's the best way to work out if a virtual server is overloaded?

    - by zemaj
    I have a series of virtual servers. I'm running a command to log in to each one and take a look at the load averages using uptime. What's the best way to work out whether the load values represent overloading? I'm running on Rackspace Cloud, so the servers have burst capability and can all be different sizes. I'm a little stumped on how to come up with a consistent way of figuring out when I need to spin up new servers. I can do things like estimate the jobs running on each one, but I'd like a system that runs a little closer to the real resource use available on each instance, as it obviously varies quite a bit! Help greatly appreciated!

    Read the article

  • Is my laptop good enough to support my development needs? [closed]

    - by KodeSeeker
    I have an ASUS laptop with a Pentium® dual-core CPU running at 2.20GHz. It has 4 GB of built-in RAM, currently running 64-bit Windows 7. I just started graduate school and I'm wondering whether I should go in for a new laptop or just repair the nagging battery on my current one. My requirements include:

- Ability to support IDEs: I may end up running Eclipse, Visual Studio and the like to help with my work.
- Ability to run multiple VMs (not concurrently). I'm currently running Ubuntu 12 and 9 as VMs (not sure if this is overloading the system).
- I'm a non-gamer, so I really don't care about a minor glitch caused by running an uber-heavy game.
- In addition, I will have heavy use of office application software and will be using my computer to watch movies and stream media.

Looking forward to your replies and suggestions!

    Read the article

< Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >