Search Results

Search found 19856 results on 795 pages for 'inversion of control'.


  • C#/VB.Net Web Browser Control Replacement

    - by Lienau
    I've been working on a project that requires browsing web pages with different proxies and user-agents, and clearing cookies between sessions. After looking all around the net, it looks like there are solutions for each of these individually, but I can never get them working. I was wondering if there is a wrapper for this control that fixes all of these problems, or even just a different control I could use instead. Thanks. Edit: I tried using HttpWebRequest; it has everything I need, minus JavaScript.
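
    A minimal sketch of the HttpWebRequest route mentioned in the edit above - the proxy, user-agent and cookie handling are all first-class properties on the request, and the URL and proxy address here are placeholders:

        using System.IO;
        using System.Net;

        class PageFetcher
        {
            static string Fetch(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);

                // Each of the three pain points maps to one property.
                request.Proxy = new WebProxy("127.0.0.1", 8888);       // placeholder proxy
                request.UserAgent = "Mozilla/5.0 (custom agent)";      // per-session user-agent
                request.CookieContainer = new CookieContainer();       // a fresh container means no old cookies

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();                         // raw HTML only
                }
            }
        }

    The trade-off, as the edit notes, is that nothing executes the page's JavaScript; that is the point where a browser control (or a headless browser) is still needed.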

    Read the article

  • Good Source Control Hosting For Privacy?

    - by cam
    I'm looking for a very private source control/hosting solution. Short of hosting my own, and seeing as I'm the only collaborator, what is a good service for this? I'm using Git. The most important aspect is privacy. I don't want anyone to see my code, I'm simply using it for source control/backup. I will be the only developer.

    Read the article

  • Access-Control-Request-Header: - x-requested-with

    - by Tom
    Hi guys, I am building a widget for my users and trying to get it working, but I keep running into a cross-domain issue with this header. HttpFox gives me NS_ERROR_DOM_BAD_URI, and on further investigation I find that the blocked request carries Access-Control-Request-Method: GET and Access-Control-Request-Headers: x-requested-with. I am not sure why it's not loading. I basically call a script and then try to grab some HTML to load, but the request gets blocked with the above headers. How can I get around this problem?
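
    The x-requested-with header is what triggers the CORS preflight, so the server that serves the HTML has to answer the OPTIONS request and explicitly allow that header. A minimal server-side sketch, assuming (purely for illustration) that the endpoint is an ASP.NET handler you control; the allowed origin is a placeholder:

        using System.Web;

        public class WidgetDataHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                var response = context.Response;

                // Allow the widget's origin, the GET method, and the header the XHR adds.
                response.AddHeader("Access-Control-Allow-Origin", "http://widget-host.example");   // placeholder origin
                response.AddHeader("Access-Control-Allow-Methods", "GET, OPTIONS");
                response.AddHeader("Access-Control-Allow-Headers", "x-requested-with");

                if (context.Request.HttpMethod == "OPTIONS")
                {
                    return;   // the preflight only needs the headers, not a body
                }

                response.ContentType = "text/html";
                response.Write("<div>widget markup</div>");
            }

            public bool IsReusable { get { return true; } }
        }

    If the HTML lives on a server you do not control, the usual fallbacks are JSONP or proxying the request through your own domain so the browser never sees a cross-origin call.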

    Read the article

  • Android app to remote control Samsung Smart TVs

    - by Gopinath
    Smart TV Remote is an unofficial Android app that lets you control Samsung Smart TVs connected to the same local WiFi network. The app comes in very handy when the TV is not in line of sight of your remote control, or when you simply want to use your phone or tablet to control the TV. Setting up a TV is very easy using the auto-scan feature. Once the TV is set up, you are all set to start using the app as a remote control.

    A traditional remote control uses infrared technology and needs to be in the line of sight of the TV receiver to work. This app uses WiFi instead, which gives it the flexibility to control the TV as long as the phone and the TV are connected to the same WiFi network. It works even if the TV is behind a wall.

    The app provides very easy-to-use options to switch between channels, along with separate remotes for media controls, Smart Hub features and a numeric keypad if you want to jump to a channel by its number. The app also provides a home screen widget with volume controls and channel navigation options. I use this app to control a Samsung E-Series Smart TV at home and it works very well. I'm impressed by how easy it is to set up a TV, the support for multiple TVs, being able to control the TV when I'm not in line of sight, and using the volume buttons of the smartphone to control the volume of the TV.

    What's annoying and missing with the app: As advertised, the app works very well for controlling Samsung TVs (B-, C-, D-, E- and F-Series), except that moving the mouse pointer while browsing the web on the TV is very painful. When you try to move the pointer using the app, it moves painfully slowly; I gave up on using the app to control the mouse pointer after a couple of tries. I installed the app hoping it would help me browse the web on the TV, especially with keyboard support for typing web URLs. The app does not support entering text, either while browsing the web or when searching within Smart TV apps like YouTube, the App Store and so on. The developers never advertised keyboard support, so no complaints there, but it would be very helpful if they allowed the app to act as a keyboard and rescued me from the pain of typing text with the TV remote control.

    Overall this is a very nice app and worth trying out – Download Smart TV Remote from Google Play

    Read the article

  • Is there an established or defined best practice for source control branching between development and production builds?

    - by Matthew Patrick Cashatt
    Thanks for looking. I struggled with how to phrase my question, so let me give an example in the hope of making clearer what I am after: I currently work on a dev team responsible for maintaining and adding features to a web application. We have a development server and we use source control (TFS). Each day everyone checks in their code, and when the code (running on the dev server) passes our QA/QC process, it goes to production. Recently, however, we had a bug in production which required an immediate production fix. The problem was that several of us developers had code checked in that was not ready for production, so we had to either quickly complete and QA that code, or roll back everything, undo pending changes, etc. In other words, it was a mess. This made me wonder: is there an established design pattern that prevents this type of scenario? It seems like there must be some "textbook" answer to this, but I am unsure what it would be. Perhaps a development branch of the code and a "release-ready" or production branch?

    Read the article

  • Should I expect my team to have more than a basic proficiency with our source control system?

    - by Joshua Smith
    My company switched from Subversion to Git about three months ago. We had weeks of advance notice prior to the switch. Since I'd never used Git before (or any other DVCS), I read Pro Git and spent a little time spinning up my own repositories and playing around, so that when we switched I'd be able to keep working with minimal pain. Now I'm the 'Git guy' by default. With a couple of exceptions, most of my team still has no idea how Git works. For example, they still think of branches as complete copies of the source code, and even go so far as to clone the repo into multiple folders (one per branch). They generally look at Git as a scary black box. Given the fundamental nature of source control in our daily work (not to mention the ridiculous amount of power Git affords us), I'm of the opinion that any dev who doesn't achieve a certain level of proficiency with it is a liability. Should I expect my team to have at least some understanding of how Git works internally, and how to use it beyond the most basic pull/merge/push operations? Or am I just making something out of nothing?

    Read the article

  • Broken package ubuntuone-control-panel-qt

    - by baale7
    An interrupted installation of updates resulted in ubuntuone-control-panel-qt breaking, and now I cannot install or update anything. There seems to be an error in python2.7:

        ImportError: No module named _sysconfigdata_nd
        dpkg: error processing archive ubuntuone-control-panel-qt_13.09-0ubuntu1_all.deb (--install):

    When I try to reinstall the package via the .deb, dpkg --audit yields:

        The following packages are in a mess due to serious problems during installation.
        They must be reinstalled for them (and any packages that depend on them) to function properly:
          python-pexpect                   Python module for automating interactive applications
          ubuntuone-control-panel-qt       Ubuntu One Control Panel - Qt frontend
          hplip-data                       HP Linux Printing and Imaging - data files
          python-libxml2                   Python bindings for the GNOME XML library
          apport                           automatically generate crash reports for debugging

        The following packages are only half configured, probably due to problems configuring them the first time.
        The configuration should be retried using dpkg --configure or the configure menu option in dselect:
          python-ubuntuone-client          Ubuntu One client Python libraries
          python-ubuntuone-control-panel   Ubuntu One Control Panel - Python Libraries

    Running dpkg --configure or dpkg --configure -a doesn't work. Any help?

    Read the article

  • Plugins in Ops Center and Cloud Control

    - by Owen Allen
    Cloud Control just released an updated plugin for Oracle Virtual Networking, so I thought I'd mention a bit about how both Ops Center and Cloud Control use plugins. On the Ops Center side, we have a plugin to connect Ops Center to Cloud Control, letting them share monitoring data. This guide explains how to install and use that plugin. In Cloud Control, they have a more extensive collection of plugins, letting you link Cloud Control with a variety of other products. The Cloud Control library plug-in tab goes into more detail about how you can use these plugins in your environment.

    Read the article

  • WebBrowser control won't display an https site that IE8 on the same PC will

    - by Velika
    In IE8, I get the following warning, but if I choose to continue, the site displays properly: "There is a problem with this website's security certificate. The security certificate presented by this website was issued for a different website's address. Security certificate problems may indicate an attempt to fool you or intercept any data you send to the server. We recommend that you close this webpage and do not continue to this website. Click here to close this webpage. Continue to this website (not recommended). More information." In the WebBrowser control, I get this at first: "Navigation to the webpage was canceled. What you can try: Refresh the page." When I refresh the page, I then get the same warning I originally got in IE8, but when I click "Continue to this website (not recommended)", the page refreshes again and displays the same warning. What can I do to get the site to display in the WebBrowser control as it does in IE8? I would have thought that the control uses the same core logic and therefore expected the same result.

    Read the article

  • Error using ASP.NET 3.5 Chart control within a Repeater control

    - by tuseau
    Hi, I'm getting the error "Microsoft JScript runtime error: Sys.WebForms.PageRequestManagerServerErrorException: Error executing child request for ChartImg.axd" at runtime from my Chart control. I have read one solution to this error message as documented here: http://social.msdn.microsoft.com/Forums/en-US/MSWinWebChart/thread/1dc4b352-c9a5-49dc-8f35-9b176509faa1/ but this does not solve my problem. I have the Chart within a Repeater control, which is within an ajax UpdatePanel. When I take the chart out of the Repeater (but leave it in the UpdatePanel), it works. So I'm thinking it is to do with the Repeater. Anyone got any ideas? Thanks in advance.
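
    One workaround that is often suggested for ChartImg.axd trouble in data-bound or AJAX scenarios is to bypass the image handler entirely and have each chart render to its own image file. This is only a hedged sketch, assuming a Repeater named rptCharts, a Chart with ID Chart1 in its ItemTemplate, and a writable ~/TempImages folder:

        using System.Web.UI.DataVisualization.Charting;
        using System.Web.UI.WebControls;

        public partial class ChartsPage : System.Web.UI.Page
        {
            protected void rptCharts_ItemDataBound(object sender, RepeaterItemEventArgs e)
            {
                // Find the Chart control declared inside the Repeater's ItemTemplate.
                var chart = e.Item.FindControl("Chart1") as Chart;
                if (chart == null)
                {
                    return;
                }

                // Render each chart to a file instead of going through ChartImg.axd.
                chart.ImageStorageMode = ImageStorageMode.UseImageLocation;
                chart.ImageLocation = "~/TempImages/ChartPic_#SEQ(300,3)";
            }
        }

    It is also worth double-checking that the chart HTTP handler registration discussed in the linked thread actually ended up in the web.config of the site serving the UpdatePanel, since a missing registration produces the same error.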

    Read the article

  • Source Control - XCode - Visual Studio 2005/2008 / 2010

    - by Mick Walker
    My apologies if this has been asked before; I wasn't quite sure whether this question should be asked on a programming forum, as it relates more to the programming environment than to a particular technology, so please accept my (double) apologies if I am posting this in the wrong place. My logic was: if it affects the code I write, then this is the place for it. At home, I do a lot of my development on a Mac Pro. I do development for the Mac, iPhone and Windows on this machine (Xcode and Visual Studio - multiple versions installed in Boot Camp, but generally I run them via Parallels). When visiting a client, I have a similar setup, but on my MacBook Pro. What I want is a source control solution to install on the Mac Pro that will support both Xcode and multiple versions of Visual Studio, so that when I visit a client, I can simply grab the latest copy from source control via the MacBook Pro. While visiting the client, he or she may suggest changes, and minor ones I would tend to make on site, so I need the ability to merge any modified code back into the trunk of the project/solution when I return home. At the moment, I am using no source control at all, and rely on simply copying folders and overwriting them when I return from a client - that's my 'merge'! I was wondering if anyone had any ideas for a source control provider I could use which would support both Windows and Mac development environments and is cheap (free would be better).

    Read the article

  • Using a version control system as a data backend

    - by JacobM
    I'm involved in a project that, among other things, involves storing edits and changes to a large hierarchical document (HTML-formatted text). We want to include versioning of textual changes and of structural changes. Currently we're maintaining the tree of document sections in a relational database, but as we start working on how to manage versioning of structural changes, it's clear that we're in danger of having to write a lot of the functionality that a version control system provides. We don't want to reinvent the wheel. Is it possible that we could use an existing version control system as the data store, at least for the document itself? Presumably we could do so by writing out new versions to the filesystem, and keeping that directory under version control (and programmatically doing commits and so forth) but it would be better if we could directly interact with the repository via code. The VCS that we are most familiar with is Subversion, but I'm not thrilled with how Subversion represents changes to the directory structure -- it would be nice if we could see that a particular revision included moving a section from Chapter 2 to Chapter 6, rather than just seeing a new version of the tree. This sounds more like the way a system like Mercurial handles changes to the structure. Any advice? Do VCS's have public APIs and so forth? The project is in Java (with Spring) if it matters.
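
    For the "programmatically doing commits" part, the lowest-friction bridge is to write each new document version into a working copy and drive the VCS command line from code; no native binding is required. A rough sketch of that idea, shown in C# for brevity (the same pattern applies from Java via ProcessBuilder), assuming a repository has already been initialised at the given path:

        using System.Diagnostics;
        using System.IO;

        class DocumentStore
        {
            private readonly string workingCopy;

            public DocumentStore(string workingCopy)
            {
                this.workingCopy = workingCopy;
            }

            public void SaveVersion(string relativePath, string html, string message)
            {
                // 1. Write the new version of the section into the working copy.
                File.WriteAllText(Path.Combine(workingCopy, relativePath), html);

                // 2. Stage and commit it; history, diffs and blame come for free.
                RunVcs("add " + relativePath);
                RunVcs("commit -m \"" + message + "\"");
            }

            private void RunVcs(string arguments)
            {
                var startInfo = new ProcessStartInfo("git", arguments)   // or "hg"/"svn" with equivalent commands
                {
                    WorkingDirectory = workingCopy,
                    UseShellExecute = false
                };
                using (var process = Process.Start(startInfo))
                {
                    process.WaitForExit();
                }
            }
        }

    If shelling out feels too crude, there are in-process libraries on the Java side (SVNKit for Subversion, JGit for Git) that expose commits, logs and diffs directly as API calls.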

    Read the article

  • User control always crashes Visual Studio

    - by NickAldwin
    I'm trying to open a user control in one of our projects. It was created, I believe, in VS 2003, and the project has been converted to VS2008. I can view the code fine, but when I try to load the designer view, VS stops responding and I have to close it with the task manager. I have tried leaving it running for several minutes, but it does not do anything. I ran "devenv /log" but didn't see anything unusual in the log. I can't find a specific error message anywhere. Any idea what the problem might be? Is there a lightweight editing mode I might be able to use or something? The reason I need to have a look at the visual representation of this control is to decide where to insert some new components. I've tried googling it and searching SO, but either I don't know what to search or there is nothing out there about this. Any help is appreciated. (The strangest thing is that the user control seems to load fine in another project which references, but VS crashes as soon as I even so much as click on it in that project.)

    Read the article

  • KERN-EXEC 3 when navigating within a text box (Symbian OS Browser Control)

    - by Andrew Flanagan
    I've had nothing but grief using Symbian's browser control on S60 3rd Edition FP1. We currently display pages and many things are working smoothly. However, when inputting text into an HTML text field, the user will get a KERN-EXEC 3 if they move left at the beginning of the text input area (which should "wrap" the cursor to the end) or if they move right at the end of the text input area (which should "wrap" it to the beginning). I can't seem to trap the input in OfferKeyEventL: I get the key event and return EKeyWasConsumed, yet the cursor still moves.

        TKeyResponse CMyAppContainer::OfferKeyEventL(const TKeyEvent& aKeyEvent, TEventCode aType)
        {
            if (iBrCtlInterface) // My browser control
            {
                TBrCtlDefs::TBrCtlElementType type = iBrCtlInterface->FocusedElementType();
                if (type == TBrCtlDefs::EElementActivatedInputBox || type == TBrCtlDefs::EElementInputBox)
                {
                    if (aKeyEvent.iScanCode == EStdKeyLeftArrow || aKeyEvent.iScanCode == EStdKeyRightArrow)
                    {
                        return EKeyWasConsumed;
                    }
                }
            }
            return EKeyWasNotConsumed; // added so every code path returns a value
        }

    I would be okay with completely disabling arrow key navigation but can't seem to do that either. Any ideas? Am I going about this the wrong way? Has anyone here even worked with the Browser Control library (browserengine.lib) on S60 3.1?

    Read the article

  • Best source control system for maintaining different versions

    - by dalecooper
    Hi all! We need to be able to simultaneously maintain a set of different versions of our system. I assume this is best done using branching. We currently use TFS 2008 for source control, work items and automatic builds. What is the best version control solution for this task? Our organization is in the process of migrating to TFS 2010. Will TFS 2010 give us the functionality we need to easily manage a series of branches per system version? We need to be able to keep each version isolated from the others, so that we can do test deployments for each version. Our dev team consists of five .NET developers and two Flash developers. I have heard a lot of talk about Git. Should we consider using Git instead of TFS for source control? Is it possible to use TFS 2010 together with Git? Does anyone have similar setups that work nicely? Any suggestions are appreciated! Thanks, Kjetil.

    Read the article

  • Gamepad Control for Processing + Android to Control Arduino Robot

    - by Iker
    I would like to create a multitouch gamepad control in Processing and use it to control a remote Arduino robot. I would like to build the GUI in Processing and compile it for Android. Here is the GUI gamepad for Processing I have created so far:

        float easing = 0.09;

        // start position
        int posX = 50;
        int posY = 200;

        // target position
        int targetX = 50;
        int targetY = 200;

        boolean dragging = false;

        void setup() {
          size(500, 250);
          smooth();
        }

        void draw() {
          background(255);
          if (!dragging) {
            // calculate the difference in position, apply easing and add to vx/vy
            float vx = (targetX - (posX)) * easing;
            float vy = (targetY - (posY)) * easing;
            // Add the velocity to the current position: make it move!
            posX += vx;
            posY += vy;
          }
          if (mousePressed) {
            dragging = true;
            posX = mouseX;
            posY = mouseY;
          } else {
            dragging = false;
          }
          DrawGamepad();
          DrawButtons();
        }

        void DrawGamepad() {
          //fill(0,155,155);
          //rect(0, 150, 100, 100, 15);
          ellipseMode(RADIUS);          // Set ellipseMode to RADIUS
          fill(0, 155, 155);            // Set fill to blue
          ellipse(50, 200, 50, 50);     // Draw the joystick base using RADIUS mode
          ellipseMode(CENTER);          // Set ellipseMode to CENTER
          fill(255);                    // Set fill to white
          ellipse(posX, posY, 35, 35);  // Draw the draggable knob using CENTER mode
        }

        void DrawButtons() {
          fill(0, 155, 155);            // Set fill to blue
          ellipse(425, 225, 35, 35);
          ellipse(475, 225, 35, 35);
          fill(255, 0, 0);              // Set fill to red
          ellipse(425, 175, 35, 35);
          ellipse(475, 175, 35, 35);
        }

    I have realized that this code will probably not support multitouch events on Android, so I looked at another approach found at this link: Can Processing handle multi-touch? The aim of the project is to create the multitouch gamepad and use it to control my Arduino robot. The gamepad should detect which button was pressed as well as the direction of the joystick. Any help appreciated.

    Read the article

  • Stored Procedures In Source Control - Automate Build/Deployment Process

    - by Alex
    My company provides a large .NET service-oriented solution. The services layer interact with a T-SQL back-end consisting of hundreds of tables and stored procedures. Our C# code is in version-control (SVN) but our stored procedures and schema are not. After much lobbying of expedient upper-management, I was allowed to review our (non-existent) build/deployment process to accomplish the following goals: Place schema and stored procedures under source-control. Automate the build/deployment process. I would like to proceed per the accepted answer's strategy in this post but have additional questions: I would like to use Hudson as my build server. Is this a reasonable choice for a C#/SQL solution? What better alternatives should I explore? Assuming I have all triggers, stored-procedures, schema, etc... under source control, and that they are scripted to individual files, how do I generate a build script which will take into account dependencies/references between these items? (SQL Server does this automatically, but it generates one giant script) What does the workflow of performing an update at the client look like? i.e. I have to keep existing table data. How do I roll-back schema changes? I am the only programmer. Several other pseudo-technical staff like to make changes directly inside SQL Management Studio. Is it realistic to expect others to adhere to this solution -- how can I enforce this? Thank you in advance for your help.
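
    On the dependency question, a common pragmatic answer is to solve it by convention rather than by analysis: keep one file per object, deploy folders in a fixed order (schema, then functions, then stored procedures), and require every script to be re-runnable (IF EXISTS guards) so the same build can upgrade an existing client database without touching its data. A rough sketch of the deploy step under those assumptions (server, database and folder names are placeholders), which a Hudson job could simply invoke after checkout:

        using System.Data.SqlClient;
        using System.IO;
        using System.Linq;

        class SqlDeployer
        {
            static void Main()
            {
                // Convention: folders are applied in this order, files within a folder by name.
                string[] folders = { "Schema", "Functions", "StoredProcedures" };
                string scriptRoot = @"C:\Build\Database";                                      // placeholder checkout path
                string connectionString = "Server=.;Database=AppDb;Integrated Security=true";  // placeholder

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    foreach (string folder in folders)
                    {
                        foreach (string file in Directory.GetFiles(Path.Combine(scriptRoot, folder), "*.sql").OrderBy(f => f))
                        {
                            using (var command = new SqlCommand(File.ReadAllText(file), connection))
                            {
                                command.ExecuteNonQuery();   // assumes the scripts contain no GO separators
                            }
                        }
                    }
                }
            }
        }

    Two caveats: scripts generated by Management Studio usually contain GO separators, which are a client-side batch marker rather than T-SQL, so they need to be split on GO (or run through sqlcmd) before being executed this way; and rollback of schema changes is normally handled by pairing each upgrade script with a hand-written down script, since destructive changes cannot be reversed automatically. Making this pipeline the only supported deployment path is also the simplest lever against ad-hoc edits made directly in Management Studio.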

    Read the article

  • Creating a dynamic proxy generator with c# – Part 2 – Interceptor Design

    - by SeanMcAlinden
    Creating a dynamic proxy generator – Part 1 – Creating the Assembly builder, Module builder and caching mechanism For the latest code go to http://rapidioc.codeplex.com/ Before getting too involved in generating the proxy, I thought it would be worth while going through the intended design, this is important as the next step is to start creating the constructors for the proxy. Each proxy derives from a specified type The proxy has a corresponding constructor for each of the base type constructors The proxy has overrides for all methods and properties marked as Virtual on the base type For each overridden method, there is also a private method whose sole job is to call the base method. For each overridden method, a delegate is created whose sole job is to call the private method that calls the base method. The following class diagram shows the main classes and interfaces involved in the interception process. I’ll go through each of them to explain their place in the overall proxy.   IProxy Interface The proxy implements the IProxy interface for the sole purpose of adding custom interceptors. This allows the created proxy interface to be cast as an IProxy and then simply add Interceptors by calling it’s AddInterceptor method. This is done internally within the proxy building process so the consumer of the API doesn’t need knowledge of this. IInterceptor Interface The IInterceptor interface has one method: Handle. The handle method accepts a IMethodInvocation parameter which contains methods and data for handling method interception. Multiple classes that implement this interface can be added to the proxy. Each method override in the proxy calls the handle method rather than simply calling the base method. How the proxy fully works will be explained in the next section MethodInvocation. IMethodInvocation Interface & MethodInvocation class The MethodInvocation will contain one main method and multiple helper properties. Continue Method The method Continue() has two functions hidden away from the consumer. When Continue is called, if there are multiple Interceptors, the next Interceptors Handle method is called. If all Interceptors Handle methods have been called, the Continue method then calls the base class method. Properties The MethodInvocation will contain multiple helper properties including at least the following: Method Name (Read Only) Method Arguments (Read and Write) Method Argument Types (Read Only) Method Result (Read and Write) – this property remains null if the method return type is void Target Object (Read Only) Return Type (Read Only) DefaultInterceptor class The DefaultInterceptor class is a simple class that implements the IInterceptor interface. Here is the code: DefaultInterceptor namespace Rapid.DynamicProxy.Interception {     /// <summary>     /// Default interceptor for the proxy.     /// </summary>     /// <typeparam name="TBase">The base type.</typeparam>     public class DefaultInterceptor<TBase> : IInterceptor<TBase> where TBase : class     {         /// <summary>         /// Handles the specified method invocation.         /// </summary>         /// <param name="methodInvocation">The method invocation.</param>         public void Handle(IMethodInvocation<TBase> methodInvocation)         {             methodInvocation.Continue();         }     } } This is automatically created in the proxy and is the first interceptor that each method override calls. It’s sole function is to ensure that if no interceptors have been added, the base method is still called. 
Custom Interceptor Example A consumer of the Rapid.DynamicProxy API could create an interceptor for logging when the FirstName property of the User class is set. Just for illustration, I have also wrapped a transaction around the methodInvocation.Coninue() method. This means that any overriden methods within the user class will run within a transaction scope. MyInterceptor public class MyInterceptor : IInterceptor<User<int, IRepository>> {     public void Handle(IMethodInvocation<User<int, IRepository>> methodInvocation)     {         if (methodInvocation.Name == "set_FirstName")         {             Logger.Log("First name seting to: " + methodInvocation.Arguments[0]);         }         using (TransactionScope scope = new TransactionScope())         {             methodInvocation.Continue();         }         if (methodInvocation.Name == "set_FirstName")         {             Logger.Log("First name has been set to: " + methodInvocation.Arguments[0]);         }     } } Overridden Method Example To show a taster of what the overridden methods on the proxy would look like, the setter method for the property FirstName used in the above example would look something similar to the following (this is not real code but will look similar): set_FirstName public override void set_FirstName(string value) {     set_FirstNameBaseMethodDelegate callBase =         new set_FirstNameBaseMethodDelegate(this.set_FirstNameProxyGetBaseMethod);     object[] arguments = new object[] { value };     IMethodInvocation<User<IRepository>> methodInvocation =         new MethodInvocation<User<IRepository>>(this, callBase, "set_FirstName", arguments, interceptors);          this.Interceptors[0].Handle(methodInvocation); } As you can see, a delegate instance is created which calls to a private method on the class, the private method calls the base method and would look like the following: calls base setter private void set_FirstNameProxyGetBaseMethod(string value) {     base.set_FirstName(value); } The delegate is invoked when methodInvocation.Continue() is called within an interceptor. The set_FirstName parameters are loaded into an object array. The current instance, delegate, method name and method arguments are passed into the methodInvocation constructor (there will be more data not illustrated here passed in when created including method info, return types, argument types etc.) The DefaultInterceptor’s Handle method is called with the methodInvocation instance as it’s parameter. Obviously methods can have return values, ref and out parameters etc. in these cases the generated method override body will be slightly different from above. I’ll go into more detail on these aspects as we build them. Conclusion I hope this has been useful, I can’t guarantee that the proxy will look exactly like the above, but at the moment, this is pretty much what I intend to do. Always worth downloading the code at http://rapidioc.codeplex.com/ to see the latest. There will also be some tests that you can debug through to help see what’s going on. Cheers, Sean.
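
    Reading the emitted override is easier with a hand-written version of the same chain next to it. The following is not Rapid's actual API, just a manual sketch of the pattern described above: an interceptor list, a Continue() that walks the chain, and a delegate standing in for the private call-base method.

        using System;
        using System.Collections.Generic;

        // The same shape as the article's IInterceptor/IMethodInvocation pair, minus the generics.
        public interface IInterceptor
        {
            void Handle(MethodInvocation invocation);
        }

        public class MethodInvocation
        {
            private readonly List<IInterceptor> interceptors;
            private readonly Func<object[], object> callBase;   // stands in for the emitted base-method delegate
            private int next;                                    // index of the next interceptor to run

            public MethodInvocation(string name, object[] arguments,
                                    List<IInterceptor> interceptors, Func<object[], object> callBase)
            {
                Name = name;
                Arguments = arguments;
                this.interceptors = interceptors;
                this.callBase = callBase;
                this.next = 1;   // slot 0 (the default interceptor) is invoked by the proxy itself
            }

            public string Name { get; private set; }
            public object[] Arguments { get; set; }
            public object Result { get; set; }

            // Runs the next interceptor, or the base method once every interceptor has had its turn.
            public void Continue()
            {
                if (next < interceptors.Count)
                {
                    interceptors[next++].Handle(this);
                }
                else
                {
                    Result = callBase(Arguments);
                }
            }
        }

        public class DefaultInterceptor : IInterceptor
        {
            public void Handle(MethodInvocation invocation) { invocation.Continue(); }
        }

        // What an overridden setter effectively does, written by hand instead of emitted.
        public class UserProxy
        {
            private readonly List<IInterceptor> interceptors = new List<IInterceptor> { new DefaultInterceptor() };
            private string firstName;

            public void AddInterceptor(IInterceptor interceptor) { interceptors.Add(interceptor); }

            public string FirstName
            {
                set
                {
                    var invocation = new MethodInvocation(
                        "set_FirstName",
                        new object[] { value },
                        interceptors,
                        args => { firstName = (string)args[0]; return null; });   // the "call base" step
                    interceptors[0].Handle(invocation);
                }
            }
        }

    A custom interceptor registered through AddInterceptor then sees set_FirstName invocations exactly as in the logging example above, and the base setter only runs once every interceptor in the chain has called Continue().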

    Read the article

  • Creating a dynamic proxy generator – Part 1 – Creating the Assembly builder, Module builder and cach

    - by SeanMcAlinden
    I’ve recently started a project with a few mates to learn the ins and outs of Dependency Injection, AOP and a number of other pretty crucial patterns of development as we’ve all been using these patterns for a while but have relied totally on third part solutions to do the magic. We thought it would be interesting to really get into the details by rolling our own IoC container and hopefully learn a lot on the way, and you never know, we might even create an excellent framework. The open source project is called Rapid IoC and is hosted at http://rapidioc.codeplex.com/ One of the most interesting tasks for me is creating the dynamic proxy generator for enabling Aspect Orientated Programming (AOP). In this series of articles, I’m going to track each step I take for creating the dynamic proxy generator and I’ll try my best to explain what everything means - mainly as I’ll be using Reflection.Emit to emit a fair amount of intermediate language code (IL) to create the proxy types at runtime which can be a little taxing to read. It’s worth noting that building the proxy is without a doubt going to be slightly painful so I imagine there will be plenty of areas I’ll need to change along the way. Anyway lets get started…   Part 1 - Creating the Assembly builder, Module builder and caching mechanism Part 1 is going to be a really nice simple start, I’m just going to start by creating the assembly, module and type caches. The reason we need to create caches for the assembly, module and types is simply to save the overhead of recreating proxy types that have already been generated, this will be one of the important steps to ensure that the framework is fast… kind of important as we’re calling the IoC container ‘Rapid’ – will be a little bit embarrassing if we manage to create the slowest framework. The Assembly builder The assembly builder is what is used to create an assembly at runtime, we’re going to have two overloads, one will be for the actual use of the proxy generator, the other will be mainly for testing purposes as it will also save the assembly so we can use Reflector to examine the code that has been created. Here’s the code: DynamicAssemblyBuilder using System; using System.Reflection; using System.Reflection.Emit; namespace Rapid.DynamicProxy.Assembly {     /// <summary>     /// Class for creating an assembly builder.     /// </summary>     internal static class DynamicAssemblyBuilder     {         #region Create           /// <summary>         /// Creates an assembly builder.         /// </summary>         /// <param name="assemblyName">Name of the assembly.</param>         public static AssemblyBuilder Create(string assemblyName)         {             AssemblyName name = new AssemblyName(assemblyName);               AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(                     name, AssemblyBuilderAccess.Run);               DynamicAssemblyCache.Add(assembly);               return assembly;         }           /// <summary>         /// Creates an assembly builder and saves the assembly to the passed in location.         
/// </summary>         /// <param name="assemblyName">Name of the assembly.</param>         /// <param name="filePath">The file path.</param>         public static AssemblyBuilder Create(string assemblyName, string filePath)         {             AssemblyName name = new AssemblyName(assemblyName);               AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(                     name, AssemblyBuilderAccess.RunAndSave, filePath);               DynamicAssemblyCache.Add(assembly);               return assembly;         }           #endregion     } }   So hopefully the above class is fairly explanatory, an AssemblyName is created using the passed in string for the actual name of the assembly. An AssemblyBuilder is then constructed with the current AppDomain and depending on the overload used, it is either just run in the current context or it is set up ready for saving. It is then added to the cache.   DynamicAssemblyCache using System.Reflection.Emit; using Rapid.DynamicProxy.Exceptions; using Rapid.DynamicProxy.Resources.Exceptions;   namespace Rapid.DynamicProxy.Assembly {     /// <summary>     /// Cache for storing the dynamic assembly builder.     /// </summary>     internal static class DynamicAssemblyCache     {         #region Declarations           private static object syncRoot = new object();         internal static AssemblyBuilder Cache = null;           #endregion           #region Adds a dynamic assembly to the cache.           /// <summary>         /// Adds a dynamic assembly builder to the cache.         /// </summary>         /// <param name="assemblyBuilder">The assembly builder.</param>         public static void Add(AssemblyBuilder assemblyBuilder)         {             lock (syncRoot)             {                 Cache = assemblyBuilder;             }         }           #endregion           #region Gets the cached assembly                  /// <summary>         /// Gets the cached assembly builder.         /// </summary>         /// <returns></returns>         public static AssemblyBuilder Get         {             get             {                 lock (syncRoot)                 {                     if (Cache != null)                     {                         return Cache;                     }                 }                   throw new RapidDynamicProxyAssertionException(AssertionResources.NoAssemblyInCache);             }         }           #endregion     } } The cache is simply a static property that will store the AssemblyBuilder (I know it’s a little weird that I’ve made it public, this is for testing purposes, I know that’s a bad excuse but hey…) There are two methods for using the cache – Add and Get, these just provide thread safe access to the cache.   The Module Builder The module builder is required as the create proxy classes will need to live inside a module within the assembly. Here’s the code: DynamicModuleBuilder using System.Reflection.Emit; using Rapid.DynamicProxy.Assembly; namespace Rapid.DynamicProxy.Module {     /// <summary>     /// Class for creating a module builder.     /// </summary>     internal static class DynamicModuleBuilder     {         /// <summary>         /// Creates a module builder using the cached assembly.         
/// </summary>         public static ModuleBuilder Create()         {             string assemblyName = DynamicAssemblyCache.Get.GetName().Name;               ModuleBuilder moduleBuilder = DynamicAssemblyCache.Get.DefineDynamicModule                 (assemblyName, string.Format("{0}.dll", assemblyName));               DynamicModuleCache.Add(moduleBuilder);               return moduleBuilder;         }     } } As you can see, the module builder is created on the assembly that lives in the DynamicAssemblyCache, the module is given the assembly name and also a string representing the filename if the assembly is to be saved. It is then added to the DynamicModuleCache. DynamicModuleCache using System.Reflection.Emit; using Rapid.DynamicProxy.Exceptions; using Rapid.DynamicProxy.Resources.Exceptions; namespace Rapid.DynamicProxy.Module {     /// <summary>     /// Class for storing the module builder.     /// </summary>     internal static class DynamicModuleCache     {         #region Declarations           private static object syncRoot = new object();         internal static ModuleBuilder Cache = null;           #endregion           #region Add           /// <summary>         /// Adds a dynamic module builder to the cache.         /// </summary>         /// <param name="moduleBuilder">The module builder.</param>         public static void Add(ModuleBuilder moduleBuilder)         {             lock (syncRoot)             {                 Cache = moduleBuilder;             }         }           #endregion           #region Get           /// <summary>         /// Gets the cached module builder.         /// </summary>         /// <returns></returns>         public static ModuleBuilder Get         {             get             {                 lock (syncRoot)                 {                     if (Cache != null)                     {                         return Cache;                     }                 }                   throw new RapidDynamicProxyAssertionException(AssertionResources.NoModuleInCache);             }         }           #endregion     } }   The DynamicModuleCache is very similar to the assembly cache, it is simply a statically stored module with thread safe Add and Get methods.   The DynamicTypeCache To end off this post, I’m going to create the cache for storing the generated proxy classes. I’ve spent a fair amount of time thinking about the type of collection I should use to store the types and have finally decided that for the time being I’m going to use a generic dictionary. This may change when I can actually performance test the proxy generator but the time being I think it makes good sense in theory, mainly as it pretty much maintains it’s performance with varying numbers of items – almost constant (0)1. Plus I won’t ever need to loop through the items which is not the dictionaries strong point. Here’s the code as it currently stands: DynamicTypeCache using System; using System.Collections.Generic; using System.Security.Cryptography; using System.Text; namespace Rapid.DynamicProxy.Types {     /// <summary>     /// Cache for storing proxy types.     /// </summary>     internal static class DynamicTypeCache     {         #region Declarations           static object syncRoot = new object();         public static Dictionary<string, Type> Cache = new Dictionary<string, Type>();           #endregion           /// <summary>         /// Adds a proxy to the type cache.         
/// </summary>         /// <param name="type">The type.</param>         /// <param name="proxy">The proxy.</param>         public static void AddProxyForType(Type type, Type proxy)         {             lock (syncRoot)             {                 Cache.Add(GetHashCode(type.AssemblyQualifiedName), proxy);             }         }           /// <summary>         /// Tries the type of the get proxy for.         /// </summary>         /// <param name="type">The type.</param>         /// <returns></returns>         public static Type TryGetProxyForType(Type type)         {             lock (syncRoot)             {                 Type proxyType;                 Cache.TryGetValue(GetHashCode(type.AssemblyQualifiedName), out proxyType);                 return proxyType;             }         }           #region Private Methods           private static string GetHashCode(string fullName)         {             SHA1CryptoServiceProvider provider = new SHA1CryptoServiceProvider();             Byte[] buffer = Encoding.UTF8.GetBytes(fullName);             Byte[] hash = provider.ComputeHash(buffer, 0, buffer.Length);             return Convert.ToBase64String(hash);         }           #endregion     } } As you can see, there are two public methods, one for adding to the cache and one for getting from the cache. Hopefully they should be clear enough, the Get is a TryGet as I do not want the dictionary to throw an exception if a proxy doesn’t exist within the cache. Other than that I’ve decided to create a key using the SHA1CryptoServiceProvider, this may change but my initial though is the SHA1 algorithm is pretty fast to put together using the provider and it is also very unlikely to have any hashing collisions. (there are some maths behind how unlikely this is – here’s the wiki if you’re interested http://en.wikipedia.org/wiki/SHA_hash_functions)   Anyway, that’s the end of part 1 – although I haven’t started any of the fun stuff (by fun I mean hairpulling, teeth grating Relfection.Emit style fun), I’ve got the basis of the DynamicProxy in place so all we have to worry about now is creating the types, interceptor classes, method invocation information classes and finally a really nice fluent interface that will abstract all of the hard-core craziness away and leave us with a lightning fast, easy to use AOP framework. Hope you find the series interesting. All of the source code can be viewed and/or downloaded at our codeplex site - http://rapidioc.codeplex.com/ Kind Regards, Sean.
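
    To make the Reflection.Emit side less abstract, here is a tiny self-contained sketch using the same framework calls the builders above wrap (DefineDynamicAssembly, DefineDynamicModule, DefineType). It only emits an empty subclass at runtime, which is essentially what the proxy generator does before it starts adding constructors and overrides; the type and assembly names are placeholders.

        using System;
        using System.Reflection;
        using System.Reflection.Emit;

        public class Greeter
        {
            public virtual string Greet() { return "Hello from the base class"; }
        }

        class Program
        {
            static void Main()
            {
                // 1. Assembly and module builders, as in DynamicAssemblyBuilder/DynamicModuleBuilder above.
                var assemblyName = new AssemblyName("Rapid.Demo.Proxies");
                AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(
                    assemblyName, AssemblyBuilderAccess.Run);
                ModuleBuilder module = assembly.DefineDynamicModule("Rapid.Demo.Proxies");

                // 2. Define a type deriving from Greeter - the proxy-to-be.
                TypeBuilder typeBuilder = module.DefineType(
                    "GreeterProxy",
                    TypeAttributes.Public | TypeAttributes.Class,
                    typeof(Greeter));

                // 3. Bake the type and instantiate it. Later parts of the series add the
                //    constructors, overrides and interceptor plumbing before this step.
                Type proxyType = typeBuilder.CreateType();
                var proxy = (Greeter)Activator.CreateInstance(proxyType);

                Console.WriteLine(proxy.GetType().Name);   // GreeterProxy
                Console.WriteLine(proxy.Greet());          // falls through to the base implementation
            }
        }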

    Read the article
