Search Results

Search found 257 results on 11 pages for 'idisposable'.


  • Is there a better way to avoid an infinite loop using winforms?

    - by Hamish Grubijan
    I am using .Net 3.5 for now. Right now I am using a using trick to disable and enable events around certain sections of code. The user can change either days, hours, minutes or total minutes, and that should not cause an infinite cascade of events (e.g. minutes changing total, total changing minutes, etc.). While the code does what I want, there might be a better / more straightforward way. Do you know of any? For brownie points: This control will be used by multiple teams - I do not want to make it embarrassing. I suspect that I do not need to reinvent the wheel when defining hours in a day, days in a week, etc.; some other standard .Net library out there must have it. Any other remarks regarding the code? This using (EventHacker.DisableEvents(this)) business - that must be a common pattern in .Net ... changing a setting temporarily. What is the name of it? I'd like to be able to refer to it in a comment and also read up more on current implementations. In the general case not only a handle to the thing being changed needs to be remembered, but also the previous state (in this case the previous state does not matter - events are turned on and off unconditionally). Then there is also the possibility of multi-threaded hacking. One could also utilize generics to make the code arguably cleaner. Figuring all this out could lead to a multi-page blog post. I'd be happy to hear some of the answers. P.S. Does it seem like I suffer from obsessive compulsive disorder? Some people like to get things finished and move on; I like to keep them open ... there is always a better way.

      // Corresponding Designer class is omitted.
      using System;
      using System.Windows.Forms;

      namespace XYZ // Real name masked
      {
          interface IEventHackable
          {
              void EnableEvents();
              void DisableEvents();
          }

          public partial class PollingIntervalGroupBox : GroupBox, IEventHackable
          {
              private const int DAYS_IN_WEEK = 7;
              private const int MINUTES_IN_HOUR = 60;
              private const int HOURS_IN_DAY = 24;
              private const int MINUTES_IN_DAY = MINUTES_IN_HOUR * HOURS_IN_DAY;
              private const int MAX_TOTAL_DAYS = 100;
              private static readonly decimal MIN_TOTAL_NUM_MINUTES = 1; // Anything faster than once per minute can bog down our servers.
              private static readonly decimal MAX_TOTAL_NUM_MINUTES = (MAX_TOTAL_DAYS * MINUTES_IN_DAY) - 1; // 99 days should be plenty.
              // The value above was chosen so as not to cause an overflow exception.
              // Watch out for it - the numericUpDown controls each have a Maximum setting.

              public PollingIntervalGroupBox()
              {
                  InitializeComponent();
                  InitializeComponentCustom();
              }

              private void InitializeComponentCustom()
              {
                  this.m_upDownDays.Maximum = MAX_TOTAL_DAYS - 1;
                  this.m_upDownHours.Maximum = HOURS_IN_DAY - 1;
                  this.m_upDownMinutes.Maximum = MINUTES_IN_HOUR - 1;
                  this.m_upDownTotalMinutes.Maximum = MAX_TOTAL_NUM_MINUTES;
                  this.m_upDownTotalMinutes.Minimum = MIN_TOTAL_NUM_MINUTES;
              }

              private void m_upDownTotalMinutes_ValueChanged(object sender, EventArgs e) { setTotalMinutes(this.m_upDownTotalMinutes.Value); }
              private void m_upDownDays_ValueChanged(object sender, EventArgs e) { updateTotalMinutes(); }
              private void m_upDownHours_ValueChanged(object sender, EventArgs e) { updateTotalMinutes(); }
              private void m_upDownMinutes_ValueChanged(object sender, EventArgs e) { updateTotalMinutes(); }

              private void updateTotalMinutes()
              {
                  this.setTotalMinutes(MINUTES_IN_DAY * m_upDownDays.Value
                      + MINUTES_IN_HOUR * m_upDownHours.Value
                      + m_upDownMinutes.Value);
              }

              public decimal TotalMinutes
              {
                  get { return m_upDownTotalMinutes.Value; }
                  set { m_upDownTotalMinutes.Value = value; }
              }

              public decimal TotalHours { set { setTotalMinutes(value * MINUTES_IN_HOUR); } }
              public decimal TotalDays { set { setTotalMinutes(value * MINUTES_IN_DAY); } }
              public decimal TotalWeeks { set { setTotalMinutes(value * DAYS_IN_WEEK * MINUTES_IN_DAY); } }

              private void setTotalMinutes(decimal nTotalMinutes)
              {
                  if (nTotalMinutes < MIN_TOTAL_NUM_MINUTES)
                  {
                      setTotalMinutes(MIN_TOTAL_NUM_MINUTES);
                      return; // Must be careful with recursion.
                  }
                  if (nTotalMinutes > MAX_TOTAL_NUM_MINUTES)
                  {
                      setTotalMinutes(MAX_TOTAL_NUM_MINUTES);
                      return; // Must be careful with recursion.
                  }
                  using (EventHacker.DisableEvents(this))
                  {
                      // First set the total minutes
                      this.m_upDownTotalMinutes.Value = nTotalMinutes;
                      // Then set the rest
                      this.m_upDownDays.Value = (int)(nTotalMinutes / MINUTES_IN_DAY);
                      nTotalMinutes = nTotalMinutes % MINUTES_IN_DAY; // variable reuse.
                      this.m_upDownHours.Value = (int)(nTotalMinutes / MINUTES_IN_HOUR);
                      nTotalMinutes = nTotalMinutes % MINUTES_IN_HOUR;
                      this.m_upDownMinutes.Value = nTotalMinutes;
                  }
              }

              // Event magic
              public void EnableEvents()
              {
                  this.m_upDownTotalMinutes.ValueChanged += this.m_upDownTotalMinutes_ValueChanged;
                  this.m_upDownDays.ValueChanged += this.m_upDownDays_ValueChanged;
                  this.m_upDownHours.ValueChanged += this.m_upDownHours_ValueChanged;
                  this.m_upDownMinutes.ValueChanged += this.m_upDownMinutes_ValueChanged;
              }

              public void DisableEvents()
              {
                  this.m_upDownTotalMinutes.ValueChanged -= this.m_upDownTotalMinutes_ValueChanged;
                  this.m_upDownDays.ValueChanged -= this.m_upDownDays_ValueChanged;
                  this.m_upDownHours.ValueChanged -= this.m_upDownHours_ValueChanged;
                  this.m_upDownMinutes.ValueChanged -= this.m_upDownMinutes_ValueChanged;
              }

              // We give as little info as possible to the 'hacker'.
              private sealed class EventHacker : IDisposable
              {
                  IEventHackable _hackableHandle;

                  public static IDisposable DisableEvents(IEventHackable hackableHandle)
                  {
                      return new EventHacker(hackableHandle);
                  }

                  public EventHacker(IEventHackable hackableHandle)
                  {
                      this._hackableHandle = hackableHandle;
                      this._hackableHandle.DisableEvents();
                  }

                  public void Dispose()
                  {
                      this._hackableHandle.EnableEvents();
                  }
              }
          }
      }

    Read the article

  • Using a c# .net object in an Excel VBA form

    - by Mark O'G
    Hi. I have a .NET object that I want to use in Excel, and an existing VBA script that I need to alter to call this object from. I have then converted the object to a TLB. I've not really touched on this area before, so any help will be appreciated. I have created an interface:

      [Guid("0F700B48-E0CA-446b-B87E-555BCC317D74"), InterfaceType(ComInterfaceType.InterfaceIsDual)]
      public interface IOfficeCOMInterface
      {
          [DispId(1)]
          void ResetOrder();

          [DispId(2)]
          void SetDeliveryAddress(string PostalName, string AddressLine1, string AddressLine2, string AddressLine3,
              string AddressLine4, string PostCode, string CountryCode, string TelephoneNo, string FaxNo, string EmailAddress);
      }

    I have also created a class that implements that interface:

      [ClassInterface(ClassInterfaceType.None), ProgId("NAMESPACE.OfficeCOMInterface"), Guid("9D9723F9-8CF1-4834-BE69-C3FEAAAAB530"), ComVisible(true)]
      public class OfficeCOMInterface : IOfficeCOMInterface, IDisposable
      {
          public void ResetSOPOrder()
          {
          }

          public void SetDeliveryAddress(string PostalName, string AddressLine1, string AddressLine2, string AddressLine3,
              string AddressLine4, string PostCode, string CountryCode, string TelephoneNo, string FaxNo, string EmailAddress)
          {
              try
              {
                  SalesOrder.AmendDeliveryAddress(PostalName, AddressLine1, AddressLine2, AddressLine3, AddressLine4, PostCode);
                  MessageBox.Show("Delivery address set");
              }
              catch (Exception ex)
              {
                  throw ex;
              }
          }
      }

    Read the article

  • Any sense to set obj = null(Nothing) in Dispose()?

    - by serhio
    Is there any sense in setting a custom object to null (Nothing in VB.NET) in the Dispose() method? Could this prevent memory leaks, or is it useless? Let's consider two examples:

      public class Foo : IDisposable
      {
          private Bar bar; // standard custom .NET object

          public Foo(Bar bar)
          {
              this.bar = bar;
          }

          public void Dispose()
          {
              bar = null; // any sense?
          }
      }

      public class Foo : RichTextBox
      {
          // this could also be: GDI+, TCP socket, SQL connection, some other "heavy" object
          private Bitmap backImage;

          public Foo(Bitmap backImage)
          {
              this.backImage = backImage;
          }

          protected override void Dispose(bool disposing)
          {
              if (disposing)
              {
                  backImage = null; // any sense?
              }
          }
      }
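
    For contrast, the conventional guidance for the second example is to dispose owned members rather than only nulling them. The sketch below is illustrative rather than taken from the question; it reuses the backImage field from the example above:

      using System.Drawing;
      using System.Windows.Forms;

      public class Foo : RichTextBox
      {
          private Bitmap backImage;

          public Foo(Bitmap backImage)
          {
              this.backImage = backImage;
          }

          protected override void Dispose(bool disposing)
          {
              if (disposing && backImage != null)
              {
                  backImage.Dispose(); // releases the GDI+ handle deterministically
                  backImage = null;    // optional; mainly guards against accidental reuse
              }
              base.Dispose(disposing);
          }
      }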

    Read the article

  • How to mock WCF Web Services with Rhino Mocks.

    - by Will
    How do I test a class that utilizes proxy clients generated by a Web Service Reference? I would like to mock the client, but the generated client interface doesn't contain the Close method, which is required to properly terminate the proxy. If I don't use the interface, but instead a concrete reference, I get access to the Close method but lose the ability to mock the proxy. I'm trying to test a class similar to this:

      public class ServiceAdapter : IServiceAdapter, IDisposable
      {
          // ILoggingServiceClient is generated via a Web Service reference
          private readonly ILoggingServiceClient _loggingServiceClient;

          public ServiceAdapter() : this(new LoggingServiceClient()) { }

          internal ServiceAdapter(ILoggingServiceClient loggingServiceClient)
          {
              _loggingServiceClient = loggingServiceClient;
          }

          public void LogSomething(string msg)
          {
              _loggingServiceClient.LogSomething(msg);
          }

          public void Dispose()
          {
              // this doesn't compile, because ILoggingServiceClient doesn't contain Close(),
              // yet Close is required to properly terminate the WCF client
              _loggingServiceClient.Close();
          }
      }
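
    One common workaround, sketched here under the assumption that the generated proxy derives from ClientBase<ILoggingServiceClient> as WCF service references normally do, is to keep the interface for mocking and only reach for ICommunicationObject during cleanup; only the Dispose method needs to change:

      public void Dispose()
      {
          // The concrete generated client implements ICommunicationObject; a plain
          // Rhino Mocks stub of ILoggingServiceClient does not, so in tests the cast
          // yields null and Close() is simply skipped.
          var channel = _loggingServiceClient as ICommunicationObject;
          if (channel == null)
              return;

          if (channel.State == CommunicationState.Faulted)
              channel.Abort();  // a faulted channel cannot be closed gracefully
          else
              channel.Close();
      }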

    Read the article

  • Enforcing an "end" call whenever there is a corresponding "start" call

    - by Jeff Meatball Yang
    Let's say I want to enforce a rule: every time you call "StartJumping()" in your function, you must call "EndJumping()" before you return. When a developer is writing their code, they may simply forget to call EndSomething - so I want to make it easy to remember. I can think of only one way to do this, and it abuses the "using" keyword:

      class Jumper : IDisposable
      {
          public Jumper() { Jumper.StartJumping(); }
          public void Dispose() { Jumper.EndJumping(); }

          public static void StartJumping() { ... }
          public static void EndJumping() { ... }
      }

      public bool SomeFunction()
      {
          // do some stuff

          // start jumping...
          using (Jumper j = new Jumper())
          {
              // do more stuff
              // while jumping
          } // end jumping
      }

    Is there a better way to do this?
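
    One frequently suggested alternative for this kind of pairing, sketched below using the hypothetical Jumper type from the question, is to accept the bracketed work as a delegate so the start/end calls live in exactly one place:

      using System;

      static class Jumping
      {
          // The wrapper owns the pairing: a caller cannot start a jump without it ending.
          public static void WhileJumping(Action body)
          {
              Jumper.StartJumping();
              try
              {
                  body();
              }
              finally
              {
                  Jumper.EndJumping(); // runs even if body() throws
              }
          }
      }

      // usage inside SomeFunction:
      Jumping.WhileJumping(() =>
      {
          // do more stuff while jumping
      });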

    Read the article

  • Is it necessarily bad style to ignore the return value of a method

    - by Jono
    Let's say I have a C# method

      public void CheckXYZ(int xyz)
      {
          // do some operation with side effects
      }

    Elsewhere in the same class is another method

      public int GetCheckedXYZ(int xyz)
      {
          int abc;
          // functionally equivalent operation to CheckXYZ,
          // with additional side effect of assigning a value to abc
          return abc; // this value is calculated during the check above
      }

    Is it necessarily bad style to refactor this by removing the CheckXYZ method, and replacing all existing CheckXYZ() calls with GetCheckedXYZ(), ignoring the return value? The returned type isn't IDisposable in this case. Does it come down to discretion?

    Read the article

  • Collectable<T> serialization, Root Namespaces on T in .xml files.

    - by Stacey
    I have a repository class with the following method...

      public T Single<T>(Predicate<T> expression)
      {
          using (var list = (Models.Collectable<T>)System.Xml.Serializer.Deserialize(typeof(Models.Collectable<T>), FileName))
          {
              return list.Find(expression);
          }
      }

    Where Collectable is defined...

      [Serializable]
      public class Collectable<T> : List<T>, IDisposable
      {
          public Collectable() { }
          public void Dispose() { }
      }

    And an item that uses it is defined...

      [Serializable]
      [System.Xml.Serialization.XmlRoot("Titles")]
      public partial class Titles : Collectable<Title> { }

    The problem is that when I call the method, it expects "Collectable" to be the XmlRoot, but the XmlRoot is "Titles" (a collection of Title objects). I have several classes that are collected in .xml files like this, but it seems pointless to rewrite the basic loading methods for each one when the generic accessors do it - so how can I enforce the proper root name for each file without hard-coding methods for each one? The [System.Xml.Serialization.XmlRoot] seems to be ignored.
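
    One way to keep a single generic loader and still control the root element per file, sketched here with XmlSerializer directly (the Serializer helper in the question is assumed to wrap something similar), is the XmlSerializer constructor overload that takes an XmlRootAttribute:

      using System.IO;
      using System.Xml.Serialization;

      public static class XmlStore
      {
          // Hypothetical generic loader: the root element name is supplied per call, so a
          // Collectable<Title> can be read from a document whose root element is <Titles>.
          public static TCollection Load<TCollection>(string fileName, string rootElement)
          {
              var serializer = new XmlSerializer(typeof(TCollection), new XmlRootAttribute(rootElement));
              // Serializers built with this overload are not cached by the runtime,
              // so cache the instance if it is used repeatedly.
              using (var stream = File.OpenRead(fileName))
              {
                  return (TCollection)serializer.Deserialize(stream);
              }
          }
      }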

    Read the article

  • "Reloading" Bindings in Ninject2?

    - by Michael Stum
    I'm using Ninject2 for DI and I have a Module that loads data from a config file. I wonder if there is a way to tell the Kernel or the Module to reload the config? (I can trigger that through code if needed) What worries me is the lifetime of existing objects. Say I have ITest bound to TestImpl1 in Singleton Scope and I change the config to bind ITest to TestImpl2 instead. All new requests should get TestImpl2, but the classes that already requested TestImpl1 before obviously keep it. However, what if all users of TestImpl1 are gone - will TestImpl1 be properly garbage collected and disposed in case it implements IDisposable? Or will it just be orphaned? Do I have to loop through each type and call Unbind/Bind on it? Or can I just unload the entire Module and reload it while still managing any existing object?

    Read the article

  • A problem with a .NET 2.0 project using a 3.0 DLL which implements WCF services

    - by avance70
    I made a client for accessing my WCF services in one project, and all classes that work with services inherit from this class:

      public abstract class ServiceClient<TServiceClient> : IDisposable
          where TServiceClient : ICommunicationObject

    This class is where I do things like disposing, logging when the client was called, etc. - common stuff which all service classes would normally do. Everything worked fine until I got the task of implementing this on an old system. I ran into a problem when I used this project (DLL) in another project which cannot reference System.ServiceModel (since it's old .NET 2.0 software that I still maintain, and upgrading it to 3.0 is out of the question). Here, if I omit where TServiceClient : ICommunicationObject, then the project can build, but ServiceClient cannot use, for example, client.Close() or client.State. So, is my only solution to drop the where constraint and rewrite the service classes?

    Read the article

  • NoSQL with RavenDB and ASP.NET MVC - Part 1

    - by shiju
    A while back I blogged NoSQL with MongoDB, NoRM and ASP.NET MVC Part 1 and Part 2 on how to use MongoDB with an ASP.NET MVC application. The NoSQL movement is getting big attention, and RavenDB is the latest addition to the NoSQL and document database world. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien. Raven stores schema-less JSON documents, lets you define indexes using LINQ queries, and focuses on low latency and high performance. RavenDB is a .NET-focused document database which comes with a fully functional .NET client API and supports LINQ. RavenDB comes with two components, a server and a client API. RavenDB is a REST-based system, so you can write your own HTTP client API. As a .NET developer, RavenDB is becoming my favorite document database. Unlike other document databases, RavenDB supports transactions using System.Transactions. It also supports both embedded and server modes of the database. You can access the RavenDB site at http://ravendb.net

    A demo app with ASP.NET MVC
    Let's create a simple demo app with RavenDB and ASP.NET MVC. To work with RavenDB, do the following steps:
    1. Go to http://ravendb.net/download and download the latest build.
    2. Unzip the downloaded file.
    3. Go to the /Server directory and run RavenDB.exe. This will start the RavenDB server listening on localhost:8080. You can change the port by modifying the "Raven/Port" appSetting value in the RavenDB.exe.config file.
    4. When RavenDB runs, it will automatically create a database in the /Data directory. You can change the data directory name by modifying the "Raven/DataDir" appSetting value in the RavenDB.exe.config file.
    5. RavenDB provides a browser-based admin tool. When the Raven server is running, you can view and edit documents and indexes from your browser; the web admin tool is available at http://localhost:8080. (Screenshots of the web admin tool omitted.)

    Working with ASP.NET MVC
    To work with RavenDB in our demo ASP.NET MVC application, do the following steps.

    Step 1 - Add a reference to the Raven Client API
    In our ASP.NET MVC application, add a reference to Raven.Client.Lightweight.dll from the Client directory.

    Step 2 - Create the DocumentStore
    The document store should be created once per application. Let's create a DocumentStore on application start-up in Global.asax.cs:

      documentStore = new DocumentStore { Url = "http://localhost:8080/" };
      documentStore.Initialise();

    The above code creates a RavenDB document store that talks to the server on localhost at port 8080.

    Step 3 - Create a DocumentSession on BeginRequest
    Let's create a DocumentSession on the BeginRequest event in Global.asax.cs. We are using one document session per unit of work; in our demo app, every HTTP request is a single Unit of Work (UoW).
BeginRequest += (sender, args) =>   HttpContext.Current.Items[RavenSessionKey] = documentStore.OpenSession(); Step 4 - Destroy the DocumentSession on EndRequest  EndRequest += (o, eventArgs) => {     var disposable = HttpContext.Current.Items[RavenSessionKey] as IDisposable;     if (disposable != null)         disposable.Dispose(); };  At the end of HTTP request, we are destroying the DocumentSession  object.The below  code block shown all the code in the Global.asax.cs  private const string RavenSessionKey = "RavenMVC.Session"; private static DocumentStore documentStore;   protected void Application_Start() { //Create a DocumentStore in Application_Start //DocumentStore should be created once per application and stored as a singleton. documentStore = new DocumentStore { Url = "http://localhost:8080/" }; documentStore.Initialise(); AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); //DI using Unity 2.0 ConfigureUnity(); }   public MvcApplication() { //Create a DocumentSession on BeginRequest   //create a document session for every unit of work BeginRequest += (sender, args) =>     HttpContext.Current.Items[RavenSessionKey] = documentStore.OpenSession(); //Destroy the DocumentSession on EndRequest EndRequest += (o, eventArgs) => { var disposable = HttpContext.Current.Items[RavenSessionKey] as IDisposable; if (disposable != null) disposable.Dispose(); }; }   //Getting the current DocumentSession public static IDocumentSession CurrentSession {   get { return (IDocumentSession)HttpContext.Current.Items[RavenSessionKey]; } }  We have setup all necessary code in the Global.asax.cs for working with RavenDB. For our demo app, Let’s write a domain class  public class Category {       public string Id { get; set; }       [Required(ErrorMessage = "Name Required")]     [StringLength(25, ErrorMessage = "Must be less than 25 characters")]     public string Name { get; set;}     public string Description { get; set; }   } We have created simple domain entity Category. Let's create repository class for performing CRUD operations against our domain entity Category.  
public interface ICategoryRepository {     Category Load(string id);     IEnumerable<Category> GetCategories();     void Save(Category category);     void Delete(string id);       }    public class CategoryRepository : ICategoryRepository {     private IDocumentSession session;     public CategoryRepository()     {             session = MvcApplication.CurrentSession;     }     //Load category based on Id     public Category Load(string id)     {         return session.Load<Category>(id);     }     //Get all categories     public IEnumerable<Category> GetCategories()     {         var categories= session.LuceneQuery<Category>()                 .WaitForNonStaleResults()             .ToArray();         return categories;       }     //Insert/Update category     public void Save(Category category)     {         if (string.IsNullOrEmpty(category.Id))         {             //insert new record             session.Store(category);         }         else         {             //edit record             var categoryToEdit = Load(category.Id);             categoryToEdit.Name = category.Name;             categoryToEdit.Description = category.Description;         }         //save the document session         session.SaveChanges();     }     //delete a category     public void Delete(string id)     {         var category = Load(id);         session.Delete<Category>(category);         session.SaveChanges();     }        } For every CRUD operations, we are taking the current document session object from HttpContext object. session = MvcApplication.CurrentSession; We are calling the static method CurrentSession from the Global.asax.cs public static IDocumentSession CurrentSession {     get { return (IDocumentSession)HttpContext.Current.Items[RavenSessionKey]; } }  Retrieve Entities  The Load method get the single Category object based on the Id. RavenDB is working based on the REST principles and the Id would be like categories/1. The Id would be created by automatically when a new object is inserted to the document store. The REST uri categories/1 represents a single category object with Id representation of 1.   public Category Load(string id) {    return session.Load<Category>(id); } The GetCategories method returns all the categories calling the session.LuceneQuery method. RavenDB is using a lucen query syntax for querying. I will explain more details about querying and indexing in my future posts.   public IEnumerable<Category> GetCategories() {     var categories= session.LuceneQuery<Category>()             .WaitForNonStaleResults()         .ToArray();     return categories;   } Insert/Update entityFor insert/Update a Category entity, we have created Save method in repository class. If  the Id property of Category is null, we call Store method of Documentsession for insert a new record. For editing a existing record, we load the Category object and assign the values to the loaded Category object. The session.SaveChanges() will save the changes to document store.  
//Insert/Update category public void Save(Category category) {     if (string.IsNullOrEmpty(category.Id))     {         //insert new record         session.Store(category);     }     else     {         //edit record         var categoryToEdit = Load(category.Id);         categoryToEdit.Name = category.Name;         categoryToEdit.Description = category.Description;     }     //save the document session     session.SaveChanges(); }  Delete Entity  In the Delete method, we call the document session's delete method and call the SaveChanges method to reflect changes in the document store.  public void Delete(string id) {     var category = Load(id);     session.Delete<Category>(category);     session.SaveChanges(); }  Let’s create ASP.NET MVC controller and controller actions for handling CRUD operations for the domain class Category  public class CategoryController : Controller { private ICategoryRepository categoyRepository; //DI enabled constructor public CategoryController(ICategoryRepository categoyRepository) {     this.categoyRepository = categoyRepository; } public ActionResult Index() {         var categories = categoyRepository.GetCategories();     if (categories == null)         return RedirectToAction("Create");     return View(categories); }   [HttpGet] public ActionResult Edit(string id) {     var category = categoyRepository.Load(id);         return View("Save",category); } // GET: /Category/Create [HttpGet] public ActionResult Create() {     var category = new Category();     return View("Save", category); } [HttpPost] public ActionResult Save(Category category) {     if (!ModelState.IsValid)     {         return View("Save", category);     }           categoyRepository.Save(category);         return RedirectToAction("Index");     }        [HttpPost] public ActionResult Delete(string id) {     categoyRepository.Delete(id);     var categories = categoyRepository.GetCategories();     return PartialView("CategoryList", categories);      }        }  RavenDB is an awesome document database and I hope that it will be the winner in .NET space of document database world.  The source code of demo application available at http://ravenmvc.codeplex.com/

    Read the article

  • How to create an item in a SharePoint 2010 document library using the SharePoint Web service

    - by ybbest
    Today, I’d like to show you how to create item in SharePoint2010 document library using SharePoint Web service. Originally, I thought I could use the WebSvcLists(list.asmx) that provides methods for working with lists and list data. However, after a bit Googling , I realize that I need to use the WebSvcCopy (copy.asmx).Here are the code used private const string siteUrl = "http://ybbest"; private static void Main(string[] args) { using (CopyWSProxyWrapper copyWSProxyWrapper = new CopyWSProxyWrapper(siteUrl)) { copyWSProxyWrapper.UploadFile("TestDoc2.pdf", new[] {string.Format("{0}/Shared Documents/TestDoc2.pdf", siteUrl)}, Resource.TestDoc, GetFieldInfos().ToArray()); } } private static List<FieldInformation> GetFieldInfos() { var fieldInfos = new List<FieldInformation>(); //The InternalName , DisplayName and FieldType are both required to make it work fieldInfos.Add(new FieldInformation { InternalName = "Title", Value = "TestDoc2.pdf", DisplayName = "Title", Type = FieldType.Text }); return fieldInfos; } Here is the code for the proxy wrapper. public class CopyWSProxyWrapper : IDisposable { private readonly string siteUrl; public CopyWSProxyWrapper(string siteUrl) { this.siteUrl = siteUrl; } private readonly CopySoapClient proxy = new CopySoapClient(); public void UploadFile(string testdoc2Pdf, string[] destinationUrls, byte[] testDoc, FieldInformation[] fieldInformations) { using (CopySoapClient proxy = new CopySoapClient()) { proxy.Endpoint.Address = new EndpointAddress(String.Format("{0}/_vti_bin/copy.asmx", siteUrl)); proxy.ClientCredentials.Windows.ClientCredential = CredentialCache.DefaultNetworkCredentials; proxy.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation; CopyResult[] copyResults = null; try { proxy.CopyIntoItems(testdoc2Pdf, destinationUrls, fieldInformations, testDoc, out copyResults); } catch (Exception e) { System.Console.WriteLine(e); } if (copyResults != null) System.Console.WriteLine(copyResults[0].ErrorMessage); System.Console.ReadLine(); } } public void Dispose() { proxy.Close(); } } You can download the source code here . ******Update********** It seems to be a bug that , you can not set the contentType when create a document item using Copy.asmx. In sp2007 the field type was Choice, however, in sp2010 it is actually Computed. I have tried using the Computed field type with no luck. I have also tried sending the ContentTypeId and this does not work.You might have to write your own web services to handle this.You can check my previous blog on how to get started with you own custom WCF in SP2010 here. References: SharePoint 2010 Web Services SharePoint2007 Web Services SharePoint MSDN Forum

    Read the article

  • The embarrassingly obvious about SQL Server CE

    - by Edward Boyle
    I have been working with SQL servers in one form or another for almost two decades now, but I am new to SQL Server Compact Edition. In the past weeks I have been working with SQL Server CE a lot. The SQL is not a problem, but the engine itself is very new to me. One of the issues I ran into was a simple SQL statement taking excessive amounts of time; by excessive, I mean over one second. I wrote a little code to time the method. Sometimes it took under one second, other times as long as three seconds - but it was a simple update statement! As embarrassing as it is, why it was slow eluded me. I posted my issue to MSDN and I got a reply from ErikEJ (MS MVP), who runs the blog "Everything SQL Server Compact". I know little to nothing about SQL Server Compact; this guy is completely obsessed with, and very well versed in, CE. If you spend any time in the MSDN forums, it seems that he single-handedly has the answer for every CE question that comes up. Anyway, he said: "Opening a connection to a SQL Server Compact database file is a costly operation, keep one connection open per thread (incl. your UI thread) in your app, the one on the UI thread should live for the duration of your app." It hit me: all databases have some connection overhead, but SQL Server CE is not a database engine running as a service, drinking Jolt Cola, waiting for someone to talk to it so it can spring into action and show off its quarter-mile sprint capabilities. Imagine if you had to start the SQL Server process every time you needed to make a database connection. In principle, that is what you are doing with SQL Server CE. For someone who has worked with enterprise-level SQL Servers a lot, I had to come around to the mental image that my open connection to SQL Server CE is basically starting a service, my own private service, and that by closing the connection I am shutting down my little private service. After making the changes in my code, I lost any reservations I had about using CE. At present, my data access layer class has a constructor; in that constructor I open my connection. I also have OpenConnection and CloseConnection methods, and I implemented IDisposable to clean up any connections in Dispose(). I am still finalizing how this assembly will function - but that's beside the point. All I'm trying to say is: "Opening a connection to a SQL Server Compact database file is a costly operation."
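
    The arrangement the post describes might look roughly like the sketch below; the class and member names are illustrative, not the author's actual data access layer:

      using System;
      using System.Data.SqlServerCe;

      // One SqlCeConnection held open for the object's lifetime, so each query
      // avoids the costly open/close of the Compact engine.
      public sealed class CompactDataAccess : IDisposable
      {
          private readonly SqlCeConnection _connection;

          public CompactDataAccess(string connectionString)
          {
              _connection = new SqlCeConnection(connectionString);
              _connection.Open(); // pay the start-up cost once, not per statement
          }

          public int Execute(string sql)
          {
              using (SqlCeCommand command = _connection.CreateCommand())
              {
                  command.CommandText = sql;
                  return command.ExecuteNonQuery();
              }
          }

          public void Dispose()
          {
              _connection.Dispose(); // shuts the little "private service" down
          }
      }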

    Read the article

  • Caching in .NET Framework 4.0

    - by anobre
    Hello everyone, how are you? Today I am going to present an interesting change around caching compared with earlier versions.

    Introduction
    Version 4.0 of the .NET platform brought a long-expected structural change to the caching features. In version 3.5 (up to SP1), the platform provided a cache implementation through the System.Web.Caching namespace. In earlier versions the cache was available under System.Web, which created a dependency on the ASP.NET classes. In the new framework, the System.Runtime.Caching namespace gathers all the API needed to carry out the tasks that were common to ASP.NET caching in earlier versions.

    System.Runtime.Caching and MemoryCache
    Everything we need to work with caching, in web applications or otherwise, lives in the System.Runtime.Caching namespace. The basic unit of work is the abstract class ObjectCache, which provides the base for building custom cache implementations. As you would expect, the MemoryCache class is the implementation of the abstract ObjectCache class that stores its data in memory.

      public class MemoryCache : ObjectCache, IEnumerable, IDisposable

    Using the cache is very simple, and quite similar to the previous model:

      ObjectCache cache = MemoryCache.Default;
      string fileContents = cache["filecontents"] as string;
      if (fileContents == null)
      {
          CacheItemPolicy policy = new CacheItemPolicy();
          List<string> filePaths = new List<string>();
          filePaths.Add("c:\\cache\\example.txt");
          policy.ChangeMonitors.Add(new HostFileChangeMonitor(filePaths));

          // Fetch the file contents.
          fileContents = File.ReadAllText("c:\\cache\\example.txt");
          cache.Set("filecontents", fileContents, policy);
      }
      Label1.Text = fileContents;

    Extending the cache
    The whole caching mechanism can be customized through several approaches. ScottGu has written about this, which you can read through this link.

    Conclusion
    Something long awaited in earlier versions: the cache is finally available without taking a dependency on web-only assemblies. Perfect for anyone developing other kinds of applications, who can use this feature without loading unnecessary code. Cheers!
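
    To complement the file-monitor example above, a time-based eviction policy is just as short. The sketch below is illustrative and not part of the original post; BuildExpensiveReport is a placeholder:

      using System;
      using System.Runtime.Caching;

      static class ReportCache
      {
          // Caches the result of an expensive call for 20 minutes after its last access;
          // AbsoluteExpiration could be used instead to evict at a fixed point in time.
          public static object GetReport()
          {
              ObjectCache cache = MemoryCache.Default;
              object report = cache["report"];
              if (report == null)
              {
                  report = BuildExpensiveReport(); // placeholder for the real work
                  cache.Set("report", report, new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(20) });
              }
              return report;
          }

          private static object BuildExpensiveReport() { return new object(); } // stand-in
      }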

    Read the article

  • Why can't Java/C# implement RAII?

    - by mike30
    Question: Why can't Java/C# implement RAII? Clarification: I am aware the garbage collector is not deterministic. So with the current language features it is not possible for an object's Dispose() method to be called automatically on scope exit. But could such a deterministic feature be added? My understanding: I feel an implementation of RAII must satisfy two requirements: 1. The lifetime of a resource must be bound to a scope. 2. Implicit. The freeing of the resource must happen without an explicit statement by the programmer. Analogous to a garbage collector freeing memory without an explicit statement. The "implicitness" only needs to occur at point of use of the class. The class library creator must of course explicitly implement a destructor or Dispose() method. Java/C# satisfy point 1. In C# a resource implementing IDisposable can be bound to a "using" scope: void test() { using(Resource r = new Resource()) { r.foo(); }//resource released on scope exit } This does not satisfy point 2. The programmer must explicitly tie the object to a special "using" scope. Programmers can (and do) forget to explicitly tie the resource to a scope, creating a leak. In fact the "using" blocks are converted to try-finally-dispose() code by the compiler. It has the same explicit nature of the try-finally-dispose() pattern. Without an implicit release, the hook to a scope is syntactic sugar. void test() { //Programmer forgot (or was not aware of the need) to explicitly //bind Resource to a scope. Resource r = new Resource(); r.foo(); }//resource leaked!!! I think it is worth creating a language feature in Java/C# allowing special objects that are hooked to the stack via a smart-pointer. The feature would allow you to flag a class as scope-bound, so that it always is created with a hook to the stack. There could be a options for different for different types of smart pointers. class Resource - ScopeBound { /* class details */ void Dispose() { //free resource } } void test() { //class Resource was flagged as ScopeBound so the tie to the stack is implicit. Resource r = new Resource(); //r is a smart-pointer r.foo(); }//resource released on scope exit. I think implicitness is "worth it". Just as the implicitness of garbage collection is "worth it". Explicit using blocks are refreshing on the eyes, but offer no semantic advantage over try-finally-dispose(). Is it impractical to implement such a feature into the Java/C# languages? Could it be introduced without breaking old code?

    Read the article

  • Building a better mouse-trap – Improving the creation of XML Message Requests using Reflection, XML & XSLT

    - by paulschapman
    Introduction The way I previously created messages to send to the GovTalk service I used the XMLDocument to create the request. While this worked it left a number of problems; not least that for every message a special function would need to created. This is OK for the short term but the biggest cost in any software project is maintenance and this would be a headache to maintain. So the following is a somewhat better way of achieving the same thing. For the purposes of this article I am going to be using the CompanyNumberSearch request of the GovTalk service – although this technique would work for any service that accepted XML. The C# functions which send and receive the messages remain the same. The magic sauce in this is the XSLT which defines the structure of the request, and the use of objects in conjunction with reflection to provide the content. It is a bit like Sweet Chilli Sauce added to Chicken on a bed of rice. So on to the Sweet Chilli Sauce The Sweet Chilli Sauce The request to search for a company based on it’s number is as follows; <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID>1</TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID>????????????????????????????????</SenderID> <Authentication> <Method>CHMD5</Method> <Value>????????????????????????????????</Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber>99999999</PartialCompanyNumber> <DataSet>LIVE</DataSet> <SearchRows>1</SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> This is the XML that we send to the GovTalk Service and we get back a list of companies that match the criteria passed A message is structured in two parts; The envelope which identifies the person sending the request, with the name of the request, and the body which gives the detail of the company we are looking for. The Chilli What makes it possible is the use of XSLT to define the message – and serialization to convert each request object into XML. To start we need to create an object which will represent the contents of the message we are sending. However there is a common properties in all the messages that we send to Companies House. These properties are as follows SenderId – the id of the person sending the message SenderPassword – the password associated with Id TransactionId – Unique identifier for the message AuthenticationValue – authenticates the request Because these properties are unique to the Companies House message, and because they are shared with all messages they are perfect candidates for a base class. 
The class is as follows; using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Security.Cryptography; using System.Text; using System.Text.RegularExpressions; using Microsoft.WindowsAzure.ServiceRuntime; namespace CompanyHub.Services { public class GovTalkRequest { public GovTalkRequest() { try { SenderID = RoleEnvironment.GetConfigurationSettingValue("SenderId"); SenderPassword = RoleEnvironment.GetConfigurationSettingValue("SenderPassword"); TransactionId = DateTime.Now.Ticks.ToString(); AuthenticationValue = EncodePassword(String.Format("{0}{1}{2}", SenderID, SenderPassword, TransactionId)); } catch (System.Exception ex) { throw ex; } } /// <summary> /// returns the Sender ID to be used when communicating with the GovTalk Service /// </summary> public String SenderID { get; set; } /// <summary> /// return the password to be used when communicating with the GovTalk Service /// </summary> public String SenderPassword { get; set; } // end SenderPassword /// <summary> /// Transaction Id - uses the Time and Date converted to Ticks /// </summary> public String TransactionId { get; set; } // end TransactionId /// <summary> /// calculate the authentication value that will be used when /// communicating with /// </summary> public String AuthenticationValue { get; set; } // end AuthenticationValue property /// <summary> /// encodes password(s) using MD5 /// </summary> /// <param name="clearPassword"></param> /// <returns></returns> public static String EncodePassword(String clearPassword) { MD5CryptoServiceProvider md5Hasher = new MD5CryptoServiceProvider(); byte[] hashedBytes; UTF32Encoding encoder = new UTF32Encoding(); hashedBytes = md5Hasher.ComputeHash(ASCIIEncoding.Default.GetBytes(clearPassword)); String result = Regex.Replace(BitConverter.ToString(hashedBytes), "-", "").ToLower(); return result; } } } There is nothing particularly clever here, except for the EncodePassword method which hashes the value made up of the SenderId, Password and Transaction id. Each message inherits from this object. So for the Company Number Search in addition to the properties above we need a partial number, which dataset to search – for the purposes of the project we only need to search the LIVE set so this can be set in the constructor and the SearchRows. Again all are set as properties. With the SearchRows and DataSet initialized in the constructor. public class CompanyNumberSearchRequest : GovTalkRequest, IDisposable { /// <summary> /// /// </summary> public CompanyNumberSearchRequest() : base() { DataSet = "LIVE"; SearchRows = 1; } /// <summary> /// Company Number to search against /// </summary> public String PartialCompanyNumber { get; set; } /// <summary> /// What DataSet should be searched for the company /// </summary> public String DataSet { get; set; } /// <summary> /// How many rows should be returned /// </summary> public int SearchRows { get; set; } public void Dispose() { DataSet = String.Empty; PartialCompanyNumber = String.Empty; DataSet = "LIVE"; SearchRows = 1; } } As well as inheriting from our base class, I have also inherited from IDisposable – not just because it is just plain good practice to dispose of objects when coding, but it gives also gives us more versatility when using the object. 
There are four stages in making a request and this is reflected in the four methods we execute in making a call to the Companies House service; Create a request Send a request Check the status If OK then get the results of the request I’ve implemented each of these stages within a static class called Toolbox – which also means I don’t need to create an instance of the class to use it. When making a request there are three stages; Get the template for the message Serialize the object representing the message Transform the serialized object using a predefined XSLT file. Each of my templates I have defined as an embedded resource. When retrieving a resource of this kind we have to include the full namespace to the resource. In making the code re-usable as much as possible I defined the full ‘path’ within the GetRequest method. requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); So we now have the full path of the file within the assembly. Now all we need do is retrieve the assembly and get the resource. asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); Once retrieved  So this can be returned to the calling function and we now have a stream of XSLT to define the message. Time now to serialize the request to create the other side of this message. // Serialize object containing Request, Load into XML Document t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); First off we need the type of the object so we make a call to the GetType method of the object containing the Message properties. Next we need a MemoryStream, XmlSerializer and an XMLTextWriter so these can be initialized. The object is serialized by making the call to the Serialize method of the serializer object. The result of that is then converted into a MemoryStream. That MemoryStream is then converted into a string. ConvertByteArrayToString This is a fairly simple function which uses an ASCIIEncoding object found within the System.Text namespace to convert an array of bytes into a string. public static String ConvertByteArrayToString(byte[] bytes) { System.Text.ASCIIEncoding enc = new System.Text.ASCIIEncoding(); return enc.GetString(bytes); } I only put it into a function because I will be using this in various places. The Sauce When adding support for other messages outside of creating a new object to store the properties of the message, the C# components do not need to change. It is in the XSLT file that the versatility of the technique lies. The XSLT file determines the format of the message. 
For the CompanyNumberSearch the XSLT file is as follows; <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/"> <GovTalkMessage xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" > <EnvelopeVersion>1.0</EnvelopeVersion> <Header> <MessageDetails> <Class>NumberSearch</Class> <Qualifier>request</Qualifier> <TransactionID> <xsl:value-of select="CompanyNumberSearchRequest/TransactionId"/> </TransactionID> </MessageDetails> <SenderDetails> <IDAuthentication> <SenderID><xsl:value-of select="CompanyNumberSearchRequest/SenderID"/></SenderID> <Authentication> <Method>CHMD5</Method> <Value> <xsl:value-of select="CompanyNumberSearchRequest/AuthenticationValue"/> </Value> </Authentication> </IDAuthentication> </SenderDetails> </Header> <GovTalkDetails> <Keys/> </GovTalkDetails> <Body> <NumberSearchRequest xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://xmlgw.companieshouse.gov.uk/v1-0/schema/NumberSearch.xsd"> <PartialCompanyNumber> <xsl:value-of select="CompanyNumberSearchRequest/PartialCompanyNumber"/> </PartialCompanyNumber> <DataSet> <xsl:value-of select="CompanyNumberSearchRequest/DataSet"/> </DataSet> <SearchRows> <xsl:value-of select="CompanyNumberSearchRequest/SearchRows"/> </SearchRows> </NumberSearchRequest> </Body> </GovTalkMessage> </xsl:template> </xsl:stylesheet> The outer two tags define that this is a XSLT stylesheet and the root tag from which the nodes are searched for. The GovTalkMessage is the format of the message that will be sent to Companies House. We first set up the XslCompiledTransform object which will transform the XSLT template and the serialized object into the request to Companies House. xslt = new XslCompiledTransform(); resultStream = new MemoryStream(); writer = new XmlTextWriter(resultStream, Encoding.ASCII); doc = new XmlDocument(); The Serialize method require XmlTextWriter to write the XML (writer) and a stream to place the transferred object into (writer). The XML will be loaded into an XMLDocument object (doc) prior to the transformation. // create XSLT Template xslTemplate = Toolbox.GetRequest(Template); xslTemplate.Seek(0, SeekOrigin.Begin); templateReader = XmlReader.Create(xslTemplate); xslt.Load(templateReader); I have stored all the templates as a series of Embedded Resources and the GetRequestCall takes the name of the template and extracts the relevent XSLT file. /// <summary> /// Gets the framwork XML which makes the request /// </summary> /// <param name="RequestFile"></param> /// <returns></returns> public static Stream GetRequest(String RequestFile) { String requestFile = String.Empty; Stream sr = null; Assembly asm = null; try { requestFile = String.Format("CompanyHub.Services.Schemas.{0}", RequestFile); asm = Assembly.GetExecutingAssembly(); sr = asm.GetManifestResourceStream(requestFile); } catch (Exception) { throw; } finally { asm = null; } return sr; } // end private static stream GetRequest We first take the template name and expand it to include the full namespace to the Embedded Resource I like to keep all my schemas in the same directory and so the namespace reflects this. The rest is the default namespace for the project. 
Then we get the currently executing assembly (which will contain the resources with the call to GetExecutingAssembly() ) Finally we get a stream which contains the XSLT file. We use this stream and then load an XmlReader with the contents of the template, and that is in turn loaded into the XslCompiledTransform object. We convert the object containing the message properties into Xml by serializing it; calling the Serialize() method of the XmlSerializer object. To set up the object we do the following; t = Obj.GetType(); ms = new MemoryStream(); serializer = new XmlSerializer(t); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); We first determine the type of the object being transferred by calling GetType() We create an XmlSerializer object by passing the type of the object being serialized. The serializer writes to a memory stream and that is linked to an XmlTextWriter. Next job is to serialize the object and load it into an XmlDocument. serializer.Serialize(xmlTextWriter, Obj); ms = (MemoryStream)xmlTextWriter.BaseStream; xmlRequest = new XmlTextReader(ms); GovTalkRequest = Toolbox.ConvertByteArrayToString(ms.ToArray()); doc.LoadXml(GovTalkRequest); Time to transform the XML to construct the full request. xslt.Transform(doc, writer); resultStream.Seek(0, SeekOrigin.Begin); request = Toolbox.ConvertByteArrayToString(resultStream.ToArray()); So that creates the full request to be sent  to Companies House. Sending the request So far we have a string with a request for the Companies House service. Now we need to send the request to the Companies House Service. Configuration within an Azure project There are entire blog entries written about configuration within an Azure project – most of this is out of scope for this article but the following is a summary. Configuration is defined in two files within the parent project *.csdef which contains the definition of configuration setting. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WebRole name="CompanyHub.Host"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="80" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="DataConnectionString" /> </ConfigurationSettings> </WebRole> <WebRole name="CompanyHub.Services"> <InputEndpoints> <InputEndpoint name="HttpIn" protocol="http" port="8080" /> </InputEndpoints> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> <Setting name="SenderId"/> <Setting name="SenderPassword" /> <Setting name="GovTalkUrl"/> </ConfigurationSettings> </WebRole> <WorkerRole name="CompanyHub.Worker"> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" /> </ConfigurationSettings> </WorkerRole> </ServiceDefinition>   Above is the configuration definition from the project. What we are interested in however is the ConfigurationSettings tag of the CompanyHub.Services WebRole. 
There are four configuration settings here, but at the moment we are interested in the second to forth settings; SenderId, SenderPassword and GovTalkUrl The value of these settings are defined in the ServiceDefinition.cscfg file; <?xml version="1.0"?> <ServiceConfiguration serviceName="OnlineCompanyHub" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"> <Role name="CompanyHub.Host"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="DataConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> <Role name="CompanyHub.Services"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="SenderId" value="UserID"/> <Setting name="SenderPassword" value="Password"/> <Setting name="GovTalkUrl" value="http://xmlgw.companieshouse.gov.uk/v1-0/xmlgw/Gateway"/> </ConfigurationSettings> </Role> <Role name="CompanyHub.Worker"> <Instances count="2" /> <ConfigurationSettings> <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" /> </ConfigurationSettings> </Role> </ServiceConfiguration>   Look for the Role tag that contains our project name (CompanyHub.Services). Having configured the parameters we can now transmit the request. This is done by ‘POST’ing a stream of XML to the Companies House servers. govTalkUrl = RoleEnvironment.GetConfigurationSettingValue("GovTalkUrl"); request = WebRequest.Create(govTalkUrl); request.Method = "POST"; request.ContentType = "text/xml"; writer = new StreamWriter(request.GetRequestStream()); writer.WriteLine(RequestMessage); writer.Close(); We use the WebRequest object to send the object. Set the method of sending to ‘POST’ and the type of data as text/xml. Once set up all we do is write the request to the writer – this sends the request to Companies House. Did the Request Work Part I – Getting the response Having sent a request – we now need the result of that request. response = request.GetResponse(); reader = response.GetResponseStream(); result = Toolbox.ConvertByteArrayToString(Toolbox.ReadFully(reader));   The WebRequest object has a GetResponse() method which allows us to get the response sent back. Like many of these calls the results come in the form of a stream which we convert into a string. Did the Request Work Part II – Translating the Response Much like XSLT and XML were used to create the original request, so it can be used to extract the response and by deserializing the result we create an object that contains the response. Did it work? It would be really great if everything worked all the time. 
Of course if it did then I don’t suppose people would pay me and others the big bucks so that our programmes do not a) Collapse in a heap (this is an area of memory) b) Blow every fuse in the place in a shower of sparks (this will probably not happen this being real life and not a Hollywood movie, but it was possible to blow the sound system of a BBC Model B with a poorly coded setting) c) Go nuts and trap everyone outside the airlock (this was from a movie, and unless NASA get a manned moon/mars mission set up unlikely to happen) d) Go nuts and take over the world (this was also from a movie, but please note life has a habit of being of exceeding the wildest imaginations of Hollywood writers (note writers – Hollywood executives have no imagination and judging by recent output of that town have turned plagiarism into an art form). e) Freeze in total confusion because the cleaner pulled the plug to the internet router (this has happened) So anyway – we need to check to see if our request actually worked. Within the GovTalk response there is a section that details the status of the message and a description of what went wrong (if anything did). I have defined an XSLT template which will extract these into an XML document. <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <xsl:template match="/"> <GovTalkStatus xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Status> <xsl:value-of select="ev:GovTalkMessage/ev:Header/ev:MessageDetails/ev:Qualifier"/> </Status> <Text> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Text"/> </Text> <Location> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Location"/> </Location> <Number> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Number"/> </Number> <Type> <xsl:value-of select="ev:GovTalkMessage/ev:GovTalkDetails/ev:GovTalkErrors/ev:Error/ev:Type"/> </Type> </GovTalkStatus> </xsl:template> </xsl:stylesheet>   Only thing different about previous XSL files is the references to two namespaces ev & gt. These are defined in the GovTalk response at the top of the response; xsi:schemaLocation="http://www.govtalk.gov.uk/CM/envelope http://xmlgw.companieshouse.gov.uk/v1-0/schema/Egov_ch-v2-0.xsd" xmlns="http://www.govtalk.gov.uk/CM/envelope" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:gt="http://www.govtalk.gov.uk/schemas/govtalk/core" If we do not put these references into the XSLT template then  the XslCompiledTransform object will not be able to find the relevant tags. Deserialization is a fairly simple activity. encoder = new ASCIIEncoding(); ms = new MemoryStream(encoder.GetBytes(statusXML)); serializer = new XmlSerializer(typeof(GovTalkStatus)); xmlTextWriter = new XmlTextWriter(ms, Encoding.ASCII); messageStatus = (GovTalkStatus)serializer.Deserialize(ms);   We set up a serialization object using the object type containing the error state and pass to it the results of a transformation between the XSLT above and the GovTalk response. Now we have an object containing any error state, and the error message. All we need to do is check the status. If there is an error then we can flag an error. 
If not then  we extract the results and pass that as an object back to the calling function. We go this by guess what – defining an XSLT template for the result and using that to create an Xml Stream which can be deserialized into a .Net object. In this instance the XSLT to create the result of a Company Number Search is; <?xml version="1.0" encoding="us-ascii"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ev="http://www.govtalk.gov.uk/CM/envelope" xmlns:sch="http://xmlgw.companieshouse.gov.uk/v1-0/schema" exclude-result-prefixes="ev"> <xsl:template match="/"> <CompanySearchResult xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <CompanyNumber> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyNumber"/> </CompanyNumber> <CompanyName> <xsl:value-of select="ev:GovTalkMessage/ev:Body/sch:NumberSearch/sch:CoSearchItem/sch:CompanyName"/> </CompanyName> </CompanySearchResult> </xsl:template> </xsl:stylesheet> and the object definition is; using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace CompanyHub.Services { public class CompanySearchResult { public CompanySearchResult() { CompanyNumber = String.Empty; CompanyName = String.Empty; } public String CompanyNumber { get; set; } public String CompanyName { get; set; } } } Our entire code to make calls to send a request, and interpret the results are; String request = String.Empty; String response = String.Empty; GovTalkStatus status = null; fault = null; try { using (CompanyNumberSearchRequest requestObj = new CompanyNumberSearchRequest()) { requestObj.PartialCompanyNumber = CompanyNumber; request = Toolbox.CreateRequest(requestObj, "CompanyNumberSearch.xsl"); response = Toolbox.SendGovTalkRequest(request); status = Toolbox.GetMessageStatus(response); if (status.Status.ToLower() == "error") { fault = new HubFault() { Message = status.Text }; } else { Object obj = Toolbox.GetGovTalkResponse(response, "CompanyNumberSearchResult.xsl", typeof(CompanySearchResult)); } } } catch (FaultException<ArgumentException> ex) { fault = new HubFault() { FaultType = ex.Detail.GetType().FullName, Message = ex.Detail.Message }; } catch (System.Exception ex) { fault = new HubFault() { FaultType = ex.GetType().FullName, Message = ex.Message }; } finally { } Wrap up So there we have it – a reusable set of functions to send and interpret XML results from an internet based service. The code is reusable with a little change with any service which uses XML as a transport mechanism – and as for the Companies House GovTalk service all I need to do is create various objects for the result and message sent and the relevent XSLT files. I might need minor changes for other services but something like 70-90% will be exactly the same.

    Read the article

  • Parallel programming in the .NET Framework 4 - Part I

    - by anobre
    Introduction

    The advance of technology over the last few years has given us, at low cost, access to workstations with numerous CPUs. Today we easily find client machines with 2, 4 and even 8 cores, not counting the "super-servers" with up to 36 processors :)

    From Wikipedia: the central processing unit (CPU) or processor is the part of a computer system that executes the instructions of a computer program, and it is the primary element in carrying out the computer's functions. The term has been used in the computer industry at least since the early 1960s. The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation remains the same.

    As an analogy, in the real world it would be very useful to delegate tasks that can be carried out independently to different people, achieving greater performance / productivity in their execution. Parallel computing is based on the idea that a bigger problem can be divided into smaller problems that are solved in parallel. This thinking has been used for some time in HPC (high-performance computing), and the advances of recent years, together with concerns about power consumption, have made the idea more attractive and easily accessible in any environment.

    In the .NET Framework

    The .NET platform provides a runtime, libraries and tools that give quick and easy access to parallel programming without working directly with threads and the thread pool. This series of posts will present all of the available resources, starting the study with the TPL, or Task Parallel Library.

    Task Parallel Library

    The TPL is a set of types located in the System.Threading and System.Threading.Tasks namespaces, as of version 4 of the framework. Starting with version 4 of the framework, the TPL is the recommended way to write parallel and multithreaded code. http://msdn.microsoft.com/en-us/library/dd460717(v=VS.100).aspx

    Task Parallelism

    The term "task parallelism" refers to one or more tasks being executed simultaneously. Think of a task as a method. The easiest way to execute tasks in parallel is the code below:

    Parallel.Invoke(() => TrabalhoInicial(), () => TrabalhoSeguinte());

    What actually happens? Behind the scenes, this statement implicitly instantiates objects of type Task, which represents an asynchronous, not necessarily parallel, operation:

    public class Task : IAsyncResult, IDisposable

    It is possible to instantiate Tasks explicitly, as a more involved alternative to Parallel.Invoke:

    var task = new Task(() => TrabalhoInicial());
    task.Start();

    Another option is to instantiate a Task and start its work in one go:

    var t = Task<int>.Factory.StartNew(() => TrabalhoInicialComValor());
    var t2 = Task<int>.Factory.StartNew(() => TrabalhoSeguinteComValor());

    The basic difference between the two approaches is that the first has a known starting point, which is preferred when we do not want instantiation and scheduling of the execution to happen in a single operation, as they do in the second approach.

    Data Parallelism

    Still within the TPL, data parallelism refers to scenarios where the same operation must be executed in parallel on the elements of a collection or array, through the parallel For and ForEach constructs. The basic idea is to take each element of the collection (or array) and work on it with several threads concurrently.

    The key class for this scenario is System.Threading.Tasks.Parallel:

    // Sequential version
    foreach (var item in sourceCollection)
    {
        Process(item);
    }

    // Parallel equivalent
    Parallel.ForEach(sourceCollection, item => Process(item));

    Complicated, right? :)

    Demonstration

    A video with examples (screencast) is available here.

    Careful!

    Despite the huge urge to start coding straight away, watch out for some basic parallelism problems. This link describes a few such situations.

    Cheers.
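    As a rough, self-contained illustration of the two ideas above (task parallelism and data parallelism), the sketch below combines Parallel.Invoke, Task<int>.Factory.StartNew and Parallel.ForEach. The worker methods are invented placeholders and not code from the original post.

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    class ParallelDemo
    {
        static void Main()
        {
            // Task parallelism: run two independent methods at the same time.
            Parallel.Invoke(
                () => DoInitialWork(),
                () => DoFollowUpWork());

            // Explicit tasks that also produce values.
            var t1 = Task<int>.Factory.StartNew(() => ComputeValue(10));
            var t2 = Task<int>.Factory.StartNew(() => ComputeValue(20));
            Console.WriteLine("Sum of results: {0}", t1.Result + t2.Result);

            // Data parallelism: apply the same operation to every element.
            var source = Enumerable.Range(1, 100).ToArray();
            Parallel.ForEach(source, item => Process(item));
        }

        static void DoInitialWork() { Console.WriteLine("Initial work"); }
        static void DoFollowUpWork() { Console.WriteLine("Follow-up work"); }
        static int ComputeValue(int seed) { return seed * 2; }
        static void Process(int item) { /* simulate per-item work */ }
    }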

    Read the article

  • WPF MVVM UserControl Binding "Container", dispose to avoid memory leak.

    - by user178657
    For simplicity: I have a window with a bindable UserControl.

    <UserControl Content="{Binding Path=BindingControl, UpdateSourceTrigger=PropertyChanged}">

    I have two user controls, ControlA and ControlB. Both UserControls have their own DataContext view model, ControlAViewModel and ControlBViewModel, and both inherit from a ViewModelBase:

    public abstract class ViewModelBase : DependencyObject, INotifyPropertyChanged, IDisposable ...

    The main window was added to IoC. To set the property of the bindable UserControl, I do:

    ComponentRepository.Resolve<MainViewWindow>().Bindingcontrol = new ControlA();

    ControlA, in its DataContext, creates a DispatcherTimer to do "some stuff". Later on, I need to navigate elsewhere, so the other UserControl is loaded into the container:

    ComponentRepository.Resolve<MainViewWindow>().Bindingcontrol = new ControlB();

    If I put a breakpoint in the "some stuff" in ControlA's DataContext, the DispatcherTimer is still running. In other words, loading a new UserControl into the bindable UserControl on the main window does not dispose/close/GC the DispatcherTimer that was created in the DataContext view model. I've looked around and, as stated by others, Dispose doesn't get called because it's not supposed to be. :) Not all my UserControls have a DispatcherTimer, just a few that need to do some sort of "read and refresh" updates. Should I track these DispatcherTimer objects in the ViewModelBase that all UserControls inherit from, and manually stop/dispose them every time a new UserControl is loaded? Is there a better way?
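    By way of illustration only, here is a minimal sketch of one way to handle this; it is not code from the question, and the type names (DisposableViewModelBase, RefreshViewModel) are invented. The idea is that a view model owns its DispatcherTimer, stops it and unhooks its handler in Dispose(), and whatever code swaps the bound control disposes the outgoing DataContext first.

    using System;
    using System.Windows.Threading;

    public abstract class DisposableViewModelBase : IDisposable
    {
        public virtual void Dispose() { }
    }

    public class RefreshViewModel : DisposableViewModelBase
    {
        private readonly DispatcherTimer _timer;

        public RefreshViewModel()
        {
            _timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(5) };
            _timer.Tick += OnTick;   // the subscription keeps this view model reachable while the timer runs
            _timer.Start();
        }

        private void OnTick(object sender, EventArgs e)
        {
            // "read and refresh" work goes here
        }

        public override void Dispose()
        {
            // Stopping the timer and unhooking the handler lets the GC reclaim the view model.
            _timer.Stop();
            _timer.Tick -= OnTick;
        }
    }

    // Whoever swaps the content disposes the outgoing view model first, e.g.:
    //   var oldControl = mainWindow.Bindingcontrol as FrameworkElement;
    //   if (oldControl != null && oldControl.DataContext is IDisposable)
    //   {
    //       ((IDisposable)oldControl.DataContext).Dispose();
    //   }
    //   mainWindow.Bindingcontrol = new ControlB();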

    Read the article

  • ASP.Net Entity Framework Repository & Linq

    - by Chris Klepeis
    My scenario: this is an ASP.NET 4.0 web app programmed in C#. I implement a repository pattern. My repositories all share the same ObjectContext, which is stored in HttpContext.Items. Each repository creates a new ObjectSet of type E. Here is some code from my repository:

    public class Repository<E> : IRepository<E>, IDisposable
        where E : class
    {
        private DataModelContainer _context = ContextHelper<DataModelContainer>.GetCurrentContext();
        private IObjectSet<E> _objectSet;

        private IObjectSet<E> objectSet
        {
            get
            {
                if (_objectSet == null)
                {
                    _objectSet = this._context.CreateObjectSet<E>();
                }
                return _objectSet;
            }
        }

        public IQueryable<E> GetQuery()
        {
            return objectSet;
        }
    }

    Let's say I have two repositories, one for states and one for countries, and I want to create a LINQ query against both. Note that I use POCO classes with the Entity Framework; State and Country are two of these POCO classes.

    Repository<State> stateRepo = new Repository<State>();
    Repository<Country> countryRepo = new Repository<Country>();

    IEnumerable<State> states = (from s in stateRepo.GetQuery()
                                 join c in countryRepo.GetQuery() on s.countryID equals c.countryID
                                 select s).ToList();

    Debug.WriteLine(states.First().Country.country);

    Essentially, I want to retrieve the state and the related country entity. The query only returns the state data, and I get a null argument exception on the Debug.WriteLine. Lazy loading is disabled in my .edmx; that's the way I want it.
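    With lazy loading switched off, the related Country only comes back if the query asks for it explicitly. Purely as a sketch that reuses the question's own types (State, Country, Repository<E>), one common workaround is to project both entities in the join and then fix up the navigation property in memory:

    // Sketch only: lives alongside the query code above and assumes the same POCOs.
    private static IList<State> GetStatesWithCountries(
        Repository<State> stateRepo, Repository<Country> countryRepo)
    {
        // Projecting both entities forces EF to materialize the Country
        // even though lazy loading is disabled.
        var pairs = (from s in stateRepo.GetQuery()
                     join c in countryRepo.GetQuery() on s.countryID equals c.countryID
                     select new { State = s, Country = c }).ToList();

        // Stitch the navigation property up by hand before handing the states back.
        return pairs.Select(p => { p.State.Country = p.Country; return p.State; }).ToList();
    }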

    Read the article

  • Create System.Data.Linq.Table in Code for Testing

    - by S. DePouw
    I have an adapter class for LINQ to SQL:

    public interface IAdapter : IDisposable
    {
        Table<Data.User> Users { get; }
    }

    Data.User is an object defined by LINQ to SQL pointing to the User table in persistence. The implementation of this is as follows:

    public class Adapter : IAdapter
    {
        private readonly SecretDataContext _context = new SecretDataContext();

        public void Dispose()
        {
            _context.Dispose();
        }

        public Table<Data.User> Users
        {
            get { return _context.Users; }
        }
    }

    This makes mocking the persistence layer easy in unit testing, as I can just return whatever collection of data I want for Users (Rhino.Mocks):

    Expect.Call(_adapter.Users).Return(users);

    The problem is that I cannot create the object 'users', since the constructors are not accessible and the class Table<T> is sealed. One option I tried is to make IAdapter return IEnumerable or IQueryable instead, but the problem there is that I then do not have access to the methods ITable provides (e.g. InsertOnSubmit()). Is there a way I can create the fake Table in the unit test scenario so that I may be a happy TDD developer?
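    One pattern that is sometimes used here, shown purely as a sketch and not as the questioner's code, is to stop exposing Table<T> directly and instead expose a narrow wrapper interface that the production adapter implements with the real LINQ to SQL table and that tests implement with an in-memory list. The interface and class names below are invented for illustration.

    using System.Collections;
    using System.Collections.Generic;
    using System.Linq;

    // The minimal surface the code under test actually needs.
    public interface IUserTable : IEnumerable<Data.User>
    {
        IQueryable<Data.User> Query { get; }
        void InsertOnSubmit(Data.User entity);
    }

    // Production implementation delegating to the real LINQ to SQL table.
    public class LinqUserTable : IUserTable
    {
        private readonly System.Data.Linq.Table<Data.User> _table;

        public LinqUserTable(System.Data.Linq.Table<Data.User> table) { _table = table; }

        public IQueryable<Data.User> Query { get { return _table; } }
        public void InsertOnSubmit(Data.User entity) { _table.InsertOnSubmit(entity); }
        public IEnumerator<Data.User> GetEnumerator() { return ((IEnumerable<Data.User>)_table).GetEnumerator(); }
        IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
    }

    // In-memory fake for unit tests: no database and no sealed Table<T>.
    public class FakeUserTable : IUserTable
    {
        private readonly List<Data.User> _rows = new List<Data.User>();

        public IQueryable<Data.User> Query { get { return _rows.AsQueryable(); } }
        public void InsertOnSubmit(Data.User entity) { _rows.Add(entity); }
        public IEnumerator<Data.User> GetEnumerator() { return _rows.GetEnumerator(); }
        IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
    }

    IAdapter would then expose IUserTable rather than Table<Data.User>, so the Rhino.Mocks expectation can simply return a FakeUserTable pre-filled with test users.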

    Read the article

  • Problem disposing a class stored in a Dictionary: it is still in heap memory although GC.Collect is used

    - by Bahgat Mashaly
    Hello, I have a problem disposing of a class stored in a Dictionary. This is my code:

    private Dictionary<string, MyProcessor> Processors = new Dictionary<string, MyProcessor>();

    private void button1_Click(object sender, EventArgs e)
    {
        if (!Processors.ContainsKey(textBox1.Text))
        {
            Processors.Add(textBox1.Text, new MyProcessor());
        }
    }

    private void button2_Click(object sender, EventArgs e)
    {
        MyProcessor currnt_processor = Processors[textBox1.Text];
        Processors.Remove(textBox2.Text);
        currnt_processor.Dispose();
        currnt_processor = null;
        GC.Collect();
    }

    public class MyProcessor : IDisposable
    {
        private bool isDisposed = false;
        string x = "";

        public MyProcessor()
        {
            for (int i = 0; i < 20000; i++)
            {
                // this line only exists to increase memory usage, so it is obvious whether the instance has been released
                x = x + "gggggggggggg";
            }
            this.Dispose();
            GC.SuppressFinalize(this);
        }

        public void Dispose()
        {
            this.Dispose(true);
            GC.SuppressFinalize(this);
        }

        public void Dispose(bool disposing)
        {
            if (!this.isDisposed)
            {
                isDisposed = true;
                this.Dispose();
            }
        }

        ~MyProcessor()
        {
            Dispose(false);
        }
    }

    I use "ANTS Memory Profiler" to monitor the heap memory. The disposing only works when I remove all keys from the dictionary. How can I get the instance released from the heap? Thanks in advance.
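    For reference, here is a minimal sketch of the usual pattern; it is not the questioner's code. Two assumptions are made: that the entry should be removed under the same key it was added with (the snippet above removes textBox2.Text but looks up textBox1.Text), and that Dispose is only for releasing owned resources, because managed memory is reclaimed by the GC once no references to the object remain, not at the moment Dispose is called.

    using System;
    using System.Collections.Generic;

    public class Processor : IDisposable
    {
        private bool isDisposed;
        private string x = "";

        public Processor()
        {
            // Bulk the instance up so it is easy to spot in a memory profiler.
            for (int i = 0; i < 20000; i++)
            {
                x += "gggggggggggg";
            }
        }

        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);
        }

        protected virtual void Dispose(bool disposing)
        {
            if (!isDisposed)
            {
                // Release owned resources here (timers, streams, event handlers, ...).
                isDisposed = true;
            }
        }
    }

    public static class ProcessorRegistry
    {
        private static readonly Dictionary<string, Processor> Processors =
            new Dictionary<string, Processor>();

        public static void Register(string key)
        {
            if (!Processors.ContainsKey(key))
            {
                Processors.Add(key, new Processor());
            }
        }

        public static void Release(string key)
        {
            Processor processor;
            if (Processors.TryGetValue(key, out processor))
            {
                Processors.Remove(key);   // drop the dictionary's reference under the same key
                processor.Dispose();      // release owned resources
                // The memory itself is freed by the GC on a later collection,
                // once nothing else references the instance.
            }
        }
    }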

    Read the article

  • Why is ExecuteFunction method only available through base.ExecuteFunction in a child class of Object

    - by Matt
    I'm trying to call ObjectContext.ExecuteFunction from my ObjectContext object in the repository of my site. The repository is generic, so all I have is an ObjectContext object, rather than one that actually represents my specific context from the Entity Framework. Here's an example of generated code that uses the ExecuteFunction method:

    [global::System.CodeDom.Compiler.GeneratedCode("System.Data.Entity.Design.EntityClassGenerator", "4.0.0.0")]
    public global::System.Data.Objects.ObjectResult<ArtistSearchVariation> FindSearchVariation(string source)
    {
        global::System.Data.Objects.ObjectParameter sourceParameter;
        if ((source != null))
        {
            sourceParameter = new global::System.Data.Objects.ObjectParameter("Source", source);
        }
        else
        {
            sourceParameter = new global::System.Data.Objects.ObjectParameter("Source", typeof(string));
        }
        return base.ExecuteFunction<ArtistSearchVariation>("FindSearchVariation", sourceParameter);
    }

    But what I would like to do is something like this:

    public class Repository<E, C> : IRepository<E, C>, IDisposable
        where E : EntityObject
        where C : ObjectContext
    {
        private readonly C _ctx;

        // ...

        public ObjectResult<E> ExecuteFunction(string functionName, params ObjectParameter[] parameters)
        {
            return _ctx.ExecuteFunction<E>(functionName, parameters);
        }
    }

    Does anyone know why I have to call ExecuteFunction from base instead of _ctx? Also, is there any way to do something like I've written out? I would really like to keep my repository generic, but having to execute stored procedures is making that look more and more difficult... Thanks, Matt
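    For context, below is a minimal sketch of that generic wrapper together with a hypothetical call site. It assumes the context in use exposes the public EF 4 overload ObjectContext.ExecuteFunction<TElement>(string, params ObjectParameter[]), which is what the generated function import wraps; the repository and entity names are illustrative.

    using System;
    using System.Data.Objects;
    using System.Data.Objects.DataClasses;

    public class Repository<E, C> : IDisposable
        where E : EntityObject
        where C : ObjectContext
    {
        private readonly C _ctx;

        public Repository(C ctx)
        {
            _ctx = ctx;
        }

        // Generic pass-through: the caller supplies the function import's name
        // and whatever ObjectParameters it needs.
        public ObjectResult<E> ExecuteFunction(string functionName, params ObjectParameter[] parameters)
        {
            return _ctx.ExecuteFunction<E>(functionName, parameters);
        }

        public void Dispose()
        {
            _ctx.Dispose();
        }
    }

    // Hypothetical usage, mirroring the generated FindSearchVariation import:
    //   var repo = new Repository<ArtistSearchVariation, MyEntities>(new MyEntities());
    //   ObjectResult<ArtistSearchVariation> results =
    //       repo.ExecuteFunction("FindSearchVariation", new ObjectParameter("Source", "web"));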

    Read the article

  • Pronunciation of programming structures (particularly in c#)

    - by Andrzej Nosal
    As a non-English speaking person I often have problems pronouncing certain programming structures and abbreviations. I've been watching some video tutorials and listening to podcasts as well, though I couldn't catch them all. My question is: what is the common or correct pronunciation of the following code snippets?

    Generics, like IEnumerable<int> or in a method void Swap<T>(T lhs, T rhs)
    Collection indexing and indexer access, e.g. garage[i], rectangular arrays myArray[2,1] or jagged arrays jagged[1][2][3]
    The lambda operator =>, e.g. in a Where extension method .Where(animal => animal.Color == Color.Brown) or in an anonymous method () => { return false; }
    Inheritance: class Derived : Base (extends?) and class SomeClass : IDisposable (implements?)
    Arithmetic operators += -= *= /= %= ! Are += and -= pronounced the same for events?
    Collection initializers: new int[] { 4, 5, 8, 9, 12, 13, 16, 17 };
    Casting: MyEnum foo = (MyEnum)(int)yourFloat; (as?)
    Nullables: DateTime? dt = new DateTime?();

    I tagged the question with C# as some of them are specific to C# only.

    Read the article

  • bug in NHunspell spelling checker

    - by Nikhil K
    I am using NHunspell for checking spelling. I added NHunspell.dll as a reference to my ASP.NET page and added the namespace System.NHunspell. The problem I am facing is related to IDisposable. I put the downloaded NHunspell sample code inside a button event:

    protected void Button1_Click(object sender, EventArgs e)
    {
        using (Hunspell hunspell = new Hunspell("en_us.aff", "en_us.dic"))
        {
            bool correct = hunspell.Spell("Recommendation");
            var suggestions = hunspell.Suggest("Recommendatio");
            foreach (string suggestion in suggestions)
            {
                Console.WriteLine("Suggestion is: " + suggestion);
            }
        }

        // Hyphen
        using (Hyphen hyphen = new Hyphen("hyph_en_us.dic"))
        {
            var hyphenated = hyphen.Hyphenate("Recommendation");
        }

        using (MyThes thes = new MyThes("th_en_us_new.idx", "th_en_us_new.dat"))   // <-- error on this line
        {
            using (Hunspell hunspell = new Hunspell("en_us.aff", "en_us.dic"))
            {
                ThesResult tr = thes.Lookup("cars", hunspell);
                foreach (ThesMeaning meaning in tr.Meanings)
                {
                    Console.WriteLine("  Meaning: " + meaning.Description);
                    foreach (string synonym in meaning.Synonyms)
                    {
                        Console.WriteLine("    Synonym: " + synonym);
                    }
                }
            }
        }
    }

    The marked line is the line of error. The error is: "type used in a using statement must be implicitly convertible to 'System.IDisposable'". There is also a warning on that line: "'NHunspell.MyThes.MyThes(string, string)' is obsolete: 'idx File is not longer needed, MyThes works completely in memory'". Can anyone help me to correct this?
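    Purely as a sketch of one way around the compiler error, and not an authoritative fix: since the error says the installed version's MyThes is not convertible to IDisposable, the thesaurus object can simply be created without a using block, while Hunspell, which is IDisposable, stays in one. Only the types and calls already shown in the question are used, with the same placeholder file paths.

    protected void Button1_Click(object sender, EventArgs e)
    {
        // MyThes does not implement IDisposable here, so no using block for it.
        MyThes thes = new MyThes("th_en_us_new.idx", "th_en_us_new.dat");

        using (Hunspell hunspell = new Hunspell("en_us.aff", "en_us.dic"))
        {
            ThesResult tr = thes.Lookup("cars", hunspell);
            foreach (ThesMeaning meaning in tr.Meanings)
            {
                Console.WriteLine("Meaning: " + meaning.Description);
                foreach (string synonym in meaning.Synonyms)
                {
                    Console.WriteLine("Synonym: " + synonym);
                }
            }
        }
    }

    The obsolete-constructor warning would remain; going by its text, newer NHunspell builds apparently expect only the .dat file, but that is outside what the question shows.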

    Read the article

  • Unable to use stored procs in a generic repository for the entity framework. (ASP.NET MVC 2)

    - by Matt
    Hi, I have a generic repository that uses the Entity Framework to manipulate the database. The original code is credited to Morshed Anwar's post on CodeProject. I've taken his code and modified it slightly to fit my needs. Unfortunately, I'm unable to call an imported function, because the function is only recognized on my specific Entity Framework context class.

    public class Repository<E, C> : IRepository<E, C>, IDisposable
        where E : EntityObject
        where C : ObjectContext
    {
        private readonly C _ctx;

        public ObjectResult<MyObject> CallFunction()
        {
            // This does not work because "CallSomeImportedFunction" only exists on
            // my concrete context. I also cannot cast it:
            //     (TrackItDBEntities)_ctx.CallSomeImportedFunction();
            // Not really sure why, though... but either way that kind of ruins the genericness of it.
            return _ctx.CallSomeImportedFunction();
        }
    }

    Does anyone know how I can make this work with the repository I already have? Or does anyone know a generic way of calling stored procedures with the Entity Framework? Thanks, Matt
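    As a sketch of one way to keep the repository generic while still reaching context-specific function imports (illustrative only, reusing the TrackItDBEntities and CallSomeImportedFunction names from the question), the repository can accept a delegate that receives the strongly typed context, so the repository itself never needs to know about the concrete type:

    using System;
    using System.Data.Objects;
    using System.Data.Objects.DataClasses;

    public class Repository<E, C> : IDisposable
        where E : EntityObject
        where C : ObjectContext
    {
        private readonly C _ctx;

        public Repository(C ctx) { _ctx = ctx; }

        // The caller supplies a delegate that knows about the concrete context.
        public ObjectResult<T> Execute<T>(Func<C, ObjectResult<T>> functionCall)
        {
            return functionCall(_ctx);
        }

        public void Dispose() { _ctx.Dispose(); }
    }

    // Hypothetical usage with the context type from the question:
    //   var repo = new Repository<MyObject, TrackItDBEntities>(new TrackItDBEntities());
    //   ObjectResult<MyObject> result = repo.Execute(ctx => ctx.CallSomeImportedFunction());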

    Read the article

  • Divide and conquer of large objects for GC performance

    - by Aperion
    At my work we're discussing different approaches to cleaning up a large amount of managed memory, roughly 50-100 MB. There are two approaches on the table (read: two senior devs can't agree), and not having the experience, the rest of the team is unsure which approach is more desirable for performance or maintainability. The data being collected is many small items, around 30,000, which in turn contain other items; all of the objects are managed. There are a lot of references between these objects, including event handlers, but none to outside objects. We'll call this large group of objects and references a single entity, a "blob".

    Approach #1: Make sure all references to objects in the blob are severed and let the GC handle the blob and all of its connections.

    Approach #2: Implement IDisposable on these objects, then call Dispose on them, set references to Nothing and remove handlers.

    The theory behind the second approach is that since large, longer-lived objects take longer to clean up in the GC, cutting the large object graph into smaller bite-size morsels should let the garbage collector process them faster, giving a performance gain.

    So I think the basic question is this: does breaking apart large groups of interconnected objects optimize the data for garbage collection, or is it better to keep them together and rely on the garbage collection algorithms to process the data for you? I feel this is a case of premature optimization, but I do not know enough about the GC to know what helps or hinders it.
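    For reference, here is a minimal sketch, with invented types, of what Approach #2 amounts to in code: each item unhooks its event handlers and drops its references when disposed, so nothing in the blob keeps anything else reachable.

    using System;
    using System.Collections.Generic;

    public class BlobItem : IDisposable
    {
        public event EventHandler Changed;

        private readonly List<BlobItem> _children = new List<BlobItem>();

        public void AddChild(BlobItem child)
        {
            // The subscription creates a reference from the child back to this parent.
            child.Changed += OnChildChanged;
            _children.Add(child);
        }

        private void OnChildChanged(object sender, EventArgs e) { /* react to child updates */ }

        public void Dispose()
        {
            // Approach #2: sever event subscriptions and references explicitly...
            foreach (BlobItem child in _children)
            {
                child.Changed -= OnChildChanged;
                child.Dispose();
            }
            _children.Clear();
            Changed = null;
            // ...whereas Approach #1 simply drops the root reference to the blob
            // and lets the GC discover that the whole graph is unreachable at once.
        }
    }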

    Read the article
