Search Results

Search found 3061 results on 123 pages for 'interfaces'.

  • GNU C++: how to check when -std=c++0x is in effect?

    - by TerryP
    My system compiler (gcc42) works fine with the TR1 features that I want, but when I try to support newer compiler versions than the system's, accessing the TR1 headers triggers an #error demanding the -std=c++0x option, because of how the headers interface with the library or some hubbub like that. /usr/local/lib/gcc45/include/c++/bits/c++0x_warning.h:31:2: error: #error This file requires compiler and library support for the upcoming ISO C++ standard, C++0x. This support is currently experimental, and must be enabled with the -std=c++0x or -std=gnu++0x compiler options. Having to supply an extra switch to support GCC 4.4 and 4.5 under this system (FreeBSD) is no problem, but obviously it changes the picture! Using my system compiler (g++ 4.2 default dialect): #include <tr1/foo> using std::tr1::foo; Using newer (4.5) versions of the compiler with -std=c++0x: #include <foo> using std::foo; Is there any way, using the preprocessor, that I can tell if g++ is running with C++0x features enabled? Something like this is what I'm looking for: #ifdef __CXX0X_MODE__ #endif but I have not found anything in the manual or off the web. At this rate, I'm starting to think that life would just be easier if I used Boost as a dependency and didn't worry about a new language standard arriving before TR4... hehe.

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application, running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GByte RAM. The database is multi-tenant: each customer has his own file, around 5 to 10 GByte each. I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight; otherwise the database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options: Use two big SQL servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and hosting REST services. Denormalize the database and use Redis on each webserver, with Booksleeve as a Redis client. Use a combination of SQL Server and Redis. Use SQL Server 2012 together with Hadoop. Use Hadoop without SQL Server. What is the best way, for a read-only database, to get the best performance without losing maintainability? Does Map-Reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and its interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to build one accelerator table that all queries can be performed against, with all other information from the other tables reachable by a simple key lookup.

  • On the search for my next great .NET read

    - by user127954
    Just got done with "The Art of Unit Testing". It was a great read and I think everyone should go buy a copy. With that said, I think the next book I'd like to read would be an architecture/design type book that focuses heavily on building your objects/software in such a way that it would be: Low coupling High cohesion Easily maintainable / extended Easy to test Easy to navigate / debug The above characteristics are the most important ones, but maybe it would also include (though not necessarily) designing for: Performance - don't want to design a system and at the end find out it's dog slow :) Scalability - again, don't want to design something and at the end find out it won't scale. I'd also prefer (but again not necessarily): Something newer - architectural principles seem to gradually evolve/improve over time and I'd like something with current thinking. .NET as the illustrating language - like I said above, it's not mandatory, but since it's what I use every day I'd prefer it to be in .NET. Doesn't really matter if it's in VB.NET or C#. Some of the topics it would cover would be how to minimize dependencies and how to use interfaces throughout your solution rather than concrete classes. Maybe it would contrast/compare some of the newer design principles like DDD, the Repository pattern, etc. I already have "Clean Code" (don't know if it's this type of book or not) and "Working Effectively with Legacy Code" on my radar, but I'd like to read a book based on the topics I talked about above first. Is there such a book?

  • How do you submit an authenticated HTML form using XUL (Firefox extension) Javascript?

    - by machineghost
    I am working on a Firefox extension, and in that extension I am trying to use AJAX to submit a form on a webpage. I am using: var request = Components.classes["@mozilla.org/xmlextras/xmlhttprequest;1"].createInstance(Components.interfaces.nsIXMLHttpRequest); request.onload = loadHandler; request.open("POST", url, true); request.send(values); to make the request, and it works ... mostly. The one problem is that the form has an authentication token on it, and I need to submit that token with my POST. I tried doing a GET separately to get this token, but by the time I made my second (POST) request my session had (evidently) changed, and the authenticity token was considered invalid. Does anyone know of a way to use XUL/chrome JavaScript to maintain a constant session across multiple requests (all "behind the scenes") for something like this? I'm still a XUL n00b, so there may be a totally obvious alternative that I'm missing (e.g. a hidden IFRAME; I tried that briefly but couldn't get it to work).

  • Can an interface define the constructor signature of a C# class?

    - by happyclicker
    I have a .NET app that provides a mechanism to extend the app with plugins. Each plugin must implement a plugin interface and must furthermore provide a constructor that receives one parameter (a resource context). During the instantiation of the plugin class I check via reflection whether the needed constructor exists and, if so, I instantiate the class (via reflection). If the constructor does not exist, I throw an exception saying that the plugin could not be created because the desired constructor is not available. My question is whether there is a way to declare the signature of a constructor in the plugin interface, so that everyone who implements the plugin interface must also provide a constructor with the desired signature. This would ease the creation of plugins. I don't think such a possibility exists, because such a feature falls outside the main purpose interfaces were designed for, but perhaps someone knows a construct that does this, something like: public interface IPlugin { ctor(IResourceContext resourceContext); int AnotherPluginFunction(); } I want to add that I don't want to change the constructor to be parameterless and then set the resource context through a property, because that would make the creation of plugins much more complicated. The people who write plugins do not have deep programming experience; the plugins are used to calculate statistical data that is then visualized by the app.
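
    A common workaround, sketched below: C# interfaces cannot declare constructors, but a companion factory interface can carry the desired creation signature, so the compiler enforces what the reflection check only discovers at runtime. IPluginFactory, MyPlugin and MyPluginFactory are invented names; IPlugin and IResourceContext are taken from the question.

      public interface IResourceContext { }

      public interface IPlugin
      {
          int AnotherPluginFunction();
      }

      // Interfaces cannot declare constructors, but a factory interface can
      // force every plugin author to expose the desired creation signature.
      public interface IPluginFactory
      {
          IPlugin Create(IResourceContext resourceContext);
      }

      // Hypothetical plugin: the host instantiates the factory (which has a
      // usable parameterless constructor) and calls Create, instead of
      // hunting for a matching plugin constructor via reflection.
      public class MyPlugin : IPlugin
      {
          private readonly IResourceContext _context;

          public MyPlugin(IResourceContext context) { _context = context; }

          public int AnotherPluginFunction() { return 0; }
      }

      public class MyPluginFactory : IPluginFactory
      {
          public IPlugin Create(IResourceContext resourceContext)
          {
              return new MyPlugin(resourceContext);
          }
      }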

  • ADO.NET: Can't connect to mdf database file

    - by Nabo
    I'm writing an application that uses a SQL Server 2005 database. In the connection string I'm specifying the mdf file like this: connstr = @"Data Source=.\SQLEXPRESS; AttachDbFilename=" + fileLocation + "; Integrated Security=True; User Instance=True"; When I execute the code: public static void forceConnection() { try { conn = new SqlConnection(connstr); conn.Open(); } catch (Exception e) { MessageBox.Show(e.Message, "Erro", MessageBoxButtons.OK, MessageBoxIcon.Error); } finally { if(conn != null) conn.Close(); } } I receive an exception: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) This code works on XP but not on Vista. I tried running Visual Studio in admin mode and moved the mdf file to the "user data" folders, but the error persists. Any help? Thanks!
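
    As an aside, a sketch of building the same connection string with SqlConnectionStringBuilder, which avoids quoting mistakes in hand-concatenated strings; it does not by itself address the Vista-specific failure, which is usually a User Instance/permissions issue.

      using System.Data.SqlClient;

      public static class ConnectionFactory
      {
          // fileLocation is the same variable as in the question's code.
          public static string Build(string fileLocation)
          {
              var builder = new SqlConnectionStringBuilder
              {
                  DataSource = @".\SQLEXPRESS",
                  AttachDBFilename = fileLocation,
                  IntegratedSecurity = true,
                  UserInstance = true
              };
              return builder.ConnectionString;
          }
      }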

  • Does F# documentation have a way to search for functions by their types?

    - by Nathan Sanders
    Say I want to know if F# has a library function of type ('T -> bool) -> 'T list -> int, i.e., something that counts how many items of a list a function returns true for (or returns the index of the first item that returns true). I used to use the big list at the MSR site for F# before the documentation on MSDN was ready. I could just search the page for the above text because the types were listed. But now the MSDN documentation only lists types on the individual pages--the module page is a mush of descriptive text. Google kinda-sorta works, but it can't help with // compatible interfaces ('T -> bool) -> Seq<'T> -> int // argument swaps Seq<'T> -> ('T -> bool) -> int // type-variable names ('a -> bool) -> Seq<'a> -> int // wrappers ('a -> bool) -> 'a list -> option<int> // uncurried versions ('T -> bool) * 'T list -> int // .NET generic syntax ('T -> bool) -> List<'T> -> int // methods List<'T> member : ('T -> bool) -> int Haskell has a standalone program for this called Hoogle. Does F# have an equivalent, like Fing or something?

  • MEF GetExports<T, TMetaDataView> returning nothing with AllowMultiple = True

    - by sohum
    I don't understand MEF very well, so hopefully this is a simple fix of how I think it works. I'm trying to use MEF to get some information about a class and how it should be used, using the metadata options to achieve this. My interfaces and attribute look like this: public interface IMyInterface { } public interface IMyInterfaceInfo { Type SomeProperty1 { get; } double SomeProperty2 { get; } string SomeProperty3 { get; } } [MetadataAttribute] [AttributeUsage(AttributeTargets.Class, AllowMultiple = true)] public class ExportMyInterfaceAttribute : ExportAttribute, IMyInterfaceInfo { public ExportMyInterfaceAttribute(Type someProperty1, double someProperty2, string someProperty3) : base(typeof(IMyInterface)) { SomeProperty1 = someProperty1; SomeProperty2 = someProperty2; SomeProperty3 = someProperty3; } public Type SomeProperty1 { get; set; } public double SomeProperty2 { get; set; } public string SomeProperty3 { get; set; } } The class that is decorated with the attribute looks like this: [ExportMyInterface(typeof(string), 0.1, "whoo data!")] [ExportMyInterface(typeof(int), 0.4, "asdfasdf!!")] public class DecoratedClass : IMyInterface { } The method that is trying to use the import looks like this: private void SomeFunction() { // CompositionContainer is an instance of CompositionContainer var myExports = CompositionContainer.GetExports<IMyInterface, IMyInterfaceInfo>(); } In my case myExports is always empty. In my CompositionContainer I have a part in my catalog that has two ExportDefinitions, both with the following ContractName: "MyNamespace.IMyInterface". The metadata is also loaded correctly per my exports. If I remove the AllowMultiple setting and only include one export attribute, the myExports variable then holds the single export with its loaded metadata. What am I doing wrong?
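
    For what it's worth, the usual explanation of this symptom is that with AllowMultiple = true MEF folds the metadata values for each key into an array, so a metadata view with scalar properties can no longer be satisfied and the export is skipped. A hedged sketch of a matching view, assuming that behaviour applies here:

      using System;

      // Metadata view for AllowMultiple = true exports: every property is an
      // array, with one element per ExportMyInterface attribute on the part.
      public interface IMyInterfaceInfo
      {
          Type[] SomeProperty1 { get; }
          double[] SomeProperty2 { get; }
          string[] SomeProperty3 { get; }
      }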

  • Creating a web application that can be extended by plugins/modules

    - by Adam Pope
    I'm currently involved with developing a C# CMS-like web application which will be used to standardise our development of websites. From the outset, the idea has been to keep the core as simple as possible to avoid the complexity and menu/option overload that blights many CMS systems. This simple core is now complete and working very well. We envisaged that the system would be able to accept plugins or modules which would extend the core functionality to suit a given project's needs. These would also be re-usable across projects. For example, a basic catalogue and shopping basket might be needed. All the code for such extensions should be in separate assemblies. They should be able to provide their own admin interfaces and front-end code from this library. The system should search for available plugins and give the admin user the option to enable/disable each feature. (This is all very much like WordPress plugins.) It is crucial that we attack this problem in the correct way, so I'm trying to perform as much due diligence as possible before jumping in. I am aware of the Plugin pattern (http://msdn.microsoft.com/en-us/library/ms972962.aspx) and have read some articles on its use. It seems reasonable, but I'm not convinced it's necessarily the correct/best technique for this situation; it seems more suited to processing applications (image/audio manipulation, maths, etc.). Are there any other options for achieving this kind of UI extensibility, or is the plugin pattern the way to go? I'd also be interested in links to articles that explain using the plugin pattern for this purpose.
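
    As a rough illustration of the mechanics under discussion, a minimal discovery sketch; the folder layout, the IPlugin members and all class names are assumptions rather than anything from the CMS described above.

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Linq;
      using System.Reflection;

      // Hypothetical contract implemented by every plugin assembly.
      public interface IPlugin
      {
          string Name { get; }
          bool Enabled { get; set; }   // lets the admin toggle the feature
      }

      public static class PluginLoader
      {
          // Scans a folder of assemblies and instantiates every IPlugin found,
          // so new features appear by dropping a DLL into the folder.
          public static IEnumerable<IPlugin> LoadFrom(string pluginFolder)
          {
              foreach (string file in Directory.GetFiles(pluginFolder, "*.dll"))
              {
                  Assembly assembly = Assembly.LoadFrom(file);
                  IEnumerable<Type> pluginTypes = assembly.GetTypes()
                      .Where(t => typeof(IPlugin).IsAssignableFrom(t)
                                  && !t.IsAbstract && !t.IsInterface);
                  foreach (Type type in pluginTypes)
                      yield return (IPlugin)Activator.CreateInstance(type);
              }
          }
      }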

  • Breaking dependencies when you can't make changes to other files?

    - by codemuncher
    I'm doing some stealth agile development on a project. The lead programmer sees unit testing, refactoring, etc. as a waste of resources and there is no way to convince him otherwise. His philosophy is "if it ain't broke, don't fix it", and I understand his point of view. He's been working on the project for over a decade and knows the code inside and out. I'm not looking to debate development practices. I'm new to the project and I've been tasked with adding a new feature. I've worked on legacy projects before and used agile development practices with good results, but those teams were more receptive to the idea and weren't afraid of making changes to code. I've been told I can use whatever development methodology I want, but I have to limit my changes to only those necessary to add the feature. I'm using TDD for the new classes I'm writing, but I keep running into roadblocks caused by the liberal use of global variables and the high coupling in the classes I need to interact with. Normally I'd start extracting interfaces for these classes and make their dependence on the global variables explicit by injecting them as constructor arguments or public properties. I could argue that the changes are necessary, but considering the lead never had to make them, I doubt he would see it my way. What techniques can I use to break these dependencies without ruffling the lead developer's feathers? I've made some headway using: Extract Interface (for the new classes I'm creating) Extend and override the wayward classes with test stubs (luckily most methods are public virtual) But these two can only get me so far.
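
    For concreteness, a sketch of the extract-interface move confined entirely to new files, so the lead's code is never edited; LegacyCalculator is an invented stand-in for one of the highly coupled classes.

      // Stand-in for an untouchable legacy class with a public virtual surface.
      public class LegacyCalculator
      {
          public virtual int Calculate(int input) { return input * 2; }
      }

      // The new interface lives beside the new feature; no legacy file changes.
      public interface ICalculator
      {
          int Calculate(int input);
      }

      // Thin adapter: the only new production code that touches the legacy type.
      public class LegacyCalculatorAdapter : ICalculator
      {
          private readonly LegacyCalculator _inner = new LegacyCalculator();
          public int Calculate(int input) { return _inner.Calculate(input); }
      }

      // Feature code depends on the interface, so tests can inject a stub.
      public class NewFeature
      {
          private readonly ICalculator _calculator;
          public NewFeature(ICalculator calculator) { _calculator = calculator; }
          public int Run(int input) { return _calculator.Calculate(input); }
      }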

  • WCF Multiple contracts with duplicate method names

    - by haxelit
    Hello, I have a service with multiple contracts, like so: [ServiceContract] public partial interface IBusinessFunctionDAO { [OperationContract] BusinessFunction GetBusinessFunction(Int32 businessFunctionRefID); [OperationContract] IEnumerable<Project> GetProjects(Int32 businessFunctionRefID); } [ServiceContract] public partial interface IBusinessUnitDAO { [OperationContract] BusinessUnit GetBusinessUnit(Int32 businessUnitRefID); [OperationContract] IEnumerable<Project> GetProjects(Int32 businessUnitRefID); } I then explicitly implemented each one of the interfaces, like so: public class TrackingTool : IBusinessFunctionDAO, IBusinessUnitDAO { BusinessFunction IBusinessFunctionDAO.GetBusinessFunction(Int32 businessFunctionRefID) { // implementation } IEnumerable<Project> IBusinessFunctionDAO.GetProjects(Int32 businessFunctionRefID) { // implementation } BusinessUnit IBusinessUnitDAO.GetBusinessUnit(Int32 businessUnitRefID) { // implementation } IEnumerable<Project> IBusinessUnitDAO.GetProjects(Int32 businessUnitRefID) { // implementation } } As you can see, I have two GetProjects(int) methods, but each one is implemented explicitly, so this compiles just fine and is perfectly valid. The problem arises when I actually start this as a service: it gives me an error stating that TrackingTool already contains a definition for GetProjects. While that is true, each one is part of a different service contract. Does WCF not distinguish between service contracts when generating the method names? Is there a way to get it to distinguish between the service contracts? My App.config looks like this: <service name="TrackingTool"> <endpoint address="BusinessUnit" contract="IBusinessUnitDAO" /> <endpoint address="BusinessFunction" contract="IBusinessFunctionDAO" /> </service> Any help would be appreciated. Thanks, Raul
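
    One commonly suggested fix, sketched below: WCF derives wire-level operation names from the method names, so giving each colliding operation a distinct Name on its OperationContract removes the clash without renaming the C# methods. The Name values here are invented.

      using System.Collections.Generic;
      using System.ServiceModel;

      public class Project { }   // stand-in for the real data contract

      [ServiceContract]
      public interface IBusinessFunctionDAO
      {
          [OperationContract(Name = "GetBusinessFunctionProjects")]
          IEnumerable<Project> GetProjects(int businessFunctionRefID);
      }

      [ServiceContract]
      public interface IBusinessUnitDAO
      {
          [OperationContract(Name = "GetBusinessUnitProjects")]
          IEnumerable<Project> GetProjects(int businessUnitRefID);
      }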

  • How does PHP interface with Apache?

    - by Sbm007
    Hi, I've almost finished writing an HTTP/1.0-compliant web server in Java (no commercial usage as such, this is just for fun) and basically I want to include PHP support. I realize that this is no easy task at all, but I think it'll be a nice accomplishment. So I want to know how exactly PHP interfaces with the Apache web server (or any other web server, really), so I can learn from it and write my own PHP wrapper. It doesn't necessarily have to be mod_php; I don't mind writing a FastCGI wrapper, which to my knowledge is capable of running PHP as well. I would've thought that all PHP needs is the page content to be sent to the client (so it can interpret the PHP parts), the full HTTP request from the client (so it can extract POST variables and such) and the client's host name. And then you simply take the interpreted PHP output and write that to the output stream. There will probably be more things, but in essence that's how I would have thought it works. From what I've gathered so far, apache2handler provides an API which PHP makes use of to 'connect' to Apache. I guess it's an idea to look at the source code for apache2handler and php5apache2.dll or so, but before I do that I thought I'd ask SO first. If anyone has more information, experience, or some sort of specification that is relevant to this, then please let me know. Thanks in advance!

  • How do I best create a set of list classes to match my business objects?

    - by ken-forslund
    I'm a bit fuzzy on the best way to solve the problem of needing a list class for each of my business objects that implements some overridden functions. Here's the setup: I have a baseObject that sets up the database and has its proper Dispose() method. All my other business objects inherit from it and, if necessary, override Dispose(). Some of these classes also contain arrays (lists) of other objects, so I create a class that holds a List of these. I'm aware I could just use the generic List, but that doesn't let me add extra features like a Dispose() that loops through and cleans up. So if I had objects called User, Project and Schedule, I would create UserList, ProjectList and ScheduleList. In the past, I have simply had these inherit from List<T> with the appropriate class as the type parameter, and then written the pile of common functions I wanted each to have, like Dispose(). This meant I would verify by hand that each of these list classes had the same set of methods. Some of these classes had pretty simple versions of these methods that could have been inherited from a base list class. I could write an interface to force me to ensure that each of my list classes has the same functions, but interfaces don't let me write common base functions that SOME of the lists might override. I had tried to write a baseObjectList that inherited from List and then make my other lists inherit from that, but there are issues with that (which is really why I came here), one of which was trying to use the Find() method with a predicate. I've simplified the problem down to just a discussion of a Dispose() method on the list that loops through and disposes its contents, but in reality I have several other common functions that I want all my lists to have. What's the best practice to solve this organizational matter?
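
    One way to avoid hand-maintaining UserList, ProjectList and ScheduleList, sketched under the assumption that every element type derives from the common base object: a single generic list carries the shared behaviour, and because the element type stays concrete, Find() and friends keep their precise List<T> signatures (which is where a non-generic base list usually falls down).

      using System;
      using System.Collections.Generic;

      // Stand-in for the database-aware base class described above.
      public class BaseObject : IDisposable
      {
          public virtual void Dispose() { /* release database resources */ }
      }

      // The shared behaviour is written exactly once.
      public class BusinessObjectList<T> : List<T>, IDisposable
          where T : BaseObject
      {
          public void Dispose()
          {
              foreach (T item in this)
                  item.Dispose();
              Clear();
          }
      }

      public class User : BaseObject { }

      // Subclass only when a particular list really needs extra members.
      public class UserList : BusinessObjectList<User> { }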

  • Compile for mixed platform (32, 64) and reference a 32 or 64 bit DLL resolved at runtime

    - by Nigel Aston
    Using VS2010 under Windows, 32- or 64-bit. Our C# app calls a 3rd-party DLL (managed) that interfaces to an unmanaged DLL. The 3rd-party DLL API appears identical in 32- and 64-bit, although underneath it links to a 32- or 64-bit unmanaged DLL. We want our C# app to run on either a 32- or 64-bit OS; ideally it will auto-detect the OS and load the appropriate 3rd-party DLL - via a simple factory class which tests the environment. So the neatest solution would be a runtime folder containing: OurApp.exe 3rdParty32.DLL 3rdPartyUnmanaged32.DLL 3rdParty64.DLL 3rdPartyUnmanaged64.DLL However, the interface for the managed 3rd-party 32 and 64 DLLs is identical, so both cannot be referenced within the same VS2010 project: when adding the second, the warning triangle is shown and it does not get referenced. Is my only answer to create two extra library DLL projects to reference the 3rdParty 32 and 64 DLLs? So I would end up with this project arrangement: Project 1: builds OurApp.exe, dynamically creates an object from Project 2 or Project 3. Project 2: builds OurApp32.DLL, which references 3rdParty32.dll. Project 3: builds OurApp64.DLL, which references 3rdParty64.dll.
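
    Assuming the two extra library projects are indeed the route taken, the factory in the main project stays tiny. Everything below is an invented name except Environment.Is64BitProcess, which exists as of .NET 4 (the framework VS2010 targets).

      using System;

      public interface IThirdPartyWrapper
      {
          void DoWork();
      }

      namespace OurApp32
      {
          public class Wrapper : IThirdPartyWrapper
          {
              public void DoWork() { /* delegates to 3rdParty32.DLL */ }
          }
      }

      namespace OurApp64
      {
          public class Wrapper : IThirdPartyWrapper
          {
              public void DoWork() { /* delegates to 3rdParty64.DLL */ }
          }
      }

      public static class ThirdPartyFactory
      {
          public static IThirdPartyWrapper Create()
          {
              // Pick the wrapper matching the bitness of the running process.
              return Environment.Is64BitProcess
                  ? (IThirdPartyWrapper)new OurApp64.Wrapper()
                  : new OurApp32.Wrapper();
          }
      }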

  • Is there a way to load an existing connection string for Linq to SQL from an app.config file?

    - by Brian Surowiec
    I'm running into a really annoying problem with my LINQ to SQL project. When I add everything under the web project, everything goes as expected: I can tell it to use my existing connection string stored in the web.config file, and the LINQ code pulls directly from the ConfigurationManager. This all turns ugly once I move the code into its own project. I've created an app.config file and put the connection string in there as it was in the web.config, but when I try to add another table, the IDE keeps forcing me to either hardcode the connection string or it creates a Settings file and puts it in there, which then adds a new entry into the app.config file with a new name. Is there a way to keep my LINQ code in its own project yet still refer back to my config file, without the IDE continuously hardcoding the connection string or creating the Settings file? I'm converting part of my DAL over to use LINQ to SQL, so I'd like to use the existing connection string that our old code is using, and keep the value in a common location - one spot instead of a number of spots. Manually changing the mode to WebSettings instead of AppSettings works until I try to add a new table; then it goes back to hardcoding the value or recreating the Settings file. I also tried switching the project type to a web project and then renaming my app.config to web.config, and then everything works as I'd like it to. I'm just not sure if there are any downsides to keeping this as a web project, since it really isn't one. The project only contains the LINQ to SQL code and an implementation of my repository classes. My project layout looks like this: Website -connectionString.config -web.config (refers to connectionString.config) Middle Tier -Business Logic -Repository Interfaces -etc. DAL -Linq to SQL code -Existing SPROC code -connectionString.config (linked from the web project) -app.config (refers to connectionString.config)
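
    One hedged way out while keeping the DAL in its own project: ignore the designer's Settings plumbing and construct the context through a small factory that reads the shared entry at runtime. The LINQ to SQL designer emits a constructor overload taking a raw connection string; MyDataContext and "MyDb" are placeholder names.

      using System.Configuration;
      using System.Data.Linq;

      // Stand-in for the designer-generated context, which already has this
      // constructor overload.
      public class MyDataContext : DataContext
      {
          public MyDataContext(string connection) : base(connection) { }
      }

      public static class DataContextFactory
      {
          public static MyDataContext Create()
          {
              // Reads the one shared entry from the config file chain.
              string connectionString = ConfigurationManager
                  .ConnectionStrings["MyDb"].ConnectionString;
              return new MyDataContext(connectionString);
          }
      }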

  • Pre-Project Documentation

    - by DeanMc
    I have an issue that I feel many programmers can relate to... I have worked on many small-scale projects. After my initial paper brainstorm I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces are the last thing, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that so should my "spec" or design document. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI-specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks so that development runs in a fairly linear fashion. My tasks tend to look like so: Implement Binary File Reader Implement Binary File Writer Create Object to encapsulate Data for expression to the caller Now any programmer worth his salt is aware that between those three to-do items could lie a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively; by the time one mangles pseudocode it is essentially code anyway, so the time investment is negated. So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high-level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?

  • Multiple leaf methods problem in composite pattern

    - by Ondrej Slinták
    At work, we are developing a PHP application that will later be re-programmed in Java. With some basic knowledge of Java, we are trying to design everything to be easily re-written, without any headaches. An interesting problem came up when we tried to implement the composite pattern with a huge number of methods in the leaves. What we are trying to achieve (not using interfaces; it's just an example): class Composite { ... } class LeafOne { public function Foo( ); public function Moo( ); } class LeafTwo { public function Bar( ); public function Baz( ); } $c = new Composite( Array( new LeafOne( ), new LeafTwo( ) ) ); // will call method Foo in all classes in composite that contain this method $c->Foo( ); It seems like a pretty much classic composite pattern, but the problem is that we will have quite a lot of leaf classes, and each of them might have ~5 methods (of which a few might differ from the others). One of our solutions, which seems the best so far and might actually work, is using the __call magic method to call methods in the leaves. Unfortunately, we don't know if there is an equivalent of it in Java. So the actual question is: is there a better solution for this, using code that could eventually be re-coded easily in Java? Or do you recommend any other solution? Perhaps there's some different, better pattern I could use here. In case something is unclear, just ask and I'll edit this post.
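
    For what it's worth, a reflection-based sketch of the broadcast call, written here in C# since its System.Reflection maps almost one-to-one onto java.lang.reflect, the likely replacement for __call when porting; the class shape follows the example above.

      using System;
      using System.Collections.Generic;

      public class Composite
      {
          private readonly List<object> _leaves;

          public Composite(IEnumerable<object> leaves)
          {
              _leaves = new List<object>(leaves);
          }

          // Invokes the named method on every leaf that actually declares it,
          // mirroring what the __call-based solution would do in PHP.
          public void Call(string methodName, params object[] args)
          {
              foreach (object leaf in _leaves)
              {
                  var method = leaf.GetType().GetMethod(methodName);
                  if (method != null)
                      method.Invoke(leaf, args);
              }
          }
      }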

  • Ensuring quality of your software and code

    - by Filip Ekberg
    When I write code I follow some guidelines to ensure that my code meets a certain standard, and I, like any other developer, try to ensure that my code and software are of quality. (Let's focus on the programming and not on the understanding of the domain or any other pre-programming steps.) These are the steps I live by: Writing unit tests Make it fail (no code) Make it work (working code) Analysing abstraction Extracting methods Extracting interfaces Refactoring In addition to the above, which is part of refactoring, I also try to refactor the code with good tools such as ReSharper, CodeRush or others. The question: what is the next step? Commenting the code is trivial and shouldn't even have to be mentioned, but up-to-date comments and XML comments wherever they're needed are something I try to have. All the above helps ensure that other developers can understand my code, that the code has a certain quality and that it follows naming standards. It does not, however, ensure any product quality. I am looking for tools for post-development quality assurance, such as profilers, and for how one would use these tools to increase product quality.

  • Does Interlocked guarantee visibility to other threads in C# or do I still have to use volatile?

    - by Lirik
    I've been reading the answer to a similar question, but I'm still a little confused... Abel had a great answer, but this is the part that I'm unsure about: ...declaring a variable volatile makes it volatile for every single access. It is impossible to force this behavior any other way, hence volatile cannot be replaced with Interlocked. This is needed in scenarios where other libraries, interfaces or hardware can access your variable and update it anytime, or need the most recent version. Does Interlocked guarantee visibility of the atomic operation to all threads, or do I still have to use the volatile keyword on the value in order to guarantee visibility of the change? Here is my example: public class CountDownLatch { private volatile int m_remain; // <--- do I need the volatile keyword there since I'm using Interlocked? private EventWaitHandle m_event; public CountDownLatch (int count) { Reset(count); } public void Reset(int count) { if (count < 0) throw new ArgumentOutOfRangeException(); m_remain = count; m_event = new ManualResetEvent(false); if (m_remain == 0) { m_event.Set(); } } public void Signal() { // The last thread to signal also sets the event. if (Interlocked.Decrement(ref m_remain) == 0) m_event.Set(); } public void Wait() { m_event.WaitOne(); } }
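
    A sketch of the idiom often recommended when the volatile modifier is dropped: route reads through Interlocked as well, so every access to m_remain carries the same fencing as the decrement. This illustrates the idiom only; it is not a verdict on the memory-model question being asked.

      // Inside CountDownLatch: a read with the same guarantees as the writes.
      public int Remaining
      {
          get
          {
              // CompareExchange with identical comparand and value never
              // modifies m_remain; it just performs a fenced read.
              return System.Threading.Interlocked.CompareExchange(ref m_remain, 0, 0);
          }
      }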

  • Efficient database access when dealing with multiple abstracted repositories

    - by Nathan Ridley
    I want to know how most people are dealing with the repository pattern when it involves hitting the same database multiple times (sometimes transactionally) and trying to do so efficiently while maintaining database agnosticism and using multiple repositories together. Let's say we have repositories for three different entities: Widget, Thing and Whatsit. Each repository is abstracted via a base interface as per normal decoupling design processes. The base interfaces would then be IWidgetRepository, IThingRepository and IWhatsitRepository. Now we have our business layer or equivalent (whatever you want to call it). In this layer we have classes that access the various repositories. Often the methods in these classes need to do batch/combined operations where multiple repositories are involved. Sometimes one method may make use of another method internally, while that method can still be called independently. What about, in this scenario, when the operation needs to be transactional? Example: class Bob { private IWidgetRepository _widgetRepo; private IThingRepository _thingRepo; private IWhatsitRepository _whatsitRepo; public Bob(IWidgetRepository widgetRepo, IThingRepository thingRepo, IWhatsitRepository whatsitRepo) { _widgetRepo = widgetRepo; _thingRepo = thingRepo; _whatsitRepo = whatsitRepo; } public void DoStuff() { _widgetRepo.StoreSomeStuff(); _thingRepo.ReadSomeStuff(); _whatsitRepo.SaveSomething(); } public void DoOtherThing() { _widgetRepo.UpdateSomething(); DoStuff(); } } How do I keep my access to that database efficient and avoid a constant stream of open-close-open-close on connections and inadvertent invocations of MSDTC and whatnot? If my database is something like SQLite, standard mechanisms like creating nested transactions are going to inherently fail, yet the business layer should not have to concern itself with such things. How do you handle such issues? Does ADO.NET provide simple mechanisms to handle this, or do most people end up wrapping their own custom bits of code around ADO.NET to solve these types of problems?
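
    One common answer, sketched against the Bob class above: let the business layer own the transactional boundary with System.Transactions.TransactionScope, so the repositories stay database-agnostic and ADO.NET connections enlist in the ambient transaction automatically. Whether escalation to MSDTC is avoided depends on the provider and server version, so treat this as a starting point rather than a guarantee.

      using System.Transactions;

      // Replacement for Bob.DoStuff: one ambient transaction spans all three
      // repository calls; a nested scope opened by DoOtherThing would join it.
      public void DoStuff()
      {
          using (var scope = new TransactionScope())
          {
              _widgetRepo.StoreSomeStuff();
              _thingRepo.ReadSomeStuff();
              _whatsitRepo.SaveSomething();
              scope.Complete();   // without this, Dispose rolls everything back
          }
      }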

  • Implementations of an Interface with Different Types?

    - by b3njamin
    Searched as best I could, but unfortunately I've learned nothing relevant; basically I'm trying to work around the following problem in C#... For example, I have three possible references (refA, refB, refC) and I need to load the correct one depending on a configuration option. So far, however, I can't see a way of doing it that doesn't require me to use the name of said referenced object all through the code (the referenced objects are provided; I can't change them). Hope the following code makes more sense: public ??? LoadedClass; public Init() { /* load the object, according to which version we need... */ if (Config.Version == "refA") { Namespace.refA LoadedClass = new refA(); } else if (Config.Version == "refB") { Namespace.refB LoadedClass = new refB(); } else if (Config.Version == "refC") { Namespace.refC LoadedClass = new refC(); } Run(); } private void Run() { LoadedClass.SomeProperty... LoadedClass.SomeMethod()... etc. } As you can see, I need the loaded class to be public, so in my limited way I'm trying to change the type 'dynamically' as I load in whichever real class I want. Each of refA, refB and refC will implement the same properties and methods, but with different names. Again, this is what I'm working with, not by my design. All that said, I tried to get my head around interfaces (which sound like they're what I'm after), but I'm looking at them and seeing strict types - which makes sense to me, even if it's not useful to me. Any and all ideas and opinions are welcome and I'll clarify anything if necessary. Excuse any silly mistakes I've made in the terminology; I'm learning all this for the first time. I'm really enjoying working with an OOP language so far, though - coming from PHP this stuff is blowing my mind :-)
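
    A sketch of the interface approach, using thin adapters so the provided refA/refB/refC classes never need to change. Every member name below except the pattern itself is invented for illustration.

      // One contract for the rest of the program to code against.
      public interface IVersionedApi
      {
          int SomeProperty { get; }
          void SomeMethod();
      }

      // Stand-in for one of the provided classes (cannot be modified).
      public class refA
      {
          public int PropertyA { get { return 1; } }
          public void MethodA() { }
      }

      // The adapter maps the provided names onto the shared contract;
      // repeat for refB and refC with their own member names.
      public class RefAAdapter : IVersionedApi
      {
          private readonly refA _inner = new refA();
          public int SomeProperty { get { return _inner.PropertyA; } }
          public void SomeMethod() { _inner.MethodA(); }
      }

      // Init() then shrinks to choosing an adapter, and LoadedClass can be
      // declared as:   public IVersionedApi LoadedClass;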

  • casting vs using the 'as' keyword in the CLR

    - by Frank V
    I'm learning about design patterns, and because of that I've ended up using a lot of interfaces. One of my "goals" is to program to an interface, not an implementation. What I've found is that I'm doing a lot of casting or object type conversion. What I'd like to know is if there is a difference between these two methods of conversion: public interface IMyInterface { void AMethod(); } public class MyClass : IMyInterface { public void AMethod() { //Do work } // other helper methods.... } public class Implementation { IMyInterface _MyObj; MyClass _myCls1; MyClass _myCls2; public Implementation() { _MyObj = new MyClass(); // What is the difference here: _myCls1 = (MyClass)_MyObj; _myCls2 = (_MyObj as MyClass); } } If there is a difference, is there a cost difference, and how does this affect my program? Hopefully this makes sense. Sorry for the bad example; it is all I could think of... Update: What is "in general" the preferred method? (I had a question similar to this posted in the 'answers'. I moved it up here at the suggestion of Michael Haren. Also, I want to thank everyone who's provided insight and perspective on my question.)
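
    For the behavioural difference specifically, a small self-contained demonstration: a cast throws InvalidCastException on failure, while as yields null (and is therefore limited to reference and nullable types).

      using System;

      class CastVersusAs
      {
          static void Main()
          {
              object boxed = "hello";

              string viaCast = (string)boxed;   // succeeds
              string viaAs = boxed as string;   // succeeds as well

              object number = 42;
              string bad = number as string;    // no exception: bad is null
              Console.WriteLine(bad == null);   // prints True

              try
              {
                  string boom = (string)number; // throws at runtime
              }
              catch (InvalidCastException)
              {
                  Console.WriteLine("the cast threw; 'as' returned null");
              }
          }
      }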

  • Writing a synchronized thread-safety wrapper for NavigableMap

    - by polygenelubricants
    java.util.Collections currently provides the following utility methods for creating synchronized wrappers for various collection interfaces: synchronizedCollection(Collection<T> c) synchronizedList(List<T> list) synchronizedMap(Map<K,V> m) synchronizedSet(Set<T> s) synchronizedSortedMap(SortedMap<K,V> m) synchronizedSortedSet(SortedSet<T> s) Analogously, it also has 6 unmodifiableXXX overloads. The glaring omission here is a utility method for NavigableMap<K,V>. It's true that NavigableMap extends SortedMap, but SortedSet likewise extends Set, and Set extends Collection, and Collections has dedicated utility methods for SortedSet and Set. Presumably NavigableMap is a useful abstraction, or else it wouldn't have been there in the first place, and yet there are no utility methods for it. So the questions are: Is there a specific reason why Collections doesn't provide utility methods for NavigableMap? How would you write your own synchronized wrapper for NavigableMap? Glancing at the source code of the OpenJDK version of Collections.java seems to suggest that this is just a "mechanical" process. Is it true that in general you can add synchronized thread-safety like this? If it's such a mechanical process, can it be automated (Eclipse plug-in, etc.)? Is this code repetition necessary, or could it have been avoided by a different OOP design pattern?

  • Getting all the classes in a project and their namespaces from a CodeRush extension

    - by drasto
    I've spent some time trying to find a way CodeRush could add a using directive when it finds an undeclared element that is in fact a class name with no using added. The solution suggested in this answer to my question (Refactor_resolve) does not work (bugged?). In the process I found out that writing plug-ins for CodeRush is easy, so I decided to code this functionality myself (and share it). I'd only need to implement a CodeProvider (like in this tutorial). The only things I need to do the job are answers to these questions: At the startup of my plugin I need to get a list (set, map, whatever) of all classes and their namespaces. This means all the classes (interfaces, ...) and their namespaces in the project, and within all referenced libraries. I also need to receive updates on this (when the user adds a reference or creates a new class). Can I get this from some CodeRush classes, or maybe from a VS interface available from the CodeProvider class? How do I add the created CodeProvider to the pop-up that is shown when the user hovers over an Issue?

  • How can I make one Maven module depend on another?

    - by Daniel Pryden
    OK, I thought I understood how to use Maven... I have a master project M which has sub-projects A, B, and C. C contains some common functionality (interfaces mainly) which is needed by A and B. I can run mvn compile jar:jar from the project root directory (the M directory) and get JAR files A.jar, B.jar, and C.jar. (The versions for all these artifacts are currently 2.0-SNAPSHOT.) The master pom.xml file in the M directory lists C under its <dependencyManagement> tag, so that A and B can reference C by just including a reference, like so: <dependency> <groupId>my.project</groupId> <artifactId>C</artifactId> </dependency> So far, so good. I can run mvn compile from the command line and everything works fine. But when I open the project in NetBeans, it complains with the problem: "Some dependency artifacts are not in the local repository", and it says the missing artifact is C. Likewise from the command line, if I change into the A or B directories and try to run mvn compile I get "Build Error: Failed to resolve artifact." I expect I could manually go to where my C.jar was built and run mvn install:install-file, but I'd rather find a solution that enables me to just work directly in NetBeans (and/or in Eclipse using m2eclipse). What am I doing wrong?
