Search Results

Search found 1257 results on 51 pages for 'repositories'.


  • Show me your Linq to SQL architectures!

    - by Brad Heller
    I've been using Linq to SQL for a new implementation that I've been working on. I have about 5000 lines of code and am a little ways from a solid demo. I've been pretty satisfied with Linq to SQL so far -- the tools are excellent and pretty painless, and it allows you to get a DAL up and running quickly. That said, there are some major drawbacks that I just keep hitting over and over again, namely how to handle separation of concerns between my DAL and my business layer, and juggling that with different data contexts. Here is the architecture I've been using:

    My repositories do all my data access and they return Linq to SQL objects. Each of my Linq to SQL objects implements an IDetachable interface. A typical implementation looks like this:

        partial class PaymentDetail : IDetachable
        {
            #region IDetachable Members
            public bool IsAttached
            {
                get { return PropertyChanging != null; }
            }

            public void Detach()
            {
                if (IsAttached)
                {
                    PropertyChanged = null;
                    PropertyChanging = null;
                    Transaction.Detach();
                }
            }
            #endregion
        }

    Every time I do a DAL operation in my repository I "detach" when I'm done with the object (and it should theoretically detach any child objects) to remove the DataContext's context. Like I said, this works pretty well, but there are some edge cases that seem to be a big pain in the ass. For instance, my Transaction object has many PaymentDetails. Even when there are no PaymentDetails in that collection, it's still attached to the DataContext's context! Thus, if I try to update (I update by Attach()ing the object and then calling SubmitChanges()) I get that dreaded "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported." message.

    Anyway, I'm starting to doubt that this technology was a good gamble. Has anyone got a decent architecture that they're willing to share? I'd really love to use this technology, but I feel like I spend a third of my time just debugging its quirks!
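
    One commonly suggested alternative, sketched below and not taken from the post above, is to stop detaching entities and instead scope one DataContext per unit of work, updating by re-loading the row and copying values across rather than calling Attach(). The repository, context and property names here are illustrative assumptions:

        // Sketch: short-lived DataContext per operation; update by reload-and-copy
        // instead of Attach(), which sidesteps the "loaded from another DataContext" error.
        public class TransactionRepository
        {
            public void Update(Transaction changed)   // 'changed' carries the edited values
            {
                using (var db = new MyDataContext())
                {
                    // Load the current row inside this context, then copy the edits over.
                    var current = db.Transactions.Single(t => t.Id == changed.Id);
                    current.Amount = changed.Amount;
                    current.Status = changed.Status;
                    db.SubmitChanges();
                }
            }
        }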

    Read the article

  • Adding changes from one Mercurial repository to another

    - by Patrik Hägne
    When changing the VCS for my project FakeItEasy from SVN to Mercurial on Google Code I was a bit too eager (I'm funny like that). What I did was just check the latest version out of SVN and then commit that checkout as the first revision of the new Mercurial repo. This obviously has the effect that all history is lost. Later, when I got a bit better accustomed to Mercurial, I realized that there is such a thing as a "convert extension" that allows you to convert an SVN repo into a Mercurial repo.

    Now what I want to do is convert the old SVN repo and then have all changesets from the currently existing Mercurial repo imported into this converted repo, except the very first commit to Mercurial. I've converted the SVN repo to a local Mercurial repo, but now is when I'm stuck. I thought I'd be able to use the convert extension to bring the current Mercurial repository into the converted one and have a splice map remove the first commit, but I cannot seem to get this to work. I've also tried to just use convert without a splice map to get all changesets from the current Mercurial repo into the converted one, and then rebase the second revision in the current repo onto the last commit from the old SVN repository, but I can't get that to work either.

    To make this clearer, let's say I have these two repositories:

        A: revA1-revA2
        B: revB1-revB2-revB3  (where revB1 is actually a copy of revA2)

    Now I want to combine these two into a new repository containing this:

        C: revA1-revA2-revB2-revB3

    Read the article

  • Create dynamic factory method in PHP (< 5.3)

    - by fireeyedboy
    How would one typically create a dynamic factory method in PHP? By dynamic factory method, I mean a factory method that will autodiscover what objects there are to create, based on some aspect of the given argument, preferably without registering them first with the factory either. I'm OK with having the possible objects be placed in one common place (a directory) though. I want to avoid your typical switch statement in the factory method, such as this:

        public static function factory( $someObject )
        {
            $className = get_class( $someObject );
            switch( $className ) {
                case 'Foo':
                    return new FooRelatedObject();
                    break;
                case 'Bar':
                    return new BarRelatedObject();
                    break;
                // etc...
            }
        }

    My specific case deals with the factory creating a voting repository based on the item to vote for. The items all implement a Voteable interface. Something like this:

        Default_User implements Voteable
        ...
        Default_Comment implements Voteable
        ...
        Default_Event implements Voteable
        ...

        class Default_VoteRepositoryFactory
        {
            public static function factory( Voteable $item )
            {
                // autodiscover what type of repository this item needs
                // for instance, Default_User needs a Default_VoteRepository_User
                // etc...
                return new Default_VoteRepository_OfSomeType();
            }
        }

    I want to be able to drop in new Voteable items and vote repositories for these items, without touching the implementation of the factory.

    Read the article

  • EF + UnitOfWork + SharePoint RunWithElevatedPrivileges

    - by Lorenzo
    In our SharePoint application we have used the UnitOfWork + Repository patterns together with Entity Framework. To avoid the use of pass-through authentication we have developed a piece of code that impersonates a single user before creating the ObjectContext instance, in a similar way to what is described in "Impersonating user with Entity Framework" on this site. The only difference between our code and the referenced question is that, to do the impersonation, we are using RunWithElevatedPrivileges to impersonate the application pool identity, as in the following sample:

        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            using (SPSite site = new SPSite(url))
            {
                _context = new MyDataContext(ConfigSingleton.GetInstance().ConnectionString);
            }
        });

    We did it this way because we expected that creating the ObjectContext under impersonation, and handing that impersonated ObjectContext to the repositories, would satisfy our requirement. Unfortunately it's not so easy. In fact we found that, even if the ObjectContext is created beforehand and under impersonation, the real connection is made just before executing the query, and so it does not use impersonation, which breaks our requirement. I have checked the ObjectContext class to see if there is any event through which we can inject the impersonation, but unfortunately found nothing. Any help?
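
    One workaround that is sometimes suggested for this kind of lazy-connection problem (a sketch only, not verified against this project) is to open the store connection explicitly while still inside the elevated delegate, so the physical login happens under the application pool identity rather than at first query. Generated ObjectContext classes expose a constructor that accepts an EntityConnection, and Entity Framework leaves a connection it did not open itself open:

        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            // Open the underlying connection here, under elevation, and hand it to the context.
            var connection = new EntityConnection(ConfigSingleton.GetInstance().ConnectionString);
            connection.Open();
            _context = new MyDataContext(connection);
        });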

    Read the article

  • TeamCity + HG. Only pull (push?) passing builds

    - by ColoradoMatt
    Feels like, with the popularity of continuous integration, this one should be a piece of cake, but I am stumped. I am setting up TeamCity with Hg. I want to be able to push changesets up to a repository that TeamCity watches and runs builds on changes. That's easy. Next, if a build passes, I want that changeset to be pulled into a "clean" repository -- one that contains only passing changesets. Should be easy, but...

    TeamCity 6 supports multiple build steps, and if any step fails, the rest don't run. My thought was to put a build step at the end that does a pull (or optionally a push?) to get the passing changeset into the clean repository. I am trying to use PsExec to run hg on the box with the repositories. If I try to run just a plain 'hg pull' it can't find hg.exe, even though it is set in the path and I have used the -w flag. I have tried putting a .bat file in the clean repository that takes a revision parameter, and it works fine... locally. When I try to run the .bat file remotely (using PsExec) it runs everything fine, but it tries to run it on the build agent. Even if I set the -w argument, it runs the .bat file there but tries to run the contents on the build agent box.

    Am I just WAY off in my approach? It seems like this is a pretty obvious thing to do, so either my Google skills are waning or no one thinks this is worthy of writing about. Either way, I am stuck in SVN land trying to get out, so I would appreciate some help!

    Read the article

  • NHibernate: how to handle entity-based validation using session-per-request pattern, without controller knowledge of the session

    - by Seth Petry-Johnson
    What is the best way to do entity-based validation (each entity class has an IsValid() method that validates its internal members) in ASP.NET MVC, with a "session-per-request" model, where the controller has zero (or limited) knowledge of the ISession? Here's the pattern I'm using:

    1. Get an entity by ID, using an IFooRepository that wraps the current NH session. This returns a connected entity instance.
    2. Load the entity with potentially invalid data, coming from the form post.
    3. Validate the entity by calling its IsValid() method.
    4. If valid, call IFooRepository.Save(entity). Otherwise, display an error message.

    The session is currently opened when the request begins and flushed when the request ends. Since my entity is connected to a session, flushing the session attempts to save the changes even if the object is invalid. What's the best way to keep validation logic in the entity class, limit controller knowledge of NH, and avoid saving invalid changes at the end of a request?

    Option 1: Explicitly evict on validation failure, implicitly flush. If the validation fails, I could manually evict the invalid object in the action method. If successful, I do nothing and the session is automatically flushed. Con: error prone and counter-intuitive ("I didn't call .Save(), why are my invalid changes being saved anyway?").

    Option 2: Explicitly flush, do nothing by default. By default I can dispose of the session on request end, only flushing if the controller indicates success. I'd probably create a SaveChanges() method in my base controller that sets a flag indicating success, and then query this flag when closing the session at request end. Pro: more intuitive to troubleshoot if a dev forgets this step (relative to option 1). Con: I have to call both IRepository.Save(entity) and SaveChanges().

    Option 3: Always work with disconnected objects. I could modify my repositories to return disconnected/transient objects, and modify the Repo.Save() method to re-attach them. Pro: most intuitive, given that controllers don't know about NH. Con: does this defeat many of the benefits I'd get from NH?
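
    For "Option 2", one possible shape (a rough sketch; the class and member names are made up, not taken from the question) is a per-request unit of work whose session never flushes on its own, with the request-end code flushing only when a controller has explicitly marked the work as successful:

        public class NHibernateUnitOfWork : IDisposable
        {
            private readonly ISession _session;

            public NHibernateUnitOfWork(ISessionFactory factory)
            {
                _session = factory.OpenSession();
                _session.FlushMode = FlushMode.Commit;   // nothing is written until we decide to
            }

            public ISession Session { get { return _session; } }
            public bool Completed { get; private set; }

            // A base-controller SaveChanges() helper would call this after validation passes.
            public void Complete() { Completed = true; }

            public void Dispose()   // called when the request ends
            {
                using (_session)
                {
                    if (!Completed) return;              // invalid or abandoned work is simply discarded
                    using (var tx = _session.BeginTransaction())
                    {
                        _session.Flush();
                        tx.Commit();
                    }
                }
            }
        }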

    Read the article

  • How do I recover from pushing a gitosis.conf file with parsing errors due to line breaks?

    - by Kasia
    I have successfully set up gitosis for an Android mirror (containing multiple git repositories). While adding a new .git path following writable= in gitosis.conf, I managed to insert a few line breaks. I saved, committed and pushed to the server, and received the following parsing error:

        Traceback (most recent call last):
          File "/usr/bin/gitosis-run-hook", line 8, in <module>
            load_entry_point('gitosis==0.2', 'console_scripts', 'gitosis-run-hook')()
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 24, in run
            return app.main()
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 38, in main
            self.handle_args(parser, cfg, options, args)
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 75, in handle_args
            post_update(cfg, git_dir)
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 33, in post_update
            cfg.read(os.path.join(export, '..', 'gitosis.conf'))
          File "/usr/lib/python2.5/ConfigParser.py", line 267, in read
            self._read(fp, filename)
          File "/usr/lib/python2.5/ConfigParser.py", line 490, in _read
            raise e
        ConfigParser.ParsingError: File contains parsing errors: ./gitosis-export/../gitosis.conf
        (...)

    I have removed the line break and amended the commit with:

        git commit -m "fix linebreak" --amend

    However, git push still yields the exact same error. It leads me to believe gitosis is preventing me from doing any further pushes. How do I recover from this?

    Read the article

  • How can I set up Hudson to use the same repository for different projects and maintain separate changelogs?

    - by Allen
    I typically set up SVN to host one big project per repository, but a lot of our infrastructure has changed and we now have one main SVN server with a hierarchy like so:

        Branches
        Tags
        Trunk
            Project1 files & folders
            Project2 files & folders
            Project3 files & folders

    Projects 1, 2, and 3 do not share anything amongst themselves; they are independent projects, each with its own solution file to be built. I can set up projects in Hudson like so:

        Repository Url: http://server/svn/MainRepository
        Local module directory (optional): /Trunk/Project1

    And that will maintain a separate workspace for each project, but every time you commit to Project 2 or Project 3, a build gets kicked off in Hudson for every project based in that repository. Also, any commit made anywhere in the repository is pulled down and inserted into the Hudson changelog for all of them.

    I know the easiest solution would be to simply separate every project into its own repository. However, if I couldn't do that for various reasons, is there a feasible way to achieve the functionality that having separate repositories gets me? I want commits to the subfolder of Project 1 to only affect Project 1. No other project's commits should cause Project 1 to build, and Project 1's changelog in Hudson should only have commit notes from Project 1.

    Read the article

  • svnserve not strictly required?

    - by Kev
    I was reading the Red Bean book and noticed this paragraph:

        Do not be seduced by the simple idea of having all of your users access a repository directly via file:// URLs. Even if the repository is readily available to everyone via a network share, this is a bad idea. It removes any layers of protection between the users and the repository: users can accidentally (or intentionally) corrupt the repository database, it becomes hard to take the repository offline for inspection or upgrade, and it can lead to a mess of file permission problems (see the section called “Supporting Multiple Repository Access Methods”). Note that this is also one of the reasons we warn against accessing repositories via svn+ssh:// URLs -- from a security standpoint, it's effectively the same as local users accessing via file://, and it can entail all the same problems if the administrator isn't careful.

    I realized that, since I'm the only one accessing the repository, ever, none of these caveats seem to apply. Can I safely shut down svnserve, then, and only ever have to worry about upgrading my TortoiseSVN client, not both the client and the server, whenever there's a new version out? (I've tried it already -- I just needed to use the Relocate feature to switch from svn:// to file:// -- but I wanted to make sure something wouldn't be sneaking up on me if I left it this way.)

    Read the article

  • How can I build the Boost.Python example on Ubuntu 9.10?

    - by Gatlin
    I am using Ubuntu 9.10 beta, whose repositories contain Boost 1.38. I would like to build the hello-world example. I followed the instructions here (http://www.boost.org/doc/libs/1%5F40%5F0/libs/python/doc/tutorial/doc/html/python/hello.html), found the example project, and issued the "bjam" command. I have installed bjam and boost-build. I get the following output:

        Jamroot:18: in modules.load
        rule python-extension unknown in module Jamfile</usr/share/doc/libboost1.38-doc/examples/libs/python/example>.
        /usr/share/boost-build/build/project.jam:312: in load-jamfile
        /usr/share/boost-build/build/project.jam:68: in load
        /usr/share/boost-build/build/project.jam:170: in project.find
        /usr/share/boost-build/build-system.jam:248: in load
        /usr/share/boost-build/kernel/modules.jam:261: in import
        /usr/share/boost-build/kernel/bootstrap.jam:132: in boost-build
        /usr/share/doc/libboost1.38-doc/examples/libs/python/example/boost-build.jam:7: in module scope

    I do not know enough about Boost (this is an exploratory exercise for myself) to understand why the python-extension macro in the included Jamroot is not valid. I am running this example from the install directory, so I have not altered the Jamroot's use-project setting. As a side question, if I were to just willy-nilly start a project in an arbitrary directory, how would I write my jamroot?

    Read the article

  • Please help me with a Git workflow

    - by aaron carlino
    I'm an SVN user hoping to move to Git. I've been reading documentation and tutorials all day, and I still have unanswered questions. I don't know if this workflow will make sense, but here's my situation, and what I would like to get out of my workflow:

    - Multiple developers, all developing locally on their workstations
    - 3 versions of the website: Dev, Staging, Production

    Here's my dream: a developer works locally on his own branch, say "developer1", tests on his local machine, and commits his changes. Another developer can pull down those changes into his own branch (merge developer1 into developer2). When the work is ready to be seen by the public, I'd like to be able to "push" to Dev, Staging, or Production:

        git push origin staging

    or maybe:

        git merge developer1 staging

    I'm not sure. Like I said, I'm still new to it. Here are my main questions:

    - Do my websites (Dev, Staging, Production) have to be repositories? And do they have to be "bare" in order to be the recipients of new changes?
    - Do I want one repository or many, with several branches?
    - Does this even make sense, or am I on the wrong path?

    I've read a lot of tutorials, so I'm really hoping someone can just help me out with my specific situation. Thanks so much!

    Read the article

  • Understanding Domain Driven Design

    - by Nihilist
    Hi, I have been trying to understand DDD for a few weeks now. It's very confusing. I don't understand how to organise my projects. I have lots of questions on UnitOfWork, Repository, Associations, and the list goes on...

    Let's take a simple example: Album and Tracks.

        Album: AlbumId, Name, list of Tracks
        Track: TrackId, Name

    Question 1: Should I expose Tracks as an IList/IEnumerable property on Album? If so, how do I add a track? Or should I expose a ReadOnlyCollection of Tracks and expose an AddTrack method?

    Question 2: How do I load Tracks for an Album (assuming lazy loading)? Should the getter check for null and then use a repository to load the tracks if need be?

    Question 3: How do we organise the assemblies? That is, what does each assembly contain? Model.dll - does it only have the domain entities? Where do the repositories go, interfaces and implementations both? Can I define IAlbumRepository in Model.dll? Infrastructure.dll - what should this have?

    Question 4: Where is the unit of work defined? How do the repository and unit of work communicate (or should they)? For example, if I need to add multiple tracks to an album, should this be defined as AddTrack on Album, or should there be a method in the repository? Regardless of where the method is, how do I implement the unit of work here?

    Question 5: Should the UI use Infrastructure.dll, or should there be a ServiceLayer?

    Do my questions make sense? Regards
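
    For Question 1, one common answer is to treat Album as the aggregate root: it keeps the track list private, exposes it read-only, and grows an AddTrack method so any invariants live in one place. A minimal illustrative sketch, not tied to any particular ORM (using System and System.Collections.Generic assumed):

        public class Album
        {
            private readonly IList<Track> _tracks = new List<Track>();

            public int AlbumId { get; private set; }
            public string Name { get; private set; }

            // Read-only view: callers can enumerate but cannot Add/Remove directly.
            public IEnumerable<Track> Tracks { get { return _tracks; } }

            public void AddTrack(Track track)
            {
                if (track == null) throw new ArgumentNullException("track");
                _tracks.Add(track);   // album-level rules about tracks belong here
            }
        }

        public class Track
        {
            public int TrackId { get; private set; }
            public string Name { get; private set; }
        }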

    Read the article

  • SVN Serve, Missing a Directory

    - by Ryan Smith
    I'm sure this is an asinine question, and I blame myself for not fully understanding how the SVNSERVE process works. I have an SVN repo, but it needs to be moved to a server within a client's cloud. I did this a while back and ran into the issue of the SVNSERVE.exe process not getting set to the right directory. I have the SVNSERVE.exe process running as a Windows service and pointing to the right directory. There are two other repos there that are serving out fine from the same directory. I copied out the new directory just like I did with the others, but I'm getting the error "No repository found". I thought that SVNSERVE just looked at that directory and served out the repositories that were there, but I have had a hard time finding more information about that. I thought it was a Windows permission problem, but I set the whole folder to full control for EVERYONE, so that's not it. I feel horrible that I didn't fully understand this problem the first time I fought it, but it's late on a Sunday night and clients are yelling. Does anyone know what I'm missing? Thanks.

    EDIT: It's specific to the repository. I tested the same process with some of the other repos we have on our server, and when I copied them up they worked just as expected. This bug is breaking me, and I wish I could provide more details, but that's all I know. I'm going to try an SVN dump instead of an XCopy and see how that goes. I'll let you know.

    Read the article

  • Ultra-Portable Laptop or Tablet PC for Development and Sketching

    - by Nelson LaQuet
    I am a software developer who primarily writes PHP, [X]HTML, CSS, JavaScript, C# and C++. I use Eclipse for web development, Visual Studio 2008 for C++ and C# work, TortoiseSVN, a Subversion server for local repositories, SQL Server Express, Apache and MySQL. I also use Office 2007 for word processing and spreadsheets, and use Vista Ultimate 64 as my primary operating system. The only other things I do on my laptop are watch movies, surf the internet and listen to music.

    I currently have an Acer Aspire 5100 (1.4 GHz AMD Turion X2, 2 GB of RAM and a 15.4" screen). This thing does not cut it in performance or portability, and in addition, my DVD drive failed. And before anybody posts about Vista: I have had XP Professional 32 on it for the last two years, and recently upgraded to Vista 64. It is actually faster (with Aero disabled) than XP, so it is not the OS that is causing the laptop to be slow.

    I usually sketch a lot, for explaining things, developing user interfaces and software architecture. Because of my requirements, I was thinking about a Lenovo X61 Tablet PC. It outperforms my current laptop, is significantly more portable, and... is a tablet. My question is: do any other software developers use this (or other tablets) for programming? Does it help to be able to sketch on the computer itself? And is it capable of being a good development machine? Will it handle the software listed above? If not, what is the best ultra-portable laptop that is good for programming? Or are ultra-portable laptops even good for programming? I could manage with my 15.4" screen, but am spoiled by my two 19" monitors at my home desktop and my job's workstation.

    Read the article

  • Ninject giving NullReferenceException

    - by Iceman
    I'm using ASP.NET MVC 2 and Ninject 2. The setup is very simple: controller calls service, which calls repository. In my controllers I use injection to instantiate the service classes with no problem. But the service classes don't instantiate the repositories, giving me a NullReferenceException.

        public class BaseController : Controller
        {
            [Inject]
            public IRoundService roundService { get; set; }
        }

    This works. But then this does not...

        public class BaseService
        {
            [Inject]
            public IRoundRepository roundRepository { get; set; }
        }

    ...giving a NullReferenceException when I try to use the roundRepository in my RoundService class:

        IList<Round> rounds = roundRepository.GetRounds();

    Module classes (the generic type arguments in ServiceModule follow the same pattern as RepositoryModule):

        public class ServiceModule : NinjectModule
        {
            public override void Load()
            {
                Bind<IRoundService>().To<RoundService>().InRequestScope();
            }
        }

        public class RepositoryModule : NinjectModule
        {
            public override void Load()
            {
                Bind<IRoundRepository>().To<RoundRepository>().InRequestScope();
            }
        }

    In Global.asax.cs:

        protected override IKernel CreateKernel()
        {
            return new StandardKernel(new ServiceModule(), new RepositoryModule());
        }
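
    One thing worth checking (a sketch of a possible direction, not a confirmed diagnosis): make sure the [Inject] attribute on the property is Ninject's own, and that the service really is resolved through the kernel. Switching the repository dependency to constructor injection makes a wiring problem fail loudly at resolution time instead of leaving a null property behind. The type names below follow the question; the GetRounds signature on the service is an assumption:

        public class RoundService : BaseService, IRoundService
        {
            private readonly IRoundRepository _roundRepository;

            // Ninject supplies this when it builds IRoundService for the controller.
            public RoundService(IRoundRepository roundRepository)
            {
                _roundRepository = roundRepository;
            }

            public IList<Round> GetRounds()
            {
                return _roundRepository.GetRounds();
            }
        }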

    Read the article

  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment, and keep a central, bare repository where I pull and push most of my stuff. This centralized repository runs on Linux, and I check out to Windows XP/7, Mac and Linux. In all repositories I put the following in my .git/config:

        [core]
            autocrlf = true

    I don't have the flag safecrlf=true anywhere. The first time I modify stuff on my one Windows machine (XP) there is no problem, and when I look at the diff it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, even though local line endings are \r\n as expected (when checked in a hex editor). The same applies to a Mac OS X machine. Sometimes I get the feeling that the different systems wrestle over line endings, but I can't be sure (I'm losing track of all the times I change specific files). I didn't use to have the autocrlf flag set, but set it many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or are there other things that need configuring too? I tried git checkout -- . about a million times, but with no success.

    Read the article

  • Can't get correct package from Nexus? error in "mvn help:effective-settings"

    - by larry cai
    I use the open source version of Nexus and Maven 2.2.1. When I type "mvn help:effective-settings", I get the error below:

        [INFO] Scanning for projects...
        [INFO] Searching repository for plugin with prefix: 'help'.
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Error building POM (may not be this project's POM).

        Project ID: org.apache.maven.plugins:maven-help-plugin

        Reason: Error getting POM for 'org.apache.maven.plugins:maven-help-plugin' from the
        repository: Failed to resolve artifact, possibly due to a repository list that is not
        appropriately equipped for this artifact's metadata.
          org.apache.maven.plugins:maven-help-plugin:pom:2.2-SNAPSHOT
        from the specified remote repositories:
          Nexus (http://192.168.56.191:8081/nexus/content/groups/public)
        for project org.apache.maven.plugins:maven-help-plugin

    When I check the local repository under ~.m2\repository\org\apache\maven\plugins\maven-help-plugin, it has a file maven-metadata-central.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <metadata>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-help-plugin</artifactId>
          <versioning>
            <latest>2.2-SNAPSHOT</latest>
            <release>2.1.1</release>
            <versions>
              <version>2.0</version>
              <version>2.0.1</version>
              <version>2.0.2</version>
              <version>2.1</version>
              <version>2.1.1</version>
              <version>2.2-SNAPSHOT</version>
            </versions>
            <lastUpdated>20100519065440</lastUpdated>
          </versioning>
        </metadata>

    And I can't find any jar files under the directory. What's wrong with the Nexus server? I can't easily find support information for Nexus. Any hints?

    Read the article

  • Version Control: multiple version hell, file synchronization

    - by SigTerm
    Hello. I would like to know how you normally deal with this situation. I have a set of utility functions, say 5-10 files. Technically they are a static library, cross-platform - SConscript/SConstruct plus a Visual Studio project (not a solution).

    Those utility functions are used in multiple small projects (15+, and the number increases over time). Each project has a copy of a few files or of the entire library, not a link into one central place. Sometimes a project uses one file, sometimes two, and some use everything. Normally, the utility functions are included as a copy of every file plus the SConscript/SConstruct or Visual Studio project (depending on the situation). Each project has a separate git repository. Sometimes one project is derived from another, sometimes it isn't. You work on every one of them, in random order. There are no other people (to make things simpler).

    The problem arises when, while working on one project, you modify those utility function files. Because each project has a copy of the files, this introduces a new version, which leads to a mess when you try later (a week later, for example) to guess which version has the most complete functionality (i.e. you added a function to a.cpp in one project, and added another function to a.cpp in another project, which created a version fork).

    How would you handle this situation to avoid "version hell"? One way I can think of is using symbolic links/hard links, but it isn't perfect - if you delete the one central storage, it will all go to hell. And hard links won't work on a dual-boot system (although symbolic links will). It looks like what I need is something like an advanced git repository, where code for the project is stored in one local repository but is synchronized with multiple external repositories. But I'm not sure how to do it or if it is possible to do this with git. So, what do you think?

    Read the article

  • Should client-server code be written in one "project" or two?

    - by Ricket
    I've been beginning a client-server application. At first I naturally created two projects in Eclipse, two source control repositories, etc. But I'm quickly seeing that there is a bit of shared code between the two that would probably benefit from sharing instead of copying. In addition, I've been learning and trying test-driven development, and it seems to me that it would be easier to test based on real client components rather than having to set up a huge amount of code just to mock something, when the code is probably mostly in the client.

    My biggest concern in merging the client and server is security: how do I ensure that the server pieces of the code do not reach a user's computer?

    So, especially if you write client-server applications yourself (and especially in Java, though this can turn into a language-agnostic question if you'd like to share your experience with this in other languages), what sort of separation do you keep between your client and server code? Are they just in different packages/namespaces, or completely different binaries using shared libraries, or something else entirely? How do you test the code together and yet ship it separately?

    Read the article

  • EF Many-to-many dbset.Include in DAL on GenericRepository

    - by Bryant
    I can't get QueryObjectGraph to add INCLUDE child tables if my life depended on it... what am I missing? Stuck for a third day on something that should be simple :-/

    DAL:

        public abstract class RepositoryBase<T> where T : class
        {
            private MyLPL2Context dataContext;
            private readonly IDbSet<T> dbset;

            protected RepositoryBase(IDatabaseFactory databaseFactory)
            {
                DatabaseFactory = databaseFactory;
                dbset = DataContext.Set<T>();
                DataContext.Configuration.LazyLoadingEnabled = true;
            }

            protected IDatabaseFactory DatabaseFactory { get; private set; }

            protected MyLPL2Context DataContext
            {
                get { return dataContext ?? (dataContext = DatabaseFactory.Get()); }
            }

            public IQueryable<T> QueryObjectGraph(Expression<Func<T, bool>> filter, params string[] children)
            {
                foreach (var child in children)
                {
                    dbset.Include(child);
                }
                return dbset.Where(filter);
            }
            ...

    DAL repositories:

        public interface IBreed_TranslatedSqlRepository : ISqlRepository<Breed_Translated> { }

        public class Breed_TranslatedSqlRepository : RepositoryBase<Breed_Translated>, IBreed_TranslatedSqlRepository
        {
            public Breed_TranslatedSqlRepository(IDatabaseFactory databaseFactory) : base(databaseFactory) {}
        }

    BLL repository:

        public IQueryable<Breed_Translated> QueryObjectGraph(Expression<Func<Breed_Translated, bool>> filter, params string[] children)
        {
            return _r.QueryObjectGraph(filter, children);
        }

    Controller:

        var breeds1 = _breedTranslatedRepository
            .QueryObjectGraph(b => b.Culture == culture, new string[] { "AnimalType_Breed" })
            .ToList();

    I can't get to Breed.AnimalType_Breed.AnimalTypeId. I can drill as far as Breed.AnimalType_Breed, but then IntelliSense expects an expression? Clues, if any - DB tables (AnimalType_Breed is the many-to-many link): Breed, Breed_Translated, AnimalType_Breed, AnimalType, ...
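
    For what it's worth, Include() on an IQueryable does not modify the query it is called on; it returns a new query, and the snippet above discards that return value. A sketch of QueryObjectGraph with the calls chained (assuming the System.Data.Entity string-path Include extension is in scope):

        public IQueryable<T> QueryObjectGraph(Expression<Func<T, bool>> filter, params string[] children)
        {
            IQueryable<T> query = dbset;
            foreach (var child in children)
            {
                query = query.Include(child);   // keep the returned query and keep chaining
            }
            return query.Where(filter);
        }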

    Read the article

  • Mercurial repository usage with binary files for building setup files

    - by Ryan
    I have an existing Mercurial repository for a C++ application in a small corporate environment. I asked a co-worker to add the setup script to the repository, and he added all of the dependency binaries, PDFs, and the executable to the repository under an Install directory. I dislike having the binaries and dependencies in the same repository, but I'd like recommendations on best practices. Here are the options I am considering:

    - Create a separate repository for the installer and related files
    - Create a subrepository for the installer and related files
    - Use a (yet to be identified) build dependency manager

    I am concerned about using a subrepository with Mercurial, based on what I've read so far and the (apparently) incomplete implementation. I would like to get a project dependency system, e.g. Ivy, but I don't know all of the options and haven't had time yet to try any of them. I thought I'd use TortoiseHg as a basis; it does not have the TortoiseHg binaries in the repository, although it does have some binaries such as kdiff3.exe. Instead it uses setup.py to clone multiple repositories and build the apps. This seems reasonable for OSS, but not so much for corporate environments. Recommendations?

    Read the article

  • How can I implement CRUD operations in a base class for an entity framework app?

    - by hminaya
    I'm working on a simple EF/MVC app and I'm trying to implement some repositories to handle my entities. I've set up a BaseObject class and an IBaseRepository interface to handle the most basic operations so I don't have to repeat myself each time:

        public abstract class BaseObject<T>
        {
            public XA.Model.Entities.XAEntities db;

            public BaseObject()
            {
                db = new Entities.XAEntities();
            }

            public BaseObject(Entities.XAEntities cont)
            {
                db = cont;
            }

            public void Delete(T entity)
            {
                db.DeleteObject(entity);
                db.SaveChanges();
            }

            public void Update(T entity)
            {
                db.AcceptAllChanges();
                db.SaveChanges();
            }
        }

        public interface IBaseRepository<T>
        {
            void Add(T entity);
            T GetById(int id);
            IQueryable<T> GetAll();
        }

    But then I find myself having to implement three basic methods in every repository (Add, GetById and GetAll):

        public class AgencyRepository : Framework.BaseObject<Agency>, Framework.IBaseRepository<Agency>
        {
            public void Add(Agency entity)
            {
                db.Companies.AddObject(entity);
                db.SaveChanges();
            }

            public Agency GetById(int id)
            {
                return db.Companies.OfType<Agency>().FirstOrDefault(x => x.Id == id);
            }

            public IQueryable<Agency> GetAll()
            {
                var agn = from a in db.Companies.OfType<Agency>()
                          select a;
                return agn;
            }
        }

    How can I get these into my BaseObject class so I don't run into conflict with DRY?
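
    One way this is often factored (a hedged sketch, not a drop-in answer: IEntityWithId is an assumed interface that Agency would need to implement, and the Companies/Agency inheritance mapping is handled by letting each repository name its own set) is to move the three members into the base and leave only the set selection to subclasses:

        public interface IEntityWithId { int Id { get; } }

        public abstract class BaseObject<T> : Framework.IBaseRepository<T>
            where T : class, IEntityWithId
        {
            public XA.Model.Entities.XAEntities db = new XA.Model.Entities.XAEntities();

            // Delete, Update and the context-accepting constructor stay as they are today.

            // Each subclass states where its entities live; everything else is shared.
            protected abstract IQueryable<T> Query { get; }
            protected abstract void AddToSet(T entity);

            public void Add(T entity)
            {
                AddToSet(entity);
                db.SaveChanges();
            }

            // Note: some EF versions will not translate a member declared on an interface;
            // if that happens, GetById can stay abstract and keep its Id check in the subclass.
            public T GetById(int id) { return Query.FirstOrDefault(x => x.Id == id); }

            public IQueryable<T> GetAll() { return Query; }
        }

        public class AgencyRepository : BaseObject<Agency>
        {
            protected override IQueryable<Agency> Query
            {
                get { return db.Companies.OfType<Agency>(); }
            }

            protected override void AddToSet(Agency entity) { db.Companies.AddObject(entity); }
        }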

    Read the article

  • Applying fine-grained security to an existing application

    - by Mark
    I've inherited a reasonably large and complex ASP.NET MVC3 web application using EF Code First on SQL Server. It uses ASP.NET Membership roles with database authentication. The controller actions are secured with attributes derived from AuthorizeAttribute that map roles to actions. There are extension methods for the finer points, such as showing a particular widget to particular roles. This works great, and I have a good understanding of the current security model.

    I've been asked to provide finer-grained security at the data level. For example, a 'Customer' user can only see data (throughout the database) associated with themselves and not other Customers. The problem is that 'Customer' is only one of five different types with their own specific restrictions (each of the nine roles is one of these five types). The best thing I can think of is to go through all the data repositories and extend each and every LINQ statement/query with a filter for every user type. Even if I had time for that, it doesn't seem like the most elegant way.

    Any suggestions? I really don't know where to start with this, so anything could be helpful. Many thanks.
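
    One direction that avoids hand-editing every LINQ query (all names below are assumptions, not taken from the inherited code base): give customer-scoped entities a small shared interface and funnel repository queries through a single filter, one branch per user type, so the row-level rule lives in one place:

        public interface ICustomerOwned
        {
            int CustomerId { get; }
        }

        public class CurrentUser   // hypothetical per-request identity info
        {
            public int? CustomerId { get; set; }
            public bool IsCustomer { get; set; }
        }

        public static class RowSecurity
        {
            // Repositories call this instead of returning raw sets; the other four
            // user types would get their own branches or their own filter methods.
            public static IQueryable<T> VisibleTo<T>(this IQueryable<T> source, CurrentUser user)
                where T : class, ICustomerOwned
            {
                if (user.IsCustomer && user.CustomerId.HasValue)
                    return source.Where(e => e.CustomerId == user.CustomerId.Value);
                return source;
            }
        }

    Be aware that some EF versions refuse to translate members declared on an interface; if that bites, the same check can be written per entity inside each repository, but still behind one named method rather than scattered across every query.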

    Read the article

  • Mercurial CLI is slow in C#?

    - by pATCheS
    I'm writing a utility in C# that will make managing multiple Mercurial repositories easier for the way my team is using it. However, it seems that there is always about a 300 to 400 millisecond delay before I get anything back from hg.exe. I'm using the code below to run hg.exe and hgtk.exe (TortoiseHg's GUI). The code currently includes a Stopwatch and some variables for timing purposes. The delay is roughly the same on multiple runs within the same session. I have also tried specifying the exact path of hg.exe, and got the same result.

        static string RunCommand(string executable, string path, string arguments)
        {
            var psi = new ProcessStartInfo()
            {
                FileName = executable,
                Arguments = arguments,
                WorkingDirectory = path,
                UseShellExecute = false,
                RedirectStandardError = true,
                RedirectStandardInput = true,
                RedirectStandardOutput = true,
                WindowStyle = ProcessWindowStyle.Maximized,
                CreateNoWindow = true
            };

            var sbOut = new StringBuilder();
            var sbErr = new StringBuilder();
            var sw = new Stopwatch();
            sw.Start();

            var process = Process.Start(psi);

            TimeSpan firstRead = TimeSpan.Zero;
            process.OutputDataReceived += (s, e) =>
            {
                if (firstRead == TimeSpan.Zero)
                {
                    firstRead = sw.Elapsed;
                }
                sbOut.Append(e.Data);
            };
            process.ErrorDataReceived += (s, e) => sbErr.Append(e.Data);

            process.BeginOutputReadLine();
            process.BeginErrorReadLine();

            var eventsStarted = sw.Elapsed;

            process.WaitForExit();
            var processExited = sw.Elapsed;
            sw.Reset();

            if (process.ExitCode != 0 || sbErr.Length > 0)
            {
                Error.Mercurial(process.ExitCode, sbOut.ToString(), sbErr.ToString());
            }
            return sbOut.ToString();
        }

    Any ideas on how I can speed things up? As it is, I'm going to have to do a lot of caching in addition to threading to keep the UI snappy.

    Read the article

  • Which is the better C# class design for dealing with read+write versus readonly

    - by DanM
    I'm contemplating two different class designs for handling a situation where some repositories are read-only while others are read-write. (I don't foresee any need for a write-only repository.)

    Class Design 1 -- provide all functionality in a base class, then expose applicable functionality publicly in subclasses:

        public abstract class RepositoryBase
        {
            protected virtual void SelectBase() { /* implementation... */ }
            protected virtual void InsertBase() { /* implementation... */ }
            protected virtual void UpdateBase() { /* implementation... */ }
            protected virtual void DeleteBase() { /* implementation... */ }
        }

        public class ReadOnlyRepository : RepositoryBase
        {
            public void Select() { SelectBase(); }
        }

        public class ReadWriteRepository : RepositoryBase
        {
            public void Select() { SelectBase(); }
            public void Insert() { InsertBase(); }
            public void Update() { UpdateBase(); }
            public void Delete() { DeleteBase(); }
        }

    Class Design 2 -- read-write class inherits from read-only class:

        public class ReadOnlyRepository
        {
            public void Select() { /* implementation... */ }
        }

        public class ReadWriteRepository : ReadOnlyRepository
        {
            public void Insert() { /* implementation... */ }
            public void Update() { /* implementation... */ }
            public void Delete() { /* implementation... */ }
        }

    Is one of these designs clearly stronger than the other? If so, which one, and why?

    P.S. If this sounds like a homework question, it's not, but feel free to use it as one if you want :)
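
    A third variant worth weighing (sketch only, not from the question): express the read/write split with interfaces and keep a single concrete class, so callers can depend on "read-only" even when the underlying repository happens to support writes:

        public interface IReadOnlyRepository
        {
            void Select();
        }

        public interface IReadWriteRepository : IReadOnlyRepository
        {
            void Insert();
            void Update();
            void Delete();
        }

        public class Repository : IReadWriteRepository
        {
            public void Select() { /* implementation... */ }
            public void Insert() { /* implementation... */ }
            public void Update() { /* implementation... */ }
            public void Delete() { /* implementation... */ }
        }

    A consumer handed an IReadOnlyRepository cannot write, regardless of which concrete class sits behind it, which keeps the read-only guarantee at the seam where it matters.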

    Read the article
