Search Results

Search found 4805 results on 193 pages for 'repository'.


  • Database and logic layer for ASP.NET MVC application

    - by Ismail
    I'm about to start a new project that will be small initially but may grow large over the years. I'm firmly decided on ASP.NET MVC with jQuery for the UI, and for several reasons I want to use MySQL as the database, but I'm worried about a few things. I have a good few years of experience working with SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easy to use once you are familiar with it. The first concern is that accessing data should be easy. I thought I should use LINQ to SQL with MySQL, but I've read that this isn't directly supported; instead, the MySQL .NET connector adds support for Entity Framework, and I don't know its pros and cons. I would also love to implement the repository pattern, since it allows applying filters in the logic layer rather than in the data access layer. Will that be possible if I use Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything and use LINQ to SQL on SQL Server. I'm also concerned about performance: someone told me that Entity Framework fetches a lot of data and then filters it. Is that right? So the questions basically are:

    1. Is LINQ with MySQL possible? If yes, where can I get more details on it?
    2. What are the pros and cons of using Entity Framework with MySQL?
    3. Will it be easy to access data using Entity Framework with MySQL?
    4. Will I be able to implement the repository pattern, applying filters in the logic layer rather than the data access layer, when using Entity Framework with MySQL?
    5. Does it fetch a huge amount of data from the database and then apply filters to it?

    If that sounds like too many questions, then instead just let me know, with your reasons, what you would do in this situation as someone experienced in this area; that would answer my question.
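
    A note on question 5: as long as the repository exposes IQueryable and the filter is applied before the query is enumerated, Entity Framework composes the filter into the SQL it sends, so the filtering happens in the database rather than in memory. A minimal sketch of that idea (the entity and context names here are made up for illustration):

        // Hypothetical repository over an EF ObjectContext. Because the property
        // stays IQueryable, a Where clause added later is translated to SQL.
        public class ItemRepository
        {
            private readonly MyModelContext _context = new MyModelContext(); // hypothetical context

            public IQueryable<Item> Items
            {
                get { return _context.Items; }
            }
        }

        // Logic layer: the filter composes onto the query; only matching rows come back.
        var repo = new ItemRepository();
        var open = repo.Items.Where(i => i.Status == "Open").ToList();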

  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment and keep a central, bare repository where I pull and push most of my stuff. This central repository runs on Linux, and I check out to Windows XP/7, Mac and Linux. In all repositories I put the following in my .git/config:

        [core]
            autocrlf = true

    I don't have the flag safecrlf=true anywhere. When I modify stuff on my one Windows machine (XP) there is no problem, and when I look at the diff it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, even though the local line endings are \r\n as expected (checked in a hex editor). The same applies on a Mac OS X machine. Sometimes I get the feeling that the different systems are wrestling over line endings, but I can't be sure (I'm losing track of all the times I change specific files). I didn't use to have autocrlf set; I set the flag many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or are there other things that need configuring too? I tried git checkout -- . about a million times, but with no success.
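
    For what it's worth: files committed before autocrlf was enabled may still be stored with CRLF endings inside the repository, which would explain whole-file diffs on some machines. One commonly suggested recipe to renormalize after changing the setting looks like this (a sketch; commit or stash local changes first):

        # drop everything from the index, then re-checkout so git re-applies
        # the current autocrlf setting to every file
        git rm --cached -r .
        git reset --hard

        # any files git now reports as modified had inconsistent endings;
        # committing them normalizes the repository copies
        git commit -am "Normalize line endings"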

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application's UI) in an external source control system. Of course this implies that my application can check the current version of an object for updates, provide a merging interface for each object, and so on. My question is which source control paradigm(s) and specific solution(s) to support, and why. The way I (perhaps naively) see the source control world, there are three general paradigms:

    1. Single repository, locked access (MS SourceSafe)
    2. Single repository, concurrent access (CVS/SVN)
    3. Distributed (Mercurial, Git)

    I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard that case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the usage paradigms are subtly different enough that I can't adequately capture basic operations in a single UI. The last bit of information I should convey is that this application is intended to be deployed in commercial settings, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in corporate settings is a plus.

  • How to manage sessions in NHibernate unit tests?

    - by Ben
    I am a little unsure as to how to manage sessions within my NUnit test fixtures. In the following test fixture, I am testing a repository. My repository constructor takes an ISession (since I will be using session-per-request in my web application). In the fixture setup I configure NHibernate and build the session factory. In the per-test setup I create a clean SQLite database for each test executed.

        [TestFixture]
        public class SimpleRepository_Fixture
        {
            private static ISessionFactory _sessionFactory;
            private static Configuration _configuration;

            [TestFixtureSetUp] // called once, before any tests in the fixture are executed
            public void TestFixtureSetUp()
            {
                _configuration = new Configuration();
                _configuration.Configure();
                _configuration.AddAssembly(typeof(SimpleObject).Assembly);
                _sessionFactory = _configuration.BuildSessionFactory();
            }

            [SetUp] // called before each test method
            public void SetupContext()
            {
                new SchemaExport(_configuration).Execute(true, true, false);
            }

            [Test]
            public void Can_add_new_simpleobject()
            {
                var simpleObject = new SimpleObject() { Name = "Object 1" };

                using (var session = _sessionFactory.OpenSession())
                {
                    var repo = new SimpleObjectRepository(session);
                    repo.Save(simpleObject);
                }

                using (var session = _sessionFactory.OpenSession())
                {
                    var repo = new SimpleObjectRepository(session);
                    var fromDb = repo.GetById(simpleObject.Id);
                    Assert.IsNotNull(fromDb);
                    Assert.AreNotSame(simpleObject, fromDb);
                    Assert.AreEqual(simpleObject.Name, fromDb.Name);
                }
            }
        }

    Is this a good approach, or should I be handling the sessions differently?

    Thanks, Ben
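
    One variation sometimes used with this layout is to wrap each session's work in an explicit transaction, which makes the flush point obvious and mirrors how the session will be used behind a request. A sketch against the same hypothetical repository as above:

        using (var session = _sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            var repo = new SimpleObjectRepository(session);
            repo.Save(simpleObject);
            tx.Commit(); // flushes the pending insert to the SQLite test database
        }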

  • MVC2 DataAnnotations on ViewModel - Don't understand using it with MVVM pattern

    - by ScottSEA
    I have an MVC2 application that uses the MVVM pattern. I am trying to use Data Annotations to validate form input. In my ThingsController I have two methods:

        [HttpGet]
        public ActionResult Index()
        {
            return View();
        }

        public ActionResult Details(ThingsViewModel tvm)
        {
            if (!ModelState.IsValid)
                return View(tvm);
            try
            {
                Query q = new Query(tvm.Query);
                ThingRepository repository = new ThingRepository(q);
                tvm.Things = repository.All();
                return View(tvm);
            }
            catch (Exception)
            {
                return View();
            }
        }

    My Details.aspx view is strongly typed to the ThingsViewModel:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
            Inherits="System.Web.Mvc.ViewPage<Config.Web.Models.ThingsViewModel>" %>

    The view model is a class consisting of an IList of returned Thing objects and the Query string (which is submitted on the form), with the Required data annotation:

        public class ThingsViewModel
        {
            public IList<Thing> Things { get; set; }

            [Required(ErrorMessage="You must enter a query")]
            public string Query { get; set; }
        }

    When I run this and click the submit button on the form without entering a value, I get a YSOD with the following error:

        The model item passed into the dictionary is of type 'Config.Web.Models.ThingsViewModel',
        but this dictionary requires a model item of type
        'System.Collections.Generic.IEnumerable`1[Config.Domain.Entities.Thing]'.

    How can I get Data Annotations to work with a view model? I cannot see what I'm missing or where I'm going wrong; the VM was working just fine before I started mucking around with validation.
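
    That error message usually means a view or partial typed to IEnumerable<Thing> is being handed the whole ThingsViewModel. One way that happens in MVC2, assuming Details.aspx renders the results through a partial: RenderPartial passes the page's own Model along whenever no model argument is given (or the argument is null, as Things is on the validation-failure path). If that's the case here, passing the list explicitly and guarding against null would look like this (the partial name "ThingList" is made up):

        <%-- pass Model.Things, never the implicit page model, to the list partial --%>
        <% Html.RenderPartial("ThingList", Model.Things ?? new List<Thing>()); %>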

  • Passing filtering functions to Where() in LINQ-to-SQL

    - by Daniel
    I'm trying to write a set of filtering functions that can be chained together to progressively filter a data set. What's tricky about this is that I want to define the filters in a different context from the one in which they'll be used. I've gotten as far as passing a very basic function to the Where() clause in a LINQ statement.

    Filters file:

        Func<item, bool> returnTrue = (i) => true;

    Repository file:

        public IQueryable<item> getItems()
        {
            return DataContext.Items.Where(returnTrue);
        }

    This works. However, as soon as I try to use more complicated logic, the trouble begins.

    Filters file:

        Func<item, bool> isAssignedToUser = (i) => i.assignedUserId == userId;

    Repository file:

        public IQueryable<item> getItemsAssignedToUser(int userId)
        {
            return DataContext.Items.Where(isAssignedToUser);
        }

    This won't even build, because userId isn't in scope where isAssignedToUser is defined. I've also tried declaring a function that takes the userId as a parameter:

        Func<item, int, bool> isAssignedToUser = (i, userId) => i.assignedUserId == userId;

    The problem with this is that it doesn't fit the signature Where() expects: Func<item, bool>. There must be a way to do this, but I'm at a loss for how. I don't feel like I'm explaining this very well, but hopefully you get the gist.

    Thanks, Daniel
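
    A common way to get this shape is to define the filter as a factory that takes the missing context and returns the predicate. For LINQ to SQL it also matters that the predicate is an Expression<Func<...>> rather than a plain Func, so that Where() composes into the generated SQL instead of pulling every row back and filtering in memory. A sketch along those lines:

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        // Filters file: a factory that closes over userId and returns an expression tree.
        public static class ItemFilters
        {
            public static Expression<Func<item, bool>> IsAssignedToUser(int userId)
            {
                return i => i.assignedUserId == userId;
            }
        }

        // Repository file: the returned expression slots straight into Queryable.Where.
        public IQueryable<item> getItemsAssignedToUser(int userId)
        {
            return DataContext.Items.Where(ItemFilters.IsAssignedToUser(userId));
        }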

  • Push TFS 2008 code to remote VSS over VPN?

    - by drovani
    We have a local Team Foundation Server 2008 where we keep our code under version control. However, we also have a paranoid client with their own Visual SourceSafe installation who wants us to keep a running copy of the code on their server as well. As such, I'm hoping there is a way to do a nightly push from our TFS repository to their VSS repository. I'm not concerned about reproducing each TFS changeset as a separate changeset in VSS; a once-nightly push that creates one new VSS version from the latest TFS changeset is enough. The first question is whether it is even possible for TFS to push an update to VSS. I've noticed that most replies to this question are something to the tune of "don't do it", but I can't find anything that specifically states it cannot be done. The second part would be automating the process by having the TFS server connect to the client's VPN and then push the code changes. I have full control over the TFS server, and I can customize the VSS install if there are settings that need changing, but I'm limited in what I can do about firewall settings or server-specific settings on the client's VSS server.
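
    Since neither tool talks to the other natively, the usual workaround is a scheduled script that gets latest from TFS and checks the tree into VSS using both command-line clients. A rough sketch only; the paths, share names and project names below are placeholders, and the ss.exe flags should be verified against the client's VSS version:

        rem Pull the latest sources from TFS into a local workspace
        tf get C:\sync\MyProject /recursive /noprompt

        rem Point the VSS client at the client's database over the VPN
        set SSDIR=\\clientserver\vssdb

        rem Check the tree in as one new version (-R recursive, -I- no prompts)
        ss Checkout $/MyProject -R -I-
        xcopy C:\sync\MyProject C:\vss\MyProject /E /Y
        ss Checkin $/MyProject -R -I- -C"Nightly sync from TFS"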

  • Recommendations to handle development and deployment of php web apps using shared project code

    - by Exception e
    I am wondering what the best way is (for a lone developer) to:

    1. develop a project that depends on code from other projects
    2. deploy the resulting project to the server

    I am planning to put my code in svn, with shared code as a separate project. There are problems with svn:externals which I cannot fully estimate. I've read "subversion externals considered to be an anti-pattern" and "How do you organize your version control repository", but there is one thing special about PHP projects (and other interpreted source code): there is no final executable resulting from your libraries, so external dependencies are always on raw source code. Ideally I really want to be able to develop simultaneously on one project and on the projects it depends on. One possible way: check out a project's dependency in a subfolder as a working copy of its trunk. Problems I foresee (the sketch after this list addresses the first one):

    - When you want to deploy a project, you might want to freeze its dependencies, right?
    - The dependency code should not end up duplicated in the project's repository, I think.
    - Update: I additionally assume svn:ignore will pose problems if I cannot fall back on symlinks. I am still looking for suggestions that do not require junction points; they are a sort of unsupported hack on Windows XP which may break some programs.

    This leads me to the last part of the question (as one has influence on the other): how do you deploy apps with such dependencies? I've looked into Buildout for Python, but it seems to be tightly tied to the Python ecosystem (resolving and fetching Python modules from the web, etc.). I am very eager to learn about your best practices.
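
    On the "freeze its dependencies" point: svn:externals definitions can be pinned to a specific revision, which gives trunk-tracking during development and a frozen snapshot for release tags. A sketch (repository URL and paths are placeholders):

        # development: the external follows the shared library's trunk
        svn propset svn:externals 'lib/shared http://svn.example.com/shared/trunk' .

        # release: pin the external to a known-good revision before tagging
        svn propset svn:externals 'lib/shared -r1234 http://svn.example.com/shared/trunk' .
        svn commit -m "Pin shared lib for release"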

  • DotNetNuke and Subversion guidelines

    - by David Stratton
    I've Googled, Binged, and searched the related questions here at Stack Overflow, but I'm not finding what I'm looking for. I've also searched the DNN documentation. What I'm looking for is any guidance (tutorials, blogs, step-by-step instructions for setting up a repository, etc.) from people who are experienced in using DotNetNuke with SVN.

    We use SVN for all our source control and have no problem with standard applications, because we pretty much built the repository and directory structure to work with our processes. This means that when we do web sites in Visual Studio, we use file-based web sites rather than setting them up in the local IIS; it just makes things easier for us. However, with DNN it appears that even if you get the source code, it expects to be set up in the local IIS, which means additional headaches for us. For example, we are moving all of our source code off our local C drives and onto a shared drive on a server, to enable backups in addition to our normal source control (a management decision). That means we need to change the virtual web app when we make the move.

    Has anyone come up with a good way to work around this? Can DNN be set up so that the developer web server in Visual Studio can be used, so that we can treat it just like any normal web app? Am I missing something obvious?

    Edit (added): I'm willing to accept answers like "We tried it and never got it to work" and "It can't be done". I'm always open to hearing "It can't be done the way you want; you need to change your procedures to match how it works" if necessary. If you've got experience trying this and just couldn't get it to work, I can learn from that as well, but some detail would be good.

  • svnsync loses revision properties although hook is installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via cronjob and svnsync (it needs to go from inside to outside of a firewall, so no post-commit hook is possible). We installed a pre-revprop-change hook just as told. Everything seems to work fine, except that it doesn't. E.g. when executing the sync manually:

        # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd>
        Committed revision 19817.
        Copied properties for revision 19817.

    No error, no complaints. But checking the revision properties says:

        # svnlook info <path-to-mirror>
        0

        # svn info -r HEAD file://<path-to-mirror> 2>&1
        Path: <root-of-mirror>
        URL: file://<path-to-mirror>
        Repository Root: file://<path-to-mirror>
        Repository UUID: <uid>
        Revision: 19817
        Node Kind: directory
        Last Changed Rev: 19817

    So somehow the author and timestamp information gets lost, but we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea even where to start looking. Everything is local (except for the remote master), so there are no server logs to look at. Any ideas how I could approach this problem, or even better, how to solve it? Any ideas appreciated.
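
    One way to narrow this down is to query the revision properties on the mirror directly: if svn:author and svn:date are missing there, the sync never stored them; if they are present, the problem is on the reading side. Something like the following, keeping the same placeholders as above:

        # inspect the revprops svnsync should have copied
        svnlook propget --revprop -r 19817 <path-to-mirror> svn:author
        svnlook propget --revprop -r 19817 <path-to-mirror> svn:date

        # list every revprop attached to that revision
        svn proplist --revprop -r 19817 file://<path-to-mirror>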

  • How to set up EclipseLink with JPA?

    - by deamon
    The EclipseLink documentation says that I need the following entries in my pom.xml to get it with Maven:

        <dependencies>
          <dependency>
            <groupId>org.eclipse.persistence</groupId>
            <artifactId>eclipselink</artifactId>
            <version>2.0.0</version>
            <scope>compile</scope>
            ...
          </dependency>
        </dependencies>
        ...
        <repositories>
          <repository>
            <id>EclipseLink Repo</id>
            <url>http://www.eclipse.org/downloads/download.php?r=1&amp;nf=1&amp;file=/rt/eclipselink/maven.repo</url>
          </repository>
          ...
        </repositories>

    But when I try to use the @Entity annotation, NetBeans tells me that the class cannot be found. And indeed: there is no Entity class in the javax.persistence package from EclipseLink. How do I have to set up EclipseLink with Maven?
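
    The eclipselink artifact itself doesn't ship the JPA API classes, so javax.persistence (the package that contains @Entity) has to come in as its own dependency. Assuming the EclipseLink repository above also hosts the API artifact, the addition would look roughly like this; verify the exact artifactId and version against the repository:

        <dependency>
          <groupId>org.eclipse.persistence</groupId>
          <artifactId>javax.persistence</artifactId>
          <version>2.0.0</version>
        </dependency>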

  • .Net Entity Framework & POCO ... querying full table problem

    - by Chris Klepeis
    I'm attempting to implement a repository pattern with my POCO objects auto-generated from my edmx. In my repository class I have:

        IObjectSet<E> _objectSet;
        private IObjectSet<E> objectSet
        {
            get
            {
                if (_objectSet == null)
                {
                    _objectSet = this._context.CreateObjectSet<E>();
                }
                return _objectSet;
            }
        }

        public IQueryable<E> GetQuery(Func<E, bool> where)
        {
            return objectSet.Where(where).AsQueryable<E>();
        }

        public IList<E> SelectAll(Func<E, bool> where)
        {
            return GetQuery(where).ToList();
        }

    where E is one of my POCO classes. When I trace the database and run this:

        IList<Contact> c = contactRepository.SelectAll(r => r.emailAddress == "[email protected]");

    it shows up in the SQL trace as a select for everything in my Contact table. Where am I going wrong here? Is there a better way to do this? Does an object set not lazy load, so that it omitted the where clause? This is the article I read which said to use object sets, since with POCO I do not have EntityObjects to pass in as "E": http://devtalk.dk/CommentView,guid,b5d9cad2-e155-423b-b66f-7ec287c5cb06.aspx
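
    The likely culprit is the parameter type: Func<E, bool> makes Where() bind to Enumerable.Where, which enumerates the whole object set and filters in memory, so the store query has no WHERE clause. Declaring the parameter as an expression tree keeps the filter inside the IQueryable pipeline. A sketch of the change:

        using System.Linq.Expressions;

        // Expression<Func<...>> binds to Queryable.Where, so the predicate is
        // translated into the store query instead of being run in memory.
        public IQueryable<E> GetQuery(Expression<Func<E, bool>> where)
        {
            return objectSet.Where(where);
        }

        public IList<E> SelectAll(Expression<Func<E, bool>> where)
        {
            return GetQuery(where).ToList();
        }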

  • git-svn: reset tracking for master

    - by digitala
    I'm using git-svn to work with an SVN repository. My working copies were created using

        git svn clone -s http://foo.bar/myproject

    so that each working copy follows the default SVN directory scheme (trunk, tags, branches). Recently I've been working on a branch created with git svn branch myremotebranch and checked out using git checkout --track -b mybranch myremotebranch. I needed to work from multiple locations, so from the branch I dcommit-ed files to the SVN repository quite regularly. After finishing my changes, I switched back to master, executed a merge, committed the merge, and tried to dcommit the successful merge to the remote trunk. It seems that after the merge, the remote tracking for master has switched to the branch I was working on:

        # git checkout master
        # git merge mybranch
        ... (successful)
        # git add .
        # git commit -m '...'
        # git svn dcommit
        Committing to http://foo.bar/myproject/branches/myremotebranch ...
        #

    Is there a way I can update master so that it follows remotes/trunk as it did before the merge? I'm using git 1.7.0.5, if that's any help.
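
    Background that may explain this: git svn dcommit picks its target URL from the most recent SVN-tracked commit it finds in the current branch's history, and after a true merge that can be a commit from the branch rather than from trunk. Two things worth trying, sketched below; the --squash variant keeps branch commits out of master's history entirely, which is the usual recommendation when mixing git merges with git-svn:

        # see where dcommit would send commits, without committing anything
        git svn dcommit --dry-run

        # alternative for next time: squash-merge so master's history stays on trunk
        git checkout master
        git merge --squash mybranch
        git commit -m "Merge mybranch"
        git svn dcommit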

  • mvn deploy to AWS (ssh via distributionManagement)

    - by Dexter
    I am working on deploying a WAR file to AWS using Maven. I am planning to use mvn deploy, which would push the WAR file to AWS over SSH. I am following http://maven.apache.org/plugins/maven-deploy-plugin/examples/deploy-ssh-external.html. This is my POM file:

        <project>
          ...
          <distributionManagement>
            <repository>
              <id>ssh-aws</id>
              <url>scpexe://<ec2 instance>.compute-1.amazonaws.com</url>
            </repository>
          </distributionManagement>
          <build>
            <extensions>
              <!-- Enabling deployment over external SSH -->
              <extension>
                <groupId>org.apache.maven.wagon</groupId>
                <artifactId>wagon-ssh-external</artifactId>
                <version>1.0-beta-6</version>
              </extension>
            </extensions>
          </build>
          ...
        </project>

    This is my settings.xml:

        <server>
          <id>ssh-aws</id>
          <username>aws-user</username>
        </server>

    The only issue is that I am unable to figure out the url in the distributionManagement node of pom.xml. I am able to ssh into the AWS server with the following:

        ssh -i ~/pemfile/pemfile-key.pem aws-user@<ec2 instance>.compute-1.amazonaws.com

    But when I run mvn clean deploy, I receive this:

        Exit code: 1 - Permission denied (publickey). -> [Help 1]

    Thanks in advance.
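
    "Permission denied (publickey)" suggests the wagon is not using the .pem key that works for the interactive ssh. Maven's settings.xml lets a server entry name a private key, which looks like the missing piece here. A sketch (note also that the url would normally include the target directory on the server, e.g. scpexe://host/path/to/repo):

        <server>
          <id>ssh-aws</id>
          <username>aws-user</username>
          <privateKey>${user.home}/pemfile/pemfile-key.pem</privateKey>
        </server>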

  • Replacement for Hamachi for SVN access

    - by Piers
    My company has been using Hamachi to access our SVN repository for a number of years. We are a small but widely distributed development team, with each programmer in a different country working from home. The server is hosted by a non-techie in our central office; Hamachi is useful here since it has a GUI and supports remote management. This system worked well for a while, but recently I moved to a country with poor internet speeds. Hamachi will no longer connect 99% of the time; instead I get a "Probing..." message that never resolves. It's certain to be a latency issue, as the same laptop connects without problems when I cross the border and connect through a different ISP with better speeds. So I really need to replace Hamachi with some other VPN or protocol that handles latency better. The techie managing the repository is not comfortable installing and configuring Apache or IIS, so it looks like HTTP is out. I tried to convince my boss to go with a web hosting company, but he doesn't trust a third party with our source. Are there any recommended options or experiences out there for accessing our SVN repos that would be as simple as Hamachi to set up, but more tolerant of network latency issues?

  • How to get a list of all Subversion commit author usernames?

    - by Quinn Taylor
    I'm looking for an efficient way to get the list of unique commit authors for an SVN repository as a whole, or for a given resource path. I haven't been able to find an SVN command specifically for this (and don't expect one), but I'm hoping there may be a better way than what I've tried so far in Terminal (on OS X):

        svn log --quiet | grep "^r" | awk '{print $3}'
        svn log --quiet --xml | grep author | sed -E "s:</?author>::g"

    Either of these gives me one author name per line, but they both require filtering out a fair amount of extra information. They also don't handle duplicates of the same author name, so for lots of commits by few authors there's tons of redundancy flowing over the wire. More often than not I just want to see the unique author usernames. (It might also be handy to infer the commit count for each author on occasion, but even then it would be better if aggregated data were sent instead.) I'm generally working with client-only access, so svnadmin commands are less useful, but if strictly necessary or much more efficient, I might be able to ask a special favor of the repository admin. The repositories I'm working with have tens of thousands of commits and many active users, and I don't want to inconvenience anyone.
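
    The wire redundancy is hard to avoid with client-only access (the server sends one log entry per revision either way), but the duplicates are easy to collapse locally. A sketch that splits the log header lines on the pipe separators and dedupes, with an optional per-author count:

        # unique author names
        svn log --quiet | grep '^r' | awk -F'|' '{gsub(/ /, "", $2); print $2}' | sort -u

        # or with a commit count per author, most active first
        svn log --quiet | grep '^r' | awk -F'|' '{gsub(/ /, "", $2); print $2}' | sort | uniq -c | sort -rn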

  • Sourcing a shell script, while running with sudo

    - by WishCow
    I would like to write a shell script that sets up a Mercurial repository and allows all users in the group "developers" to execute it. The script is owned by the user "hg" and works fine when run directly. The problem comes when I try to run it as another user via sudo: execution halts with a "permission denied" error when it tries to source another file.

    The script file in question, create_repo.sh:

        #!/bin/bash
        source colors.sh
        REPOROOT="/srv/repository/mercurial/"
        ... rest of the script ....

    Permissions of create_repo.sh and colors.sh:

        -rwxr--r-- 1 hg hg  551 2011-01-07 10:20 colors.sh
        -rwxr--r-- 1 hg hg 1137 2011-01-07 11:08 create_repo.sh

    Sudoers setup:

        %developer ALL = (hg) NOPASSWD: /home/hg/scripts/create_repo.sh

    What I'm trying to run:

        user@nebu:~$ id
        uid=1000(user) gid=1000(user) groups=4(adm),20(dialout),24(cdrom),46(plugdev),105(lpadmin),113(sambashare),116(admin),1000(user),1001(developer)
        user@nebu:~$ sudo -l
        Matching Defaults entries for user on this host:
            env_reset
        User user may run the following commands on this host:
            (ALL) ALL
            (hg) NOPASSWD: /home/hg/scripts/create_repo.sh
        user@nebu:~$ sudo -u hg /home/hg/scripts/create_repo.sh
        /home/hg/scripts/create_repo.sh: line 3: colors.sh: Permission denied

    So the script is executed, but halts when it tries to include the other script. I have also tried:

        user@nebu:~$ sudo -u hg /bin/bash /home/hg/scripts/create_repo.sh

    which gives the same result. What is the correct way to include another shell script, if the script may be run by a different user through sudo?
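
    Worth noting: source colors.sh resolves the name relative to the caller's environment (PATH and current directory), not relative to the script's own location, so under sudo the lookup happens from the invoking user's directory. Making the path independent of the working directory is one likely fix, sketched here:

        #!/bin/bash
        # resolve the directory this script lives in, so colors.sh is found
        # no matter where, or as which user, the script is invoked
        SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
        source "$SCRIPT_DIR/colors.sh"
        REPOROOT="/srv/repository/mercurial/"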

  • Unit testing an MVC action method with a Cache dependency?

    - by Steve
    I'm relatively new to testing and MVC and came across a sticking point today. I'm attempting to test an action method that has a dependency on HttpContext.Current.Cache, and wanted to know the best practice for achieving the "low coupling" that allows for easy testing. Here's what I've got so far:

        public class CacheHandler : ICacheHandler
        {
            public IList<Section3ListItem> StateList
            {
                get { return (List<Section3ListItem>)HttpContext.Current.Cache["StateList"]; }
                set { HttpContext.Current.Cache["StateList"] = value; }
            }
            ...

    I then access it like this (I'm using Castle for my IoC):

        public class ProfileController : ControllerBase
        {
            private readonly ISection3Repository _repository;
            private readonly ICacheHandler _cache;

            public ProfileController(ISection3Repository repository, ICacheHandler cacheHandler)
            {
                _repository = repository;
                _cache = cacheHandler;
            }

            [UserIdFilter]
            public ActionResult PersonalInfo(Guid userId)
            {
                if (_cache.StateList == null)
                    _cache.StateList = _repository.GetLookupValues((int)ELookupKey.States).ToList();
                ...

    Then in my unit tests I am able to mock up ICacheHandler. Would this be considered a best practice, and does anyone have any suggestions for other approaches? Thanks in advance. Cheers.
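
    For the test side, stubbing the two constructor dependencies is straightforward with a mocking library. A sketch using Moq (an assumption; any isolation framework would look similar, and this assumes GetLookupValues returns an IEnumerable<Section3ListItem>):

        var cache = new Mock<ICacheHandler>();
        cache.SetupProperty(c => c.StateList, null); // starts empty, records writes

        var repo = new Mock<ISection3Repository>();
        repo.Setup(r => r.GetLookupValues((int)ELookupKey.States))
            .Returns(new[] { new Section3ListItem() });

        var controller = new ProfileController(repo.Object, cache.Object);
        controller.PersonalInfo(Guid.NewGuid());

        // the action should have populated the cache through the abstraction
        Assert.IsNotNull(cache.Object.StateList);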

  • How to allow an unnamed user in the svn authz file?

    - by dtrosset
    I have a Subversion server running with Apache. It authenticates users against LDAP in the Apache configuration and uses SVN authorization to limit user access to certain repositories. This works perfectly.

    Apache:

        DAV svn
        SVNParentPath /srv/svn
        SVNListParentPath Off
        SVNPathAuthz Off
        AuthType Basic
        AuthName "Subversion Repository"
        AuthBasicProvider ldap
        AuthLDAPBindDN       # private stuff
        AuthLDAPBindPassword # private stuff
        AuthLDAPURL          # private stuff
        Require valid-user
        AuthzSVNAccessFile /etc/apache2/dav_svn.authz

    Subversion:

        [groups]
        soft = me, and, all, other, developpers

    Adding anonymous access from one machine: I now have a service I want to set up (Rietveld, for code reviews) that needs anonymous access to the repository. As this is a web service, accesses always come from the same server, so I added Apache configuration to allow all accesses from that machine. This did not work until I added a line in the authorization file granting read access to the user "-".

    Apache:

        <Limit GET PROPFIND OPTIONS REPORT>
            Order allow,deny
            Allow from # private IP address
            Satisfy Any
        </Limit>

    Subversion:

        [Software:/]
        @soft = rw
        - = r     # <-- the added line

    Before this, all accesses were authenticated and thus had a name; now some accesses are done without a user name. I found this "-" user name in the Apache log files. But does this line amount to * = r, which I absolutely do not want to enable, or does it only allow the anonymous unnamed user (who can only connect from the Rietveld server)?
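
    One way to settle the "does this equal * = r" worry empirically is to probe the repository anonymously from a host other than the Rietveld server; with the <Limit> block in place, such a request should be forced to authenticate regardless of what the authz file grants. A sketch (hostname is a placeholder):

        # run from a machine that is NOT the allowed IP; --non-interactive makes
        # svn fail instead of prompting if the server demands credentials
        svn ls --non-interactive --no-auth-cache http://your.svn.server/svn/Software/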

  • ASP.NET MVC 2 application errors on IIS7 but works fine on local machine

    - by aspCoolguy
    My ASP.NET MVC2 application was developed using VS 2010 and LINQ to SQL for the models. Here is the Call controller code:

        namespace CallTrackMVC.Controllers
        {
            public class CallController : Controller
            {
                private CallTrackRepository repository;

                public CallController() : this(new CallTrackRepository())
                {
                }

                public CallController(CallTrackRepository newRepository)
                {
                    repository = newRepository;
                }
            }
        }

    The error on IIS7 when browsing the Call Create page is:

        [NullReferenceException: Object reference not set to an instance of an object.]
            CallTrackMVC.Models.ExecOfficeDataContext..ctor() in C:\ClearCase\rartadi_view\STS_Dev_TEST\CallTrackMVC\Models\ExecOffice.designer.cs:71
            CallTrackMVC.Controllers.CallController..ctor() in C:\ClearCase\rartadi_view\STS_Dev_TEST\CallTrackMVC\Controllers\CallController.cs:16

        [TargetInvocationException: Exception has been thrown by the target of an invocation.]
            System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandleInternal& ctor, Boolean& bNeedSecurityCheck) +0
            System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean skipCheckThis, Boolean fillCache) +117
            System.RuntimeType.CreateInstanceDefaultCtor(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean skipCheckThis, Boolean fillCache) +247
            System.Activator.CreateInstance(Type type, Boolean nonPublic) +106
            System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +102

        [InvalidOperationException: An error occurred when trying to create a controller of type 'CallTrackMVC.Controllers.CallController'. Make sure that the controller has a parameterless public constructor.]
            System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +541
            System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) +85
            System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory) +165
            System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) +80
            System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +389
            System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +371

    Code in Global.asax:

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
        }

    Any suggestion would be a great help.
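
    The inner NullReferenceException in the ExecOfficeDataContext constructor is the real failure; the "parameterless public constructor" message is just MVC reporting that constructing the controller threw. A LINQ to SQL designer context's default constructor reads its connection string from configuration, so a plausible cause is that the web.config deployed to IIS7 lacks the entry the designer generated. A sketch of what would need to be present (the name attribute here is hypothetical; copy the exact one from the designer file or your local config):

        <connectionStrings>
          <add name="ExecOfficeConnectionString"
               connectionString="Data Source=SERVER;Initial Catalog=ExecOffice;Integrated Security=True"
               providerName="System.Data.SqlClient" />
        </connectionStrings>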

  • DDD and MVC: should models hold the ID of a related entity or the entity itself?

    - by Dr. Zim
    If you have an Order that references a Customer, does the model include the ID of the customer, or a copy of the customer object, like a value object (thinking DDD)? I would like to do this:

        public class Order
        {
            public int ID { get; set; }
            public Customer customer { get; set; }
            ...
        }

    Right now I do this:

        public class Order
        {
            public int ID { get; set; }
            public int customerID { get; set; }
            ...
        }

    It would be more convenient to include the complete customer object, rather than an ID, in the view model passed to the form. Otherwise I need to figure out how to get the customer information to the view when the order references it only by ID. The first option also implies that the repository understands how to deal with the customer object it finds within the order object when save is called. If we select the second option, we will need to know where in the view model to put the customer. It is certain that users will select an existing customer; however, it is also certain they may want to change that customer's information in-place on the display form. One could argue for having the controller extract the customer object, submit customer changes separately to the repository, then submit changes to the order, keeping only the customerID in the order.
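
    A middle ground often used in this spot is to keep the foreign key on the entity and let the repository hydrate the related object when the aggregate is loaded, so views get a full Customer without the Order owning its persistence. A rough sketch; the repository collaborators and data-access calls here are hypothetical:

        public class Order
        {
            public int ID { get; set; }
            public int CustomerID { get; set; }
            public Customer Customer { get; set; } // hydrated on load, not saved through Order
        }

        // In the order repository: load the row, then attach the referenced aggregate.
        public Order GetById(int id)
        {
            Order order = LoadOrderRow(id);                        // hypothetical data access
            order.Customer = customerRepository.GetById(order.CustomerID);
            return order;
        }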

  • Capistrano + Git + DreamHost

    - by Michael Sync
    Hello, I'm trying to deploy my Rails application using Passenger and Capistrano on DreamHost. I'm using Git as version control, and we bought an account on GitHub. I have installed all required gems, Passenger and Capistrano on my local machine, and I have cloned my project's repository from GitHub to my local machine as well. According to DreamHost support, they have Passenger, Ruby, Rails etc. on their server as well. I'm currently following http://github.com/guides/deploying-with-capistrano for my deployment. The following is my deploy.rb:

        default_run_options[:pty] = true
        ssh_options[:forward_agent] = true

        # be sure to change these
        set :user, 'gituser'
        set :domain, 'github.com'
        set :application, 'MyProjectOnGit'

        # the rest should be good
        set :repository, "[email protected]:MyProjectOnGit.git"
        set :deploy_to, "/ruby.michaelsync.net/"
        set :deploy_via, :remote_cache
        set :scm, 'git'
        set :branch, 'master'
        set :git_shallow_clone, 1
        set :scm_verbose, true
        set :use_sudo, false
        set :git_enable_submodules, 1

        server domain, :app, :web
        role :db, domain, :primary => true

        set :ssh_options, { :forward_agent => true }

        namespace :deploy do
          task :restart do
            run "touch #{current_path}/tmp/restart.txt"
          end
        end

    When I run "cap deploy", I get the error below:

        [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: github.com (Net::SSH::AuthenticationFailed: gituser)
        connection failed for: github.com (Net::SSH::AuthenticationFailed: gituser)

    Thanks in advance.
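
    One thing that stands out in the recipe: :domain is set to github.com and then used in the server/role lines, so Capistrano is trying to SSH into github.com as 'gituser' to deploy, and GitHub refuses the login. The deploy target should be the DreamHost machine, with GitHub appearing only in :repository. A sketch of the likely shape (host and user names are placeholders):

        # deploy target = your DreamHost box, not github.com
        set :user, 'dreamhost-shell-user'
        set :domain, 'yourapp.yourdomain.com'

        # :repository stays pointed at GitHub, unchanged

        server domain, :app, :web
        role :db, domain, :primary => true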

  • What is a reasonable OSGi development workflow?

    - by levand
    I'm using OSGi for my latest project at work, and it's pretty beautiful as far as modularity and functionality go. But I'm not happy with the development workflow. Eventually I plan to have 30-50 separate bundles arranged in a dependency graph; supposedly, this is what OSGi is designed for, but I can't figure out a clean way to manage dependencies at compile time. Example: you have bundles A and B, where B depends on packages defined in A. Each bundle is developed as a separate Java project. In order to compile B, A has to be on the javac classpath. Do you:

    1. Reference the file system location of project A in B's build script?
    2. Build A and throw the jar into B's lib directory?
    3. Rely on Eclipse's "referenced projects" feature and always use Eclipse's classpath to build (ugh)?
    4. Use a common "lib" directory for all projects and dump the bundle jars there after compilation?
    5. Set up a bundle repository, parse the manifest from the build script, and pull down the required bundles from the repository?

    No. 5 sounds the cleanest, but also like a lot of overhead.
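
    For what it's worth, option 5 is roughly what the Maven-based OSGi toolchain automates: each bundle is a Maven module, inter-bundle dependencies are ordinary Maven dependencies resolved from the local/remote repository, and a plugin generates the manifest rather than the manifest driving the build. A sketch of a bundle's pom using the Apache Felix bundle plugin (coordinates and versions here are illustrative):

        <project>
          <modelVersion>4.0.0</modelVersion>
          <groupId>com.example</groupId>
          <artifactId>bundle-b</artifactId>
          <version>1.0.0</version>
          <packaging>bundle</packaging>

          <dependencies>
            <!-- bundle A, pulled from the repository at compile time -->
            <dependency>
              <groupId>com.example</groupId>
              <artifactId>bundle-a</artifactId>
              <version>1.0.0</version>
            </dependency>
          </dependencies>

          <build>
            <plugins>
              <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <extensions>true</extensions>
              </plugin>
            </plugins>
          </build>
        </project>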

  • What are some commonly used source code check-in policies?

    - by rwmnau
    I'm curious what code review policies other development shops apply to their source code when it's checked into the source control repository. I'm setting up a TFS (Team Foundation) server, and I'd like to apply some check-in policies to start stamping out bad practices. For example, I was thinking of starting with the following, so this is the kind of stuff I'm looking for:

    - Prohibit empty "Catch" blocks. This would prevent applications from swallowing exceptions without at least requiring a comment explaining why nothing needs to be done with the exception.
    - Prohibit generic "Catch ex as Exception" handling. Instead, require code to catch specific types of exceptions and deal with them appropriately, instead of building catch-all handling.
    - Require a check-in comment. This one should be self-explanatory, though it seems that TFS (and most other source control systems) don't require a comment by default.

    While these are just examples, they're where I'm thinking of starting, and while I'd like some additional examples of what's popular, I'm open to feedback on these. Also, though we're a mostly .NET shop, I imagine the popular policies are universal across languages and IDEs (we have some Java development, and a few people who will use the repository develop with Eclipse).
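
    For concreteness, the first two policies would flag patterns like these (C# equivalents of the VB phrasing above; the method names are made up for illustration):

        try { ProcessOrder(); }
        catch (Exception) { }              // empty catch: the failure vanishes silently

        try { ProcessOrder(); }
        catch (Exception ex) { Log(ex); }  // catch-all: every bug becomes a logged warning

        // preferred shape: handle only what this code can meaningfully recover from
        try { ProcessOrder(); }
        catch (IOException ex)
        {
            RetryLater(ex);                // hypothetical recovery path
        }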

  • How can I rewrite the history of a published git branch in multiple steps?

    - by Frerich Raabe
    I've got a git repository with two branches, master and amazing_new_feature. The latter contains the work on, well, an amazing new feature. A colleague and I are both working on the same repository, and the two of us commit to both branches. Now the work on the amazing new feature is finished, and a bit more than 100 commits have accumulated in the amazing_new_feature branch. I'd like to clean those commits up a bit (using git rebase -i) before merging the work into master. The issue we're facing is that it's quite a pain to rewrite/reorder all 100 commits in one go. Instead, what I'd like to do is:

    1. Rewrite/merge/reorder the first few commits in the amazing_new_feature branch and put the result into a dedicated branch containing the cleaned-up history (say, an amazing_new_feature_ready_for_merge branch).
    2. Rebase the remaining amazing_new_feature branch onto the amazing_new_feature_ready_for_merge branch.
    3. Repeat from 1.

    The idea is that at some point all the work from amazing_new_feature will be in amazing_new_feature_ready_for_merge, and then I can merge the latter into master. Is this a sensible approach, or are there better/easier/more fool-proof solutions to this problem? I'm especially scared about the second step of the above algorithm, since it means rebasing a published branch. IIRC that's a dangerous thing to do.
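
    Mechanically, one iteration of the plan sketched above might look like this with git rebase --onto (the commit names in angle brackets are placeholders):

        # one-time: start the clean branch where the feature forked off master
        git checkout -b amazing_new_feature_ready_for_merge $(git merge-base master amazing_new_feature)

        # each iteration: bring over the next chunk of commits and clean them up
        git cherry-pick <chunk-start>^..<chunk-end>
        git rebase -i HEAD~<n>     # squash/reword/reorder just that chunk

        # replay what's left of the feature branch onto the cleaned branch
        git rebase --onto amazing_new_feature_ready_for_merge <chunk-end> amazing_new_feature

    The published-branch worry is real: anyone else with amazing_new_feature checked out would have to hard-reset onto the rewritten branch rather than pull, so coordinating with the colleague (or doing the cleanup on a private copy of the branch) is the safer route.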
