Search Results

Search found 699 results on 28 pages for '2 tier'.


  • Rhino Mocks, Dependency Injection, and Separation of Concerns

    - by whatispunk
    I am new to mocking and dependency injection and need some guidance. My application is using a typical N-Tier architecture where the BLL references the DAL, and the UI references the BLL but not the DAL. Pretty straightforward. Let's say, for example, I have the following classes: class MyDataAccess : IMyDataAccess {} class MyBusinessLogic {} Each exists in a separate assembly. I want to mock MyDataAccess in the tests for MyBusinessLogic. So I added a constructor to the MyBusinessLogic class to take an IMyDataAccess parameter for the dependency injection. But now when I try to create an instance of MyBusinessLogic on the UI layer it requires a reference to the DAL. I thought I could define a default constructor on MyBusinessLogic to set a default IMyDataAccess implementation, but not only does this seem like a code smell, it didn't actually solve the problem. I'd still have a public constructor with IMyDataAccess in the signature. So the UI layer still requires a reference to the DAL in order to compile. One possible solution I am toying with is to create an internal constructor for MyBusinessLogic with the IMyDataAccess parameter. Then I can use an Accessor from the test project to call the constructor. But there's still that smell. What is the common solution here? I must just be doing something wrong. How could I improve the architecture?
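    A minimal sketch of one common fix (the type names are the ones from the question, the wiring is an assumption): move IMyDataAccess into a small contracts assembly that both the BLL and the DAL reference, keep the single injection constructor, and let a composition root (a spot in the UI startup, or an IoC container registration) supply the concrete DAL type. Only that one spot references the DAL; everything else in the UI talks to the BLL.

      // Contracts assembly - referenced by BLL and DAL, knows nothing about either
      public interface IMyDataAccess
      {
          string LoadSomething(int id);
      }

      // DAL assembly
      public class MyDataAccess : IMyDataAccess
      {
          public string LoadSomething(int id) { /* query the database */ return ""; }
      }

      // BLL assembly - depends only on the contract, easy to mock with Rhino Mocks
      public class MyBusinessLogic
      {
          private readonly IMyDataAccess _dataAccess;
          public MyBusinessLogic(IMyDataAccess dataAccess) { _dataAccess = dataAccess; }
          public string DoWork(int id) { return _dataAccess.LoadSomething(id).Trim(); }
      }

      // Composition root (UI startup or IoC registration) - the only place that sees the DAL
      public static class CompositionRoot
      {
          public static MyBusinessLogic CreateBusinessLogic()
          {
              return new MyBusinessLogic(new MyDataAccess());
          }
      }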

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short - is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently but I can't figure out when they should be used, what the best practices are, whether they are good enough or not. For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, according to these numbers apply some complex business rules (involving regex matching, pattern matching against data from many tables in the database and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate to implement the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI layer. The Data Layer would access the database. The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer. The GUI Layer would be a means of communication between the Logic Layer and the User. Now, thinking of this design, I can see that most of the business rules may be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution: Pros: The business logic runs close to the data, meaning less network traffic. Process all data at once, possibly utilizing parallelism and an optimal execution plan. Cons: Scattering of the business logic: some part is here, some part is there. Questionable design solution, may encounter unknown problems. Difficult to implement a progress indicator for the processing task. I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
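    For reference, a minimal sketch of what such a SQL CLR procedure could look like (the staging table, column names and the sample rule are hypothetical); the business rules stay in ordinary C# compiled into the assembly deployed to SQL Server, and the procedure reads and updates rows over the in-process context connection:

      using System.Collections.Generic;
      using System.Data.SqlClient;
      using System.Text.RegularExpressions;
      using Microsoft.SqlServer.Server;

      public class FileImportProcedures
      {
          [SqlProcedure]
          public static void ProcessImportedLines()
          {
              // "context connection=true" runs in-process on the calling session - no network hop
              using (var conn = new SqlConnection("context connection=true"))
              {
                  conn.Open();

                  // 1. Read the staged fixed-length lines (hypothetical staging table)
                  var lines = new List<KeyValuePair<int, string>>();
                  using (var reader = new SqlCommand(
                      "SELECT Id, RawLine FROM dbo.ImportStaging", conn).ExecuteReader())
                  {
                      while (reader.Read())
                          lines.Add(new KeyValuePair<int, string>(reader.GetInt32(0), reader.GetString(1)));
                  }

                  // 2. Apply a business rule in plain C# (regex, pattern matching, etc.)
                  foreach (var line in lines)
                  {
                      var amountText = Regex.Match(line.Value, @"\d{10}").Value;  // sample rule only
                      if (amountText.Length == 0) continue;

                      var update = new SqlCommand(
                          "UPDATE dbo.ImportStaging SET Amount = @amount WHERE Id = @id", conn);
                      update.Parameters.AddWithValue("@amount", decimal.Parse(amountText));
                      update.Parameters.AddWithValue("@id", line.Key);
                      update.ExecuteNonQuery();
                  }
              }
          }
      }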

    Read the article

  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future and we don't want to be re-writing business logic when this happens. One data source will be our membership database which was originally implemented using the default ASP.NET Membership Provider. The membership provider is tied to the System.Web.Security namespace but we have a design guideline requiring that our domain model layer is not dependent upon System.Web (or any other implementation/environment dependency) as it will be consumed in different environments - nor do we want our websites directly communicating with databases. I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as-yet undecided) ORM/data access tool. Are there any glaring holes in this approach? I haven't worked closely with the MembershipProvider class so I may well be missing something. Alternatively, is there an approach that you think will better serve the requirements I described above? Thanks in advance for your thoughts and advice. Regards, Zac
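    One way to sketch that split (all names hypothetical, not the poster's actual design): the domain layer owns a repository interface and a membership service with no System.Web reference, and the thin "DomainMembershipProvider" adapter lives with the web layer and simply delegates to the domain service.

      // Domain layer - no System.Web reference anywhere
      public class User
      {
          public int Id { get; set; }
          public string UserName { get; set; }
          public string PasswordHash { get; set; }
          public bool IsLockedOut { get; set; }
      }

      public interface IUserRepository
      {
          User GetByUserName(string userName);
          void Save(User user);
      }

      public class MembershipService
      {
          private readonly IUserRepository _users;
          public MembershipService(IUserRepository users) { _users = users; }

          public bool ValidateCredentials(string userName, string passwordHash)
          {
              var user = _users.GetByUserName(userName);
              return user != null && !user.IsLockedOut && user.PasswordHash == passwordHash;
          }
      }

      // Web layer - a custom MembershipProvider would override ValidateUser (and the
      // other members it needs) by calling MembershipService, keeping the provider
      // out of the domain model entirely.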

    Read the article

  • Using VCL for the web (IntraWeb) as a trick for adding a web interface to a legacy non-tiered (2-tier) application

    - by user193655
    My team is maintaining a huge client/server Win32 Delphi application. It is a client/server application (thick client) that uses DevArt (SDAC) components to connect to SQL Server. The business logic is often "trapped" in component event handlers; anyway, with some degree of refactoring it is doable to move the business logic into common units (a big part of this work has already been done during refactoring... maintaining legacy applications someone else wrote is very frustrating, but it is a very common job). Now there is a request for a web interface. I have several options of course; in this question I want to focus on the VCL for the web (IntraWeb) option. The idea is to use the common code (the same pas files) for both the client/server application and the web application. I have heard of many people that moved legacy apps from Delphi to IntraWeb, but here I am trying to keep the thick client too. The idea is to use common code, maybe with some compiler directives to write specific code: {$IFDEF CLIENTSERVER} {here goes the thick client specific code} {$ELSE} {here goes the IntraWeb specific code} {$ENDIF} Then another problem is the "migration plan". Let's say I have 300 features and on the first release I will have only 50 of them available in the web application. How do I keep track of it? I was thinking of (ab)using Delphi interfaces to handle this. For example for user authentication I could move all the related code into a procedure and declare an interface like: type IUserAuthentication = interface ['{0D57624C-CDDE-458B-A36C-436AE465B477}'] procedure UserAuthentication; end; In this way, as I implement the IUserAuthentication interface in both applications (thick client and IntraWeb), I know that that feature has been "ported" to the web. Anyway, I don't know if this approach makes sense. I made a prototype to simulate the whole process. It works for a "Hello world" application, but I wonder if it makes sense for a large application or whether this interface idea is only counter-productive and could backfire. My question is: does this approach make sense? (The interface idea is just an extra idea; it is not as important as the common code part described above.) Is it a viable option? I understand it depends a lot on the kind of application; anyway, to be generic, mine is in the CRM/Accounting domain, and the number of concurrent users on a single installation is typically less than 20 with peaks of 50. EXTRA COMMENT (UPDATE): I ask this question because, since I don't have an n-tier application, I see IntraWeb as the only option for having a web application that shares common code with the thick client. Developing web services from the Delphi code makes no sense in my specific case, so the alternative I have is to write the web interface using ASP.NET (duplicating the business logic), but in that case I cannot take advantage of the common code in an easy way. Yes, I could maybe use DLLs, but my code is not suitable for that.

    Read the article

  • Webcast - September 20th at 9am PT/12pm ET - Nucleus Research Report: The Evolving Business Case for Tier 1 ERP in Midsize Companies

    - by LanaProut
    Join us on September 20th at 9am PT/12pm ET for a webcast featuring Rebecca Wettemann, Vice President of Research at Nucleus Research, and Jim Lein, Senior Director at Oracle. Together, they'll explore the recently published note, "The Evolving Business Case for Tier 1 ERP in Midsize Companies." Register today!

    Read the article

  • MVC + 3 tier; where do ViewModels come into play?

    - by mikhairu
    I'm designing a 3-tiered application using ASP.NET MVC 4. I used the following resources as a reference: CodeProject: MVC + N-tier + Entity Framework, and Separating data access in ASP.NET MVC. I have the following design so far. Presentation Layer (PL) (main MVC project, where the M of MVC was moved to the Data Access Layer): MyProjectName.Main Views/ Controllers/ ... Business Logic Layer (BLL): MyProjectName.BLL ViewModels/ ProjectServices/ ... Data Access Layer (DAL): MyProjectName.DAL Models/ Repositories.EF/ Repositories.Dapper/ ... Now, PL references BLL and BLL references DAL. This way a lower layer does not depend on the one above it. In this design PL invokes a service of the BLL. PL can pass a View Model to BLL and BLL can pass a View Model back to PL. Also, BLL invokes the DAL layer and the DAL layer can return a Model back to BLL. BLL can in turn build a View Model and return it to PL. Up to now this pattern was working for me. However, I've run into a problem where some of my ViewModels require joins on several entities. In the plain MVC approach, in the controller I used a LINQ query to do joins and then select new MyViewModel(){ ... }. But now, in the DAL I do not have access to where ViewModels are defined (in the BLL). This means I cannot do joins in the DAL and return them to the BLL. It seems I have to do separate queries in the DAL (instead of joins in one query) and the BLL would then use the results of these to build a ViewModel. This is very inconvenient, but I don't think I should be exposing the DAL to ViewModels. Any ideas how I can solve this dilemma? Thanks.
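    One common compromise (a sketch, all names hypothetical): keep ViewModels in the BLL, but let the DAL expose query-specific read models - plain DTOs that belong to the data contract, not to any view - so the join can still run as a single query; the BLL then maps the read model to the ViewModel.

      using System.Collections.Generic;
      using System.Linq;

      // DAL - a flat projection the join can target in one round trip
      public class OrderSummaryReadModel
      {
          public int OrderId { get; set; }
          public string CustomerName { get; set; }
          public decimal Total { get; set; }
      }

      public interface IOrderRepository
      {
          // implemented with something like:
          //   from o in ctx.Orders
          //   join c in ctx.Customers on o.CustomerId equals c.Id
          //   select new OrderSummaryReadModel { OrderId = o.Id, CustomerName = c.Name, Total = o.Total }
          IList<OrderSummaryReadModel> GetOrderSummaries();
      }

      // BLL - the ViewModel the view actually binds to
      public class OrderSummaryViewModel
      {
          public int OrderId { get; set; }
          public string DisplayName { get; set; }
      }

      public class OrderService
      {
          private readonly IOrderRepository _orders;
          public OrderService(IOrderRepository orders) { _orders = orders; }

          public IList<OrderSummaryViewModel> GetOrderSummaries()
          {
              // map the DAL read model to the presentation-facing ViewModel
              return _orders.GetOrderSummaries()
                  .Select(r => new OrderSummaryViewModel
                  {
                      OrderId = r.OrderId,
                      DisplayName = r.CustomerName + " - " + r.Total.ToString("C")
                  })
                  .ToList();
          }
      }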

    Read the article

  • Help with Design for Vacation Tracking System (C#/.NET/Access/WebServices/SOA/Excel) [closed]

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time-off (vacation, sick days, etc.) At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now: I started with a basic MS Access schema like this: Employees (EmpID int, EmpName varchar(50), AllowedDays int) Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime) But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema: Vacations (VacationID int, PropName varchar(50), PropValue varchar(50)) And the table will be populated with data like this: VacationID | PropName | PropValue -----------+--------------+------------------ 1 | EmpID | 4 1 | EmpName | James Jones 1 | Reason | Vacation 1 | BeginDate | 2/24/2010 1 | EndDate | 2/30/2010 1 | Destination | Spectate Swamp 2 | ... | ... I think this is a pretty good, extensible design, we can easily add new properties to the vacation like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties, I thought of putting them in a separate PropNames table but it gets complicated to manage all the different data types and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead, here is the schema: <VacationProperties> <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames> <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes> <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired> </VacationProperties> I might need more fields than that, I'm not completely sure. 
I'm parsing the XML like this (would like some feedback on the parsing code): string xml = File.ReadAllText("properties.xml"); Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>"); string[] pn = m.Value.Split(','); // do the same for PropertyTypes, PropertiesRequired Then I use the following code to persist configuration changes to the database: string sql = "DROP TABLE VacationProperties"; sql = sql + " CREATE TABLE VacationProperties "; sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) "; sql = sql + "IsRequired varchar(100))"; for (int i = 0; i < pn.Length; i++) { sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")"; } // GlobalConnection is a singleton new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader(); So far so good, but after a few days of this I then realized that a lot of this was just a more specific kind of a generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it: In order to support routing these configurations through the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services: public class VacationConfigurationService : WebService { [WebMethod] public void UpdateConfiguration(string xml) { // Above code goes here } } Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections, they're all analogous to the services in the SOA diagram above). The requirements for this system are: (Note - I don't control these, they were given to me by management) Main workflow must interface with Excel spreadsheet, probably through VBA macros (to ease the transition to the new system) Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company Voice Mail system but that is not a "hard" requirement. Performance requirements: Must handle 250,000 Transactions Per Second Should be able to handle up to 20,000 employees (right now we have 3) 99.99% uptime ("four nines") expected Must be secure against outside hacking, but users cannot be required to enter a username/password. Platforms: Must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc. Time to complete: 6 to 8 weeks. My questions are: Is this a good design for the system so far? Am I using all of the recommended best practices for these technologies? How do I integrate the Visio diagram above with the Windows Workflow Foundation to call the ConfigurationService and persist workflow changes? Am I missing any important components?
Will this be extensible enough to support any scenario via end-user configuration? Will the system scale to the above performance requirements? Will we need any expensive hardware to run it? Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example, would it be difficult to convert this to an iPhone app? How long would you expect this to take? (We've dedicated 1 week for testing so I'm thinking maybe 5 weeks?) Many thanks for your advice, Aaron

    Read the article

  • Arguments for using WCF/OData as an access layer instead of EF/L2S/nHibernate directly

    - by Carl Hörberg
    We develop mostly low-traffic but highly specialized web applications. Normally we use L2S, EF or nHibernate as the access layer, then throw ASP.NET MVC at it, and for normal CRUD operations we query the ISession/DataContext directly, but for more advanced functions/side effects we put it in some kind of service layer. Now, I was thinking about publishing the data through OData (WCF Data Services) and querying that from the controllers (or even from jQuery when a good template engine shows up), and publishing the service operations through a WCF service (or as custom methods on the WCF Data Service?). What advantages/disadvantages does this architecture pose? Do I gain something except higher complexity and latency? Better separation of concerns (or is it just an illusion)?
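    For context, exposing an EF model through WCF Data Services is only a few lines - a sketch assuming a hypothetical EF container named MyEntities with a Products set; the trade-off described above is that the controllers (or JavaScript) then compose queries against this endpoint instead of an ISession/DataContext:

      using System.Data.Services;
      using System.Data.Services.Common;
      using System.Linq;
      using System.ServiceModel.Web;

      // ProductsData.svc - publishes the EF model as an OData feed
      public class ProductsData : DataService<MyEntities>
      {
          public static void InitializeService(DataServiceConfiguration config)
          {
              // read-only access to every entity set; tighten per set in real use
              config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
              config.SetServiceOperationAccessRule("DiscontinuedProducts", ServiceOperationRights.AllRead);
              config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
          }

          // a "service operation" for behaviour that doesn't map to plain CRUD
          [WebGet]
          public IQueryable<Product> DiscontinuedProducts()
          {
              return CurrentDataSource.Products.Where(p => p.Discontinued);
          }
      }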

    Read the article

  • Advice? SSO in N-tiered SOA with mixture of REST and SOAP services

    - by Tyler
    Hi gang, We are moving to SSO in our N-tiered SOA applications. If all the services were SOAP, I'd be OK with just the WS-Security, WS-Trust, WS-Federation set of protocols. My problem is that many of the services are RESTful (ironic) and those protocols do not address REST services. What is your advice for SSO-protecting the REST services in an N-tiered SOA architecture with the following requirements: ideally claims-based identity information available to the REST services; original user (e.g. bootstrap) information must flow through the tiers so that each service can "ActAs" or "OnBehalfOf" the user; support sequences like: WebApp -- REST Svc -- SOAP Svc, WebApp -- REST Svc1 -- REST Svc2, WebApp -- SOAP Svc -- REST Svc, WebApp -- SOAP Svc1 -- SOAP Svc2; support SSO (and SSOff); service/web app platforms: ASP.NET and WCF, Java; end-user client platforms: .NET (WSE 3.0 and WCF), Flash 10, Java, JavaScript and AJAX. Normally I'm good at climbing / bashing my way through walls, but this one's knocked me flat. Hopefully with your help, we can get over this one. Thanks, Tyler

    Read the article

  • Entity Framework POCO template in an n-tier design question

    - by bryan
    Hi all, I was trying to follow the POCO Template walkthrough, and now I am having problems using it in an n-tier design. By following the article, I put my edmx model and the template-generated context.tt in my DAL project, and moved the generated model.tt entity classes to my Business Logic Layer (BLL) project. By doing this, I could use those entities inside my BLL without referencing the DAL; I guess that is the idea of PI (persistence ignorance) - without knowing anything about the data source. Now, I want to extend the entities (inside the model.tt) to perform some CUD actions in the BLL project, so I added a new partial class with the same name as the one generated from the template: public partial class Company { public static IEnumerable AllCompanies() { using(var context = new Entities()){ var q = from p in context.Companies select p; return q.ToList(); } } } However, Visual Studio won't let me do that, and I think it is because the context.tt is in the DAL project, and the BLL project cannot add a reference to the DAL project as the DAL already has a reference to the BLL. So I tried to add this class to the DAL and it compiled, but IntelliSense won't show BLL.Company.AllCompanies() in my web service method from my web service project, which has a reference to my BLL project. What should I do now? I want to add CUD methods to the template-generated entities in my BLL project, and call them in my web services from another project. I have been looking for this answer for a few days already, and I really need some guidance here please. Bryan
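    One way out of the circular reference (a sketch, not the walkthrough's own layout): keep the POCO entities in their own assembly that both BLL and DAL reference, declare the query behind a repository interface next to the entities, implement it in the DAL (the only place that knows the ObjectContext), and let the BLL expose AllCompanies() through that interface.

      using System.Collections.Generic;
      using System.Linq;

      // Entities assembly (POCOs only - referenced by both BLL and DAL)
      public partial class Company
      {
          public int Id { get; set; }
          public string Name { get; set; }
      }

      public interface ICompanyRepository
      {
          IList<Company> GetAll();
      }

      // DAL assembly - the only place that sees the generated context
      public class CompanyRepository : ICompanyRepository
      {
          public IList<Company> GetAll()
          {
              using (var context = new Entities())
              {
                  return (from c in context.Companies select c).ToList();
              }
          }
      }

      // BLL assembly - consumed by the web service project
      public class CompanyService
      {
          private readonly ICompanyRepository _companies;
          public CompanyService(ICompanyRepository companies) { _companies = companies; }
          public IList<Company> AllCompanies() { return _companies.GetAll(); }
      }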

    Read the article

  • What should be the responsibility of a presenter here?

    - by Achu
    I have a 3-layer design (UI / BLL / DAL). UI = ASP.NET MVC. In my view I have a collection of products for a category. Example: Product 1, Product 2, etc. A user is able to select or remove (by selecting a check box) products from the view, and finally saves the changes as a collection when submitting them. With this 3-layer design, how will this product collection be saved? How should the filtering of products (removal and addition) against the category object be done? Here are my options. (A) It is the responsibility of the controller; then the pseudo code would be: Find products that the user selected or removed and compare with existing records. Add or delete that collection on the category object. Call SaveCategory(category); // BLL CALL Here the first 2 process steps occur in the controller. (B) It is the responsibility of the BLL; then the pseudo code would be: Collect whatever products the user selected. SaveCategory(category, products); // BLL CALL Here it's up to SaveCategory (BLL) to decide what products should be removed from and added to the database. Thanks
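    A sketch of what option (B) might look like - these are method-level fragments only; the service field, repositories and entity classes around them are assumed. The controller just collects the posted IDs, and the BLL owns the add/remove rules.

      // Controller: gather what the user posted, delegate the decision-making
      [HttpPost]
      public ActionResult Edit(int categoryId, int[] selectedProductIds)
      {
          _categoryService.SaveCategoryProducts(categoryId, selectedProductIds ?? new int[0]);
          return RedirectToAction("Details", new { id = categoryId });
      }

      // BLL: reconcile the selection against the current state
      public void SaveCategoryProducts(int categoryId, IEnumerable<int> selectedProductIds)
      {
          var category = _categoryRepository.GetById(categoryId);
          var selected = new HashSet<int>(selectedProductIds);

          // remove products the user unchecked
          foreach (var product in category.Products.Where(p => !selected.Contains(p.Id)).ToList())
              category.Products.Remove(product);

          // add newly checked products
          foreach (var id in selected.Where(id => category.Products.All(p => p.Id != id)))
              category.Products.Add(_productRepository.GetById(id));

          _categoryRepository.Save(category);
      }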

    Read the article

  • Time to start returning IQueryable<T> instead of IList<T> to my Web UI / Web API Layer?

    - by JohnnyO
    I've got a multi-layer application that starts with the repository pattern for all data access and it returns IQueryable to the Services layer. The Services layer, which includes all of the business logic, returns IList to the Controllers (note: I'm using ASP.NET MVC for the UI layer). The benefit of returning IQueryable in the data access layer is that it allows my repositories to be extremely simple and the database queries to be deferred. However, I'm triggering the database queries in my services layer so that my unit tests are more reliable and I don't give the Controllers the flexibility to reshape my queries. However, I've recently encountered several situations where deferring the execution of queries down to the Controllers would have been significantly more performant because the Controllers had to do some projections on the data that were UI specific. Additionally, with the emergence of things like OData, I was starting to wonder if end points (e.g. web UI or web APIs) should be working directly with IQueryable. What are your thoughts? Is it time to start returning IQueryable from the services layer to the UI layer? Or stick with IList? This thread here: http://stackoverflow.com/questions/718624/to-return-iqueryablet-or-not-return-iqueryablet seems to vouch for returning IList to the UI layers, but I was wondering if things are changing because of new emerging technologies and techniques.
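    A middle ground some teams use (a sketch, names hypothetical): keep IList as the default contract, but add a projection-aware overload where the controller needs a UI-specific shape, so the query is still composed and executed inside the service rather than handing the controller an open IQueryable.

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Linq.Expressions;

      public class Order
      {
          public int Id { get; set; }
          public DateTime PlacedOn { get; set; }
      }

      public interface IOrderService
      {
          // default: fully materialized, easy to unit test
          IList<Order> GetRecentOrders(int count);

          // the UI-specific shape is an argument, but execution still happens here
          IList<TResult> GetRecentOrders<TResult>(int count, Expression<Func<Order, TResult>> shape);
      }

      public class OrderService : IOrderService
      {
          private readonly IQueryable<Order> _orders;   // supplied by the repository
          public OrderService(IQueryable<Order> orders) { _orders = orders; }

          public IList<Order> GetRecentOrders(int count)
          {
              return _orders.OrderByDescending(o => o.PlacedOn).Take(count).ToList();
          }

          public IList<TResult> GetRecentOrders<TResult>(int count, Expression<Func<Order, TResult>> shape)
          {
              return _orders.OrderByDescending(o => o.PlacedOn).Take(count).Select(shape).ToList();
          }
      }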

    Read the article

  • N-Tiered application design tool

    - by Ben V
    I'm beginning the design of a medium-sized web application. I usually like to design from the top down, i.e., start at the highest level and design my way down. I am planning to have the following layers: Presentation (PHP/Ajax), Business Logic, Data Access, Database. Now I'd like to start sketching out the major objects in each layer and the interaction between layers. Is there a tool more specific to this purpose than just using a graphics/diagramming tool like Visio?

    Read the article

  • To Wrap or Not to Wrap: Wrapping Data Access in a Service Facade

    - by PureCognition
    For a while now, my team and I have been wrapping our data access layer in a web service facade (using WCF) and calling it from the business logic layer. Meanwhile, we could simply use the repository pattern where the business logic layer consumes the data access layer locally through an interface, and at any point in time, we can switch things out for it to hit a service instead (if necessary). The question is: When is it a good time to wrap the data access layer in a service facade and when isn't it? Right now, it seems like the main advantage is that other applications can consume the service, but if they are internal applications written in .NET then they can just consume the .NET assembly instead. Are there other advantages of having the DAL be wrapped in a service that I am unaware of?
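    The "switch it later" option usually hinges on the business layer only seeing an interface. A sketch (hypothetical names; Customer and the WCF proxy interface are stand-ins) of the same contract with a local implementation and a service-facade implementation side by side, so the decision becomes a composition-root detail rather than a rewrite:

      public class Customer
      {
          public int Id { get; set; }
          public string Name { get; set; }
      }

      public interface ICustomerRepository
      {
          Customer GetById(int id);
      }

      // in-process implementation - BLL calls the DAL assembly directly
      public class LocalCustomerRepository : ICustomerRepository
      {
          public Customer GetById(int id) { /* ORM / ADO.NET call goes here */ return null; }
      }

      // stand-in for the generated WCF proxy contract
      public interface ICustomerServiceClient
      {
          Customer GetCustomer(int id);
      }

      // remote implementation - same contract, delegates to the WCF facade
      public class ServiceFacadeCustomerRepository : ICustomerRepository
      {
          private readonly ICustomerServiceClient _client;
          public ServiceFacadeCustomerRepository(ICustomerServiceClient client) { _client = client; }
          public Customer GetById(int id) { return _client.GetCustomer(id); }
      }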

    Read the article

  • Should a Trim method generally be in the Data Access Layer or within the Domain Layer?

    - by jpierson
    I'm dealing with a database that contains data with inconsistencies such as leading and trailing white space. In general I see a lot of developers practice defensive coding by trimming almost all strings that come from the database that may have been entered by a user at some point. In my opinion it is better to do such formatting before data is persisted so that it is done only once and then the data can be in a consistent and reliable state. Unfortunately this is not the case, however, which leads me to the next best solution: using a Trim method. If I trim all data as part of my data access layer then I don't have to concern myself with defensive trimming within the business objects of my domain layer. If I instead put the trimming responsibility in my business objects, such as in the set accessors of my C# properties, I should get the same net result; however, the trim will be operating on all values assigned to my business object properties, not just the ones that come from the inconsistent database. I guess as a somewhat philosophical question that may determine the answer I could ask "Should the domain layer be responsible for defensive/coercive formatting of data?" Would it make sense to have a set accessor for a PhoneNumber property on a business object accept an unformatted or formatted string and then attempt to format it as required, or should this responsibility be pushed to the presentation and data access layers, leaving the domain layer more strict in the type of data that it will accept? I think this may be the more fundamental question. Update: Below are a few links that I thought I should share about the topic of data cleansing. Information service patterns, Part 3: Data cleansing pattern LINQ to SQL - Format a string before saving? How to trim values using Linq to Sql?
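    If the decision goes the domain-object route, the cost is small; a sketch of a coercive setter (the class and property are hypothetical) that trims and normalizes on assignment, which protects the object from every caller, not just the DAL:

      using System.Linq;

      public class Customer
      {
          private string _phoneNumber;

          public string PhoneNumber
          {
              get { return _phoneNumber; }
              set
              {
                  // coerce rather than reject: trim, then keep only the digits
                  _phoneNumber = value == null
                      ? null
                      : new string(value.Trim().Where(char.IsDigit).ToArray());
              }
          }
      }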

    Read the article

  • Entity Framework in layered architecture

    - by Kamyar
    I am using a layered architecture with the Entity Framework. Here's what I have come up with so far (all the projects except the UI are class libraries): Entities: the POCO entities. Completely persistence ignorant. No reference to other projects. Generated by Microsoft's ADO.NET POCO Entity Generator. DAL: the EDMX (entity model) file with the context class (T4 generated). References: Entities. BLL: Business Logic Layer. Will implement the repository pattern on this layer. References: Entities, DAL. This is where the ObjectContext gets populated: var ctx=new DAL.MyDBEntities(); UI: the presentation layer: an ASP.NET website. References: Entities, BLL + a connection string entry for the entities in the config file (question #2). Now my three questions: Is my layer distinction approach correct? In my UI, I access the BLL as follows: var customerRep = new BLL.CustomerRepository(); var Customer = customerRep.GetByID(myCustomerID); The problem is that I have to define the entities connection string in my UI's web.config/app.config, otherwise I get a runtime exception. Does defining the entities connection string in the UI spoil the layers' distinction? Or is it acceptable in a multi-layered architecture? Should I take any additional steps to perform change tracking, lazy loading, etc. (by etc. I mean the features that Entity Framework covers in a conventional, one-project, non-POCO code generation setup)? Thanks, and apologies for the lengthy question.

    Read the article

  • How many layers is too many?

    - by Nathan
    As I have been learning about software development over the last 2 years, it seems the more I learn, the more gray areas I run into. One gray area I have issues with right now is trying to decide how many layers an application should have. For example, in a WPF MVVM application, what fashion of layering is OK? Is the following too separated? When I mention layering I mean creating a new class library for each layer. Presentation (View) View Model Business Layer Data Access Model Layer Utility Layer Or for a non-MVVM application is this too separated? Presentation Business Data Access Model Layer Utility Layer Is it acceptable to run layers together and just create folders for each layer? Any coloring of this gray area would be appreciated.

    Read the article

  • Spring MVC: should the service layer be returning operation-specific DTOs?

    - by arrages
    In my Spring MVC application I am using DTOs in the presentation layer in order to encapsulate the domain model in the service layer. The DTOs are being used as the Spring form backing objects, hence my services look something like this: userService.storeUser(NewUserRequestDTO req); The service layer will translate DTO to domain object and do the rest of the work. Now my problem is that when I want to retrieve a DTO from the service to perform, say, an Update or Display, I can't seem to find a better way to do it than to have multiple methods for the lookup that return different DTOs, like... EditUserRequestDTO userService.loadUserForEdit(int id); DisplayUserDTO userService.loadUserForDisplay(int id); but something does not feel right about this approach. The reason I have separate DTOs is that DisplayUserDTO is strongly typed to be read only and also there are many properties of the user that are entities from a lookup table in the db (like city and state), so the DisplayUserDTO would have the string descriptions of the properties while the EditUserRequestDTO will have the IDs that will back the select drop-down lists in the forms. What do you think? thanks

    Read the article

  • Entity Framework using the Data Repository pattern

    - by JamesStuddart
    Hi all, I have been implementing a new project in which I have decided to use the repository pattern and Entity Framework. I have successfully implemented basic CRUD methods and I have now moved on to my deep loads. From all the examples and documentation I can find to do this, I need to call something like this: public Foo DeepLoadFoo() { return (from foobah in Context.Items.Include("bah").Include("foo").Include("foofoo") select foobah).Single(); } This doesn't work for me; maybe I am trying to be too lazy, but what I would like to achieve would be something along the lines of this: public Foo DeepLoadFoo(Foo entity, Type[] childTypes) { return (from foobah in Context.Items.Include(childTypes) select foobah).Single(); } Is anything like this possible, or am I stuck with include.include.include.include? Thanks
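    The Include calls can be folded into a small helper so the call site takes a list of paths in one go - a sketch against the EF 4 ObjectQuery API; the entity set and path names are taken from the question and may not match the real model:

      using System.Data.Objects;

      public static class ObjectQueryExtensions
      {
          // apply any number of Include paths in a single call
          public static ObjectQuery<T> IncludeMany<T>(this ObjectQuery<T> query, params string[] paths)
          {
              foreach (var path in paths)
                  query = query.Include(path);
              return query;
          }
      }

      // usage inside the repository (hypothetical):
      // var foo = Context.Items.IncludeMany("bah", "foo", "foofoo").Single();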

    Read the article

  • How to model and handle presentation DTOs to abstract from a complicated domain model?

    - by arrages
    Hi, I am developing an application that needs to work with a complex domain model using Hibernate. This application uses Spring MVC, and using the domain objects in the presentation layer is very messy, so I think I should use DTOs that go to and from my service layer so that these match what I need in my views. Now let's assume I have a CarLease entity whose properties are not simple Java primitives but are composed of other entities like Make, Model, etc.: public class CarLease { private Make make; private Model model; . . . } Most properties are in this fashion and they are selectable using drop-down selects on the JSP view; each will post back an ID to the controller. Now considering some standard use cases (create, edit, display): How would you go about modeling the presentation DTOs to be used as form backing objects and for communication between the presentation and service layers? Would you create a different DTO for each case (create, edit, display)? Would you make DTOs for the complex attributes? If so, where would you translate the ID to an entity? How and where would you handle validation and DTO/domain assembly, and what would you return from service layer methods (create, edit, get)? As you can see, I know I will benefit by separating my view from the domain objects (very complex with lots of stuff I don't need), but I am having a hard time finding any real-world examples and best practices for this. I need some architecture guidance from top to bottom; please keep in mind I will use Spring MVC, in case that influences your answer. Thanks in advance.

    Read the article

  • How can I use Tier Pricing with Configurable Products? (Magento 1.4+)

    - by Rad The Mad
    How can I use/set up Tier Pricing with Configurable Products (Magento 1.4+)? There was an extension to do this but I think it is only for Magento 1.3. I tried to set up tiers in my Simple Products, but those do not show up, or do not activate, when I add to cart from my Configurable Product page. Any help is appreciated! Thanks. Edit: In my case, I would like to use the Tier Pricing FROM the Simple Product, and not use the Tier Pricing from the Configurable Product.

    Read the article

  • Unit testing with Data Access Layer

    - by chobo
    Hi, what is a good way to write unit tests with a LINQ to SQL DAL? Currently I am doing some database testing and need to create helper methods that access the database, but I don't want those methods in my main repos. So what I have is two copies of the DAL, one in my main project and one in the Test project. Is it easier to manage these things if I create a separate project for the data layer? I'm not sure which way is a better way to approach this. If I do create a data layer project, would I move all my repos to that project as well? I'm not sure how to properly set up the layers. Thanks
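    A common way to avoid keeping two copies of the DAL (a sketch, names hypothetical): move the data access into its own project behind small repository interfaces, and give the test project an in-memory fake instead of a second set of LINQ to SQL classes - the database-touching helpers then live only in the data layer project.

      using System.Collections.Generic;

      public class Customer
      {
          public int Id { get; set; }
          public string Name { get; set; }
      }

      // Data layer project - the LINQ to SQL implementation lives behind this
      public interface ICustomerRepository
      {
          Customer GetById(int id);
          void Add(Customer customer);
      }

      // Test project - no database, no second DataContext
      public class FakeCustomerRepository : ICustomerRepository
      {
          private readonly Dictionary<int, Customer> _store = new Dictionary<int, Customer>();

          public Customer GetById(int id)
          {
              Customer customer;
              return _store.TryGetValue(id, out customer) ? customer : null;
          }

          public void Add(Customer customer) { _store[customer.Id] = customer; }
      }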

    Read the article

  • Does AOP violate layered architecture for enterprise apps?

    - by redzedi
    The question (as stated in the title) comes to me as I was recently looking at Spring MVC 3.1 with annotation support and also considering DDD for an upcoming project. In the new Spring, any POJO with its business methods can be annotated to act as a controller; all the concerns that I would have addressed within a Controller class can be expressed exclusively through the annotations. So, technically I can take any class and wire it to act as a controller; the Java code is free from any controller-specific code, hence the Java code could deal with things like checking security, starting a txn, etc. So will such a class belong to the Presentation or Application layer? Taking that argument even further, we can pull out things like security and txn mgmt and express them through annotations, thus the Java code is now that of the domain object. Will that mean we have fused together the 2 layers? Please clarify

    Read the article
