Search Results

Search found 17259 results on 691 pages for 'behaviour driven design'.

  • If you are forced to use an Anemic domain model, where do you put your business logic and calculated fields?

    - by Jess
    Our current O/RM tool does not really allow for rich domain models, so we are forced to utilize anemic (DTO) entities everywhere. This has worked fine, but I continue to struggle with where to put basic object-based business logic and calculated fields.

    Current layers: Presentation, Service, Repository, Data/Entity.

    Our repository layer has most of the basic fetch/validate/save logic, although the service layer does a lot of the more complex validation & saving (since save operations also do logging, checking of permissions, etc). The problem is where to put code like this:

        Decimal CalculateTotal(LineItemEntity li)
        {
            return li.Quantity * li.Price;
        }

    or

        Decimal CalculateOrderTotal(OrderEntity order)
        {
            Decimal orderTotal = 0;
            foreach (LineItemEntity li in order.LineItems)
            {
                orderTotal += CalculateTotal(li);
            }
            return orderTotal;
        }

    Any thoughts?
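
    One option that keeps the O/RM-generated entities anemic is to hang such calculations off a small, stateless helper. The sketch below uses C# extension methods against the question's LineItemEntity/OrderEntity types; the class and method names are illustrative, not a prescription:

        public static class OrderCalculations
        {
            // Extension methods give the anemic entities a rich-looking API
            // without requiring the O/RM to support behaviour on mapped classes.
            public static decimal LineTotal(this LineItemEntity li)
            {
                return li.Quantity * li.Price;
            }

            public static decimal OrderTotal(this OrderEntity order)
            {
                decimal total = 0;
                foreach (LineItemEntity li in order.LineItems)
                {
                    total += li.LineTotal();
                }
                return total;
            }
        }

    Called as order.OrderTotal() from the service layer, this keeps the rule in one place without bloating the repository.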

    Read the article

  • Creating Domain Model

    - by Zai
    Hi, I have created the use cases for a small application and now I have to create a domain model from those use cases, covering the functions that will be implemented in the application. I have no previous experience in domain modeling or UML, so please suggest steps to create the domain model, or any other suggestions. Do I need a very solid understanding of object-oriented concepts to create a domain model? The application is a simple online poll/voting system with functions like Register Account, Account Confirmation Email, Membership, Create Poll, Send Poll, etc.
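
    To make "domain model" concrete, here is a deliberately tiny C# sketch of what a first pass for the poll system above might look like; every class and member name is an assumption drawn only from the functions listed in the question:

        using System.Collections.Generic;

        public class Member
        {
            public string Email { get; set; }
            public bool EmailConfirmed { get; set; }
        }

        public class Poll
        {
            public string Question { get; set; }
            public Member Creator { get; set; }
            public IList<PollOption> Options { get; private set; }

            public Poll()
            {
                Options = new List<PollOption>();
            }
        }

        public class PollOption
        {
            public string Text { get; set; }
            public int Votes { get; set; }
        }

    A domain model at this stage is mostly the nouns from the use cases and the relationships between them; solid OO knowledge helps, but you can start by simply listing those nouns and how they relate.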

    Read the article

  • Designing a Tag table that tells how many times it's used

    - by Satoru.Logic
    Hi, all. I am trying to design a tagging system with a model like this:

        Tag:
            content = CharField
            creator = ForeignKey
            used    = IntegerField

    It is a many-to-many relationship between tags and what's been tagged. Every time I insert a record into the association table, Tag.used is incremented by one, and decremented by one in case of deletion. Tag.used is maintained because I want to speed up answering the question "How many times is this tag used?". However, this obviously slows insertion down. Please tell me how to improve this design. Thanks in advance.

    Read the article

  • Value objects in DDD - Why immutable?

    - by Hobbes
    I don't get why value objects in DDD should be immutable, nor do I see how this is easily done. (I'm focusing on C# and Entity Framework, if that matters.)

    For example, let's consider the classic Address value object. If you needed to change "123 Main St" to "123 Main Street", why should I need to construct a whole new object instead of saying myCustomer.Address.AddressLine1 = "123 Main Street"? (Even if Entity Framework supported structs, this would still be a problem, wouldn't it?) I understand (I think) the idea that value objects don't have an identity and are part of a domain object, but can someone explain why immutability is a Good Thing?

    EDIT: My final question here really should be "Can someone explain why immutability is a Good Thing as applied to Value Objects?" Sorry for the confusion!

    EDIT: To clarify, I am not asking about CLR value types (vs reference types). I'm asking about the higher-level DDD concept of Value Objects. For example, here is a hack-ish way to implement immutable value types for Entity Framework: http://rogeralsing.com/2009/05/21/entity-framework-4-immutable-value-objects. Basically, he just makes all setters private. Why go through the trouble of doing this?
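
    For illustration, a minimal immutable Address in C# might look like the sketch below (the property names come from the question; the WithAddressLine1 helper is an assumption of this example):

        public class Address
        {
            public string AddressLine1 { get; private set; }
            public string City { get; private set; }

            public Address(string addressLine1, string city)
            {
                AddressLine1 = addressLine1;
                City = city;
            }

            // "Changing" the address means building a new value, e.g.
            // customer.Address = customer.Address.WithAddressLine1("123 Main Street");
            public Address WithAddressLine1(string newLine1)
            {
                return new Address(newLine1, City);
            }
        }

    The usual arguments for this shape are that two addresses with the same fields are interchangeable (equality by value), the object can be shared and cached safely, and it can never be observed in a half-updated state.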

    Read the article

  • Storing Templates and Object-Oriented vs Relational Databases

    - by syrion
    I'm designing some custom blog software, and have run into a conundrum regarding database design. The software requires that there be multiple content types, each of which will require different entry forms and presentation templates. My initial instinct is to create these content types as objects, then serialize them and store them in the database as JSON or YAML, with the entry forms and templates as simple strings attached to the "contentTypes" table. This seems cumbersome, however. Are there established best practices for dealing with this design? Is this a use case where I should consider an object database? If I should be using an object database, which should I consider? I am currently working in Python and would prefer a capable Python library, but can move to Java if need be.

    Read the article

  • Deleting a resource in a Cucumber (Capybara) step doesn't work

    - by Josiah Kiehl
    Here is my scenario:

        Scenario: Delete a match
          Given pojo is logged in
          And there is a match with the following:
            | game_id       | 1                          |
            | name          | Game del Pojo              |
            | date_and_time | 2010-02-23 17:52:00        |
            | players       | 2                          |
            | teams         | 2                          |
            | comment       | This is an awesome comment |
            | user_id       | 1                          |
          And I am on the show match 1 page
          And show me the page
          When I follow "Delete"
          And I follow "Yes, delete it"
          Then there should not be a match with the following:
            | game_id       | 1                          |
            | name          | Game del Pojo              |
            | date_and_time | 2010-02-23 17:52:00        |
            | players       | 2                          |
            | teams         | 2                          |
            | comment       | This is an awesome comment |
            | user_id       | 1                          |

    If I walk through these steps manually, they work: when I click the confirmation "Yes, delete it", the match is deleted. Cucumber, however, fails to delete the record and the last step fails:

        And I follow "Yes, delete it"                          # features/step_definitions/web_steps.rb:32
        Then there should not be a match with the following:   # features/step_definitions/match_steps.rb:8
          | game_id       | 1                          |
          | name          | Game del Pojo              |
          | date_and_time | 2010-02-23 17:52:00        |
          | players       | 2                          |
          | teams         | 2                          |
          | comment       | This is an awesome comment |
          | user_id       | 1                          |
          <nil> expected but was
          <#<Match id: 1, name: "Game del Pojo", date_and_time: "2010-02-23 17:52:00", teams: 2, created_at: "2010-03-02 23:06:33", updated_at: "2010-03-02 23:06:33", comment: "This is an awesome comment", players: 2, game_id: 1, user_id: 1>>. (Test::Unit::AssertionFailedError)
          /usr/lib/ruby/1.8/test/unit/assertions.rb:48:in `assert_block'
          /usr/lib/ruby/1.8/test/unit/assertions.rb:495:in `_wrap_assertion'
          /usr/lib/ruby/1.8/test/unit/assertions.rb:46:in `assert_block'
          /usr/lib/ruby/1.8/test/unit/assertions.rb:83:in `assert_equal'
          /usr/lib/ruby/1.8/test/unit/assertions.rb:172:in `assert_nil'
          ./features/step_definitions/match_steps.rb:22:in `/^there should (not)? be a match with the following:$/'
          features/matches.feature:124:in `Then there should not be a match with the following:'

    Any clue how to debug this? Thanks!

    Read the article

  • Is doing both form validation and business validation too much?

    - by Robert Cabri
    I've got a question about form validation and business validation. I see a lot of frameworks that use some sort of form-validation library: you submit some values and the library validates the values from the form. If they are not OK, it will show some errors on your screen. If all goes to plan, the values will be set into domain objects, where they will be, or better said should be, validated again, most likely with the same validation as in the validation library. I know two PHP frameworks with this kind of construction: Zend and Kohana. When I look at programming principles like Don't Repeat Yourself (DRY) and the single responsibility principle (SRP), this isn't a good way, since it validates twice.

    Why not create domain objects that do the actual validation? Example: a form with a username field and an email field is submitted. The values of the username field and the email field will be populated into two different domain objects, Username and Email:

        class Username {}
        class Email {}

    These objects validate their data and, if not valid, throw an exception.

    Do you agree? What do you think about this approach? Is there a better way to implement validations? I'm confused about how many frameworks/developers handle this stuff. Are they all wrong, or am I missing a point?

    Edit: I know there should also be some kind of client-side validation. That is a different ballgame in my opinion. If you have some comments on this and a way to deal with this kind of stuff, please share.
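
    The Username/Email idea from the question, sketched in C# for concreteness (the PHP version would be analogous; the validation rules shown are placeholders, not real requirements):

        using System;

        public class Email
        {
            public string Value { get; private set; }

            public Email(string value)
            {
                // The rule lives in one place: constructing an Email implies it is valid.
                if (string.IsNullOrEmpty(value) || !value.Contains("@"))
                    throw new ArgumentException("Not a valid email address.");
                Value = value;
            }
        }

        public class Username
        {
            public string Value { get; private set; }

            public Username(string value)
            {
                if (string.IsNullOrEmpty(value) || value.Length < 3)
                    throw new ArgumentException("Username must be at least 3 characters.");
                Value = value;
            }
        }

    The form layer then only catches these exceptions (or calls the same objects) to build its error messages, instead of duplicating the rules.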

    Read the article

  • Php PEAR, Database Abstraction & Factory Methods

    - by pws5068
    I'm interested in learning more about design practices in PHP for database abstraction & factory methods. For background, my site is a special-interest social networking community currently in beta mode. Currently, I've started moving my old code for object retrieval to factory methods. However, I do feel like I'm limiting myself by keeping a lot of SQL table names and structure separated in each function/method.

    Questions:
    1. Is there a reason to use PEAR (or similar) if I don't anticipate switching databases?
    2. Can PEAR interface with the MySQLi prepared statements I currently use?
    3. Will it help me separate table names from each method? (If no, what other design patterns might I want to research?)
    4. Will it slow down my site once I have a significantly large member base?

    Read the article

  • How do I create a Windows Form that displays properly on all monitors?

    - by madhu bn
    I have created a Windows Form using VS.NET 2008, with VB.NET as the programming language. The functionality is working fine as expected. My issue is that when I run my application on different machines, the form does not display properly across the monitor screen. On some machines I notice extra space on the left side of the container; on other machines it is the opposite. If you maximize or restore the form, the size varies. I tried checking the form design on different machines using Visual Studio 2008, and the form design looks different there as well. Please give me a solution to fix this issue. Thanks in advance.
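
    The usual suspects are absolute positioning and scaling. Below is a small C# sketch of the relevant WinForms settings (the VB.NET designer exposes the same properties); the form and control names are illustrative:

        using System.Windows.Forms;

        public class MainForm : Form
        {
            public MainForm()
            {
                // Scale the whole form with the font/DPI of the machine it runs on.
                this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
                this.StartPosition = FormStartPosition.CenterScreen;

                var content = new Panel
                {
                    // Dock (or Anchor) instead of fixed Location/Size, so controls
                    // follow the form when it is resized or maximized.
                    Dock = DockStyle.Fill
                };
                Controls.Add(content);
            }
        }

    Checking every container's AutoScaleMode and replacing hard-coded positions with Dock/Anchor is usually the first step when a layout drifts between machines with different resolutions or DPI settings.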

    Read the article

  • How to decouple an app's agile development from a database using BDUF?

    - by Rob Wells
    G'day, I was reading the article "Database as a Fortress" by Dan Chak from the excellent book "97 Things Every Software Architect Should Know" (sanitised Amazon link), which suggests that databases should not be designed using an agile approach. There's an SO question on agile approaches and databases, "Agile development and database changes", which has some excellent answers covering agile development approaches. In fact, one of the answers supplies a brilliant idea of what's needed for each update of the DB. ;-) But after reading Dan Chak's article, I am left wondering if an agile approach is really suitable for large-scale systems. This of course leads on to the question of how best to decouple an agile approach for the application from the BDUF database design it interacts with, without adding complicated translation layers to the final design. Any suggestions? cheers,

    Read the article

  • Sorting out POCO, Repository Pattern, Unit of Work, and ORM

    - by CoffeeAddict
    I'm reading a crapload on all these subjects:

        POCO
        Repository pattern
        Unit of Work
        Using an ORM mapper

    OK, I see the basic definitions of each in books, etc., but I can't visualize how this all fits together, meaning an example structure (DL, BL, PL). So what, you have your DL objects that contain your CRUD methods, then your BL objects which are "mapped" using an ORM back to your DL objects? What about DTOs... they're your DL objects, right? I'm confused. Can anyone really explain all this together or send me example code? I'm just trying to put this together. I am determining whether to go LINQ to SQL or EF 4 (not sure about NHibernate yet). I'm just not getting the concepts in terms of physical layers and code layers, and what each type of object contains (just properties for DTOs, and CRUD methods for your core DL classes that match the table fields?). I just need some guidance here. I'm reading Fowler's books and starting to read Evans, but I'm just not all there yet.
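
    A bare-bones C# sketch of how the pieces are usually layered (every name below is illustrative and not tied to LINQ to SQL or EF):

        using System;
        using System.Collections.Generic;

        // The POCO carries state (and, ideally, domain behaviour) and nothing
        // persistence-related.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // The repository hides querying/CRUD behind a collection-like API.
        public interface ICustomerRepository
        {
            Customer GetById(int id);
            IEnumerable<Customer> GetAll();
            void Add(Customer customer);
            void Remove(Customer customer);
        }

        // The unit of work groups changes made through repositories into one commit.
        public interface IUnitOfWork : IDisposable
        {
            ICustomerRepository Customers { get; }
            void Commit();
        }

    The ORM's job is then to implement ICustomerRepository and IUnitOfWork against the database, while the BL and PL only ever see the POCO and the interfaces; DTOs, if you need them, are separate property-only classes shaped for the presentation layer.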

    Read the article

  • Where to handle fatal exceptions

    - by Stephen Swensen
    I am considering a design where all fatal exceptions will be handled using a custom UncaughtExceptionHandler in a Swing application. This will include unanticipated RuntimeExceptions but also custom exceptions which are thrown when critical resources are unavailable or otherwise fail (e.g. a settings file not found, or a server communication error). The UncaughtExceptionHandler will do different things depending on the specific custom exception (and one thing for all the unanticipated), but in all cases the application will show the user an error message and exit. The alternative would be to keep the UncaughtExceptionHandler for all unanticipated exceptions, but handle all other fatal scenarios close to their origin. Is the design I'm considering sound, or should I use the alternative?

    Read the article

  • Comet with ASP.NET AsyncHttpHandlers

    - by Sumit
    I am implementing Comet using asynchronous HTTP handlers in my current ASP.NET application. In my implementation, the client initially sends a notification-hook request to the server (with its user id) on the async HTTP handler, and on the server side I maintain a global (application-level) dictionary of user id (key) and IAsyncResult (value). Whenever a request is received to send a notification to a user, I just pick the matching IAsyncResult from the global dictionary and send the response to that client. My concern is: is maintaining a dictionary of user id and IAsyncResult at application level a good design? I feel it will put a lot of load on the server under high traffic. Is there any other way I can achieve Comet, or what would be a good design to achieve Comet in high-traffic scenarios?
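
    If the dictionary stays, it at least needs to be thread-safe and to cope with stale entries from dropped connections. A rough C# sketch of such a registry is below; all names are illustrative, and completing the actual IAsyncResult is left outside the sketch:

        using System;
        using System.Collections.Concurrent;

        public static class PendingNotifications
        {
            // Thread-safe map from user id to "how to complete that user's pending request".
            private static readonly ConcurrentDictionary<string, Action<string>> pending =
                new ConcurrentDictionary<string, Action<string>>();

            // Called when a long-poll request arrives (e.g. from BeginProcessRequest).
            public static void Register(string userId, Action<string> complete)
            {
                pending[userId] = complete; // overwrites a stale entry from a dropped connection
            }

            // Called when the server has something to push to this user.
            public static void Notify(string userId, string message)
            {
                Action<string> complete;
                if (pending.TryRemove(userId, out complete))
                {
                    complete(message);
                }
            }
        }

    Memory cost is roughly one small entry per connected user, so the bigger scalability concern is usually the number of requests held open rather than the dictionary itself; adding a timeout that completes idle requests keeps the table from growing without bound.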

    Read the article

  • What is the best practice for building composite DTOs off of an aggregate root with domain objects?

    - by Chance
    I'm trying to figure out the best approach/practice for assembling a composite data transfer object off of an aggregate root, and would love to hear people's thoughts on this. For example, let's say I have a root that has a few domain objects as children. I want to assemble a specific view DTO, based on some business logic, that has either attributes or full DTOs of its child objects. What I'm struggling with is figuring out where that assembly should happen.

    I can see it going on the domain object of the aggregate root, as there is some business logic associated with it. The benefit of this approach, from what I've deduced thus far, is that it should keep the inevitable business logic from bleeding outside of the domain object. It also allows for private methods that take care of tasks that could become more complex for an external builder. The downsides are that the domain object becomes much more entrenched in the application's workflow and represents much more than just the domain object. It could also become very large in the scenario where you need multiple composite DTOs.

    Alternatively, I could also see it belonging to some form of transfer object assembler, where there is a builder for each domain object. The domain objects would still be responsible for GetDto() and UpdateFromDto(dto). Outside of that, the builder would handle the construction and deconstruction of composite DTOs. The downside is kind of mentioned above: I fear this will easily lead to developers unfamiliar with DDD bleeding a ton of business logic into the assembler, which is what I desperately want to avoid.

    Any thoughts would be greatly appreciated.
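
    A sketch of the assembler option in C#, with the business rule left on the aggregate and only data-shaping in the assembler (all type names are illustrative, not from the question):

        public class Customer { public string Name { get; set; } }

        public class Order
        {
            public Customer Customer { get; set; }
            public decimal Subtotal { get; set; }
            public decimal Tax { get; set; }

            // Business rule stays on the aggregate root...
            public decimal CalculateTotal() { return Subtotal + Tax; }
        }

        // ...while the assembler only shapes data for one particular view.
        public class OrderSummaryDto
        {
            public string CustomerName { get; set; }
            public decimal Total { get; set; }
        }

        public class OrderSummaryAssembler
        {
            public OrderSummaryDto Assemble(Order order)
            {
                return new OrderSummaryDto
                {
                    CustomerName = order.Customer.Name,
                    Total = order.CalculateTotal()
                };
            }
        }

    Keeping assemblers this mechanical (copy and compose, never decide) is the usual guard against business logic leaking into them.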

    Read the article

  • Some good websites to learn about JavaScript and programming architecture?

    - by Jack Roscoe
    I'm not sure if 'architecture' is the correct term, but I've been looking for articles online that talk about programming design: more about how best to use languages such as JavaScript in a code-design sense, rather than the actual syntax itself. I have found many websites, but a lot seem to be very outdated, and since I'm not sure what developments have taken place with JavaScript over the years, I don't know how old is too old. If anybody could suggest some great websites, or maybe specific articles you think would be useful, that would be highly appreciated. I am a beginner programmer currently using JavaScript with XML and of course HTML & CSS, and I'm currently trying to get further into web development and learn more about it.

    Read the article

  • ASP.Net 4.0 Database Created Pages

    - by Tyler
    I want to create ASP.NET 4.0 dynamic pages loaded from my MS SQL Server database. Basically, it's a list of locations with information. For example:

        Location1 would have the page www.site.com/location/location1.aspx
        Location44 would have the page www.site.com/location/location44.aspx

    I don't even know where to start with this. URL rewriting, maybe?
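
    One common approach in ASP.NET 4.0 is URL routing: a single physical page serves every location and looks the record up by the slug in the URL. A sketch of the route registration in Global.asax (the route name and page name here are assumptions):

        using System;
        using System.Web;
        using System.Web.Routing;

        public class Global : HttpApplication
        {
            void Application_Start(object sender, EventArgs e)
            {
                // www.site.com/location/location1, /location/location44, ...
                // all map to one page, which queries SQL Server for the data.
                RouteTable.Routes.MapPageRoute(
                    "LocationDetails",
                    "location/{locationName}",
                    "~/LocationDetails.aspx");
            }
        }

    Inside the page, Page.RouteData.Values["locationName"] gives you the requested location so you can load the matching row from the database.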

    Read the article

  • C# Event Handlers Using an Enum

    - by Jimbo
    I have a StatusChanged event that is raised by my object when its status changes; however, the application needs to carry out additional actions based on what the new status is. E.g. if the new status is Disconnected, then it must update the status bar text and send an email notification. So I wanted to create an enum with the possible statuses (Connected, Disconnected, ReceivingData, SendingData, etc.) and have that sent with the EventArgs parameter of the event when it is raised (see below).

    Define the object:

        class ModemComm
        {
            public event CommanderEventHandler ModemCommEvent;
            public delegate void CommanderEventHandler(object source, ModemCommEventArgs e);

            public void Connect()
            {
                ModemCommEvent(this, new ModemCommEventArgs(ModemCommEventArgs.eModemCommEvent.Connected));
            }
        }

    Define the new EventArgs parameter:

        public class ModemCommEventArgs : EventArgs
        {
            public enum eModemCommEvent
            {
                Idle,
                Connected,
                Disconnected,
                SendingData,
                ReceivingData
            }

            public eModemCommEvent eventType { get; set; }
            public string eventMessage { get; set; }

            public ModemCommEventArgs(eModemCommEvent eventType, string eventMessage)
            {
                this.eventMessage = eventMessage;
                this.eventType = eventType;
            }
        }

    I then create a handler for the event in the application:

        ModemComm comm = new ModemComm();
        comm.ModemCommEvent += OnModemCommEvent;

    and

        private void OnModemCommEvent(object source, ModemCommEventArgs e)
        {
        }

    The problem is, I get an 'Object reference not set to an instance of an object' error when the object attempts to raise the event. Hoping someone can explain in n00b terms why, and how to fix it :)
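
    The usual first thing to check is that the event is raised through a null-checked copy, since an event with no subscribers is null. A sketch of Connect written that way (the "Modem connected" message is a placeholder):

        public void Connect()
        {
            CommanderEventHandler handler = ModemCommEvent;
            if (handler != null)
            {
                handler(this, new ModemCommEventArgs(
                    ModemCommEventArgs.eModemCommEvent.Connected, "Modem connected"));
            }
        }

    If the handler is attached, the next thing to verify is that it is attached to the same ModemComm instance that actually calls Connect().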

    Read the article

  • How should rules for Aggregate Roots be enforced?

    - by MylesRip
    While searching the web, I came across a list of rules from Eric Evans' book that should be enforced for aggregates:

    1. The root Entity has global identity and is ultimately responsible for checking invariants.
    2. Root Entities have global identity. Entities inside the boundary have local identity, unique only within the Aggregate.
    3. Nothing outside the Aggregate boundary can hold a reference to anything inside, except to the root Entity. The root Entity can hand references to the internal Entities to other objects, but they can only use them transiently (within a single method or block).
    4. Only Aggregate Roots can be obtained directly with database queries. Everything else must be done through traversal.
    5. Objects within the Aggregate can hold references to other Aggregate roots.
    6. A delete operation must remove everything within the Aggregate boundary all at once.
    7. When a change to any object within the Aggregate boundary is committed, all invariants of the whole Aggregate must be satisfied.

    This all seems fine in theory, but I don't see how these rules would be enforced in the real world. Take rule 3, for example: once the root entity has given an external object a reference to an internal entity, what's to keep that external object from holding on to the reference beyond the single method or block? (If the enforcement of this is platform-specific, I would be interested in knowing how this would be enforced within a C#/.NET/NHibernate environment.)
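
    Rule 3 is mostly enforced by convention plus design: hand out internal entities only as read-only views or externally immutable objects, so a retained reference cannot do harm. A C# sketch of that shape (all types here are illustrative):

        using System;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        // The root never hands out its internal list, only a read-only view,
        // so outside code cannot add or remove line items behind its back.
        public class Order
        {
            private readonly List<OrderLine> lines = new List<OrderLine>();

            public ReadOnlyCollection<OrderLine> Lines
            {
                get { return lines.AsReadOnly(); }
            }

            public void AddLine(string product, int quantity)
            {
                // Invariants are checked here, in the root, before anything changes.
                if (quantity <= 0)
                    throw new ArgumentException("Quantity must be positive.");
                lines.Add(new OrderLine(product, quantity));
            }
        }

        // Internal entity: immutable from the outside, constructible only by the root.
        public class OrderLine
        {
            public string Product { get; private set; }
            public int Quantity { get; private set; }

            internal OrderLine(string product, int quantity)
            {
                Product = product;
                Quantity = quantity;
            }
        }

    A caller can still keep an OrderLine reference longer than a single method, but since it cannot mutate it or reach back into the aggregate through it, invariant checking stays with the root; NHibernate-mapped classes can usually follow the same shape with private or protected setters.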

    Read the article

  • Pagination in a Rich Domain Model

    - by user246790
    I use a rich domain model in my app; the basic ideas were taken from there. For example, I have User and Comment entities. They are defined as follows:

        <?php
        class Model_User extends Model_Abstract
        {
            public function getComments()
            {
                /**
                 * @var Model_Mapper_Db_Comment
                 */
                $mapper = $this->getMapper();
                $commentsBlob = $mapper->getUserComments($this->getId());
                return new Model_Collection_Comments($commentsBlob);
            }
        }

        class Model_Mapper_Db_Comment extends Model_Mapper_Db_Abstract
        {
            const TABLE_NAME = 'comments';
            protected $_mapperTableName = self::TABLE_NAME;

            public function getUserComments($user_id)
            {
                $commentsBlob = $this->_getTable()->fetchAllByUserId((int)$user_id);
                return $commentsBlob->toArray();
            }
        }

        class Model_Comment extends Model_Abstract
        {
        }
        ?>

    The mapper's getUserComments function simply returns something like return $this->getTable->fetchAllByUserId($user_id), which is an array. fetchAllByUserId accepts $count and $offset params, but I don't know how to pass them from my controller to this function through the model without rewriting all the model code. So the question is: how can I organize pagination through the model data (getComments)? Is there a "beautiful" method to get comments 5 to 10 rather than all of them, as getComments returns by default?

    Read the article

  • Protecting sensitive entity data

    - by Andreas
    Hi, I'm looking for some advice on architecture for a client/server solution with some peculiarities. The client is a fairly thick one, leaving the server mostly to persistence, concurrency and infrastructure concerns. The server contains a number of entities which contain both sensitive and public information. Think for example that the entities are persons; assume that social security number and name are sensitive and age is publicly viewable.

    When starting the client, the user is presented with a number of entities, not disclosing any sensitive information. At any time the user can choose to log in and authenticate against the server; given the authentication is successful, the user is granted access to the sensitive information.

    The client is hosting a domain model and I was thinking of implementing this as some kind of "lazy loading", making the first request instantiate the entities and later refreshing them with sensitive data. The entity getters would throw exceptions on sensitive information when it has not been disclosed, e.g.:

        class PersonImpl : PersonEntity
        {
            private bool undisclosed;

            public override string SocialSecurityNumber
            {
                get
                {
                    if (undisclosed)
                        throw new UndisclosedDataException();
                    return base.SocialSecurityNumber;
                }
            }
        }

    Another, more friendly approach could be to have a value object indicating that the value is undisclosed:

        get
        {
            if (undisclosed)
                return undisclosedValue;
            return base.SocialSecurityNumber;
        }

    Some concerns:
    1. What if the user logs in and then out? The sensitive data has been loaded but must be undisclosed once again.
    2. One could argue that this type of functionality belongs within the domain and not in some infrastructural implementation (i.e. repository implementations).
    3. As always when dealing with a larger number of properties, there's a risk that this type of functionality clutters the code.

    Any insights or discussion is appreciated!
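
    The second option from the question (an explicit "undisclosed" value) can be factored into one small wrapper so it does not clutter every property. A C# sketch, with the Protected<T> type being an invention of this example rather than anything from the question:

        using System;

        public struct Protected<T>
        {
            private readonly T value;
            private readonly bool disclosed;

            private Protected(T value, bool disclosed)
            {
                this.value = value;
                this.disclosed = disclosed;
            }

            public static Protected<T> Undisclosed()
            {
                return new Protected<T>(default(T), false);
            }

            public static Protected<T> Of(T value)
            {
                return new Protected<T>(value, true);
            }

            public bool IsDisclosed { get { return disclosed; } }

            public T Value
            {
                get
                {
                    if (!disclosed)
                        throw new InvalidOperationException("Value has not been disclosed.");
                    return value;
                }
            }
        }

    Re-hiding the data on logout then becomes replacing the disclosed values with Protected<string>.Undisclosed() again, which also puts ownership of the rule in the domain rather than in the repository implementations.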

    Read the article

  • Is there a linear-time performance guarantee with using an Iterator?

    - by polygenelubricants
    If all that you're doing is a simple one-pass iteration (i.e. only hasNext() and next(), no remove()), are you guaranteed linear-time performance and/or amortized constant cost per operation? Is this specified in the Iterator contract anywhere? Are there data structures/Java collections which cannot be iterated in linear time?

    java.util.Scanner implements Iterator<String>. A Scanner is hardly a data structure (e.g. remove() makes absolutely no sense). Is this considered a design blunder?

    Is something like PrimeGenerator implements Iterator<Integer> considered bad design, or is this exactly what Iterator is for? (hasNext() always returns true, next() computes the next number on demand, remove() makes no sense.)

    Similarly, would it have made sense for java.util.Random implements Iterator<Double>?

    Read the article

  • Question about domain models & their visibility...

    - by Another SO User
    I was involved in an interesting debate about the visibility of domain models and was wondering if people here have any good guidance.

        Per my understanding of MDA, we need not expose the domain model throughout the application layers & tiers.
        The reason being that any change to the domain model has an impact on the overall application.
        The wise thing to do would be to expose lightweight objects (DTOs), which are a small subset of the domain model, to abstract the actual model.
        On the flip side, any change to the domain model would mean changing various DTOs throughout the application for the change to be visible, whereas if we do expose the domain model, the change is in a single location.

    Hope to see some comments & thoughts about this. Appreciate all the help!

    Read the article