Search Results

Search found 44734 results on 1790 pages for 'model based design'.

Page 348 of 1790

  • .Net Template Engine/Report Solution

    - by runxc1 Bret Ferrier
    I am looking to add custom reports/forms to a web application. I want users to be able to upload a report definition/template file and then print out a PDF or Word document (one or the other; it doesn't need to be both) for each of their widgets, based on the template they uploaded. I can't install anything on the server and am looking for an open-source/free solution. Data source: the data will be in the form of a DataTable or DataSet that the application fetches itself; the report tool will not be able to connect to any database.

    Read the article

  • Creating DB views in Ruby on Rails

    - by Zigu
    Hey guys, I'm adding "reports" functionality to a project. There are 3 roles here: 1) Volunteers (they report the hours they volunteered); 2) Supervisors (they look at the reported data; note that one supervisor can view all projects); 3) Projects (a work project that some collection of volunteers works on). To explain what it does: a report is specified by the supervisor and generated from a query of whatever he needs. These could be plausible reports: 1) the total number of volunteers and the total amount of volunteer hours on a project; 2) all the volunteers' names and emails associated with a project; 3) the number of active projects vs. the total number of projects. I was thinking of creating a database view and storing the name of that view, so Rails would just query the view whenever a supervisor wants to pull up the "report". Is a view really the answer, or is it better to just save a query? Can Rails do this, or is there an even better or simpler way of achieving this functionality? Cheers, -Jeremiah Tantongco
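
    If the reports stay this simple, one lightweight alternative to database views is to "save the query" as a named report definition the supervisor picks from, i.e. a registry mapping report names to query logic. A rough, framework-agnostic sketch of that idea in Python, purely to illustrate the shape (in a Rails app these would be named scopes or query classes; all names and data below are invented):

        # each report is just a named function over the data layer
        def volunteer_totals(project):
            hours = sum(entry["hours"] for entry in project["entries"])
            volunteers = {entry["volunteer"] for entry in project["entries"]}
            return {"volunteers": len(volunteers), "hours": hours}

        def volunteer_contacts(project):
            return [(e["volunteer"], e["email"]) for e in project["entries"]]

        REPORTS = {
            "volunteer_totals": volunteer_totals,
            "volunteer_contacts": volunteer_contacts,
        }

        # the supervisor's saved "report" only needs to store the registry key
        project = {"entries": [
            {"volunteer": "Ann", "email": "ann@example.org", "hours": 5},
            {"volunteer": "Bob", "email": "bob@example.org", "hours": 3},
        ]}
        saved_report_name = "volunteer_totals"
        print(REPORTS[saved_report_name](project))   # {'volunteers': 2, 'hours': 8}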

    Read the article

  • 4GB limitation on these embedded/express DBs good enough? what's next if limitation is reached?

    - by edwin.nathaniel
    I'm wondering how long it would take a (theoretical) desktop app to fill the 4GB limit of these express/embedded database products (SQL Server Express, Oracle Express, SQLite3, etc.), provided that big blobs are stored in the filesystem. Also, what would be your strategy when it hits 4GB? For example: archive the old DB; copy 1-3 months of data to a new DB (consider this a cache strategy?); start using the new DB from that point onward (and then how do you access the old data?). I understand that the answer might vary depending on how much data you store per table/column, but please describe based on your experience (what kind of desktop app, write- or read-heavy, and how long you guess it would take to reach the limit).

    Read the article

  • PHP Booking timeslot

    - by boyee007
    I'm developing a PHP booking system based on timeslots, on a daily basis. I've set up 4 database tables: Bookslot (which stores all the ids: id_bookslot, id_user, id_timeslot), Timeslot (stores all the times in 15-minute gaps, e.g. 09:00, 09:15, 09:30, etc.), Therapist (stores all therapist details) and User (stores all the member details).

        ID_BOOKSLOT  ID_USER  ID_THERAPIST  ID_TIMESLOT
        1            10       1             1 (09:00)
        2            11       2             1 (09:00)
        3            12       3             2 (09:15)
        4            15       3             1 (09:00)

    Now, my issue is that it keeps repeating the timeslot when I echo the data, for example:

                 thera a    thera b    thera c
        -------------------------------------------------
        09:00    BOOKED     available  available
        09:00    available  BOOKED     available
        09:00    available  available  BOOKED
        09:15    available  BOOKED     available

    As you can see, 09:00 shows up three times, and I want something like this instead:

                 thera a    thera b    thera c
        -------------------------------------------------
        09:00    BOOKED     BOOKED     BOOKED
        09:15    available  BOOKED     available

    There might be something wrong with how I join the tables, or something else. The code to join the tables:

        $mysqli->query("SELECT * FROM bookslot
            RIGHT JOIN timeslot ON bookslot.id_timeslot = timeslot.id_timeslot
            LEFT JOIN therapist ON bookslot.id_therapist = therapist.id_therapist");

    If anyone has a solution for this, please help me out; I'd appreciate it very much!
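
    One way to get the second layout is to pivot the flat join results in application code: group the booked rows by timeslot, then emit one row per timeslot with a BOOKED/available cell per therapist. A minimal sketch of that grouping, in Python only to show the idea (in the PHP app the same loop would run over the joined rows, or a GROUP BY could do it in SQL; the sample data below is made up):

        from collections import defaultdict

        # flat rows as they come back from the join: (time, therapist_id)
        booked_rows = [("09:00", 1), ("09:00", 2), ("09:00", 3), ("09:15", 2)]
        therapists = [1, 2, 3]
        timeslots = ["09:00", "09:15"]

        # group bookings by timeslot so each time appears only once
        booked_by_time = defaultdict(set)
        for time, therapist_id in booked_rows:
            booked_by_time[time].add(therapist_id)

        # one output row per timeslot, one column per therapist
        for time in timeslots:
            cells = ["BOOKED" if t in booked_by_time[time] else "available" for t in therapists]
            print(time, *cells)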

    Read the article

  • how to tackle this combinatorial algorithm problem

    - by Andrew Bullock
    I have N people who must each take T exams. Each exam takes "some" time, e.g. 30 min (no such thing as finishing early). Exams must be performed in front of an examiner. I need to schedule each person to take each exam in front of an examiner within an overall time period, using the minimum number of examiners for the minimum amount of time (i.e. no examiners idle). There are the following restrictions: no person can be in two places at once; each person must take each exam once; no one should be examined by the same examiner twice. I realise that finding an optimal solution is probably NP-hard, and that I'm probably best off using a genetic algorithm to obtain a good estimate (similar to this? http://stackoverflow.com/questions/184195/seating-plan-software-recommendations-does-such-a-beast-even-exist). I'm comfortable with how genetic algorithms work; what I'm struggling with is how to model the problem programmatically so that I CAN manipulate the parameters genetically. If each exam took the same amount of time, I'd divide the time period into slots of that length and simply create a matrix of time slots vs. examiners and drop the candidates in. However, because the tests don't all take the same time, I'm a bit lost on how to approach this. Currently I'm doing this: make a list of all "tests" which need to take place, one for every candidate/exam pair; start with as many examiners as there are tests; repeatedly loop over all examiners and, for each one, find an unscheduled test which is eligible for that examiner (based on the restrictions); continue until all tests that can be scheduled are scheduled; if any unscheduled tests remain, increment the number of examiners and start again. I'm looking for better suggestions on how to approach this, as it feels rather crude currently.
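
    Since exam lengths differ, one way to encode a candidate solution so a GA can mutate it is to drop the fixed slot grid entirely: give each (person, exam) pair a gene holding an examiner and a start time in minutes, and let the fitness function count conflicts. This is only a sketch of the encoding and penalty idea, with invented names and numbers:

        import random

        PEOPLE = ["p1", "p2", "p3"]
        EXAMS = {"maths": 30, "physics": 45}       # exam -> duration in minutes
        EXAMINERS = ["e1", "e2"]
        DAY_MINUTES = 8 * 60

        def random_candidate():
            # one gene per (person, exam): which examiner, and when it starts
            return {
                (p, ex): (random.choice(EXAMINERS), random.randrange(0, DAY_MINUTES - dur))
                for p in PEOPLE for ex, dur in EXAMS.items()
            }

        def overlaps(a_start, a_len, b_start, b_len):
            return a_start < b_start + b_len and b_start < a_start + a_len

        def penalty(candidate):
            genes = list(candidate.items())
            score = 0
            for i, ((p1, ex1), (em1, s1)) in enumerate(genes):
                for (p2, ex2), (em2, s2) in genes[i + 1:]:
                    clash = overlaps(s1, EXAMS[ex1], s2, EXAMS[ex2])
                    if clash and (p1 == p2 or em1 == em2):   # person or examiner double-booked
                        score += 1
                    if p1 == p2 and em1 == em2:              # same examiner sees a person twice
                        score += 1
            return score

        print(penalty(random_candidate()))   # lower is better; 0 means no violated restrictions

    Mutation then becomes "change one gene's examiner or start time", and crossover is a per-gene mix of two parents, which keeps the variable exam durations out of the representation and inside the fitness check.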

    Read the article

  • How to Link VS2010 Database Project and LINQ to SQL

    - by Jason
    As I am working with the new database projects in VS2010, and as I am learning LINQ to SQL, I am curious as to the best way to link the two groups of information so that when I update one, the other updates along with it. From my research here at SO, as well as on Google, the general rule of thumb appears to be: "Build the database, and then create your LINQ to SQL classes." Of course, if I make a change in my database, the LINQ to SQL classes don't update automatically and I have to do it by hand. This is fairly simple right now as my database is small, but I am curious whether there is an easier way for this to happen. In addition, the LINQ to SQL tool is pretty nice: the ability to create tables, add associations, and even create inheritance is very simple. As my second question, I am curious whether VS2010 can work the other way - I design the database in the DBML file and then link it back to my database project. I appreciate any help with either of these two questions. I'm really interested in making this as easy as possible to reduce errors during development and improve the speed at which changes can be made.

    Read the article

  • ReSharper - Possible Null Assignment when using Microsoft.Contracts

    - by HVS
    Is there any way to indicate to ReSharper that a null reference won't occur because of Design-by-Contract Requires checking? For example, the following code will raise the warning (Possible 'null' assignment to entity marked with 'NotNull' attribute) in ReSharper on lines 7 and 8:

        private Dictionary<string, string> _Lookup = new Dictionary<string, string>();

        public void Foo(string s)
        {
            Contract.Requires(!String.IsNullOrEmpty(s));

            if (_Lookup.ContainsKey(s))
                _Lookup.Remove(s);
        }

    What is really odd is that if you remove the Contract.Requires(...) line, the ReSharper message goes away.

    Update: I found the solution through ExternalAnnotations, which was also mentioned by Mike below. Here's an example of how to do it for a function in Microsoft.Contracts. Create a directory called Microsoft.Contracts under the ExternalAnnotations ReSharper directory. Next, create a file called Microsoft.Contracts.xml and populate it like so:

        <assembly name="Microsoft.Contracts">
          <member name="M:System.Diagnostics.Contracts.Contract.Requires(System.Boolean)">
            <attribute ctor="M:JetBrains.Annotations.AssertionMethodAttribute.#ctor"/>
            <parameter name="condition">
              <attribute ctor="M:JetBrains.Annotations.AssertionConditionAttribute.#ctor(JetBrains.Annotations.AssertionConditionType)">
                <argument>0</argument>
              </attribute>
            </parameter>
          </member>
        </assembly>

    Restart Visual Studio, and the message goes away!

    Read the article

  • Legit? Two foreign keys referencing the same primary key.

    - by Ryan
    Hi All, I'm a web developer and have recently started a project with a company. Currently, I'm working with their DBA on getting the schema laid out for the site, and we've come to a disagreement regarding the design of a couple of tables, so I'd like some opinions on the matter. Basically, we are working on a site that will implement a "friends" network. All users of the site will be contained in a table tblUsers with (PersonID int identity PK, etc). What I want to do is create a second table, tblNetwork, that will hold all of the relationships between users, with (NetworkID int identity PK, Owners_PersonID int FK, Friends_PersonID int FK, etc); or, conversely, remove the NetworkID and have Owners_PersonID and Friends_PersonID together form a composite primary key. This is where the DBA has his problem, saying that "he would only implement this kind of architecture in a data warehousing schema, and not for a website, and this is just another example of web developers trying to take the easy way out." Now obviously, his remark was a bit inflammatory, and that has helped motivate me to find a suitable answer, but more so, I'd just like to know how to do it right. I've been developing databases and programming for over 10 years, have worked with some top-notch minds, and have never heard this kind of argument. What the DBA wants to do, instead of storing both the Owners_PersonID and Friends_PersonID in the same table, is to create a third table, tblFriends, to store the Friends_PersonID, and have tblNetwork contain (NetworkID int identity PK, Owner_PersonID int FK, FriendsID int FK (from tblFriends)). All that tblFriends would house would be (FriendsID int identity PK, Friends_PersonID (related back to Persons)). To me, creating the third table is just excessive: it does nothing but create an alias for the Friends_PersonID, forces me to add (what I view as unneeded) joins to all my queries, and costs the extra cycles necessary to perform that join on every query. Thanks for reading; comments appreciated. Ryan

    Read the article

  • Invoice & Invoice lines: How do you store customer address information?

    - by elviejo
    Hi, I'm developing an invoicing application. The general idea is to have two tables:

        Invoice     (ID, Date, CustomerAddress, CustomerState, CustomerCountry, VAT, Total)
        InvoiceLine (Invoice_ID, ID, Concept, Units, PricePerUnit, Total)

    As you can see, this basic design leads to a lot of repetition where the client has the same address, state and country on every invoice. So the alternative is to have an Address table and then make a relationship Address <- Invoice. However, I think an invoice is an immutable document and should be stored just the way it was first made. Sometimes customers change their addresses or states, and if the address came from an Address catalog, that would change all the previously made invoices. So what is your experience? How is the customer address stored in an invoice? In the Invoice table? An Address table? Or something else? Can you provide pointers to a book, article or document where this is discussed in further detail?
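
    The "immutable document" argument usually leads to snapshotting: copy the customer's address fields onto the invoice at creation time, so later edits to the customer record don't rewrite history. A small illustration of that copy-on-create idea, sketched in Python rather than SQL and with hypothetical names:

        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class Customer:
            name: str
            address: str
            state: str
            country: str

        @dataclass(frozen=True)            # frozen: the invoice is an immutable snapshot
        class Invoice:
            invoice_id: int
            issued_on: date
            customer_address: str
            customer_state: str
            customer_country: str

        def create_invoice(invoice_id: int, customer: Customer) -> Invoice:
            # copy the address fields as they are *now*; later customer edits won't touch this
            return Invoice(invoice_id, date.today(),
                           customer.address, customer.state, customer.country)

        cust = Customer("ACME", "1 Main St", "NY", "US")
        inv = create_invoice(1, cust)
        cust.address = "2 Elm St"          # changing the customer...
        print(inv.customer_address)        # ...does not change the stored invoice: "1 Main St"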

    Read the article

  • DDD: Aggregate Roots

    - by Mosh
    Hello, I need help with finding my aggregate root and boundary. I have 3 entities: Plan, PlannedRole and PlannedTraining. Each Plan can include many PlannedRoles and PlannedTrainings.

    Solution 1: At first I thought Plan was the aggregate root, because PlannedRole and PlannedTraining do not make sense outside the context of a Plan; they are always within a plan. Also, we have a business rule that says each Plan can have a maximum of 3 PlannedRoles and 5 PlannedTrainings, so I thought that by nominating the Plan as the aggregate root I could enforce this invariant. However, we have a Search page where the user searches for Plans. The results show a few properties of the Plan itself (and none of its PlannedRoles or PlannedTrainings). If I have to load the entire aggregate, that would be a lot of overhead: there are nearly 3000 plans and each may have a few children. Loading all these objects together and then ignoring the PlannedRoles and PlannedTrainings on the search page doesn't make sense to me.

    Solution 2: I just realized the user wants 2 more search pages where they can search for PlannedRoles or PlannedTrainings. That made me realize they are trying to access these objects independently and "out of" the context of a Plan, so I thought I was wrong about my initial design, and that is how I came up with this solution: 3 aggregates here, one for each entity. This approach lets me search for each entity independently and also resolves the performance issue in solution 1. However, using this approach I cannot enforce the invariant I mentioned earlier. There is also another invariant that states a Plan can be changed only if it is of a certain status, so I shouldn't be able to add any PlannedRoles or PlannedTrainings to a Plan that is not in that status. Again, I can't enforce this invariant with the second approach. Any advice would be greatly appreciated. Cheers, Mosh

    Read the article

  • Modeling a Generic Relationship in a Database

    - by StevenH
    This is most likely one for all you sexy DBAs out there: how would I efficiently model a relational database in which a field in an "Event" table defines a "SportType"? This "SportsType" field can hold a link to different sports tables, e.g. "FootballEvent", "RugbyEvent", "CricketEvent" and "F1 Event". Each of these sports tables has different fields specific to that sport. My goal is to be able to generically add sports types in the future as required, yet hold sport-specific event data (fields) as part of my Event entity. Is it possible to use an ORM such as NHibernate / Entity Framework which would reflect such a relationship? I have thrown together a quick C# example to express my intent at a higher level:

        public class Event<T> where T : new()
        {
            public T Fields { get; set; }

            public Event()
            {
                Fields = new T();
            }
        }

        public class FootballEvent
        {
            public Team CompetitorA { get; set; }
            public Team CompetitorB { get; set; }
        }

        public class TennisEvent
        {
            public Player CompetitorA { get; set; }
            public Player CompetitorB { get; set; }
        }

        public class F1RacingEvent
        {
            public List<Player> Drivers { get; set; }
            public List<Team> Teams { get; set; }
        }

        public class Team
        {
            public IEnumerable<Player> Squad { get; set; }
        }

        public class Player
        {
            public string Name { get; set; }
            public DateTime DOB { get; set; }
        }

    Read the article

  • Machine Learning Algorithm for Predicting Order of Events?

    - by user213060
    Simple machine learning question; there are probably numerous ways to solve this. There is an infinite stream of 4 possible events: 'event_1', 'event_2', 'event_3', 'event_4'. The events do not come in completely random order: we will assume that there are some complex patterns to the order in which most events arrive, and that the rest of the events are just random. We do not know the patterns ahead of time, though. After each event is received, I want to predict what the next event will be, based on the order in which events have come in in the past. The predictor will then be told what the next event actually was:

        predictor = new_predictor()
        prev_event = False
        while True:
            event = get_event()
            if prev_event is not False:
                predictor.last_event_was(prev_event)
            predicted_event = predictor.predict_next_event(event)
            prev_event = event

    The question arises of how long a history the predictor should maintain, since maintaining infinite history will not be possible. I'll leave this up to you to answer; the answer can't be infinite, though, for practicality's sake. So I believe the predictions will have to be done with some kind of rolling history. Adding a new event and expiring an old event should therefore be rather efficient, and not require rebuilding the entire predictor model, for example. Specific code, instead of research papers, would add immense value to your responses. Python or C libraries are nice, but anything will do. Thanks! Update: and what if more than one event can happen simultaneously on each round? Does that change the solution?
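
    For what it's worth, here is one minimal way to structure such a predictor: keep a bounded history, count how often each recent context (the last k events) was followed by each event, and predict the most frequent follower, backing off to shorter contexts when a long one hasn't been seen. This is only a sketch of the idea (a small variable-order frequency counter), not a tuned solution; the API names loosely mirror the pseudocode above:

        from collections import Counter, defaultdict, deque

        class Predictor:
            """Variable-order frequency predictor over a bounded rolling history."""

            def __init__(self, max_order=3, max_history=10000):
                self.context = deque(maxlen=max_order)     # the last few events seen
                self.observations = deque()                # (context, next_event) pairs, oldest first
                self.max_history = max_history
                self.counts = defaultdict(Counter)         # context tuple -> Counter of followers

            def last_event_was(self, event):
                # record the event as the follower of every suffix of the current context
                ctx = tuple(self.context)
                for k in range(1, len(ctx) + 1):
                    suffix = ctx[-k:]
                    self.counts[suffix][event] += 1
                    self.observations.append((suffix, event))
                # expire old observations so memory stays bounded (no full rebuild needed)
                while len(self.observations) > self.max_history:
                    old_ctx, old_event = self.observations.popleft()
                    self.counts[old_ctx][old_event] -= 1
                    if self.counts[old_ctx][old_event] <= 0:
                        del self.counts[old_ctx][old_event]
                self.context.append(event)

            def predict_next_event(self):
                # try the longest known context first, then back off to shorter ones
                ctx = tuple(self.context)
                for k in range(len(ctx), 0, -1):
                    followers = self.counts.get(ctx[-k:])
                    if followers:
                        return followers.most_common(1)[0][0]
                return None

        # toy usage: feed a short repeating pattern, then ask for the next event
        p = Predictor()
        for e in ["event_1", "event_2", "event_3", "event_1", "event_2"]:
            p.last_event_was(e)
        print(p.predict_next_event())   # "event_3"

    Handling simultaneous events would mostly change the alphabet, not the structure: each "event" becomes a frozenset of the events seen in that round, and the same counting scheme applies.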

    Read the article

  • Business Layer Pattern on Rails? MVCL

    - by Fabiano PS
    That is a broad question, and I'd appreciate no short/dumb answers like "Oh, that is the model's job; this question is retarded (period)".

    PROBLEM: Where I work, people spent 2 years building a system for managing the manufacturing process on demand (selling, buying, assembling) in the most simplified yet broad way possible. The system is coded on Ruby on Rails. It has been changed lots of times, and the result is a mess of callbacks (some are called several times), 200+ models, and fat controllers: totally bad.

    The QUESTION is: is there a gem or pattern designed to handle the logic of a large Rails app? The logic would be able to talk fully to the models (whose only concern would be data-format handling and validation). What I EXPECT is to move complexity out of the various controllers and hard-to-track callbacks into files whose responsibility is to handle the logic of one business operation. In some cases there is a need to wait for a response; in others, validating the input is enough and a background process would take over. For example, to sell some products (where we need to wait for the operation to finish): 1) set up a view able to collect the products input; 2) the controller gets the product list entered by the employee and calls the logic:

        Logic::ExecuteWithResponse('sell', 'products',
            :prods    => @product_list_with_qtt,
            :when     => @date,
            :employee => current_user())

    This logic would handle the buying order, assembly order, machine schedule, warehouse reservation, and others.
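
    What's being described here is commonly handled with plain "service objects": one class per business operation, sitting between the controllers and the models, so controllers stay thin and callbacks shrink. The shape of the pattern is sketched below in Python only to show the structure; the class, method and key names are invented, and in a Rails app these would simply be Ruby classes under something like app/services:

        class SellProducts:
            """One use case = one object; the controller builds it and calls it."""

            def __init__(self, products, when, employee):
                self.products = products
                self.when = when
                self.employee = employee

            def call(self):
                # validate the input, then orchestrate the models involved in the sale
                if not self.products:
                    return {"ok": False, "error": "no products given"}
                # ... create buying order, assembly order, machine schedule, warehouse reservation ...
                return {"ok": True, "order_ids": [1, 2, 3]}   # placeholder result

        # controller-level code reduces to constructing the service and invoking it
        result = SellProducts(products=[{"sku": "X", "qty": 2}],
                              when="2010-05-01", employee="bob").call()
        print(result["ok"])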

    Read the article

  • recursive delete trigger and ON DELETE CASCADE contraints are not deleting everything

    - by bitbonk
    I have a very simple data model that represents a tree structure: the RootEntity is the root of such a tree, and it can contain children of type ContainerEntity and of type AtomEntity. The type ContainerEntity can again contain children of type ContainerEntity and of type AtomEntity, but cannot contain children of type RootEntity. Children are referenced in a well-known order. The DB model for this is below. My problem is that when I delete a RootEntity, I want all children to be deleted recursively. I have created foreign keys with CASCADE DELETE and two delete triggers for this, but it is not deleting everything: it always leaves some items behind in the ContainerEntity, AtomEntity, ContainerEntity_Children and AtomEntity_Children tables, seemingly beginning at recursion level 3.

        CREATE TABLE RootEntity (
            Id   UNIQUEIDENTIFIER NOT NULL,
            Name VARCHAR(500) NOT NULL,
            CONSTRAINT PK_RootEntity PRIMARY KEY NONCLUSTERED (Id)
        );

        CREATE TABLE ContainerEntity (
            Id   UNIQUEIDENTIFIER NOT NULL,
            Name VARCHAR(500) NOT NULL,
            CONSTRAINT PK_ContainerEntity PRIMARY KEY NONCLUSTERED (Id)
        );

        CREATE TABLE AtomEntity (
            Id   UNIQUEIDENTIFIER NOT NULL,
            Name VARCHAR(500) NOT NULL,
            CONSTRAINT PK_AtomEntity PRIMARY KEY NONCLUSTERED (Id)
        );

        CREATE TABLE RootEntity_Children (
            ParentId               UNIQUEIDENTIFIER NOT NULL,
            OrderIndex             INT NOT NULL,
            ChildContainerEntityId UNIQUEIDENTIFIER NULL,
            ChildAtomEntityId      UNIQUEIDENTIFIER NULL,
            ChildIsContainerEntity BIT NOT NULL,
            CONSTRAINT PK_RootEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex),
            -- foreign key to parent RootEntity
            CONSTRAINT FK_RootEntiry_Children__RootEntity FOREIGN KEY (ParentId)
                REFERENCES RootEntity (Id) ON DELETE CASCADE,
            -- foreign key to referenced (child) ContainerEntity
            CONSTRAINT FK_RootEntiry_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId)
                REFERENCES ContainerEntity (Id) ON DELETE CASCADE,
            -- foreign key to referenced (child) AtomEntity
            CONSTRAINT FK_RootEntiry_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId)
                REFERENCES AtomEntity (Id) ON DELETE CASCADE
        );

        CREATE TABLE ContainerEntity_Children (
            ParentId               UNIQUEIDENTIFIER NOT NULL,
            OrderIndex             INT NOT NULL,
            ChildContainerEntityId UNIQUEIDENTIFIER NULL,
            ChildAtomEntityId      UNIQUEIDENTIFIER NULL,
            ChildIsContainerEntity BIT NOT NULL,
            CONSTRAINT PK_ContainerEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex),
            -- foreign key to parent ContainerEntity
            CONSTRAINT FK_ContainerEntity_Children__RootEntity FOREIGN KEY (ParentId)
                REFERENCES ContainerEntity (Id) ON DELETE CASCADE,
            -- foreign key to referenced (child) ContainerEntity
            CONSTRAINT FK_ContainerEntity_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId)
                REFERENCES ContainerEntity (Id) ON DELETE CASCADE,
            -- foreign key to referenced (child) AtomEntity
            CONSTRAINT FK_ContainerEntity_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId)
                REFERENCES AtomEntity (Id) ON DELETE CASCADE
        );

        CREATE TRIGGER Delete_RootEntity_Children ON RootEntity_Children FOR DELETE AS
            DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted)
            DELETE FROM AtomEntity      WHERE Id IN (SELECT ChildAtomEntityId      FROM deleted)
        GO

        CREATE TRIGGER Delete_ContainerEntiy_Children ON ContainerEntity_Children FOR DELETE AS
            DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted)
            DELETE FROM AtomEntity      WHERE Id IN (SELECT ChildAtomEntityId      FROM deleted)
        GO

    Read the article

  • Taking the data mapper approach in Zend Framework

    - by Seeker
    Let's assume the following table setup for a Zend Framework app:

        user          (id)
        groups        (id)
        groups_users  (id, user_id, group_id, join_date)

    I took the data mapper approach to models, which basically gives me:

        Model_User,      Model_UsersMapper,       Model_DbTable_Users
        Model_Group,     Model_GroupsMapper,      Model_DbTable_Groups
        Model_GroupUser, Model_GroupsUsersMapper, Model_DbTable_GroupsUsers

    (the last set holds the relationships, which can be seen as entities in their own right; notice the "join_date" property). I'm defining the _referenceMap in Model_DbTable_GroupsUsers:

        protected $_referenceMap = array(
            'User' => array(
                'columns'       => array('user_id'),
                'refTableClass' => 'Model_DbTable_Users',
                'refColumns'    => array('id')
            ),
            'App' => array(
                'columns'       => array('group_id'),
                'refTableClass' => 'Model_DbTable_Groups',
                'refColumns'    => array('id')
            )
        );

    I have these design problems in mind:

    1) Model_Group only mirrors the fields in the groups table. How can I return a collection of the groups a user is a member of, together with the date the user joined each group? If I just added the property to the domain object, I'd have to let the group mapper know about it, wouldn't I?

    2) Let's say I need to fetch the groups a user belongs to. Where should I put this logic: Model_UsersMapper or Model_GroupsUsersMapper? I also want to make use of the reference map (dependent tables) mechanism and probably use findManyToManyRowset or findDependentRowset, something like:

        $result = $this->getDbTable()->find($userId);
        $row    = $result->current();
        $groups = $row->findManyToManyRowset(
            'Model_DbTable_Groups',
            'Model_DbTable_GroupsUsers'
        );

    This would produce two queries when I could have written it as a single query. I will place this in the Model_GroupsUsersMapper class. An enhancement would be to add a getGroups method to the Model_User domain object which lazily loads the groups when needed by calling the appropriate method in the data mapper, which begs the second question: should I allow the domain object to know about the data mapper?

    Read the article

  • structured vs. unstructured data in db

    - by Igor
    The question is one of design. I'm gathering a big chunk of performance data with lots of key-value pairs: pretty much everything in /proc/cpuinfo, /proc/meminfo, /proc/loadavg, plus a bunch of other stuff, from several hundred hosts. Right now, I just need to display the latest chunk of data in my UI. I will probably end up doing some analysis of the gathered data to figure out performance problems down the road, but this is a new application, so I'm not sure what exactly I'm looking for performance-wise just yet. I could structure the data in the db, with a column for each key I'm gathering: the table would end up being O(100) columns wide, it would be a pain to put into the db, and I would have to add new columns if I start gathering a new stat, but it would be easy to sort/analyze the data just using SQL. Or I could just dump my unstructured data blob into the table: maybe three columns (host id, timestamp, and a serialized version of my array, probably using JSON in a TEXT field). Which should I do? Am I going to be sorry if I go with the unstructured approach? When doing analysis, should I just convert the fields I'm interested in and create a new, more structured table? What are the trade-offs I'm missing here?
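
    One common middle ground is to promote only the handful of keys you already know you'll query (host, timestamp, maybe a load average) to real columns and serialize the rest as a blob, splitting more fields out into a structured table later once the analysis needs are clear. A small sketch of that hybrid, using SQLite and JSON purely for illustration; the table and stat names are made up:

        import json
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE perf_sample (
                host_id   TEXT,
                taken_at  TEXT,
                loadavg_1 REAL,          -- promoted: we know we'll filter/sort on this
                raw_stats TEXT           -- everything else, serialized as JSON
            )
        """)

        stats = {"cpu_mhz": 2933.0, "mem_total_kb": 8056712, "loadavg_1": 0.42, "processes": 312}
        conn.execute(
            "INSERT INTO perf_sample VALUES (?, ?, ?, ?)",
            ("web01", "2010-05-01T12:00:00", stats["loadavg_1"], json.dumps(stats)),
        )

        # the promoted column is cheap to query...
        row = conn.execute("SELECT host_id, raw_stats FROM perf_sample WHERE loadavg_1 > 0.1").fetchone()
        # ...and the remaining keys are still recoverable for later analysis
        print(row[0], json.loads(row[1])["processes"])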

    Read the article

  • DDD and Client/Server apps

    - by Christophe Herreman
    I was wondering if any of you had successfully implemented DDD in a client/server app and would like to share some experiences. We are currently working on a smart client in Flex and a backend in Java. On the server we have a service layer exposed to the client that offers CRUD operations amongst some other service methods. I understand that in DDD these services should be repositories, and that services should be used to handle use cases that do not fit inside a repository. Right now, we mimic these services on the client behind an interface and inject implementations (web services, RMI, etc.) via an IoC container. So some questions arise: 1) Should the server expose repositories to the client, or do we need some sort of facade (one that is able to handle security, for instance)? 2) Should the client implement repositories (and DDD in general?), given that on the client most of the logic is view-related and the real business logic lives on the server? All communication with the server happens asynchronously, and we have a single-threaded programming model on the client. 3) How about mapping client to server objects and vice versa? We tried DTOs but reverted to exposing the state of our objects and mapping directly to them (I know this is considered bad practice, but it saves us an incredible amount of time). In general I think a new generation of applications is coming with the growth of Flex, Silverlight and JavaFX, and I'm curious how DDD fits into this.

    Read the article

  • Database migration pattern for Java?

    - by Eno
    I'm working on some database migration code in Java. I'm also using a factory pattern so I can use different kinds of databases, and each kind of database I use implements a common interface. What I would like to do is have a migration check that is internal to the class and runs some database schema update code automatically. The actual update is pretty straightforward (I check the schema version in a table and compare it against a constant in my app to decide whether to migrate or not, and between which versions of the schema). To make this automatic, I was thinking the test should live inside (or be called from) the constructor. OK, fair enough, that's simple enough. My problem is that I don't want the test to run every single time I instantiate a database object (it runs a query, so having it run on every construction is not efficient). So maybe this should be a class static method? I guess my question is: what is a good design pattern for this type of problem? There ought to be a clean way to ensure the migration test runs only once OR is super-efficient.
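
    The usual shape for "run the check at most once per process, but still trigger it from construction" is a guarded class-level flag (in Java this would typically be a static boolean or a static initializer on each concrete database class). A sketch of the pattern, written in Python here only to keep it short; the class names, version numbers and lock handling are invented for illustration:

        import threading

        class Database:
            """Each concrete database runs its schema-version check at most once per process."""

            _migration_checked = False          # class-level: shared by all instances
            _lock = threading.Lock()
            APP_SCHEMA_VERSION = 4              # the version this build of the app expects

            def __init__(self, dsn):
                self.dsn = dsn
                self._ensure_schema_up_to_date()

            @classmethod
            def _current_schema_version(cls):
                # placeholder: the real class would query the schema_version table here
                return 3

            def _ensure_schema_up_to_date(self):
                cls = type(self)
                if cls._migration_checked:      # cheap fast path on every later construction
                    return
                with cls._lock:                 # guard against two threads migrating at once
                    if cls._migration_checked:
                        return
                    if cls._current_schema_version() < cls.APP_SCHEMA_VERSION:
                        self._migrate()
                    cls._migration_checked = True

            def _migrate(self):
                print("running migration scripts up to version", self.APP_SCHEMA_VERSION)

        db1 = Database("db://first")    # triggers the (single) schema check
        db2 = Database("db://second")   # skips it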

    Read the article

  • Extend base type and automatically update audit information on Entity

    - by Nix
    I have an entity model that has audit information on every table (50+ tables): CreateDate, CreateUser, UpdateDate, UpdateUser. Currently we are updating the audit information programmatically, e.g.:

        if (changed)
        {
            entity.UpdatedOn = DateTime.Now;
            entity.UpdatedBy = Environment.UserName;
            context.SaveChanges();
        }

    But I am looking for a more automated solution. During SaveChanges, if an entity is created or updated, I would like to automatically update these fields before sending them to the database for storage. Any suggestion on how I could do this? I would prefer not to do any reflection, so using a text template is not out of the question. A solution has been proposed to override SaveChanges and do it there, but in order to achieve this I would either have to use reflection (which I don't want to do) or derive from a base class. Assuming I go down this route, how would I achieve this? For example:

        EXAMPLE_DB_TABLE
            CODE
            NAME
            -- Audit columns
            CREATE_DATE
            CREATE_USER
            UPDATE_DATE
            UPDATE_USER

    And if I create a base class:

        public abstract class IUpdatable
        {
            public virtual DateTime CreateDate { get; set; }
            public virtual string CreateUser { get; set; }
            public virtual DateTime UpdateDate { get; set; }
            public virtual string UpdateUser { get; set; }
        }

    the end goal is to be able to do something like:

        public override void SaveChanges()
        {
            // Go through the state manager and update audit information.
            // FOREACH changed entity in the state manager:
            if (entity is IUpdatable)
            {
                // If the state is Added, update the create audit fields.
                // If the state is Modified, update the update audit fields.
            }
        }

    But I am not sure how to go about generating the code that would extend the interface.

    Read the article

  • fluent nHibernate mapping of subclassed structure

    - by Codezy
    I have a workflow class that has a collection of phases, and each phase has a collection of tasks. You can design a workflow that will be used by many engagements. When a workflow is used in an engagement, I want to be able to add properties to each class (workflow, phase, and task). For example, a task in the designer does not have people assigned, but a task in an engagement would need extra properties, such as who is assigned to it. I have tried many different approaches using subclasses or interfaces, but I just can't get it to map the way I want. Currently I have the engagement-level versions as subclasses, but I can't get engagement phases to map to engagement workflows.

        Public Class WorkflowMapping
            Inherits ClassMap(Of Workflow)

            Sub New()
                Id(Function(x As Workflow) x.Id).Column("Workflow_Id").GeneratedBy.Identity()
                Map(Function(x As Workflow) x.Description)
                Map(Function(x As Workflow) x.Generation)
                Map(Function(x As Workflow) x.IsActive)
                HasMany(Function(x As Workflow) x.Phases).Cascade.All()
            End Sub
        End Class

        Public Class EngagementWorkflowMapping
            Inherits SubclassMap(Of EngagementWorkflow)

            Sub New()
                Map(Function(x As EngagementWorkflow) x.ClientNo)
                Map(Function(x As EngagementWorkflow) x.EngagementNo)
            End Sub
        End Class

    How would you approach mapping this in Fluent (or hbm) so that you could load just the workflow base class when designing the flow, or the engagement subclass versions of each when they are used by an engagement?

    Read the article

  • Flexible Decorator Pattern?

    - by Omar Kooheji
    I was looking for a pattern to model something I'm thinking of doing in a personal project, and I was wondering if a modified version of the decorator pattern would work. Basically, I'm thinking of creating a game where the characters' attributes are modified by what items they have equipped. The way the decorator stacks its modifications is perfect for this; however, I've never seen a decorator that allows you to drop intermediate decorators, which is what would happen when items are unequipped. Does anyone have experience using the decorator pattern in this way, or am I barking up the wrong tree?

    Clarification: To explain "intermediate decorators": if, for example, my base class is coffee, which is decorated with milk, which is decorated with sugar (using the example in Head First Design Patterns), then milk would be an intermediate decorator, as it decorates the base coffee and is decorated by the sugar.

    Yet more clarification :) The idea is that items change stats; I'd agree that I am shoehorning the decorator into this, and I'll look into the state bag. Essentially I want a single point of call for the statistics, and for them to go up/down when items are equipped/unequipped. I could just apply the modifiers to the character's stats on equipping and roll them back when unequipping, or, whenever a stat is asked for, iterate through all the items and calculate the stat. I'm just looking for feedback here; I'm aware that I might be using a chainsaw where scissors would be more appropriate...
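
    The "drop an intermediate decorator" requirement is usually what pushes this design from a chain of wrappers to a flat bag of removable modifiers: the character keeps its base stats plus a list of modifiers, and effective stats are computed on demand, so any item can be unequipped regardless of the order in which it was added. A toy sketch of that approach (stat names and numbers are invented):

        class Character:
            def __init__(self, **base_stats):
                self.base = base_stats
                self.modifiers = []                  # flat, order-independent list of (item, stat, delta)

            def equip(self, item_name, stat, delta):
                self.modifiers.append((item_name, stat, delta))

            def unequip(self, item_name):
                # removing an "intermediate" item is trivial because nothing wraps anything
                self.modifiers = [m for m in self.modifiers if m[0] != item_name]

            def stat(self, name):
                # single point of call: base value plus whatever currently-equipped items add
                return self.base.get(name, 0) + sum(d for _, s, d in self.modifiers if s == name)

        hero = Character(strength=10, agility=8)
        hero.equip("iron sword", "strength", 4)
        hero.equip("leather boots", "agility", 2)
        hero.equip("belt of might", "strength", 1)
        hero.unequip("iron sword")                   # drop the "middle" modifier freely
        print(hero.stat("strength"), hero.stat("agility"))   # 11 10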

    Read the article

  • Generics with constraints hierarchy

    - by devoured elysium
    I am currently facing a very disturbing problem:

        interface IStateSpace<Position, Value>
            where Position : IPosition  // <-- Problem starts here
            where Value : IValue        // <-- and here, as I don't know how to get away from
                                        //     this circular dependency! Notice how I should be
                                        //     defining generic parameters here but I can't!
        {
            Value GetStateAt(Position position);
            void SetStateAt(Position position, State state);
        }

    As you can see below, IPosition, IValue and IState all depend on each other. How am I supposed to get around this? I can't think of any other design that circumvents this circular dependency and still describes exactly what I want to do!

        interface IState<StateSpace, Value>
            where StateSpace : IStateSpace
            where Value : IValue
        {
            StateSpace StateSpace { get; }
            Value Value { get; set; }
        }

        interface IPosition
        {
        }

        interface IValue<State> where State : IState
        {
            State State { get; }
        }

    Basically I have a state space IStateSpace that has states IState inside. Their position in the state space is given by an IPosition. Each state then has one (or more) values IValue. I am simplifying the hierarchy, as it's a bit more complex than described. The idea of defining this hierarchy with generics is to allow for different implementations of the same concepts (an IStateSpace will be implemented both as a matrix and as a graph, etc.). How can I get around this? How do you generally solve this kind of problem? What kinds of designs are used in these cases? Thanks

    Read the article

  • [Smalltalk] Store List of Instruction

    - by Luciano Lorenti
    Hi all, I have a design problem. I have a Drawer class which invokes a series of methods on a kind-of-brush class, and I have predefined shapes which I want to draw. Each shape uses a list of instance methods from the drawer. I can have more than one brush object. I want to add custom shapes at runtime to the drawer instance, specifying the list of methods of the new shape. I've created a class method for every predefined shape that returns a BlockClosure with the instructions. Obviously I have to give each BlockClosure the brush object as a parameter. I imagine a collection with all the BlockClosures in each instance of the Drawer class. Maybe I can inherit from SequenceableCollection and make an instruction collection: each element of the collection is an instruction, and I supply the brush object when I instantiate this new collection. I really don't know the best way to store these steps. (Maybe a shared variable?)

    Read the article

  • What is the equivalent of .NET events in Ruby?

    - by Gishu
    The problem is very simple: an object needs to notify events that might be of interest to observers. When I sat down to validate a design that I cooked up, now in Ruby, I found myself stumped as to how to implement the object's events. In .NET this would be a one-liner; .NET also does handler method signature verification, etc. E.g.:

        // Object with events
        public delegate void HandlerSignature(int a);
        public event HandlerSignature MyEvent;
        public event HandlerSignature AnotherCriticalEvent;

        // Client
        MyObject.MyEvent += new HandlerSignature(MyHandlerMethod); // MyHandlerMethod has the same signature as the delegate

    Is there an EventDispatcher module or something that I am missing that I can strap onto a Ruby class? I'm hoping for an answer that plays along with Ruby's principle of least surprise. An event would be the name of the event plus a queue of [observer, methodName] pairs that need to be invoked when the event takes place.
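
    To pin down the mechanics being asked for (a named event holding a queue of [observer, methodName] pairs that get invoked when the event fires), here is that exact structure sketched in Python; this is not Ruby and not any particular library, just an illustration of the shape, with all names invented:

        class Event:
            """A named event holding a queue of (observer, method_name) pairs."""

            def __init__(self, name):
                self.name = name
                self.handlers = []                       # list of (observer, method_name)

            def subscribe(self, observer, method_name):  # roughly the += side of a .NET event
                self.handlers.append((observer, method_name))

            def unsubscribe(self, observer, method_name):
                self.handlers.remove((observer, method_name))

            def fire(self, *args):
                for observer, method_name in self.handlers:
                    getattr(observer, method_name)(*args)

        class Logger:
            def on_changed(self, value):
                print("changed to", value)

        my_event = Event("MyEvent")
        log = Logger()
        my_event.subscribe(log, "on_changed")
        my_event.fire(42)        # -> "changed to 42"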

    Read the article

  • Can the Diamond Problem be really solved?

    - by Mecki
    A typical problem in OO programming is the diamond problem. I have a parent class A with two sub-classes B and C. A has an abstract method, and B and C implement it. Now I have a sub-class D that inherits from both B and C. The diamond problem is: which implementation shall D use, the one from B or the one from C? People claim Java knows no diamond problem: you can only have multiple inheritance with interfaces, and since they have no implementation, there is no diamond problem. Is this really true? I don't think so. See below: [removed vehicle example] Is a diamond problem always the result of bad class design, and something neither the programmer nor the compiler needs to solve because it shouldn't exist in the first place? Update: Maybe my example was poorly chosen. See this image. Of course you can make Person virtual in C++, and thus you will have only one instance of Person in memory, but the real problem persists, IMHO: how would you implement getDepartment() for GradTeachingFellow? Consider that he might be a student in one department and teach in another. So you can return either one department or the other; there is no perfect solution to the problem, and the fact that no implementation might be inherited (e.g. Student and Teacher could both be interfaces) doesn't seem to solve the problem to me.
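
    To make the "no perfect answer" point concrete, here is the same diamond sketched in Python, which allows multiple inheritance and resolves the clash mechanically through its method resolution order: the language picks an implementation, but GradTeachingFellow still has to decide for itself what get_department should actually mean. Class names follow the example above; the bodies are invented:

        class Person:
            def __init__(self, name):
                self.name = name

        class Student(Person):
            def __init__(self, name, study_department):
                super().__init__(name)
                self.study_department = study_department

            def get_department(self):
                return self.study_department

        class Teacher(Person):
            def __init__(self, name, teaching_department):
                super().__init__(name)
                self.teaching_department = teaching_department

            def get_department(self):
                return self.teaching_department

        class GradTeachingFellow(Student, Teacher):
            def __init__(self, name, study_department, teaching_department):
                Person.__init__(self, name)          # initialise the shared base once
                self.study_department = study_department
                self.teaching_department = teaching_department

            def get_department(self):
                # without this override, the MRO would silently pick Student.get_department;
                # the semantic choice still has to be made here, e.g. return both
                return (self.study_department, self.teaching_department)

        gtf = GradTeachingFellow("Ana", "Maths", "Physics")
        print(GradTeachingFellow.__mro__[1:3])   # (Student, Teacher): the language's default pick order
        print(gtf.get_department())              # ('Maths', 'Physics')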

    Read the article
