Search Results

Search found 14199 results on 568 pages for 'dirty bird design'.


  • SQL Databases and table design/organization

    - by John McMullen
    (Noob disclaimer) I'm working on a system (a type of map) that is accessed mostly via three fields: ID (auto-incremented), X coordinate, and Y coordinate. As it stands, all data on the map is stored in one table. Whenever the map display loads, it simply queries the database for the contents at x and y, and the DB returns the data (the other fields in the same entry). If an item on the map is doing something, it has a flag saying it is doing something, plus the ID of the action in another table holding that type of 'action'.

    Essentially, all map data is stored in one table, and all actions of a certain type are stored in their own table. I'm a noob, and I'm wondering what the most effective/efficient structure is for such a design (a map that has items, and each item has stats/actions). I'm using PHP at the moment, with standard SQL queries to get my data. Should I split up the tables so that there is only some fixed number of entries per table (coordinate range limits)? Or should one table just keep growing and growing? There are a lot of queries against the table, so I'm just trying to see what is best.

    Read the article

  • C++ design related question

    - by Kotti
    Hi! Here is the question's plot: suppose I have an abstract base class for objects, call it Object. Its definition includes a 2D position and dimensions. It also has a pure virtual method used for rendering:

        virtual void Render(Backend& backend) const = 0;

    Now I specialize my inheritance tree and add Rectangle and Ellipse classes. I guess they won't have properties of their own, but they will each have their own virtual Render method. Let's say I implemented these methods so that Render for Rectangle actually draws a rectangle, and likewise for the ellipse.

    Now I add an object called Plane, which is defined as class Plane : public Rectangle and has a private member std::vector<Object*> plane_objects; Right after that I add a method to add an object to my plane, and here comes the question.

    If I design this method as void AddObject(Object& object), I face trouble: I won't be able to call the virtual functions, because I would have to do something like plane_objects.push_back(new Object(object)); and this should be push_back(new Rectangle(object)) for rectangles and new Ellipse(...) for ellipses.

    If I implement this method as void AddObject(Object* object), it looks good, but then somewhere else this means making a call like plane.AddObject(new Rectangle(params)); and this is generally a mess because it is no longer clear which part of my program should free the allocated memory. (When destroying the plane? Why? Are we sure that calls to AddObject were only ever made as AddObject(new something)?)

    I guess the problems caused by the second approach could be solved using smart pointers, but I am sure there has to be something better. Any ideas?

    Read the article

  • What design pattern to use for one big method calling many private methods

    - by Jeune
    I have a class with a big method that calls a lot of private methods. I want to extract those private methods into their own classes, partly because they contain business logic and I think they should be public so they can be unit tested. Here's a sample of the code:

        public void handleRow(Object arg0) {
            if (continueRunning) {
                hashData = (HashMap<String, Object>) arg0;
                Long stdReportId = null;
                Date effDate = null;
                if (stdReportIds != null) {
                    stdReportId = stdReportIds[index];
                }
                if (effDates != null) {
                    effDate = effDates[index];
                }
                initAndPutPriceBrackets(hashData, stdReportId, effDate);
                putBrand(hashData, stdReportId, formHandlerFor == 0 ? true : useLiveForFirst);
                putMultiLangDescriptions(hashData, stdReportId);
                index++;
                if (stdReportIds != null && stdReportIds[0].equals(stdReportIds[1])) {
                    continueRunning = false;
                }
                if (formHandlerFor == REPORTS) {
                    putBeginDate(hashData, effDate, custId);
                }
                // handle logic that is related to price maps
                lstOfData.add(hashData);
            }
        }

    What design pattern should I apply to this problem?

    Read the article

  • Design an Application That Stores and Processes Files

    - by phasetwenty
    I'm tasked with writing an application that acts as a central storage point for files (usually document formats) provided by other applications. It also needs to take commands like "file 395 needs a copy in X format", at which point some work is offloaded to a third-party application.

    I'm having trouble coming up with a strategy for this. I'd like to keep the design as simple as possible, so I'd like to avoid big extra frameworks or techniques like threads for as long as it makes sense. The clients are expected to be web applications (for example, one is a Django application that receives files from our customers; the others are not yet implemented). The platform will likely be Python on Linux, unless I have a strong argument to use something else.

    In the beginning I thought I could fit the information I wanted to communicate into the filenames and let my application parse the filename to figure out what it needed to do, but this is proving too inflexible given the amount of information I'm realizing I need to make available. Another idea is to pair FTP with a database used as a communication medium (the client uploads a file and updates the database with a command as a row in a table), but I don't like this idea because adding commands (a known change) looks like it will require adding code as well as changing database schemas. It will also muddy up the interface my clients have to use. I looked into Pyro to let applications communicate more directly, but I don't like the idea of running an extra name server for this one purpose, and I don't see a good way to do file transfer within that framework.

    What I'm looking for is techniques and/or technologies applicable to my problem. At the simplest level, I need the ability to accept files and messages along with them.

    Read the article

  • Design by contracts and constructors

    - by devoured elysium
    I am implementing my own ArrayList for school purposes, but to spice things up a bit I'm trying to use C# 4.0 Code Contracts. All was fine until I needed to add contracts to the constructors. Should I add Contract.Ensures() in the parameterless constructor?

        public ArrayList(int capacity)
        {
            Contract.Requires(capacity > 0);
            Contract.Ensures(Size == capacity);
            _array = new T[capacity];
        }

        public ArrayList() : this(32)
        {
            Contract.Ensures(Size == 32);
        }

    I'd say yes: each method should have a well-defined contract. On the other hand, why state it if the constructor is just delegating work to the "main" constructor? Logic-wise, I wouldn't need to. The only point where I can see it being useful to explicitly define the contract in both constructors is if, in the future, we get IntelliSense support for contracts. If that happened, it would be useful to be explicit about which contracts each method has, as they would appear in IntelliSense.

    Also, are there any books that go a bit deeper into the principles and usage of Design by Contract? Knowing the syntax for using contracts in a language (C#, in this case) is one thing; knowing how and when to use them is another. I've read several tutorials and Jon Skeet's C# in Depth article about it, but I'd like to go a bit deeper if possible. Thanks
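
    A minimal sketch of one alternative, not from the original post: state the guarantees shared by all constructors once as an object invariant, so the delegating constructor does not have to restate them. The assumption here is that Size is backed by _array.Length; all names beyond those in the question are illustrative.

        using System.Diagnostics.Contracts;

        public class ArrayList<T>
        {
            private T[] _array;

            public int Size { get { return _array.Length; } }

            public ArrayList(int capacity)
            {
                Contract.Requires(capacity > 0);
                Contract.Ensures(Size == capacity);
                _array = new T[capacity];
            }

            // The delegating constructor no longer restates the shared guarantees.
            public ArrayList() : this(32) { }

            [ContractInvariantMethod]
            private void ObjectInvariant()
            {
                // Checked after every public method and constructor.
                Contract.Invariant(_array != null);
                Contract.Invariant(Size > 0);
            }
        }

    Constructor-specific postconditions (such as Size == 32 for the default constructor) still belong in that constructor; the invariant only covers what every instance must satisfy.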

    Read the article

  • Java: how to handle the design when an overriding method needs to throw an exception the template method does not declare

    - by jiafu
    While coding, I'm trying to solve this puzzle: how do I design the class/methods when InputStreamDigestComputor needs to throw IOException? It seems I can't use this design structure, because the template method doesn't throw the exception but the overriding method does. But if I change the overridden method's signature to throw it, every other subclass will have to throw it too. Can anyone suggest a good approach for this case?

        abstract class DigestComputor {
            String compute(DigestAlgorithm algorithm) {
                MessageDigest instance;
                try {
                    instance = MessageDigest.getInstance(algorithm.toString());
                    updateMessageDigest(instance);
                    return hex(instance.digest());
                } catch (NoSuchAlgorithmException e) {
                    LOG.error(e.getMessage(), e);
                    throw new UnsupportedOperationException(e.getMessage(), e);
                }
            }

            abstract void updateMessageDigest(MessageDigest instance);
        }

        class ByteBufferDigestComputor extends DigestComputor {
            private final ByteBuffer byteBuffer;

            public ByteBufferDigestComputor(ByteBuffer byteBuffer) {
                super();
                this.byteBuffer = byteBuffer;
            }

            @Override
            void updateMessageDigest(MessageDigest instance) {
                instance.update(byteBuffer);
            }
        }

        class InputStreamDigestComputor extends DigestComputor {
            // This is where the error occurs: the override needs to throw IOException,
            // but if I change the overridden method to declare it, every caller has to handle it.
            @Override
            void updateMessageDigest(MessageDigest instance) {
                throw new IOException();
            }
        }

    Read the article

  • Is this a design pattern?

    - by Michel
    Hi all, I have to build a financial data report, and to do the calculation there are a lot of 'if/then' situations: if it's a large client, subtract 10%; if the postal code equals '10101', add 10%; if the day is a Saturday, do a difficult calculation; etc.

    I once read about this kind of example, and what they did (I hope I remember well) was create a class with some base info and make it possible to add all kinds of calculation objects to it. To put what I remember in pseudo-code:

        Basecalc bc = new Basecalc();
        // put the info in bc so other objects can do their 'if'
        bc.Add(new LargeCustomerCalc());
        bc.Add(new PostalCodeCalc());
        bc.Add(new WeekdayCalc());

    Then bc would run the Calc() methods of all of the added calc objects. As I type this, I realize that all the calc objects must be able to see the Basecalc properties to perform their calculation logic correctly. So all the ifs live in the different calc objects and not all in Basecalc. Does this make sense? I was wondering whether this is some kind of design pattern?
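
    A minimal C# sketch of the idea being described: a list of pluggable calculation rules run over a shared context, in the spirit of the Strategy pattern. It is not from the original post, and all type names and percentages are illustrative assumptions.

        using System;
        using System.Collections.Generic;

        public interface ICalcRule
        {
            // Each rule inspects the context and adjusts the running amount.
            decimal Apply(CalcContext context, decimal amount);
        }

        public class CalcContext
        {
            public bool IsLargeCustomer { get; set; }
            public string PostalCode { get; set; }
            public DateTime Date { get; set; }
        }

        public class LargeCustomerRule : ICalcRule
        {
            public decimal Apply(CalcContext c, decimal amount)
            {
                return c.IsLargeCustomer ? amount * 0.90m : amount;
            }
        }

        public class PostalCodeRule : ICalcRule
        {
            public decimal Apply(CalcContext c, decimal amount)
            {
                return c.PostalCode == "10101" ? amount * 1.10m : amount;
            }
        }

        public class BaseCalc
        {
            private readonly List<ICalcRule> _rules = new List<ICalcRule>();

            public void Add(ICalcRule rule) { _rules.Add(rule); }

            // Runs every added rule, in order, over the running amount.
            public decimal Calc(CalcContext context, decimal amount)
            {
                foreach (var rule in _rules)
                    amount = rule.Apply(context, amount);
                return amount;
            }
        }

    Because each rule receives the context as a parameter, the individual ifs stay inside the rule classes rather than accumulating in BaseCalc, which matches the structure the question remembers.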

    Read the article

  • C++ socket protocol design issue (ring inclusion)

    - by Martin Lauridsen
    So I have these two classes, mpqs_client and client_protocol. The mpqs_client class handles a Boost socket connection to a server (sending and receiving messages with some specific format). Upon receiving a message, it calls a static method, parse_message(..), in the client_protocol class, and this method should analyse the received message and perform some corresponding action.

    Given some specific input, the parse_message method needs to send data back to the server, and as mentioned, that happens through the mpqs_client class. So I could, from mpqs_client, pass "this" to parse_message(..) in client_protocol. However, this leads to a two-way association between the two classes, which I reckon is not desirable. Also, to implement it I would need to include each class in the other, which gives me a terrible pain. I'm thinking this is more of a design issue. What is the best solution here? Code is posted below.

    Class mpqs_client:

        #include "mpqs_client.h"

        mpqs_client::mpqs_client(boost::asio::io_service& io_service,
                                 tcp::resolver::iterator endpoint_iterator)
            : io_service_(io_service), socket_(io_service)
        {
            ...
        }

        ...

        void mpqs_client::write(const network_message& msg)
        {
            io_service_.post(boost::bind(&mpqs_client::do_write, this, msg));
        }

    Class client_protocol:

        #include "../network_message.hpp"
        #include "../protocol_consts.h"

        class client_protocol {
        public:
            static void parse_message(network_message& msg, mpqs_sieve **instance_, mpqs_client &client_)
            {
                ...
                switch (type) {
                    case MPQS_DATA:
                        ...
                        break;
                    case POLYNOMIAL_DATA:
                        ...
                        break;
                    default:
                        break;
                }
            }

    Read the article

  • Rationale of C# iterators design (comparing to C++)

    - by macias
    I found a similar topic: http://stackoverflow.com/questions/56347/iterators-in-c-stl-vs-java-is-there-a-conceptual-difference It basically deals with the Java iterator (which is similar to C#'s) being unable to go backward.

    Here I would like to focus on limits. In C++ an iterator does not know its limit; you compare the given iterator with the limit yourself. In C# the iterator knows more: you can tell, without comparing against any external reference, whether the iterator is valid or not. I prefer the C++ way, because once you have an iterator you can set any other iterator as its limit. In other words, if you want only a few elements instead of the entire collection, you don't have to alter the iterator (in C++). For me it is more "pure" (clear). But of course MS knew C++ while designing C#. So what are the advantages of the C# way? Which approach is more powerful (i.e. leads to more elegant functions based on iterators)? What do I miss?

    If you have thoughts on C# vs. C++ iterator design other than their limits (boundaries), please also answer. Note (just in case): please keep the discussion strictly technical. No C++/C# flame war.
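
    For context, a small C# sketch (not from the post) of how a "limit" is usually expressed on the C# side: the bound is composed into the sequence itself rather than passed as a second end iterator. The names are illustrative only.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class IteratorLimits
        {
            static void Main()
            {
                var numbers = Enumerable.Range(1, 100);

                // C++ style would pass (begin, end); in C# the bound is part of the sequence.
                IEnumerable<int> firstTen = numbers.Take(10);

                // A bound can also be a condition rather than a count.
                IEnumerable<int> belowFifty = numbers.TakeWhile(n => n < 50);

                Console.WriteLine(string.Join(", ", firstTen));
                Console.WriteLine(belowFifty.Count());
            }
        }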

    Read the article

  • Design issue when having classes implement different interfaces to restrict client actions

    - by devoured elysium
    Let's say I'm defining a game class that implements two different views:

        interface IPlayerView {
            void play();
        }

        interface IDealerView {
            void deal();
        }

    That is, the view a player sees when playing the game, and the view the dealer sees when dealing the game (a player can't perform dealer actions and a dealer can't perform player actions). The game definition is as follows:

        class Game : IPlayerView, IDealerView {
            void play() { ... }
            void deal() { ... }
        }

    Now assume I want to make it possible for players to play the game, but not to deal it. My original idea was that instead of having

        public Game GetGame() { ... }

    I'd have something like

        public IPlayerView GetGame() { ... }

    But after some tests I realized that the following code works and lets the user act as the dealer:

        IDealerView dealerView = (IDealerView)GameClass.GetGame();

    Am I worrying too much? How do you usually deal with this pattern? I could instead make two different classes: maybe a "main" class, the dealer class, that would act as a factory of player classes. That way I could control exactly what I expose to the public. On the other hand, that makes everything a bit more complex than the original design. Thanks
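
    A minimal sketch of the wrapper idea hinted at above: hand out an object that only forwards the player operations, so a cast to IDealerView simply fails at runtime. It is not from the original post; the member names (Play/Deal, PlayerViewWrapper, GameFactory) are assumptions written in C# naming style.

        interface IPlayerView { void Play(); }
        interface IDealerView { void Deal(); }

        class Game : IPlayerView, IDealerView
        {
            public void Play() { /* player move */ }
            public void Deal() { /* dealer move */ }
        }

        // Wraps the real game and exposes only the player-facing surface.
        class PlayerViewWrapper : IPlayerView
        {
            private readonly Game _game;
            public PlayerViewWrapper(Game game) { _game = game; }
            public void Play() { _game.Play(); }
        }

        class GameFactory
        {
            private readonly Game _game = new Game();

            // Callers receive the wrapper; casting it to IDealerView throws InvalidCastException.
            public IPlayerView GetGame() { return new PlayerViewWrapper(_game); }
        }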

    Read the article

  • GridView edit problem if the primary key is editable (design problem)

    - by Nassign
    I would like to ask about the design of a table based on its editability in a GridView. Let me explain. For example, I have a table named ProductCustomerRel.

    Method 1:

        CustomerCode  varchar  PK
        ProductCode   varchar  PK
        StoreCode     varchar  PK
        Quantity      int
        Note          text

    The combination of CustomerCode, StoreCode and ProductCode must be unique. The records are displayed in a GridView. The requirement is that you can edit the customer, product and store code, but when the data is saved the PK constraint must still hold. The problem is that while it would feel natural for the grid to edit the three primary key columns, the grid's update operation can only be achieved by first deleting the row and then inserting a row with the updated data.

    An alternative is to add a SeqNo to the table and just enforce the unique constraint on the three columns when inserting and updating from the GridView.

    Method 2:

        SeqNo         int      PK
        CustomerCode  varchar
        ProductCode   varchar
        StoreCode     varchar
        Quantity      int
        Note          text

    My question is: which of the two methods is better? Or is there another way to do this?

    Read the article

  • database table design

    - by e.b.white
    I designed the tables below for a system that looks like a package-delivery system. For example, after the user receives the package, the postman should record it in the system: the state (in the history table) is "delivered", the operator is this postman, and the current state (in the state table) is of course "delivered".

    history table:

        +-------------+---------------------+
        | Field       | Desc                |
        +-------------+---------------------+
        | id          | PRIMARY KEY         |
        | package_id  | package_tracking_id |
        | state       | package_state       |
        | operators   | operators           |
        | create_time | create_time         |
        +-------------+---------------------+

    state table:

        +-------------+----------------------+
        | Field       | Desc                 |
        +-------------+----------------------+
        | id          | PRIMARY KEY          |
        | package_id  | package_tracking_id  |
        | state       | latest_package_state |
        +-------------+----------------------+

    Above is just the basic information to record; some other information (like invoice, destination, ...) should be recorded as well. But there are different service types, like s1 and s2: for s1 it is not necessary to record the invoice but for s2 it is, and maybe s1 needs some other information recorded (like the telephone number of the end user). On top of that, at delivery way stations there is additional information to record, and for different service types that information differs.

    My questions are:
    1. For the different service types, should I declare different tables (option A) or just one big table that can record all information for all types (option B)?
    2. If option A, since the basic information above is a MUST, how can I avoid declaring duplicate fields in the different tables?

    Read the article

  • database design - empty fields

    - by imanc
    Hey, I am currently debating an issue with a guy on my dev team. He believes that empty fields are bad news. For instance, we have a customer details table that stores data for customers from different countries, and each country has a slightly different address configuration, plus one or two extra fields. French customer details, for example, may also store an entry code, a floor/level, and title fields (madame, etc.); South Africa would have a security number; and so on.

    Given that we're talking about minor variances, my idea is to put all of the fields into the table and use what is needed on each form. My colleague believes we should have a separate table with the extra data, e.g. customer_info_fr, but that seems to totally defeat the purpose of a combined table in the first place. His argument is that empty fields/columns are bad, but I'm struggling to find justification, in terms of database design principles, for or against this argument and for a preferred solution.

    Another option is a separate mini EAV table that stores the extra data with parent_id, key, and val fields, or serialising the extra data into an extra_data column in the main customer_data table. I think I am confused because what I'm discussing is not covered by 3NF, which is what I would typically use as a reference for how to structure data.

    So my question specifically: if you have slight variances in data for each record (one or two different fields, for instance), what is the best way to proceed?

    Read the article

  • Windows Forms UserControl design-time properties

    - by Raffaeu
    I am struggling with a UserControl. I have a UserControl that represents a pager, and it has a Presenter object property exposed in this way:

        [Browsable(false)]
        [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
        public object Presenter { get; set; }

    The code itself works, as I can drag and drop the control onto a Windows Form without Visual Studio initializing this property. Now, because in the Load event of this control I call a method of the Presenter, which is null at design time, I have introduced this additional code:

        public override void OnLoad(...)
        {
            if (this.DesignMode)
            {
                base.OnLoad(e);
                return;
            }
            presenter.OnViewReady();
        }

    Now, every time I open a window that contains this UserControl, Visual Studio modifies the window's designer code. So, as soon as I open it, VS asks me if I want to save it, and of course, if I add a control to the window, it doesn't keep the changes. As soon as I remove the Pager UserControl the problem disappears. How should I tackle this properly? I just don't want the Presenter property to be initialized at design time, as it is injected at runtime.
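
    A minimal sketch of one common workaround, not taken from the post: guard the presenter call with both a design-time check and a null check, using the protected OnLoad signature so nothing designer-visible changes. The IPagerPresenter interface and class names are assumptions.

        using System;
        using System.ComponentModel;
        using System.Windows.Forms;

        public interface IPagerPresenter { void OnViewReady(); }

        public class PagerControl : UserControl
        {
            [Browsable(false)]
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
            public IPagerPresenter Presenter { get; set; }

            protected override void OnLoad(EventArgs e)
            {
                base.OnLoad(e);

                // DesignMode can be unreliable for nested controls, so also check the
                // license mode, and simply skip the call when no presenter was injected yet.
                bool designTime = DesignMode ||
                                  LicenseManager.UsageMode == LicenseUsageMode.Designtime;
                if (designTime || Presenter == null)
                    return;

                Presenter.OnViewReady();
            }
        }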

    Read the article

  • Design Pattern for Changing Object

    - by user210757
    Is there a design pattern for supporting different versions of an object?

    Version 1:

        public class MyObject
        {
            public string field1 { get; set; }   /* requirement: max length 20 */
            public int field2 { get; set; }
            ...
            public decimal field200 { get; set; }
        }

    Version 2:

        public class MyObject
        {
            public string field1 { get; set; }   /* requirement: max length 40 */
            public int field2 { get; set; }
            ...
            public double field200 { get; set; } /* changed data type */
            ...                                  /* 10 new properties */
            public double field210 { get; set; }
        }

    Of course I could just have separate objects, but I thought there might be a good pattern for this sort of thing.

    Read the article

  • Design pattern for loading multiple message types

    - by lukem00
    As I was looking through SO I came across a question about handling multiple message types. My concern is: how do I load such a message in a neat way? I decided to have a separate class with a method that loads one message each time it's invoked. This method should create a new instance of a concrete message type (say AlphaMessage, BetaMessage, GammaMessage, etc.) and return it as a Message.

        class MessageLoader
        {
            public Message Load()
            {
                // ...
            }
        }

    The code inside the method looks really awful to me, and I would very much like to refactor it or get rid of it:

        Message msg = Message.Load(...); // load yourself from whatever source
        if (msg.Type == MessageType.Alpha)
            return new AlphaMessage(msg);
        if (msg.Type == MessageType.Beta)
            return new BetaMessage(msg);
        // ...

    In fact, if the whole design looks just too messy and you guys have a better solution, I'm ready to restructure the whole thing. If my description is too chaotic, please let me know what it's missing and I shall edit the question. Thank you all.
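
    A minimal sketch of one common refactoring, not from the original post: replace the if-chain with a dictionary that maps each message type to a factory delegate, so supporting a new type only adds one registration. The stub types below are assumptions standing in for the real message classes.

        using System;
        using System.Collections.Generic;

        enum MessageType { Alpha, Beta, Gamma }

        class Message { public MessageType Type { get; set; } }
        class AlphaMessage : Message { public AlphaMessage(Message m) { Type = m.Type; } }
        class BetaMessage  : Message { public BetaMessage(Message m)  { Type = m.Type; } }
        class GammaMessage : Message { public GammaMessage(Message m) { Type = m.Type; } }

        class MessageFactory
        {
            private readonly Dictionary<MessageType, Func<Message, Message>> _creators =
                new Dictionary<MessageType, Func<Message, Message>>
                {
                    { MessageType.Alpha, m => new AlphaMessage(m) },
                    { MessageType.Beta,  m => new BetaMessage(m)  },
                    { MessageType.Gamma, m => new GammaMessage(m) },
                };

            // Looks up the concrete constructor for the raw message's type.
            public Message Create(Message raw)
            {
                Func<Message, Message> create;
                if (_creators.TryGetValue(raw.Type, out create))
                    return create(raw);
                throw new NotSupportedException("Unknown message type: " + raw.Type);
            }
        }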

    Read the article

  • Design Advice Needed For Synonyms Database

    - by James J
    I'm planning to put together a database that can be used to query synonyms of words. The database will end up huge, so the idea is to keep things running fast. I've been thinking about how to do this, but my database design skills are not up to scratch these days.

    My initial idea was to store each word in one table, and then have another table with a one-to-many relationship where each word can be linked to another word, and that table can be queried. The application I'm developing allows users to highlight a word and then type in, or select, some synonyms from the database for that word. The application learns from user input, so if someone highlights "car" and types in "motor", the database is updated to link that relationship if it doesn't already exist.

    What I don't want is for a user to type in the word "shop" and link it to the word "car". So I'm thinking I will need to add some sort of weight to each relationship. Eventually the synonyms users enter will be used so that common synonyms for a given word can be auto-selected. Lower-weight words will not be displayed, so "shop" could never be a synonym of "car" unless it had a very high weight, and chances are nobody is going to do that.

    Does the above sound right? Can you offer any suggestions or improvements?

    Read the article

  • Objective-C wrapper API design methodology

    - by Wade Williams
    I know there's no one answer to this question, but I'd like to get people's thoughts on how they would approach the situation. I'm writing an Objective-C wrapper for a C library. My goals are:

    1) The wrapper uses Objective-C objects. For example, if the C API defines a parameter such as char *name, the Objective-C API should use name:(NSString *).

    2) The client using the Objective-C wrapper should not need any knowledge of the inner workings of the C library.

    Speed is not really an issue. That's all easy with simple parameters: it's certainly no problem to take in an NSString and convert it to a C string to pass it to the C library. My indecision comes in when complex structures are involved. Let's say you have:

        struct flow {
            long direction;
            long speed;
            long disruption;
            long start;
            long stop;
        } flow_t;

    And then your C API call is:

        void setFlows(flow_t inFlows[4]);

    So, some of the choices are:

    1) Expose the flow_t structure to the client and have the Objective-C API take an array of those structures.
    2) Build an NSArray of four NSDictionaries containing the properties and pass that as a parameter.
    3) Create an NSArray of four "Flow" objects containing the structure's properties and pass that as a parameter.

    My analysis of the approaches:

    Approach 1: Easiest. However, it doesn't meet the design goals.
    Approach 2: For some reason, this seems to me to be the most "Objective-C" way of doing it. However, each element of the NSDictionary would have to be wrapped in an NSNumber. Now it seems like we're doing an awful lot just to pass the equivalent of a struct.
    Approach 3: Seems the cleanest to me from an object-oriented standpoint, and the extra encapsulation could come in handy later. However, like #2, it now seems like we're doing an awful lot (creating an array, creating and initializing objects) just to pass a struct.

    So, the question is: how would you approach this situation? Are there other choices I'm not considering? Are there additional advantages or disadvantages to the approaches I've presented that I'm not considering?

    Read the article

  • C# Design Questions

    - by guazz
    How should I approach unit testing of private methods? I have a class that loads employee data into a database. Here is a sample:

        public class EmployeeFacade
        {
            public Employees EmployeeRepository = new Employees();
            public TaxDatas TaxRepository = new TaxDatas();
            public Accounts AccountRepository = new Accounts();
            // and so on for about 20 more repositories etc.

            public bool LoadAllEmployeeData(Employee employee)
            {
                if (employee == null)
                    throw new Exception("...");

                EmployeeRepository emps = new EmployeeRepository();
                bool exists = emps.FetchExisting(emps.Id);
                if (!exists)
                {
                    emps.AddNew();
                }

                try
                {
                    emps.Id = employee.Id;
                    emps.Name = employee.EmployeeDetails.PersonalDetails.Active.Names.FirstName;
                    emps.SomeOtherAttribute;
                }
                catch() {}

                try { emps.Save(); } catch() {}
                try { LoadorUpdateTaxData(employee.TaxData); } catch() {}
                try { LoadorUpdateAccountData(employee.AccountData); } catch() {}
                ... // etc. for about 20 more other employee objects
            }

            private bool LoadorUpdateTaxData(employeeId, TaxData taxData)
            {
                if (taxData == null)
                    throw new Exception("...");
                ... // same format as above but using AccountRepository
            }

            private bool LoadorUpdateAccountData(employee.TaxData)
            {
                ... // same format as above but using TaxRepository
            }
        }

    I am writing an application to take serialised objects (e.g. Employee above) and load the data into the database. I have a few design questions I would like opinions on:

    A - I am calling this class "EmployeeFacade" because I am (attempting?) to use the facade pattern. Is it good practice to put the pattern name in the class name?

    B - Is it good to call the concrete entities of my DAL layer classes "repositories", e.g. "EmployeeRepository"?

    C - Is using the repositories this way sensible, or should I create a method on the repository itself that takes, say, the Employee and loads the data from there, e.g. EmployeeRepository.LoadAllEmployeeData(Employee employee)? I am aiming for cohesive classes, but this would require the repository to have knowledge of the Employee object, which may not be good.

    D - Is there any nice way around having to check whether an object is null at the beginning of each method?

    E - I have EmployeeRepository, TaxRepository, and AccountRepository declared as public for unit-testing purposes. These are really private entities, but I need to be able to substitute them with stubs so that they won't write to my database (I overload the Save() method to do nothing). Is there any way around this, or do I have to expose them?

    F - How can I test the private methods - or is this just not done (something tells me it's not)?

    G - "emps.Name = employee.EmployeeDetails.PersonalDetails.Active.Names.FirstName;" breaks the Law of Demeter, but how do I adjust my objects to abide by the law?
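
    Regarding point E, a minimal sketch of constructor injection, not from the original post: take the repositories through an interface so tests can pass stubs without exposing public fields. The interface and stub names here are assumptions.

        using System;

        public class Employee { public int Id { get; set; } }

        public interface IEmployeeRepository
        {
            bool FetchExisting(int id);
            void Save();
        }

        public class EmployeeFacade
        {
            private readonly IEmployeeRepository _employees;

            // Production code passes the real repository; tests pass a stub.
            public EmployeeFacade(IEmployeeRepository employees)
            {
                _employees = employees;
            }

            public bool LoadAllEmployeeData(Employee employee)
            {
                if (employee == null) throw new ArgumentNullException("employee");
                bool exists = _employees.FetchExisting(employee.Id);
                // ... map fields and save via _employees instead of newing repositories here
                _employees.Save();
                return exists;
            }
        }

        // A test stub whose Save() intentionally does nothing, so tests never hit the database.
        public class StubEmployeeRepository : IEmployeeRepository
        {
            public bool FetchExisting(int id) { return false; }
            public void Save() { }
        }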

    Read the article

  • Design question for WinForms (C#) app, using Entity Framework

    - by cdotlister
    I am planning on writing a small home budget application for myself as a learning exercise. I have built my database (SQL Server) and written a small console application to interact with it and test out scenarios against my database. Once I am happy, my next step would be to start building the application, but I am already wondering what the best/standard design would be.

    I am planning on using Entity Framework for handling my database entities, then LINQ to SQL/objects for getting the data, all running under a WinForms (for now) application. My plan (I've never used EF, and most of my development background is web apps) is to have my database, with Entity Framework in its own project, which holds the connection to the database. This project would expose methods such as GetAccount(), GetAccount(int accountId), etc. I'd then have a service project that references my EF project, and on top of that my GUI project, which makes the calls to the service project.

    But I am stuck. Let's say I have a screen that displays a list of account types (Debit, Credit, Loan, ...). Once I have selected one, the next drop-down shows a list of accounts I have of that account type. So the OnChange event of my drop-down for the account type will make a call to the service-layer project, GetAccountTypes(), and I would expect back a List of account types.

    However, what is that AccountType object? It can't be the AccountType entity from my EF project, as my GUI project has no reference to it. Would I have to have some sort of shared library, referenced by both my GUI and my service project, with a custom-built AccountType object? The GUI could then expect back a list of these. So my service layer would have a method:

        public List<AccountType> GetAccountTypes()

    That would then call a custom method in my EF project, which would probably be the same as the above method except that it returns a list of EF.Data.AccountType (the Entity Framework generated AccountType object). The method would contain the LINQ code to get the data as I want it. Then my service layer would take that object, transform it into my custom AccountType object, and return it to the GUI.

    Does that sound at all like a good plan?
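
    A minimal sketch of the shared-DTO idea described above; the entity, context, and DTO names are assumptions, not taken from an actual project, and the context is written against Entity Framework 6 for illustration.

        using System.Collections.Generic;
        using System.Data.Entity;   // Entity Framework 6
        using System.Linq;

        // EF project: the mapped entity and context.
        public class AccountTypeEntity { public int Id { get; set; } public string Name { get; set; } }

        public class BudgetContext : DbContext
        {
            public DbSet<AccountTypeEntity> AccountTypes { get; set; }
        }

        // Shared library: a plain DTO the GUI can reference without knowing about EF.
        public class AccountTypeDto { public int Id { get; set; } public string Name { get; set; } }

        // Service layer: queries the EF project and maps entities to DTOs.
        public class AccountService
        {
            private readonly BudgetContext _context;

            public AccountService(BudgetContext context) { _context = context; }

            public List<AccountTypeDto> GetAccountTypes()
            {
                return _context.AccountTypes
                    .Select(t => new AccountTypeDto { Id = t.Id, Name = t.Name })
                    .ToList();
            }
        }

    The GUI project references only the shared DTO and the service, so the EF assembly stays behind the service boundary.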

    Read the article

  • Design pattern question: encapsulation or inheritance

    - by Matt
    Hey all, I have a question I have been toiling over for quite a while. I am building a templating engine with two main classes, Template.php and Tag.php, plus a bunch of extension classes like Img.php and String.php. The program works like this: a Template object creates Tag objects. Each Tag object determines which extension class (Img, String, etc.) to implement. The point of the Tag class is to provide helper functions for each extension class, such as wrap('div'), addClass('slideshow'), etc. Each Img or String class is used to render code specific to what is required, so $Img->render() would give something like <img src='blah.jpg' />.

    My question is: should I encapsulate all extension functionality within the Tag object, like so:

    Tag.php

        function __construct($namespace, $args) {
            // Sort out namespace to determine which extension to call
            $this->extension = new $namespace($this); // Pass in the Tag object so it can be used within the extension
            return $this; // Tag object
        }

        function render() {
            return $this->extension->render();
        }

    Img.php

        function __construct(Tag $T) {
            $args = $T->getArgs();
            $T->addClass('img');
        }

        function render() {
            return '<img src="blah.jpg" />';
        }

    Usage:

        $T = new Tag("img", array(...));
        $T->render();

    ...or should I create more of an inheritance structure, because "Img is a Tag":

    Tag.php

        public static function create($namespace, $args) {
            // Sort out namespace to determine which extension to call
            return new $namespace($args);
        }

    Img.php

        class Img extends Tag {
            function __construct($args) {
                // Determine namespace then call create tag
                $T = parent::__construct($namespace, $args);
            }

            function render() {
                return '<img src="blah.jpg" />';
            }
        }

    Usage:

        $Img = Tag::create('img', array(...));
        $Img->render();

    One thing I do need is a common interface for creating custom tags, i.e. I can instantiate Img(...) and then instantiate String(...), but I do need to instantiate each extension through Tag. I know this is a somewhat vague question; I'm hoping some of you have dealt with this in the past and can foresee certain issues with choosing each design. If you have any other suggestions I would love to hear them. Thanks! Matt Mueller

    Read the article

  • JavaScript Resource Management Design Pattern

    - by Adam
    As a web developer, a common problem I find myself tackling is waiting for something to load before doing something else. In particular, I often hide (using either display: none; or visibility: hidden;, depending on the situation) elements while waiting for a background image or a CSS file to load.

    Consider this example from Last.fm. They overlay a semi-transparent PNG over each album art image so that it looks like it's inside a jewel case. They let it load whenever it loads, so depending on your internet speed you may temporarily see the art image by itself (without the overlay). In this case the album art looks fine without the jewel-case effect, but in similar situations I have found that I don't want the user to see the site's design mangled as resources incrementally load. So, in rare cases I have hidden everything from the user until the whole kit and caboodle has loaded. But this is often a pain to write out, and it may force the user to wait a pretty long time to see anything (besides "loading..." text).

    I can think of (and have used on occasion) some obvious solutions/compromises:

    - Use some inline CSS so that as certain parts of the DOM load and render, they immediately have the correct size/position/etc.
    - Immediately render the navigation part of the site, so that if the user wants to use the current page purely to get somewhere else, they don't have to wait for the rest to load.
    - Load pixelated images first as placeholders for layout while lazy-loading higher-quality images as replacements.
    - Something quirky, like using a cute animated GIF to distract the user during a "loading..." phase.
    - Show useful information as a reference while loading the full UI (something akin to Gmail's Inbox Preview, etc.).

    (Sorry if my question was basically just asked and answered...) Despite all of these ideas, I still find myself hoping there are better ways of doing some of these things. So I guess what I'm looking for is some inspiration and/or any creative ways of dealing with this problem that you guys may have seen out in the wild.

    Read the article

  • SQLAlchemy unsupported type error - and table design issues?

    - by Az
    Hi there, back again with some more SQLAlchemy shenanigans. Let me step through this. My table is now set up like so:

        engine = create_engine('sqlite:///:memory:', echo=False)
        metadata = MetaData()
        students_table = Table('studs', metadata,
            Column('sid', Integer, primary_key=True),
            Column('name', String),
            Column('preferences', Integer),
            Column('allocated_rank', Integer),
            Column('allocated_project', Integer)
        )
        metadata.create_all(engine)
        mapper(Student, students_table)

    Fairly simple, and for the most part I've been enjoying the ability to query almost any bit of information I want, provided I avoid the error cases below. The class it is mapped from is:

        class Student(object):
            def __init__(self, sid, name):
                self.sid = sid
                self.name = name
                self.preferences = collections.defaultdict(set)
                self.allocated_project = None
                self.allocated_rank = 0

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "%s %s" % (self.sid, self.name)

    Explanation: preferences is basically a set of all the projects the student would prefer to be assigned. When the allocation algorithm kicks in, a student's allocated_project emerges from this preference set. Now if I try to do this:

        for student in students.itervalues():
            session.add(student)
        session.commit()

    it throws two errors, one for the allocated_project column (seen below) and a similar error for the preferences column:

        sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding parameter 4 - probably unsupported type.
        u'INSERT INTO studs (sid, name, allocated_rank, allocated_project) VALUES (?, ?, ?, ?, ?, ?, ?)'
        [1101, 'Muffett,M.', 1, 888 Human-spider relationships (Supervisor id: 123)]

    If I go back into my code I find that, when I'm copying the preferences from the given text files, it actually refers to the Project class, which is mapped to a dictionary keyed by the unique project ids (pid). Thus, as I iterate through each student by rank and add to the preferences set, I am adding not a project id, but a reference to the project from the projects dictionary:

        students[sid].preferences[int(rank)].add(projects[int(pid)])

    Now this is very useful to me, since I can find out all I want to know about a student's preferred projects without having to run another check to pull up information about the project id. The form you see in the error comes from the project's print method:

        return "%s %s (Supervisor id: %s)" % (self.proj_id, self.proj_name, self.proj_sup)

    My questions are:

    1. I'm trying to store an object in a database field, aren't I? Would the correct way be to copy the project information (project id, name, etc.) into its own table, referenced by the unique project id? That way I could have the project id field in the student table be just an integer id, and when I need more information, join the tables? And so on for the other tables?
    2. If the above makes sense, how does one maintain the relationship when a column of information in one table is a key index into another table? Does this boil down to a database design problem? Are there any other elegant ways of accomplishing this?

    Apologies if this is a very long-winded question. It's rather crucial for me to solve this, so I've tried to explain as much as I can while attempting to show that I'm trying (key word here, sadly) to understand what could be going wrong.

    Read the article

  • What is good practice in .NET system architecture design concerning multiple models and aggregates

    - by BuzzBubba
    I'm designing a larger enterprise architecture and I'm in doubt about how to separate and design the models. There are two points I'd like suggestions for: which models to define, and how to define them.

    Currently my idea is to define:

    - a core (domain) model;
    - repositories that get data for that domain model from a database or other store;
    - a business logic model that contains business logic, validation logic, and more specific forms of data-retrieval methods;
    - view models prepared for specifically formatted data output, to be parsed by views of different kinds (web, Silverlight, etc.).

    For the first model I'm puzzled about what to use and how to define it. Should the model entities contain collections, and in what form: IList, IEnumerable or IQueryable? I'm thinking of immutable collections, which IEnumerable is, but I'd like to avoid huge data collections and to offer my business logic layer access via LINQ expressions, so that query trees get executed at the data level and retrieve only the data really required - for situations where I'm retrieving a very specific subset of elements from among hundreds of thousands. What if I have an item with several thousand bids? I can't just put an IEnumerable collection of those on the model and then retrieve an item list in some repository method or even a business model method. Should it be IQueryable, so that I actually pass my queries to the repository all the way from the business logic model layer? Should I avoid collections in my domain model entirely, or only some of them?

    Should I separate the domain model and the business logic model, or integrate them? Data would be dealt with through repositories, which would use the domain model classes. Should repositories be used directly, with classes from the domain model serving only as data containers?

    This is an example of what I had in mind. My domain objects would look like (e.g.):

        public class Item
        {
            public string ItemName { get; set; }
            public int Price { get; set; }
            public bool Available { get; set; }

            private IList<Bid> _bids;
            public IQueryable<Bid> Bids
            {
                get { return _bids.AsQueryable(); }
                private set { _bids = value; }
            }

            public void AddNewBid(Bid newBid)
            {
                _bids.Add(new Bid { .... });
            }
        }

    where Bid would be defined as a normal class. Repositories would be defined as data-retrieval factories and used to get data into another (business logic) model, which would in turn be used to get data to view models, which would then be rendered by different consumers. I would define IQueryable interfaces for all aggregating collections to gain flexibility and minimize the data retrieved from the real data store.

    Or should I make the domain model "anemic", with pure data-store entities, and define all collections for the business logic model? One of the most important questions is where to have IQueryable-typed collections: all the way from the repositories to the business model, or not at all - exposing only solid IList and IEnumerable from the repositories and dealing with more specific queries inside the business model, but having finer-grained methods for data retrieval within the repositories.

    So, what do you think? Any suggestions?
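
    A minimal sketch of the IQueryable-repository option being weighed above; it is an assumption for illustration, not the author's design, and all type names are hypothetical. The filter is composed in the business layer and only materialized at the end.

        using System.Collections.Generic;
        using System.Linq;

        public class Bid { public int Amount { get; set; } }

        public interface IBidRepository
        {
            // Exposes a queryable so callers can compose filters before execution.
            IQueryable<Bid> BidsForItem(int itemId);
        }

        public class BidService
        {
            private readonly IBidRepository _bids;

            public BidService(IBidRepository bids) { _bids = bids; }

            // With a LINQ provider behind the repository, the Where/OrderBy/Take chain
            // is translated to a query, so only the ten rows needed leave the data store.
            public List<Bid> TopBids(int itemId)
            {
                return _bids.BidsForItem(itemId)
                            .Where(b => b.Amount > 0)
                            .OrderByDescending(b => b.Amount)
                            .Take(10)
                            .ToList();
            }
        }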

    Read the article

  • Looking for a better design: A readonly in-memory cache mechanism

    - by Dylan Lin
    Hi all, I have a Category entity (class), which has zero or one parent Category and many child Categories - it's a tree structure. The Category data is stored in an RDBMS, so for better performance I want to load all categories and cache them in memory when the application launches.

    Our system supports plugins, and we allow plugin authors to access the category tree, but they should not modify the cached items or the tree (I think a non-readonly design might cause some subtle bugs in this scenario); only the system knows when and how to refresh the tree. Here is some demo code:

        public interface ITreeNode<T> where T : ITreeNode<T>
        {
            // No setter
            T Parent { get; }
            IEnumerable<T> ChildNodes { get; }
        }

        // This class is generated by an O/R mapping tool (e.g. Entity Framework)
        public class Category : EntityObject
        {
            public string Name { get; set; }
        }

        // Because Category is not stateless, I create a cleaner view class for Category.
        // This class is the node type of the category tree.
        public class CategoryView : ITreeNode<CategoryView>
        {
            public string Name { get; private set; }

            #region ITreeNode Members
            public CategoryView Parent { get; private set; }

            private List<CategoryView> _childNodes;
            public IEnumerable<CategoryView> ChildNodes
            {
                get { return _childNodes; }
            }
            #endregion

            public static CategoryView CreateFrom(Category category)
            {
                // here I can set the CategoryView.Name property
            }
        }

    So far so good. However, I want to make the ITreeNode interface reusable, and for some other types the tree should not be read-only. We can't do that with the read-only ITreeNode above, so I want ITreeNode to look like this:

        public interface ITreeNode<T>
        {
            // has setter
            T Parent { get; set; }
            // use ICollection<T> instead of IEnumerable<T>
            ICollection<T> ChildNodes { get; }
        }

    But if we make ITreeNode writable, then we cannot make the category tree read-only, which is not good. So I'm thinking of something like this:

        public interface ITreeNode<T>
        {
            T Parent { get; }
            IEnumerable<T> ChildNodes { get; }
        }

        public interface IWritableTreeNode<T> : ITreeNode<T>
        {
            new T Parent { get; set; }
            new ICollection<T> ChildNodes { get; }
        }

    Is this good or bad? Are there better designs? Thanks a lot! :)
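
    A minimal sketch of one read-only variant, not from the original post: keep mutation internal to the caching layer and expose the children through a ReadOnlyCollection so plugins cannot alter the tree. The CategoryNode and interface names are assumptions.

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public interface IReadOnlyTreeNode<T>
        {
            T Parent { get; }
            ReadOnlyCollection<T> ChildNodes { get; }
        }

        public class CategoryNode : IReadOnlyTreeNode<CategoryNode>
        {
            private readonly List<CategoryNode> _children = new List<CategoryNode>();

            public string Name { get; private set; }
            public CategoryNode Parent { get; private set; }

            // Plugins see a read-only wrapper over the live child list.
            public ReadOnlyCollection<CategoryNode> ChildNodes
            {
                get { return _children.AsReadOnly(); }
            }

            public CategoryNode(string name) { Name = name; }

            // Only the caching layer (inside the system assembly) can mutate the tree.
            internal void AddChild(CategoryNode child)
            {
                child.Parent = this;
                _children.Add(child);
            }
        }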

    Read the article
