Search Results

Search found 23667 results on 947 pages for 'level design'.


  • redoing object model construction to fit with asynchronous data fetching

    - by Andrew Patterson
    I have modeled a set of objects that correspond to some real-world concepts: TradeDrug, GenericDrug, TradePackage, DrugForm. Underlying the simple object model I am trying to provide is a complex medical terminology that uses numeric codes to represent relationships and concepts, all accessible via a REST service - I am trying to hide away some of that complexity with an object wrapper. To give a concrete example, I can call TradeDrug d = Searcher.FindTradeDrug("Zoloft") or TradeDrug d = new TradeDrug(34) where 34 might be the code for Zoloft. This will consult a remote server to find out some details about Zoloft. I might then call GenericDrug generic = d.EquivalentGeneric() System.Out.WriteLine(generic.ActiveIngredient().Name) in order to get back the generic drug sertraline as an object (again via a background REST call to the remote server that has all these drug details), and then perhaps find its ingredient. This model works fine and is being used in some applications that involve data processing. Recently, however, I wanted to do a Silverlight application that used and displayed these objects. The Silverlight environment only allows asynchronous REST/web service calls. I have no problems with how to make the asynchronous calls - but I am having trouble with what the design should be for my object construction. Currently the constructors for my objects do some REST calls synchronously. public TradeDrug(int code) { form = restclient.FetchForm(code) name = restclient.FetchName(code) etc.. } If I have to use async 'events' or 'actions' in order to use the Silverlight web client (I know Silverlight can be forced to be a synchronous client, but I am interested in asynchronous approaches), does anyone have any guidance or best practices for how to structure my objects? I can pass in an action callback to the constructor public TradeDrug(int code, Action<TradeDrug> constructCompleted) { } but this then allows the user to have a TradeDrug object instance before what I want to construct is actually finished. It also doesn't support an 'event' async pattern, because the object doesn't exist to add the event to until it is constructed. Extending that approach might be a factory object that itself has an asynchronous interface to objects: factory.GetTradeDrugAsync(code, completedaction) or with a GetTradeDrugCompleted event? Does anyone have any recommendations?
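
    One hedged take on the factory idea floated at the end of the question: keep the constructor internal and expose only an asynchronous factory method, so calling code can never observe a half-initialized TradeDrug. The sketch below invents a simplified callback signature in place of the real Silverlight REST client:

    ```csharp
    using System;

    // Minimal sketch: an async factory that only hands out a TradeDrug once its
    // remote data has arrived. The constructor is internal, so user code cannot
    // observe a partially constructed instance.
    public class TradeDrug
    {
        public int Code { get; private set; }
        public string Name { get; private set; }

        internal TradeDrug(int code, string name)
        {
            Code = code;
            Name = name;
        }
    }

    public class TradeDrugFactory
    {
        // Stand-in for the Silverlight web client; the delegate shape is an assumption.
        private readonly Action<int, Action<string>> fetchNameAsync;

        public TradeDrugFactory(Action<int, Action<string>> fetchNameAsync)
        {
            this.fetchNameAsync = fetchNameAsync;
        }

        // Callers receive the fully initialized object via the callback,
        // mirroring the factory.GetTradeDrugAsync(code, completedAction) idea.
        public void GetTradeDrugAsync(int code, Action<TradeDrug> completed)
        {
            fetchNameAsync(code, name => completed(new TradeDrug(code, name)));
        }
    }
    ```

    An event-based variant (the GetTradeDrugCompleted idea) is the same shape with an event raised instead of the callback invoked.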

    Read the article

  • Network communication for a turn based board game

    - by randooom
    Hi all, my first question here, so please don't be too harsh if something went wrong :) I'm currently a CS student (from Germany, if this info is of any use ;) ) and we got a freely selectable programming assignment, which we have to write as a C++/CLI Windows Forms application. My team, two others and me, decided to go for a network-compatible port of the board game Risk. We divided the work into 3 parts, namely UI, game logic and network. Now we're on the part where we have to get everything working together, and the big question mark is: how to get the clients synchronized with each other? Our approach so far is that each client has all information necessary to calculate and/or execute all possible actions. Actually the clients have all information available at all times, aside from the game-initializing phase (add players, select map, etc.), which needs one "super-client" with some extra stuff to control things. This is the standard scenario of our approach: player performs action, the action is valid and gets executed on the player's client; action is sent over the network; action is executed on the other clients. The design (i.e. no code so far) we came up with is something like the following pseudo sequence diagram. Gui, Controller and Network implement all possible actions (i.e. all actions which change data) as methods from an interface. So each part can implement the method in a way to get its job done. Example with Action(): On the player side's client: Player-->Gui.Action() Gui-->Controller.Action() Controller-->Logic.Action() (Logic.Action() == NoError)? Controller-->Network.Action() Network-->Parser.ParseAction() Network.Send(msg) On all other clients: Network.Recv(msg) Network-->Parser.Deparse(msg) Parser-->Logic.Action() Logic-->Gui.Action() The questions: Is this a viable approach to our task? Any better/easier way to do this? Recommendations, critique? Our knowledge (so you can better target your answer): We are on the beginner side with regard to programming somewhat larger projects in a small team. All of us have some general programming experience and a basic understanding of the .NET libraries and Windows Forms. If you need any further information, please feel free to ask.
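
    One hedged way to frame the approach described above is to model every state-changing move as a small serializable command object; each client validates and applies the same command stream in the same order, which is what keeps them synchronized. The sketch below uses C# and invented names (IGameAction, AttackAction) rather than the team's actual C++/CLI code:

    ```csharp
    using System;

    // Hedged sketch: every state-changing action is a small serializable command.
    // The local controller validates and applies it, then ships the same bytes to
    // the other clients, which apply it as-is.
    public class GameState
    {
        // placeholder for the shared Risk board state
    }

    public interface IGameAction
    {
        bool IsValid(GameState state);   // logic check before anything changes
        void Apply(GameState state);     // deterministic state change
        string Serialize();              // wire format, e.g. "ATTACK;3;7;2"
    }

    public class AttackAction : IGameAction
    {
        public int FromTerritory, ToTerritory, Armies;

        public bool IsValid(GameState state) { return Armies > 0; }   // simplified
        public void Apply(GameState state) { /* move/remove armies here */ }

        public string Serialize()
        {
            return string.Format("ATTACK;{0};{1};{2}", FromTerritory, ToTerritory, Armies);
        }
    }
    ```

    Because every client applies an identical command stream in the same order, they stay in sync without a dedicated game server, which is roughly what the pseudo sequence diagram above already does.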

    Read the article

  • Have problems designing input and output for a calculator app

    - by zoul
    Hello! I am writing a button calculator. I have the code split into model, view and a controller. The model knows nothing about formatting, it is only concerned with numbers. All formatting is done in the view. The model gets its input as keypresses, each keypress is a part of an enum: typedef enum { kButtonUnknown = 0, kButtonMemoryClear = 100, kButtonMemoryPlus = 112, kButtonMemoryMinus = 109, kButtonMemoryRecall = 114, kButtonClear = 99, … }; When the user presses a button (say 1), the model receives a button code (kButtonNum1), adds the corresponding number to a string input buffer ("1") and updates the numeric output value (1.0). The controller then passes the numeric output value to the view that formats it (1). This is all plain, simple and clean, but does not really work. The problem is that when user enters a part of a number (say 0.00, going to enter 0.001), the input does not survive the way through model to view and the display says 0 instead of 0.00. I know why this happens ("0.00"::string parses to 0::double and that gets formatted as 0). What I don’t know is how to design the calculator so that the code stays clean and simple and the numbers will show up on screen exactly as the user types them. I’ve already come with some kind of solution, but that’s essentially a hack and breaks the beautiful and simple flow from the calculator model to the display. Ideas? Current solution keeps track of the calculator state. If the calculator is building a number, I take the calculator input buffer (a string) and directly set the display contents (also a string). Otherwise I take the proper path, ie. take the numeric calculator output, pass it to the view as a double and the view uses its internal formatter to create a string for the display. Example input: input | display | mode ------+---------+------------ 0 | 0 | from string 0. | 0. | from string 0.0 | 0.0 | from string 0.0+ | 0 | from number This is ugly. (1) The calculator has to expose its input buffer and state. (2) The view has to expose its display and allow setting its contents directly using a string. (3) I have to duplicate some of the formatting code to format the string I got from the calculator input buffer. If user enters 12345.000, I have to display 12,345.000 and therefore I’ve got to have a commification code for strings. Yuck.
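
    For what it's worth, one hedged way around the "0.00 collapses to 0" problem is to let the model publish the raw digit buffer while an entry is in progress, so the view never re-derives the string from a double mid-entry. A rough sketch, in C# purely for illustration (names such as DisplayText and PressDigit are invented):

    ```csharp
    // Rough sketch: the model keeps the raw digit buffer as its published state
    // while the user is typing, and only falls back to formatting the numeric
    // result once the entry is committed.
    public class CalculatorModel
    {
        private string buffer = "";   // exactly what the user has typed: "0", "0.", "0.00", ...
        private double result;        // last committed value
        private bool typing;

        public void PressDigit(char digit)
        {
            typing = true;
            buffer += digit;
        }

        public void PressOperator(char op)
        {
            if (typing)
            {
                result = double.Parse(buffer,
                    System.Globalization.CultureInfo.InvariantCulture);
                typing = false;
                buffer = "";
            }
            // apply the pending operator to 'result' here ...
        }

        // The view binds to this single property; "0.00" survives because the
        // buffer, not the parsed double, is shown while typing. "N3" also gives
        // the thousands separators ("12,345.000") without duplicating formatting code.
        public string DisplayText
        {
            get { return typing ? buffer : result.ToString("N3"); }
        }
    }
    ```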

    Read the article

  • Any simple approaches for managing customer data change requests for global reference files?

    - by Kelly Duke
    For the first time, I am developing in an environment in which there is a central repository for a number of different industry standard reference data tables and many different customers who need to select records from these industry standard reference data tables to fill in foreign key information for their customer specific records. Because these industry standard reference files are utilized by all customers, I want to reserve Create/Update/Delete access to these records for global product administrators. However, I would like to implement a (semi-)automated interface by which specific customers could request record additions, deletions or modifications to any of the industry standard reference files that are shared among all customers. I know I need something like a "data change request" table specifying: user id, user request datetime, request type (insert, modify, delete), a user entered text explanation of the change request, the user request's current status (pending, declined, completed), admin resolution datetime, admin id, an admin entered text description of the resolution, etc. What I can't figure out is how to elegantly handle the fact that these data change requests could apply to dozens of different tables with differing table column definitions. I would like to give the customer users making these data change requests a convenient way to enter their proposed record additions/modifications directly into CRUD screens that look very much like the reference table CRUD screens they don't have write/delete permissions for (with an additional text explanation and perhaps request priority field). I would also like to give the global admins a tool that allows them to view all the outstanding data change requests for the users they oversee sorted by date requested or user/date requested. Upon selecting a data change request record off the list, the admin would be directed to another CRUD screen that would be populated with the fields the customer users requested for the new/modified industry standard reference table record along with customer's text explanation, the request status and the text resolution explanation field. At this point the admin could accept/edit/reject the requested change and if accepted the affected industry standard reference file would be automatically updated with the appropriate fields and the data change request record's status, text resolution explanation and resolution datetime would all also be appropriately updated. However, I want to keep the actual production reference tables as simple as possible and free from these extraneous and typically null customer change request fields. I'd also like the data change request file to aggregate all data change requests across all the reference tables yet somehow "point to" the specific reference table and primary key in question for modification & deletion requests or the specific reference table and associated customer user entered field values in question for record creation requests. Does anybody have any ideas of how to design something like this effectively? Is there a cleaner, simpler way I am missing? Thank you so much for reading.

    Read the article

  • Discovering a functional algorithm from a mutable one

    - by Garrett Rowe
    This isn't necessarily a Scala question, it's a design question that has to do with avoiding mutable state, functional thinking and that sort. It just happens that I'm using Scala. Given this set of requirements: Input comes from an essentially infinite stream of random numbers between 1 and 10 Final output is either SUCCEED or FAIL There can be multiple objects 'listening' to the stream at any particular time, and they can begin listening at different times so they all may have a different concept of the 'first' number; therefore listeners to the stream need to be decoupled from the stream itself. Pseudocode: if (first number == 1) SUCCEED else if (first number >= 9) FAIL else { first = first number rest = rest of stream for each (n in rest) { if (n == 1) FAIL else if (n == first) SUCCEED else continue } } Here is a possible mutable implementation: sealed trait Result case object Fail extends Result case object Succeed extends Result case object NoResult extends Result class StreamListener { private var target: Option[Int] = None def evaluate(n: Int): Result = target match { case None => if (n == 1) Succeed else if (n >= 9) Fail else { target = Some(n) NoResult } case Some(t) => if (n == t) Succeed else if (n == 1) Fail else NoResult } } This will work but smells to me. StreamListener.evaluate is not referentially transparent. And the use of the NoResult token just doesn't feel right. It does have the advantage though of being clear and easy to use/code. Besides there has to be a functional solution to this right? I've come up with 2 other possible options: Having evaluate return a (possibly new) StreamListener, but this means I would have to make Result a subtype of StreamListener which doesn't feel right. Letting evaluate take a Stream[Int] as a parameter and letting the StreamListener be in charge of consuming as much of the Stream as it needs to determine failure or success. The problem I see with this approach is that the class that registers the listeners should query each listener after each number is generated and take appropriate action immediately upon failure or success. With this approach, I don't see how that could happen since each listener is forcing evaluation of the Stream until it completes evaluation. There is no concept here of a single number generation. Is there any standard scala/fp idiom I'm overlooking here?
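
    A common functional refactoring of the mutable listener (illustrated below in C#-style code rather than the poster's Scala, purely to show the shape) is to have Evaluate return the result together with the listener state to use next, so nothing is ever mutated:

    ```csharp
    using System;

    // Sketch of the "return a new listener" idea: Evaluate never mutates; it
    // returns the outcome plus the state to use for the next number.
    public enum StreamResult { NoResult, Succeed, Fail }

    public sealed class StreamListener
    {
        private readonly int? target;   // null until the first "ordinary" number is seen

        public StreamListener() : this(null) { }
        private StreamListener(int? target) { this.target = target; }

        public Tuple<StreamResult, StreamListener> Evaluate(int n)
        {
            if (target == null)
            {
                if (n == 1) return Tuple.Create(StreamResult.Succeed, this);
                if (n >= 9) return Tuple.Create(StreamResult.Fail, this);
                return Tuple.Create(StreamResult.NoResult, new StreamListener(n));
            }
            if (n == target) return Tuple.Create(StreamResult.Succeed, this);
            if (n == 1) return Tuple.Create(StreamResult.Fail, this);
            return Tuple.Create(StreamResult.NoResult, this);
        }
    }
    ```

    The registry that owns the listeners then replaces each listener with the returned one after every generated number, which keeps the evaluation referentially transparent while still allowing a decision after each single number.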

    Read the article

  • How can I convert this to a factory/abstract factory?

    - by Amitd
    I'm using MigraDoc to create a PDF document. I have business entities similar to those used in MigraDoc. public class Page{ public List<PageContent> Content { get; set; } } public abstract class PageContent { public int Width { get; set; } public int Height { get; set; } public Margin Margin { get; set; } } public class Paragraph : PageContent{ public string Text { get; set; } } public class Table : PageContent{ public int Rows { get; set; } public int Columns { get; set; } //.... more } In my business logic, there are rendering classes for each type public interface IPdfRenderer<T> { T Render(MigraDoc.DocumentObjectModel.Section s); } class ParagraphRenderer : IPdfRenderer<MigraDoc.DocumentObjectModel.Paragraph> { BusinessEntities.PDF.Paragraph paragraph; public ParagraphRenderer(BusinessEntities.PDF.Paragraph p) { paragraph = p; } public MigraDoc.DocumentObjectModel.Paragraph Render(MigraDoc.DocumentObjectModel.Section s) { var paragraph = s.AddParagraph(); // add text from paragraph etc return paragraph; } } public class TableRenderer : IPdfRenderer<MigraDoc.DocumentObjectModel.Tables.Table> { BusinessEntities.PDF.Table table; public TableRenderer(BusinessEntities.PDF.Table t) { table =t; } public MigraDoc.DocumentObjectModel.Tables.Table Render(Section obj) { var table = obj.AddTable(); //fill table based on table } } I want to create a PDF page as: var document = new Document(); var section = document.AddSection();// section is a page in pdf var page = GetPage(1); // get a page from business classes foreach (var content in page.Content) { //var renderer = createRenderer(content); // // get Renderer based on Business type ?? // renderer.Render(section) } For createRenderer() I can use a switch case/dictionary and return type. How can I get/create the renderer generically based on type? How can I use a factory or abstract factory here? Or which design pattern better suits this problem?
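
    One possible shape for createRenderer() is a registry keyed on the content's runtime type, so adding a new content type means adding one dictionary entry rather than another switch case. The sketch below uses simplified stand-in types instead of the real MigraDoc/BusinessEntities classes:

    ```csharp
    using System;
    using System.Collections.Generic;

    // Simplified stand-ins for the business entities.
    public abstract class PageContent { }
    public class Paragraph : PageContent { public string Text; }
    public class Table : PageContent { public int Rows, Columns; }

    public interface IRenderer
    {
        void Render(object section);   // a MigraDoc Section in the real code
    }

    // Factory sketch: a dictionary from content type to a renderer-creating delegate.
    public static class RendererFactory
    {
        private static readonly Dictionary<Type, Func<PageContent, IRenderer>> map =
            new Dictionary<Type, Func<PageContent, IRenderer>>
            {
                { typeof(Paragraph), c => new ParagraphRenderer((Paragraph)c) },
                { typeof(Table),     c => new TableRenderer((Table)c) },
            };

        public static IRenderer Create(PageContent content)
        {
            return map[content.GetType()](content);
        }
    }

    public class ParagraphRenderer : IRenderer
    {
        private readonly Paragraph p;
        public ParagraphRenderer(Paragraph p) { this.p = p; }
        public void Render(object section) { /* s.AddParagraph() etc. */ }
    }

    public class TableRenderer : IRenderer
    {
        private readonly Table t;
        public TableRenderer(Table t) { this.t = t; }
        public void Render(object section) { /* obj.AddTable() etc. */ }
    }
    ```

    The loop in the question then becomes RendererFactory.Create(content).Render(section); a true abstract factory only becomes necessary if whole families of renderers (say, PDF vs. HTML) need to be swapped out at once.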

    Read the article

  • translating specifications into query predicates

    - by Jeroen
    I'm trying to find a nice and elegant way to query database content based on DDD "specifications". In domain driven design, a specification is used to check if some object, also known as the candidate, is compliant to a (domain specific) requirement. For example, the specification 'IsTaskDone' goes like: class IsTaskDone extends Specification<Task> { boolean isSatisfiedBy(Task candidate) { return candidate.isDone(); } } The above specification can be used for many purposes, e.g. it can be used to validate if a task has been completed, or to filter all completed tasks from a collection. However, I want to re-use this, nice, domain related specification to query on the database. Of course, the easiest solution would be to retrieve all entities of our desired type from the database, and filter that list in-memory by looping and removing non-matching entities. But clearly that would not be optimal for performance, especially when the entity count in our db increases. Proposal So my idea is to create a 'ConversionManager' that translates my specification into a persistence technique specific criteria, think of the JPA predicate class. The services looks as follows: public interface JpaSpecificationConversionManager { <T> Predicate getPredicateFor(Specification<T> specification, Root<T> root, CriteriaQuery<?> cq, CriteriaBuilder cb); JpaSpecificationConversionManager registerConverter(JpaSpecificationConverter<?, ?> converter); } By using our manager, the users can register their own conversion logic, isolating the domain related specification from persistence specific logic. To minimize the configuration of our manager, I want to use annotations on my converter classes, allowing the manager to automatically register those converters. JPA repository implementations could then use my manager, via dependency injection, to offer a find by specification method. Providing a find by specification should drastically reduce the number of methods on our repository interface. In theory, this all sounds decent, but I feel like I'm missing something critical. What do you guys think of my proposal, does it comply to the DDD way of thinking? Or is there already a framework that does something identical to what I just described?

    Read the article

  • Grid sorting with persistent master sort

    - by MikeWyatt
    I have a UI with a grid. Each record in the grid is sorted by a "master" sort column, let's call it a page number. Each record is a story in a magazine. I want the user to be able to drag and drop a record to a new position in the grid and automatically update the page number field to reflect the updated position. Easy enough, right? Now imagine that I also want to have the grid sortable by any other column (story title, section, author name, etc.). How does the drag and drop operation work now? Revert to page number sort during or after the drag and drop operation? This could confuse the user (why did my sort just change?). It would also result in arbitrary row positioning. Would the story now be before the row that was after it when the user dropped it? Or, would it be after the row that was before it? Those rows may now be widely separated after the master order sort. Disable the drag and drop feature if the grid isn't currently sorted by the page number? This would be easy, but the user might wonder why he can't drag and drop at certain times. Knowing to first sort by page number may not be very intuitive. Let the user rearrange his rows, but not make any changes to the page number? Require the user to enter a "Arrange Stories" mode, in which the grid sort is temporarily switched to page number and drag and drop is enabled? They would then exit the mode, and the previous sort would be reapplied. The big difference between this and the second option is that it would be more explicit than simply clicking on a column header. Any other ideas, or reasons why one of the above is the way to go? EDIT I'd like to point out that any of the above is technically possible, and easy to implement. My question is design-related. What is the most intuitive way to solve this problem, from the user's perspective?

    Read the article

  • Is it OK to dynamic_cast "this" as a return value?

    - by Panayiotis Karabassis
    This is more of a design question. I have a template class, and I want to add extra methods to it depending on the template type. To practice the DRY principle, I have come up with this pattern (definitions intentionally omitted): template <class T> class BaseVector: public boost::array<T, 3> { protected: BaseVector<T>(const T x, const T y, const T z); public: bool operator == (const Vector<T> &other) const; Vector<T> operator + (const Vector<T> &other) const; Vector<T> operator - (const Vector<T> &other) const; Vector<T> &operator += (const Vector<T> &other) { (*this)[0] += other[0]; (*this)[1] += other[1]; (*this)[2] += other[2]; return *dynamic_cast<Vector<T> * const>(this); } } template <class T> class Vector : public BaseVector<T> { public: Vector<T>(const T x, const T y, const T z) : BaseVector<T>(x, y, z) { } }; template <> class Vector<double> : public BaseVector<double> { public: Vector<double>(const double x, const double y, const double z); Vector<double>(const Vector<int> &other); double norm() const; }; I intend BaseVector to be nothing more than an implementation detail. This works, but I am concerned about operator+=. My question is: is the dynamic cast of the this pointer a code smell? Is there a better way to achieve what I am trying to do (avoid code duplication, and unnecessary casts in the user code)? Or am I safe since, the BaseVector constructor is private?

    Read the article

  • Foreign key pointing to different tables

    - by Álvaro G. Vicario
    I'm implementing a table per subclass design I discussed in a previous question. It's a product database where products can have very different attributes depending on their type, but attributes are fixed for each type and types are not manageable at all. I have a master table that holds common attributes: product_type ============ product_type_id INT product_type_name VARCHAR E.g.: 1 'Magazine' 2 'Web site' product ======= product_id INT product_name VARCHAR product_type_id INT -> Foreign key to product_type.product_type_id valid_since DATETIME valid_to DATETIME E.g. 1 'Foo Magazine' 1 '1998-12-01' NULL 2 'Bar Weekly Review' 1 '2005-01-01' NULL 3 'E-commerce App' 2 '2009-10-15' NULL 4 'CMS' 2 '2010-02-01' NULL ... and one subtable for each product type: item_magazine ============= item_magazine_id INT title VARCHAR product_id INT -> Foreign key to product.product_id issue_number INT pages INT copies INT close_date DATETIME release_date DATETIME E.g. 1 'Foo Magazine Regular Issue' 1 89 52 150000 '2010-06-25' '2010-06-31' 2 'Foo Magazine Summer Special' 1 90 60 175000 '2010-07-25' '2010-07-31' 3 'Bar Weekly Review Regular Issue' 2 12 16 20000 '2010-06-01' '2010-06-02' item_web_site ============= item_web_site_id INT name VARCHAR product_id INT -> Foreign key to product.product_id bandwidth INT hits INT date_from DATETIME date_to DATETIME E.g. 1 'The Carpet Store' 3 10 90000 '2010-06-01' NULL 2 'Penauts R Us' 3 20 180000 '2010-08-01' NULL 3 'Springfield Cattle Fair' 4 15 150000 '2010-05-01' '2010-10-31' Now I want to add some fees that relate to one specific item. Since there are very little subtypes, it's feasible to do this: fee === fee_id INT fee_description VARCHAR item_magazine_id INT -> Foreign key to item_magazine.item_magazine_id item_web_site_id INT -> Foreign key to item_web_site.item_web_site_id net_price DECIMAL E.g.: 1 'Front cover' 2 NULL 1999.99 2 'Half page' 2 NULL 500.00 3 'Square banner' NULL 3 790.50 4 'Animation' NULL 3 2000.00 I have tight foreign keys to handle cascaded editions and I presume I can add a constraint so only one of the IDs is NOT NULL. However, my intuition suggests that it would be cleaner to get rid of the item_WHATEVER_id columns and keep a separate table: fee_to_item =========== fee_id INT -> Foreign key to fee.fee_id product_id INT -> Foreign key to product.product_id item_id INT -> ??? But I can't figure out how to create foreign keys on item_id since the source table varies depending on product_id. Should I stick to my original idea?

    Read the article

  • Is this a problem typically solved with IOC?

    - by Dirk
    My current application allows users to define custom web forms through a set of admin screens. it's essentially an EAV type application. As such, I can't hard code HTML or ASP.NET markup to render a given page. Instead, the UI requests an instance of a Form object from the service layer, which in turn constructs one using a several RDMBS tables. Form contains the kind of classes you would expect to see in such a context: Form= IEnumerable<FormSections>=IEnumerable<FormFields> Here's what the service layer looks like: public class MyFormService: IFormService{ public Form OpenForm(int formId){ //construct and return a concrete implementation of Form } } Everything works splendidly (for a while). The UI is none the wiser about what sections/fields exist in a given form: It happily renders the Form object it receives into a functional ASP.NET page. A few weeks later, I get a new requirement from the business: When viewing a non-editable (i.e. read-only) versions of a form, certain field values should be merged together and other contrived/calculated fields should are added. No problem I say. Simply amend my service class so that its methods are more explicit: public class MyFormService: IFormService{ public Form OpenFormForEditing(int formId){ //construct and return a concrete implementation of Form } public Form OpenFormForViewing(int formId){ //construct and a concrete implementation of Form //apply additional transformations to the form } } Again everything works great and balance has been restored to the force. The UI continues to be agnostic as to what is in the Form, and our separation of concerns is achieved. Only a few short weeks later, however, the business puts out a new requirement: in certain scenarios, we should apply only some of the form transformations I referenced above. At this point, it feels like the "explicit method" approach has reached a dead end, unless I want to end up with an explosion of methods (OpenFormViewingScenario1, OpenFormViewingScenario2, etc). Instead, I introduce another level of indirection: public interface IFormViewCreator{ void CreateView(Form form); } public class MyFormService: IFormService{ public Form OpenFormForEditing(int formId){ //construct and return a concrete implementation of Form } public Form OpenFormForViewing(int formId, IFormViewCreator formViewCreator){ //construct a concrete implementation of Form //apply transformations to the dynamic field list return formViewCreator.CreateView(form); } } On the surface, this seems like acceptable approach and yet there is a certain smell. Namely, the UI, which had been living in ignorant bliss about the implementation details of OpenFormForViewing, must possess knowledge of and create an instance of IFormViewCreator. My questions are twofold: Is there a better way to achieve the composability I'm after? (perhaps by using an IoC container or a home rolled factory to create the concrete IFormViewCreator)? Did I fundamentally screw up the abstraction here?
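
    A hedged sketch of the "home rolled factory" direction: let the service own the mapping from a scenario key to an IFormViewCreator, so the UI only ever names the scenario and never constructs a creator. The ViewScenario enum and FormService shape below are invented for illustration:

    ```csharp
    using System.Collections.Generic;

    public class Form { /* sections, fields, ... */ }

    public interface IFormViewCreator
    {
        void CreateView(Form form);
    }

    public enum ViewScenario { Default, MergedFields, Printable }

    // The service owns the mapping; the UI only ever mentions the scenario key.
    public class FormService
    {
        private readonly IDictionary<ViewScenario, IFormViewCreator> viewCreators;

        public FormService(IDictionary<ViewScenario, IFormViewCreator> viewCreators)
        {
            this.viewCreators = viewCreators;   // typically wired up by the container
        }

        public Form OpenFormForViewing(int formId, ViewScenario scenario)
        {
            Form form = LoadForm(formId);            // build from the EAV tables
            viewCreators[scenario].CreateView(form); // apply the right transformations
            return form;
        }

        private Form LoadForm(int formId) { return new Form(); }   // placeholder
    }
    ```

    Whether the dictionary is filled by hand or by container registrations is then an implementation detail hidden behind the service, and the UI goes back to being agnostic about how views are produced.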

    Read the article

  • When does logic belong in the Business Object/Entity, and when does it belong in a Service?

    - by Casey
    In trying to understand Domain Driven Design I keep returning to a question that I can't seem to definitively answer. How do you determine what logic belongs to a Domain entity, and what logic belongs to a Domain Service? Example: We have an Order class for an online store. This class is an entity and an aggregate root (it contains OrderItems). Public Class Order:IOrder { Private List<IOrderItem> OrderItems Public Order(List<IOrderItem>) { OrderItems = List<IOrderItem> } Public Decimal CalculateTotalItemWeight() //This logic seems to belong in the entity. { Decimal TotalWeight = 0 foreach(IOrderItem OrderItem in OrderItems) { TotalWeight += OrderItem.Weight } return TotalWeight } } I think most people would agree that CalculateTotalItemWeight belongs on the entity. However, at some point we have to ship this order to the customer. To accomplish this we need to do two things: 1) Determine the postage rate necessary to ship this order. 2) Print a shipping label after determining the postage rate. Both of these actions will require dependencies that are outside the Order entity, such as an external webservice to retrieve postage rates. How should we accomplish these two things? I see a few options: 1) Code the logic directly in the domain entity, like CalculateTotalItemWeight. We then call: Order.GetPostageRate Order.PrintLabel 2) Put the logic in a service that accepts IOrder. We then call: PostageService.GetPostageRate(Order) PrintService.PrintLabel(Order) 3) Create a class for each action that operates on an Order, and pass an instance of that class to the Order through Constructor Injection (this is a variation of option 1 but allows reuse of the RateRetriever and LabelPrinter classes): Public Class Order:IOrder { Private List<IOrderItem> OrderItems Private RateRetriever _Retriever Private LabelPrinter _Printer Public Order(List<IOrderItem>, RateRetriever Retriever, LabelPrinter Printer) { OrderItems = List<IOrderItem> _Retriever = Retriever _Printer = Printer } Public Decimal GetPostageRate { _Retriever.GetPostageRate(this) } Public void PrintLabel { _Printer.PrintLabel(this) } } Which one of these methods do you choose for this logic, if any? What is the reasoning behind your choice? Most importantly, is there a set of guidelines that led you to your choice?
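
    A fourth variant that sometimes comes up in this discussion (not necessarily the "right" DDD answer) is double dispatch: keep GetPostageRate on the entity but pass the collaborator into the method call instead of the constructor. A hedged C# sketch with an invented IPostageRateRetriever interface:

    ```csharp
    using System.Collections.Generic;

    public interface IPostageRateRetriever
    {
        decimal GetRate(decimal totalWeight);   // wraps the external postage web service
    }

    public class OrderItem
    {
        public decimal Weight { get; set; }
    }

    public class Order
    {
        private readonly List<OrderItem> orderItems;

        public Order(List<OrderItem> orderItems)
        {
            this.orderItems = orderItems;
        }

        public decimal CalculateTotalItemWeight()
        {
            decimal totalWeight = 0;
            foreach (OrderItem item in orderItems)
                totalWeight += item.Weight;
            return totalWeight;
        }

        // Double dispatch: the entity stores no infrastructure reference, but the
        // rule "postage depends on this order's weight" stays on the entity.
        public decimal GetPostageRate(IPostageRateRetriever retriever)
        {
            return retriever.GetRate(CalculateTotalItemWeight());
        }
    }
    ```

    Printing the label, which is closer to pure infrastructure, can still live in a service that accepts the order (option 2), so the two concerns do not have to be decided the same way.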

    Read the article

  • How to create a column containing a string of stars to indicate levels of a factor in a data frame i

    - by PaulHurleyuk
    (second question today - must be a bad day) I have a dataframe with various columns, inculding a concentration column (numeric), a flag highlighting invalid results (boolean) and a description of the problem (character) dput(df) structure(list(x = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), rawconc = c(77.4, 52.6, 86.5, 44.5, 167, 16.2, 59.3, 123, 1.95, 181), reason = structure(c(NA, NA, 2L, NA, NA, NA, 2L, 1L, NA, NA), .Label = c("Fails Acceptance Criteria", "Poor Injection"), class = "factor"), flag = c("False", "False", "True", "False", "False", "False", "True", "True", "False", "False" )), .Names = c("x", "rawconc", "reason", "flag"), row.names = c(NA, -10L), class = "data.frame") I can create a column with the numeric level of the reason column df$level<-as.numeric(df$reason) df x rawconc reason flag level 1 1 77.40 <NA> False NA 2 2 52.60 <NA> False NA 3 3 86.50 Poor Injection True 2 4 4 44.50 <NA> False NA 5 5 167.00 <NA> False NA 6 6 16.20 <NA> False NA 7 7 59.30 Poor Injection True 2 8 8 123.00 Fails Acceptance Criteria True 1 9 9 1.95 <NA> False NA 10 10 181.00 <NA> False NA and here's what I want to do to create a column with 'level' many stars, but it fails df$stars<-paste(rep("*",df$level)sep="",collapse="") Error: unexpected symbol in "df$stars<-paste(rep("*",df$level)sep" df$stars<-paste(rep("*",df$level),sep="",collapse="") Error in rep("*", df$level) : invalid 'times' argument rep("*",df$level) Error in rep("*", df$level) : invalid 'times' argument df$stars<-paste(rep("*",pmax(df$level,0,na.rm=TRUE)),sep="",collapse="") Error in rep("*", pmax(df$level, 0, na.rm = TRUE)) : invalid 'times' argument It seems that rep needs to be fed one value at a time. I feel that this should be possible (and my gut says 'use lapply' but my apply fu is v. poor) ANy one want to try ?

    Read the article

  • Constructor parameters for dependent classes with Unity Framework

    - by Onisemus
    I just started using the Unity Application Block to try to decouple my classes and make it easier for unit testing. I ran into a problem though that I'm not sure how to get around. Looked through the documentation and did some Googling but I'm coming up dry. Here's the situation: I have a facade-type class which is a chat bot. It is a singleton class which handles all sort of secondary classes and provides a central place to launch and configure the bot. I also have a class called AccessManager which, well, manages access to bot commands and resources. Boiled down to the essence, I have the classes set up like so. public class Bot { public string Owner { get; private set; } public string WorkingDirectory { get; private set; } private IAccessManager AccessManager; private Bot() { // do some setup // LoadConfig sets the Owner & WorkingDirectory variables LoadConfig(); // init the access mmanager AccessManager = new MyAccessManager(this); } public static Bot Instance() { // singleton code } ... } And the AccessManager class: public class MyAccessManager : IAccessManager { private Bot botReference; public MyAccesManager(Bot botReference) { this.botReference = botReference; SetOwnerAccess(botReference.Owner); } private void LoadConfig() { string configPath = Path.Combine( botReference.WorkingDirectory, "access.config"); // do stuff to read from config file } ... } I would like to change this design to use the Unity Application Block. I'd like to use Unity to generate the Bot singleton and to load the AccessManager interface in some sort of bootstrapping method that runs before anything else does. public static void BootStrapSystem() { IUnityContainer container = new UnityContainer(); // create new bot instance Bot newBot = Bot.Instance(); // register bot instance container.RegisterInstance<Bot>(newBot); // register access manager container.RegisterType<IAccessManager,MyAccessManager>(newBot); } And when I want to get a reference to the Access Manager inside the Bot constructor I can just do: IAcessManager accessManager = container.Resolve<IAccessManager>(); And elsewhere in the system to get a reference to the Bot singleton: // do this Bot botInstance = container.Resolve<Bot>(); // instead of this Bot botInstance = Bot.Instance(); The problem is the method BootStrapSystem() is going to blow up. When I create a bot instance it's going to try to resolve IAccessManager but won't be able to because I haven't registered the types yet (that's the next line). But I can't move the registration in front of the Bot creation because as part of the registration I need to pass the Bot as a parameter! Circular dependencies!! Gah!!! This indicates to me I have a flaw in the way I have this structured. But how do I fix it? Help!!
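
    One hedged way to break the circle is to stop constructing the access manager inside the Bot constructor and instead attach it in a second initialization step once both registrations exist. The sketch below is illustrative only; AttachAccessManager and the trimmed-down interfaces are invented, and the container calls in the trailing comment are limited to the ones already quoted in the question:

    ```csharp
    public interface IAccessManager
    {
        void SetOwnerAccess(string owner);
    }

    public class Bot
    {
        public string Owner { get; private set; }
        private IAccessManager accessManager;

        public Bot() { Owner = "someOwner"; /* LoadConfig() would run here */ }

        // Second phase: called once the container has built the access manager.
        public void AttachAccessManager(IAccessManager manager)
        {
            accessManager = manager;
            accessManager.SetOwnerAccess(Owner);
        }
    }

    public class MyAccessManager : IAccessManager
    {
        public void SetOwnerAccess(string owner) { /* read access.config, etc. */ }
    }

    // Bootstrapping no longer loops back on itself:
    //   var container = new UnityContainer();
    //   var bot = new Bot();
    //   container.RegisterInstance<Bot>(bot);
    //   container.RegisterType<IAccessManager, MyAccessManager>();
    //   bot.AttachAccessManager(container.Resolve<IAccessManager>());
    ```

    If two-phase initialization feels too loose, the same effect can be had by registering IAccessManager first and giving Bot a constructor parameter for it, since the manager only needs the owner and working-directory values rather than the whole Bot.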

    Read the article

  • SDL side-scroller scrolls inconsistently

    - by SDLFunTimes
    So I'm working on an upgrade from my previous project (that I posted here for code review) this time implementing a repeating background (like what is used on cartoons) so that SDL doesn't have to load really big images for a level. There's a strange inconsistency in the program, however: the first time the user scrolls all the way to the right 2 less panels are shown than is specified. Going backwards (left) the correct number of panels is shown (that is the panels repeat the number of times specified in the code). After that it appears that going right again (once all the way at the left) the correct number of panels is shown and same going backwards. Here's some selected code and here's a .zip of all my code constructor: Game::Game(SDL_Event* event, SDL_Surface* scr, int level_w, int w, int h, int bpp) { this->event = event; this->bpp = bpp; level_width = level_w; screen = scr; w_width = w; w_height = h; //load images and set rects background = format_surface("background.jpg"); person = format_surface("person.png"); background_rect_left = background->clip_rect; background_rect_right = background->clip_rect; current_background_piece = 1; //we are displaying the first clip rect_in_view = &background_rect_right; other_rect = &background_rect_left; person_rect = person->clip_rect; background_rect_left.x = 0; background_rect_left.y = 0; background_rect_right.x = background->w; background_rect_right.y = 0; person_rect.y = background_rect_left.h - person_rect.h; person_rect.x = 0; } and here's the move method which is probably causing all the trouble: void Game::move(SDLKey direction) { if(direction == SDLK_RIGHT) { if(move_screen(direction)) { if(!background_reached_right()) { //move background right background_rect_left.x += movement_increment; background_rect_right.x += movement_increment; if(rect_in_view->x >= 0) { //move the other rect in to fill the empty space SDL_Rect* temp; other_rect->x = -w_width + rect_in_view->x; temp = rect_in_view; rect_in_view = other_rect; other_rect = temp; current_background_piece++; std::cout << current_background_piece << std::endl; } if(background_overshoots_right()) { //sees if this next blit is past the surface //this is used only for re-aligning the rects when //the end of the screen is reached background_rect_left.x = 0; background_rect_right.x = w_width; } } } else { //move the person instead person_rect.x += movement_increment; if(get_person_right_side() > w_width) { //person went too far right person_rect.x = w_width - person_rect.w; } } } else if(direction == SDLK_LEFT) { if(move_screen(direction)) { if(!background_reached_left()) { //moves background left background_rect_left.x -= movement_increment; background_rect_right.x -= movement_increment; if(rect_in_view->x <= -w_width) { //swap the rect in view SDL_Rect* temp; rect_in_view->x = w_width; temp = rect_in_view; rect_in_view = other_rect; other_rect = temp; current_background_piece--; std::cout << current_background_piece << std::endl; } if(background_overshoots_left()) { background_rect_left.x = 0; background_rect_right.x = w_width; } } } else { //move the person instead person_rect.x -= movement_increment; if(person_rect.x < 0) { //person went too far left person_rect.x = 0; } } } } without the rest of the code this doesn't make too much sense. Since there is too much of it I'll upload it here for testing. Anyway does anyone know how I could fix this inconsistency?

    Read the article

  • Domain Validation in a CQRS architecture

    - by Jupaol
    Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it but I would like your opinion The first approach I considered was: class Customer : EntityBase<Customer> { public void ChangeEmail(string email) { if(string.IsNullOrWhitespace(email)) throw new DomainException(“...”); if(!email.IsEmail()) throw new DomainException(); if(email.Contains(“@mailinator.com”)) throw new DomainException(); } } I actually do not like this validation because even when I am encapsulating the validation logic in the correct entity, this is violating the Open/Close principle (Open for extension but Close for modification) and I have found that violating this principle, code maintenance becomes a real pain when the application grows up in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read, hard to maintain but the real reason why I do not like this approach is: if the validation rules change, I have to come and edit my domain entity. This has been a really simple example but in RL the validation could be more complex So following the philosophy of Udi Dahan, making roles explicit, and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer> { private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver; public bool IsSatisfiedBy(Customer customer) { return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email); } } But then I realize that in order to follow this approach I had to mutate my entities first in order to pass the value being valdiated, in this case the email, but mutating them would cause my domain events being fired which I wouldn’t like to happen until the new email is valid So after considering these approaches, I came out with this one, since I am going to implement a CQRS architecture: class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand> { public void IsValid(Customer entity, ChangeEmailCommand command) { if(!command.Email.HasValidDomain()) throw new DomainException(“...”); } } Well that’s the main idea, the entity is passed to the validator in case we need some value from the entity to perform the validation, the command contains the data coming from the user and since the validators are considered injectable objects they could have external dependencies injected if the validation requires it. Now the dilemma, I am happy with a design like this because my validation is encapsulated in individual objects which brings many advantages: easy unit test, easy to maintain, domain invariants are explicitly expressed using the Ubiquitous Language, easy to extend, validation logic is centralized and validators can be used together to enforce complex domain rules. And even when I know I am placing the validation of my entities outside of them (You could argue a code smell - Anemic Domain) but I think the trade-off is acceptable But there is one thing that I have not figured out how to implement it in a clean way. How should I use this components... 
Since they will be injected, they won’t fit naturally inside my domain entities, so basically I see two options: Pass the validators to each method of my entity Validate my objects externally (from the command handler) I am not happy with option 1, so I will explain how I would do it with option 2 class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand> { public void Execute(ChangeEmailCommand command) { private IEnumerable<IDomainInvariantValidator> validators; // here I would get the validators required for this command injected, and in here I would validate them, something like this using (var t = this.unitOfWork.BeginTransaction()) { var customer = this.unitOfWork.Get<Customer>(command.CustomerId); this.validators.ForEach(x => x.IsValid(customer, command)); // here I know the command is valid // the call to ChangeEmail will fire domain events as needed customer.ChangeEmail(command.Email); t.Commit(); } } } Well, this is it. Can you give me your thoughts about this or share your experiences with domain entity validation? EDIT I think it is not clear from my question, but the real problem is: Hiding the domain rules has serious implications for the future maintainability of the application, and domain rules also change often during the life-cycle of the app. Hence, implementing them with this in mind would let us extend them easily. Now imagine that in the future a rules engine is implemented; if the rules are encapsulated outside of the domain entities, this change would be easier to implement.
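
    For the "how do the validators reach the handler" part, one hedged option is a small composite that runs every registered invariant, so each command handler depends on a single validator object. The sketch below uses illustrative generic types, not the poster's exact interfaces:

    ```csharp
    using System.Collections.Generic;

    public interface IDomainInvariantValidator<TEntity, TCommand>
    {
        void IsValid(TEntity entity, TCommand command);   // throws a domain exception on failure
    }

    // Composite: the handler asks one object to run every registered invariant.
    public class CompositeValidator<TEntity, TCommand> : IDomainInvariantValidator<TEntity, TCommand>
    {
        private readonly IEnumerable<IDomainInvariantValidator<TEntity, TCommand>> validators;

        public CompositeValidator(IEnumerable<IDomainInvariantValidator<TEntity, TCommand>> validators)
        {
            this.validators = validators;
        }

        public void IsValid(TEntity entity, TCommand command)
        {
            foreach (var validator in validators)
                validator.IsValid(entity, command);
        }
    }
    ```

    The ChangeEmailCommandHandler above would then take one IDomainInvariantValidator<Customer, ChangeEmailCommand> in its constructor; adding a rule means registering one more validator with the container, and the handler itself never changes.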

    Read the article

  • How to generate a Program template by generating an abstract class

    - by Byron-Lim Timothy Steffan
    I have the following problem. The first step is to implement a program which follows a specific protocol on startup. Therefore, functions such as onInit, onConfigRequest, etc. will be necessary. (These are triggered e.g. by an incoming message on a TCP port.) My goal is to generate a class, for example an abstract one, which has abstract functions such as onInit(), etc. A programmer should just inherit from this base class and merely override these abstract functions of the base class. The rest of the protocol should simply be handled in the background (using the code of the base class) and should not need to appear in the programmer's code. What is the correct design strategy for such tasks? And how do I deal with the fact that the static Main method is not inheritable? What are the keywords/tags for this problem? (I have problems searching for a solution since I lack clear terms for this problem.) The goal is to create some sort of library/class which, included in one's code, results in executables following the protocol. EDIT (new explanation): Okay, let me try to explain in more detail: In this case the programs should be clients within a client-server architecture. We have a client-server connection via TCP/IP. Each program needs to follow a specific protocol upon program start: As soon as my program starts and gets connected to the server, it will receive an Init message (TcpClient); when this happens it should trigger the function onInit(). (Should this be implemented by an event system?) After onInit(), an acknowledgement message should be sent to the server. Afterwards there are some other steps, e.g. a config message from the server which triggers onConfig, and so on. Let's concentrate on the onInit function. The idea is that onInit (and onConfig and so on) should be the only functions the programmer has to edit, while the overall protocol messaging is hidden from him. Therefore, I thought using an abstract class with the abstract methods onInit(), onConfig() in it should be the right thing. The static Main method I would like to hide, since within it e.g. there will be some part which connects to the TCP port, reacts to the Init message and calls the onInit function. Two problems here: 1. the static Main method can't be inherited, can it? 2. I cannot call abstract functions from the Main method in the abstract master class. Let me give a pseudo-example for my ideas: public abstract class MasterClass { static void Main(string[] args){ 1. open TCP connection 2. waiting for Init Message from server 3. onInit(); 4. Send Acknowledgement, that Init Routine has ended successfully 5. waiting for Config message from server 6..... } public abstract void onInit(); public abstract void onConfig(); } I hope you get the idea now! The programmer should afterwards inherit from this master class and merely need to edit the functions onInit and so on. Is this possible? How? What else do you recommend for solving this? EDIT: The strategy idea provided below is a good one! Check out my comment on that.
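
    What is being described is essentially the Template Method pattern plus a one-line bootstrap in the derived class. A hedged C# sketch (the protocol steps and names like ProtocolClient.Run are invented): the base class owns the fixed protocol sequence, the programmer's class only fills in the hooks, and Main never needs to be inherited because it shrinks to a single call:

    ```csharp
    using System;

    // Library code: owns the protocol, exposes only the hooks.
    public abstract class ProtocolClient
    {
        // Template method: the fixed protocol sequence, not overridable.
        public void Run()
        {
            Connect();             // 1. open TCP connection
            WaitFor("INIT");       // 2. wait for the Init message
            OnInit();              // 3. programmer's hook
            Send("ACK_INIT");      // 4. acknowledge
            WaitFor("CONFIG");     // 5. wait for the Config message
            OnConfig();            // 6. programmer's hook
        }

        protected abstract void OnInit();
        protected abstract void OnConfig();

        // Placeholders for the real TcpClient-based messaging.
        private void Connect() { }
        private void WaitFor(string message) { }
        private void Send(string message) { }
    }

    // Programmer's code: only the hooks, plus a one-line Main.
    public class MyClient : ProtocolClient
    {
        protected override void OnInit()   { Console.WriteLine("init logic here"); }
        protected override void OnConfig() { Console.WriteLine("config logic here"); }

        public static void Main(string[] args)
        {
            new MyClient().Run();   // Main isn't inherited, but it no longer needs to be
        }
    }
    ```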

    Read the article

  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid, or instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective, I am asking for specific examples of languages or features, and ideas for optimization of these features, or important features that I haven't considered. Also, any references to implementations that prove my theories right or wrong. Top on my list of hard to optimize features and my theories (some of my theories are untested and are based on thought experiments): 1) Runtime method overloading (aka multi-method dispatch or signature based dispatch). Is it hard to optimize when combined with features that allow runtime recompilation or method addition. Or is it just hard, anyway? Call site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods. 2) Type morphing / variants (aka value based typing as opposed to variable based) Traditional optimizations simply cannot be applied when you don't know if the type of someting can change in a basic block. Combined with multi-methods, inlining must be done carefully if at all, and probably only for a given threshold of size of the callee. ie. it is easy to consider inlining simple property fetches (getters / setters) but inlining complex methods may result in code bloat. The other issue is I cannot just assign a variant to a register and JIT it to the native instructions because I have to carry around the type info, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved with x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective. 3) First class continuations - There are multiple ways to implement them, and I have done so in both of the most common approaches, one being stack copying and the other as implementing the runtime to use continuation passing style, cactus stacks, copy-on-write stack frames, and garbage collection. First class continuations have resource management issues, ie. we must save everything, in case the continuation is resumed, and I'm not aware if any languages support leaving a continuation with "intent" (ie. "I am not coming back here, so you may discard this copy of the world"). Having programmed in the threading model and the contination model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and also may affect cache efficienty (locality of stack changes more with use of continuations and co-routines). The other issue is they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast, and the less-common cases should be correct. 4) Pointer arithmetic and ability to mask pointers (storing in integers, etc.) Had to throw this in, but I could actually live without this quite easily. My feelings are that many of the high-level features, particularly in dynamic languages just don't map to hardware. 
Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of these features (features like caching, aliasing top of stack to register, instruction parallelism, return address buffers, loop buffers and branch prediction). Macro-applications of micro-features don't necessarily pan out like some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (ie. the more dynamic a language is the more we must lookup/cache at runtime, nothing can be assumed, so our instruction mix is made up of a higher percentage of non-local branching than traditional, statically compiled code) and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.

    Read the article

  • Advice on Factory Method

    - by heath
    Using PHP 5.2, I'm trying to use a factory to return a service to the controller. My request URI would be of the format www.mydomain.com/service/method/param1/param2/etc. My controller would then call a service factory using the token sent in the URI. From what I've seen, there are two main routes I could go with my factory. Single method: class ServiceFactory { public static function getInstance($token) { switch($token) { case 'location': return new StaticPageTemplateService('location'); break; case 'product': return new DynamicPageTemplateService('product'); break; case 'user': return new UserService(); break; default: return new StaticPageTemplateService($token); } } } or multiple methods: class ServiceFactory { public static function getLocationService() { return new StaticPageTemplateService('location'); } public static function getProductService() { return new DynamicPageTemplateService('product'); } public static function getUserService() { return new UserService(); } public static function getDefaultService($token) { return new StaticPageTemplateService($token); } } So, given this, I will have a handful of generic services in which I will pass that token (for example, StaticPageTemplateService and DynamicPageTemplateService) that will probably implement another factory method just like this to grab templates, domain objects, etc. And some that will be specific services (for example, UserService) which will be 1:1 to that token and not reused. So, this seems to be an OK approach (please give suggestions if it is not) for a small number of services. But what about when, over time, my site grows and I end up with hundreds of possibilities? This no longer seems like a good approach. Am I just way off to begin with, or is there another design pattern that would be a better fit? Thanks. UPDATE: @JSprang - the token is actually sent in the URI, like mydomain.com/location would want a service specific to location and mydomain.com/news would want a service specific to news. Now, for a lot of these, the service will be generic. For instance, a lot of pages will call a StaticTemplatePageService in which the token is passed in to the service. That service in turn will grab the "location" template or "links" template and just spit it back out. Some will need DynamicTemplatePageService in which the token gets passed in, like "news", and that service will grab a NewsDomainObject, determine how to present it and spit that back out. Others, like "user", will be specific to a UserService which will have methods like Login, Logout, etc. So basically, the token will be used to determine which service is needed AND, if it is a generic service, that token will be passed to that service. Maybe token isn't the correct terminology, but I hope you get the purpose. I wanted to use the factory so I can easily swap out which service I need in case my needs change. I just worry that after the site grows larger (both pages and functionality) the factory will become rather bloated. But I'm starting to feel like I just can't get away from storing the mappings in an array (like Stephen's solution). That just doesn't feel OOP to me and I was hoping to find something more elegant.

    Read the article

  • Configurable Values in Enum

    - by Omer Akhter
    I often use this design in my code to maintain configurable values. Consider this code: public enum Options { REGEX_STRING("Some Regex"), REGEX_PATTERN(Pattern.compile(REGEX_STRING.getString()), false), THREAD_COUNT(2), OPTIONS_PATH("options.config", false), DEBUG(true), ALWAYS_SAVE_OPTIONS(true), THREAD_WAIT_MILLIS(1000); Object value; boolean saveValue = true; private Options(Object value) { this.value = value; } private Options(Object value, boolean saveValue) { this.value = value; this.saveValue = saveValue; } public void setValue(Object value) { this.value = value; } public Object getValue() { return value; } public String getString() { return value.toString(); } public boolean getBoolean() { Boolean booleanValue = (value instanceof Boolean) ? (Boolean) value : null; if (value == null) { try { booleanValue = Boolean.valueOf(value.toString()); } catch (Throwable t) { } } // We want a NullPointerException here return booleanValue.booleanValue(); } public int getInteger() { Integer integerValue = (value instanceof Number) ? ((Number) value).intValue() : null; if (integerValue == null) { try { integerValue = Integer.valueOf(value.toString()); } catch (Throwable t) { } } return integerValue.intValue(); } public float getFloat() { Float floatValue = (value instanceof Number) ? ((Number) value).floatValue() : null; if (floatValue == null) { try { floatValue = Float.valueOf(value.toString()); } catch (Throwable t) { } } return floatValue.floatValue(); } public static void saveToFile(String path) throws IOException { FileWriter fw = new FileWriter(path); Properties properties = new Properties(); for (Options option : Options.values()) { if (option.saveValue) { properties.setProperty(option.name(), option.getString()); } } if (DEBUG.getBoolean()) { properties.list(System.out); } properties.store(fw, null); } public static void loadFromFile(String path) throws IOException { FileReader fr = new FileReader(path); Properties properties = new Properties(); properties.load(fr); if (DEBUG.getBoolean()) { properties.list(System.out); } Object value = null; for (Options option : Options.values()) { if (option.saveValue) { Class<?> clazz = option.value.getClass(); try { if (String.class.equals(clazz)) { value = properties.getProperty(option.name()); } else { value = clazz.getConstructor(String.class).newInstance(properties.getProperty(option.name())); } } catch (NoSuchMethodException ex) { Debug.log(ex); } catch (InstantiationException ex) { Debug.log(ex); } catch (IllegalAccessException ex) { Debug.log(ex); } catch (IllegalArgumentException ex) { Debug.log(ex); } catch (InvocationTargetException ex) { Debug.log(ex); } if (value != null) { option.setValue(value); } } } } } This way, I can save and retrieve values from files easily. The problem is that I don't want to repeat this code everywhere. Like as we know, enums can't be extended; so wherever I use this, I have to put all these methods there. I want only to declare the values and that if they should be persisted. No method definitions each time; any ideas?

    Read the article

  • What does a WinForm application need to be designed for usability, and be robust, clean, and profess

    - by msorens
    One of the principal problems impeding productivity in software implementation is the classic conundrum of “reinventing the wheel”. Of late I am a .NET developer and even the wonderful wizardry of .NET and Visual Studio covers only a portion of this challenging issue. Below I present my initial thoughts both on what is available and what should be available from .NET on a WinForm, focusing on good usability. That is, aspects of an application exposed to the user and making the user experience easier and/or better. (I do include a couple items not visible to the user because I feel strongly about them, such as diagnostics.) I invite you to contribute to these lists.

    LIST A: Components provided by .NET
    These are substantially complete components provided by .NET, i.e. those requiring at most trivial coding to use.
    - “About” dialog -- add it with a couple clicks then customize.
    - Persist settings across invocations -- .NET has the support; just use a few lines of code to glue them together (a minimal sketch appears after these lists).
    - Migrate settings with a new version -- a powerful one, available with one line of code.
    - Tooltips (and infotips) -- .NET includes just plain text tooltips; third-party libraries provide richer ones.
    - Diagnostic support -- TraceSources, TraceListeners, and more are built-in.
    - Internationalization -- support for tailoring your app to languages other than your own.

    LIST B: Components not provided by .NET
    These are not supplied at all by .NET or supplied only as rudimentary elements requiring substantial work to be realized.
    - Splash screen -- a small window present during program startup with your logo, loading messages, etc.
    - Tip of the day -- a mini-tutorial presented one bit at a time each time the user starts your app.
    - Check for available updates -- facility to query a server to see if the user is running the latest version of your app, then provide a simple way to upgrade if a new version is found.
    - Maximize to multiple monitors -- the canonical window allows you to maximize to a single monitor only; in my apps I allow maximizing across multiple monitors with a click.
    - Taskbar notifier -- flash the taskbar when your backgrounded app has new info for the user.
    - Options dialogs -- multi-page dialogs letting the user customize the app settings to his/her own preferences.
    - Progress indicator -- for long running operations give the user feedback on how far there is left to go.
    - Memory gauge -- an indicator (either absolute or percentage) of how much memory is used by your app.

    LIST C: Stylistic and/or tiny bits of functionality
    This list includes bits of functionality that are too tiny to merit being called a component, along with stylistic concerns (that admittedly do overlap with the Windows User Experience Interaction Guidelines).
    - Design a form for resizing -- unless you are restricting your form to be a fixed size, use anchors and docking so that it does what is reasonable when enlarged or shrunk by the user.
    - Set tab order on a form -- repeated tab presses by the user should advance from field to field in a logical order rather than the default order in which you added fields.
    - Adjust controls to be aware of operating modes -- when starting a background operation with, for example, a “Go” button, disable that “Go” button until the operation completes.
    - Provide access keys for all menu items (per UXGuide).
    - Provide shortcut keys for commonly used menu items (per UXGuide).
    - Set up some (global or important or common) shortcut keys without associating them with menu items.
    - Allow some menu items to be invoked with or without modifier keys (shift, control, alt) where the modifier key is useful to vary the operation slightly.
    - Hook up Escape and Enter on child forms to do what is reasonable.
    - Decorate any library classes with documentation comments and attributes -- this allows Visual Studio to leverage them for Intellisense and property descriptions.
    - Spell check your code!

    What else would you include?
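    As a concrete illustration of the “Persist settings across invocations” item, here is a minimal sketch of the usual .NET approach: user-scoped settings plus a few lines in the form's Load and FormClosing handlers. The WindowLocation and WindowSize setting names are assumptions invented for this example (they would be defined in the project's Settings designer), not something from the original post.

        using System;
        using System.Windows.Forms;

        public class MainForm : Form
        {
            public MainForm()
            {
                // Restore the window geometry saved by a previous run.
                // WindowLocation and WindowSize are hypothetical user-scoped
                // settings defined in the project's Settings designer.
                Load += (s, e) =>
                {
                    Location = Properties.Settings.Default.WindowLocation;
                    Size = Properties.Settings.Default.WindowSize;
                };

                // Capture the current geometry and persist it to user.config.
                FormClosing += (s, e) =>
                {
                    Properties.Settings.Default.WindowLocation = Location;
                    Properties.Settings.Default.WindowSize = Size;
                    Properties.Settings.Default.Save();
                };
            }
        }

    The related “Migrate settings with a new version” item is the one-liner alluded to above: calling Properties.Settings.Default.Upgrade() (typically guarded by a user-defined flag so it runs once per version) pulls forward the values saved by the previous version.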

    Read the article

  • Keeping video viewing statistics breakdown by video time in a database

    - by Septagram
    I need to keep a number of statistics about the videos being watched, and one of them is which parts of the video are watched most. The design I came up with is to split the video into 256 intervals and keep a floating-point number of views for each of them. I receive the data as a number of intervals the user watched continuously. The problem is how to store them. There are two solutions I see.

    Row per video segment

    Let's have a database table like this:

        CREATE TABLE `video_heatmap` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `video_id` int(11) NOT NULL,
          `position` tinyint(3) unsigned NOT NULL,
          `views` float NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `idx_lookup` (`video_id`,`position`)
        ) ENGINE=MyISAM

    Then, whenever we have to process a number of views, make sure the respective database rows exist and add the appropriate values to the views column. I found out it's a lot faster if the existence of the rows is taken care of first (SELECT COUNT(*) of rows for a given video and INSERT IGNORE if they are lacking), and then a number of update queries is used like this:

        UPDATE video_heatmap
          SET views = views + ?
          WHERE video_id = ? AND position >= ? AND position < ?

    This seems, however, a little bloated. The other solution I came up with is

    Row per video, update in transactions

    A table will look (sort of) like this:

        CREATE TABLE video (
          id INT NOT NULL AUTO_INCREMENT,
          heatmap BINARY (4 * 256) NOT NULL,
          ...
        ) ENGINE=InnoDB

    Then, every time a view needs to be stored, it will be done in a transaction with consistent snapshot, in a sequence like this:

    1. If the video doesn't exist in the database, it is created.
    2. A row is retrieved, and heatmap, an array of floats stored in binary form, is converted into a form more friendly for processing (in PHP; a sketch of this conversion appears after this post).
    3. Values in the array are increased appropriately and the array is converted back.
    4. The row is changed via an UPDATE query.

    So far the advantages can be summed up like this:

    First approach
    - Stores data as floats, not as some magical binary array.
    - Doesn't require transaction support, so doesn't require InnoDB, and we're using MyISAM for everything at the moment, so there won't be any need to mix storage engines. (only applies in my specific situation)
    - Doesn't require a transaction WITH CONSISTENT SNAPSHOT. I don't know what the performance penalties of those are.
    - I already implemented it and it works. (only applies in my specific situation)

    Second approach
    - Uses a lot less storage space (the first approach stores the video ID 256 times and stores a position for every segment of the video, not to mention the primary key).
    - Should scale better, because of InnoDB's per-row locking as opposed to MyISAM's table locking.
    - Might generally work faster because there are a lot fewer requests being made.
    - Easier to implement in code (although the other one is already implemented).

    So, what should I do? If it weren't for the rest of our system using MyISAM consistently, I'd go with the second approach, but currently I'm leaning towards the first one. But maybe there are some reasons to favour one approach over the other?
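    The conversion step in the second approach (unpacking the BINARY(4 * 256) column into 256 floats, adjusting a range, and packing it back) is done in PHP in the original post. Purely as an illustration of that idea, here is a hedged sketch in C#; the class and method names are invented for the example and are not part of the original design.

        using System;

        // Packs and unpacks the per-segment view counters stored in the
        // heatmap BINARY(4 * 256) column described above.
        public static class HeatmapCodec
        {
            public const int Segments = 256;

            public static float[] Unpack(byte[] blob)
            {
                var views = new float[Segments];
                Buffer.BlockCopy(blob, 0, views, 0, Segments * sizeof(float));
                return views;
            }

            public static byte[] Pack(float[] views)
            {
                var blob = new byte[Segments * sizeof(float)];
                Buffer.BlockCopy(views, 0, blob, 0, blob.Length);
                return blob;
            }

            // Adds 'amount' views to every segment in [from, to),
            // i.e. the interval the user watched continuously.
            public static void AddViews(float[] views, int from, int to, float amount)
            {
                for (int i = Math.Max(from, 0); i < to && i < Segments; i++)
                {
                    views[i] += amount;
                }
            }
        }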

    Read the article

  • referencing part of the composite primary key

    - by Zavael
    I have problems with setting a reference on a database table. I have the following structure:

        CREATE TABLE club(
          id INTEGER NOT NULL,
          name_short VARCHAR(30),
          name_full VARCHAR(70) NOT NULL
        );
        CREATE UNIQUE INDEX club_uix ON club(id);
        ALTER TABLE club ADD CONSTRAINT club_pk PRIMARY KEY (id);

        CREATE TABLE team(
          id INTEGER NOT NULL,
          club_id INTEGER NOT NULL,
          team_name VARCHAR(30)
        );
        CREATE UNIQUE INDEX team_uix ON team(id, club_id);
        ALTER TABLE team ADD CONSTRAINT team_pk PRIMARY KEY (id, club_id);
        ALTER TABLE team ADD FOREIGN KEY (club_id) REFERENCES club(id);

        CREATE TABLE person(
          id INTEGER NOT NULL,
          first_name VARCHAR(20),
          last_name VARCHAR(20) NOT NULL
        );
        CREATE UNIQUE INDEX person_uix ON person(id);
        ALTER TABLE person ADD PRIMARY KEY (id);

        CREATE TABLE contract(
          person_id INTEGER NOT NULL,
          club_id INTEGER NOT NULL,
          wage INTEGER
        );
        CREATE UNIQUE INDEX contract_uix on contract(person_id);
        ALTER TABLE contract ADD CONSTRAINT contract_pk PRIMARY KEY (person_id);
        ALTER TABLE contract ADD FOREIGN KEY (club_id) REFERENCES club(id);
        ALTER TABLE contract ADD FOREIGN KEY (person_id) REFERENCES person(id);

        CREATE TABLE player(
          person_id INTEGER NOT NULL,
          team_id INTEGER,
          height SMALLINT,
          weight SMALLINT
        );
        CREATE UNIQUE INDEX player_uix on player(person_id);
        ALTER TABLE player ADD CONSTRAINT player_pk PRIMARY KEY (person_id);
        ALTER TABLE player ADD FOREIGN KEY (person_id) REFERENCES person(id);
        -- ALTER TABLE player ADD FOREIGN KEY (team_id) REFERENCES team(id); --this is not working

    The last statement gives me this error:

        Error code -5529, SQL state 42529: a UNIQUE constraint does not exist on referenced columns: TEAM in statement [ALTER TABLE player ADD FOREIGN KEY (team_id) REFERENCES team(id)]

    As you can see, the team table has a composite primary key (club_id + id), and person references club through contract. Person holds the attributes common to players and other staff types. One club can have multiple teams. An employed person has to have a contract with a club. A player (a specialisation of person), if employed, can be assigned to one of the club's teams. Is there a better way to design my structure? I thought about excluding club_id from team's primary key, but I would like to know if that is the only way. Thanks.

    UPDATE 1
    I would like the id to identify a team only within its club, so multiple teams can have equal ids as long as they belong to different clubs. Is that possible?

    UPDATE 2
    Updated the naming convention as advised by philip. Some business rules to better understand the structure:
    - One club can have 1..n teams (Main squad, Reserve squad, Youth squad or Team A, Team B... only a team can play a match, not a club)
    - One team belongs to one club only
    - A player is a type of person (other types (staff) are scouts, coaches etc, so they do not need to belong to a specific team, just to the club, if employed)
    - A person can have 0..1 contracts with 1 club (that means he is employed or unemployed)
    - A player (if employed) belongs to one team of the club

    Now thinking about it, moving team_id from player to contract would solve my problem, and it could hold the condition "A player (if employed) belongs to one team of the club", but it would be redundant for other staff types. What do you think?

    Read the article

  • Must I expose the aggregate children as public properties to implement persistence ignorance?

    - by xuehua
    Hi all, I'm very glad that I found this website recently; I've learned a lot from here. I'm from China and my English is not so good, but I will try to express what I want to say. Recently I've started learning about Domain Driven Design, and I'm very interested in it. I plan to develop a forum website using DDD. After reading lots of threads here, I understood that persistence ignorance is a good practice. Currently, I have two questions about something I've been thinking over for a long time.

    1. Should the domain object interact with the repository to get/save data?
    2. If the domain object doesn't use the repository, then how does the infrastructure layer (like the unit of work) know which domain objects are new/modified/removed?

    For the second question, here is some example code. Suppose I have a User class:

        public class User
        {
            public Guid Id { get; set; }
            public string UserName { get; set; }
            public string NickName { get; set; }

            /// <summary>
            /// A Roles collection which represents the current user's owned roles.
            /// But here I don't want to use a public property to expose it.
            /// Instead, I use the methods below.
            /// </summary>
            //public IList<Role> Roles { get; set; }
            private List<Role> roles = new List<Role>();

            public IList<Role> GetRoles()
            {
                return roles;
            }

            public void AddRole(Role role)
            {
                roles.Add(role);
            }

            public void RemoveRole(Role role)
            {
                roles.Remove(role);
            }
        }

    Based on the above User class, suppose I get a user from the IUserRepository and add a role to it:

        IUserRepository userRepository;
        User user = userRepository.Get(Guid.NewGuid());
        user.AddRole(new Role() { Name = "Administrator" });

    In this case, I don't know how the repository or unit of work can tell that the user has a new role. I think a truly persistence-ignorant ORM framework should support POCOs, and the persistence framework should automatically detect any changes that occur on the POCO itself, even when I change the object state through methods (AddRole, RemoveRole) as in the example above. I know a lot of ORMs can automatically persist the changes if I use the Roles property, but sometimes I don't like that approach for performance reasons. Could anyone give me some ideas on this? Thanks. This is my first question on this site. I hope my English can be understood. Any answers will be very appreciated.
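    Not an answer from the original thread, just a hedged sketch of one common technique that addresses the question: a unit of work can detect modifications without the aggregate exposing its collections by keeping a snapshot of each aggregate's state as it is loaded and diffing it at commit time. Everything here (ChangeTracker, RegisterClean, the injected serializer) is an illustrative assumption, not an existing framework API.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Minimal snapshot-based change tracker: the repository registers each
        // aggregate it loads together with a serialized snapshot of its state;
        // at commit time, aggregates whose current serialization differs from
        // the snapshot are reported as modified.
        public class ChangeTracker
        {
            private readonly Dictionary<object, string> snapshots =
                new Dictionary<object, string>();
            private readonly Func<object, string> serialize;

            public ChangeTracker(Func<object, string> serialize)
            {
                // 'serialize' is any deep, deterministic serialization of the
                // aggregate (JSON, for example); it is injected so the sketch
                // does not depend on a particular library.
                this.serialize = serialize;
            }

            public void RegisterClean(object aggregate)
            {
                snapshots[aggregate] = serialize(aggregate);
            }

            public IEnumerable<object> GetModified()
            {
                return snapshots
                    .Where(pair => serialize(pair.Key) != pair.Value)
                    .Select(pair => pair.Key);
            }
        }

    Mature ORMs do something broadly similar internally (state snapshots or runtime proxies), which is why a plain POCO like the User class above, with its roles kept in a private field, can still have its changes detected; the trade-off of the snapshot approach is the cost of re-serializing every tracked aggregate at commit.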

    Read the article

  • ‘Responsive Web Design with Macaw’ – free book updated

    - by ihaynes
    Originally posted on: http://geekswithblogs.net/ihaynes/archive/2014/08/12/lsquoresponsive-web-design-with-macawrsquondash-free-book-updated.aspx

    The free book on Macaw by Schonne Eldridge that I mentioned before has been updated. If you’re already subscribed you’ll get alerts for updates. If not, the book is well worth getting to start you off with Macaw. If you haven’t tried Macaw yet, heck, you’re missing something big. The book is available from http://schonne.com/macaw and Macaw itself is at http://macaw.co

    Read the article
