Search Results

Search found 15120 results on 605 pages for 'mock driven design'.


  • What is the best design/way to keep users connected?

    - by Fasih Hansmukh
    I am working on a proof of concept, for self-learning, in which I want to keep my users connected in a live fashion. For example: a game in which 4 users can play at a time, where I need to keep those users connected to the game. I am not good at socket-style programming and would prefer to do this with services. What I want to know is: what is the best way of doing this? From my initial brainstorming I have decided to use Silverlight (in-browser or out-of-browser) as the front end [I have no issue with that]. I am more concerned about the back end: do I write a handler, a WCF service, or a full-duplex service, and use a polling mechanism? As a rough idea I came up with timer-style logic that fires every 10 seconds at the client's end and fetches status such as: is it now my turn to roll the dice; how many users are left (in case some of them have quit); what is each connected user's status in the game (score/points etc.), and then updates the game view accordingly at the client. Kindly post your best answers here; they will help me learn this. Regards, and thanks in advance. EDIT: Starting a bounty as I need more feedback. FH
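    For illustration only, a minimal Python sketch of the timer-based polling idea described above; fetch_game_state() is a hypothetical stand-in for whatever WCF/duplex endpoint the back end exposes, and is not part of the original post.

        import time

        POLL_INTERVAL_SECONDS = 10

        def fetch_game_state(game_id):
            # Hypothetical service call; in the real app this would be the
            # back-end endpoint returning turn, remaining players, scores, etc.
            return {"my_turn": False, "players_left": 4, "scores": {}}

        def poll_game(game_id):
            while True:
                state = fetch_game_state(game_id)    # one round trip per tick
                if state["my_turn"]:
                    print("It is your turn to roll the dice")
                print("Players still connected:", state["players_left"])
                time.sleep(POLL_INTERVAL_SECONDS)    # client-side timer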

    Read the article

  • What design pattern should be used to create an emulator?

    - by Facon
    I have programmed an emulator, but I have some doubts about how to organize it properly, because I can see that it has some problems with how the classes are connected (CPU <-> Machine Board), for example: I/O ports, interrupts, communication between two or more CPUs, etc. I need the emulator to have the best possible performance while keeping the code easy to understand. P.S.: Sorry for my bad English.
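    One common way to loosen the CPU <-> board coupling is to put a bus or mediator between them; a minimal hedged sketch in Python (all names illustrative, not from the original post):

        class Bus:
            """Mediates I/O between the CPU and the machine board."""
            def __init__(self):
                self._read_handlers = {}    # port -> callable() -> int
                self._write_handlers = {}   # port -> callable(value)

            def map_port(self, port, read=None, write=None):
                if read:
                    self._read_handlers[port] = read
                if write:
                    self._write_handlers[port] = write

            def read(self, port):
                return self._read_handlers[port]()

            def write(self, port, value):
                self._write_handlers[port](value)

        class CPU:
            def __init__(self, bus):
                self.bus = bus    # the CPU only knows the bus, never the board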

    Read the article

  • Quality Design for Asynchronous WCF Service Calls in a Middle Tier and Returning Data to the UI Tier

    - by Perplexed
    I have a WPF application with a group of asynchronous WCF service calls all mashed into the code-behind, complete with event handlers and everything, that I have to refactor to productionize and maintain. I want to separate concerns here for maintainability and all the other good reasons, but I'm not sure exactly how to achieve this. Does anybody have any good ideas on how to do this, or at least some links to point me in the right direction? My thinking:
    - Create an "infrastructure" layer and reference the services there.
    - Move the asynchronous event handlers into this layer.
    - When an update is called, bubble up my own event with my own derivation of the EventArgs class that contains the data the UI will need.
    This leaves the UI fairly tightly coupled to the infrastructure layer, as it will consume the events I fire off upon completion of each asynchronous data call.
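    To make the "bubble up my own event" idea concrete, a language-agnostic sketch (Python standing in for the C#/WCF infrastructure layer; class and handler names are illustrative assumptions):

        from dataclasses import dataclass

        @dataclass
        class OrdersLoadedEventArgs:
            # Only the data the UI actually needs, not the raw service response
            orders: list

        class OrderService:
            """Infrastructure layer: owns the async service call and its handler."""
            def __init__(self):
                self.orders_loaded_handlers = []   # the UI subscribes here

            def _on_service_completed(self, raw_result):
                # Translate the service result into our own event args and raise the event
                args = OrdersLoadedEventArgs(orders=list(raw_result))
                for handler in self.orders_loaded_handlers:
                    handler(args)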

    Read the article

  • Algorithm design: can you provide a solution to the multiple knapsack problem?

    - by MalcomTucker
    I am looking for a pseudo-code solution to what is effectively the multiple knapsack problem (the optimisation statement is halfway down the page). I think this problem is NP-complete, so the solution doesn't need to be optimal; rather, if it is fairly efficient and easily implemented, that would be good. The problem is this: I have many work items, each taking a different (but fixed and known) amount of time to complete. I need to divide these work items into groups so as to have the smallest number of groups (ideally), with each group of work items taking no longer than a given total threshold, say 1 hour. I am flexible about the threshold; it doesn't need to be rigidly applied, though it should be close. My idea was to allocate work items into bins where each bin represents 90% of the threshold, 80%, 70% and so on. I could then match items that take 90% with those that take 10%, and so on. Any better ideas?
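    A minimal Python sketch of the classic first-fit-decreasing heuristic, which matches the binning idea above; it assumes the threshold is expressed in the same units as the work item durations, and it is a heuristic, not an optimal solver.

        def pack_work_items(durations, threshold=60):
            """Greedy first-fit decreasing: returns a list of groups (lists of durations)."""
            groups = []    # each group tracks its remaining capacity and its items
            for d in sorted(durations, reverse=True):
                for group in groups:
                    if group["remaining"] >= d:
                        group["items"].append(d)
                        group["remaining"] -= d
                        break
                else:
                    # no existing group can take this item: open a new one
                    groups.append({"remaining": threshold - d, "items": [d]})
            return [g["items"] for g in groups]

        print(pack_work_items([50, 40, 30, 20, 10, 5], threshold=60))
        # [[50, 10], [40, 20], [30, 5]]  (3 groups, none over 60)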

    Read the article

  • Icons: How does a developer with no design skill make his/her application icons look pretty?

    - by Martin
    I probably spend far too much time trying to make my visual interfaces look good, and while I'm pretty adept at finding the right balance between usability and style, one area I am hopeless at is making nice-looking icons. How do you overcome this (I'm sure common) problem? I'm thinking of things like images on buttons and, perhaps most important of all, the actual application icon. Do you rely on third-party designers, in or out of house? Or do you know of some hidden website that offers lots of icons for us to use? I've tried Google, but I seem to find either expensive packages that are very specific, millions of Star Trek icons, or icons that look abysmal at 16x16, which is my preferred size for in-application buttons. Any help/advice appreciated.

    Read the article

  • Please suggest the best way to design my database.

    - by Raymond Ho
    I have a table named "Pages" and a table named "Categories". Each entry in the "Pages" table is linked to the "Categories" table. The "Categories" table has 5 entries: "Car", "Websites", "Technology", "Mobile Phones", and "Interest". So each time I add an entry to the "Pages" table, I need to map it to the "Categories" table so that everything is arranged properly. Here are my tables:

        Pages
        ------
        id [PK]
        name
        url

        Categories
        ------
        id [PK]
        categoryname

        Pages2Categories
        ------
        Pages.id
        Categories.id

    So my question is: is this the most efficient way to create this kind of relationship between tables? It seems very amateurish.
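    For what it's worth, the Pages2Categories table above is the standard junction-table design for a many-to-many relationship. A minimal runnable sketch using Python's built-in sqlite3 (table names taken from the post; the junction column names are adjusted slightly for clarity):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE Pages (
                id   INTEGER PRIMARY KEY,
                name TEXT,
                url  TEXT
            );
            CREATE TABLE Categories (
                id           INTEGER PRIMARY KEY,
                categoryname TEXT
            );
            CREATE TABLE Pages2Categories (
                page_id     INTEGER REFERENCES Pages(id),
                category_id INTEGER REFERENCES Categories(id),
                PRIMARY KEY (page_id, category_id)   -- prevents duplicate links
            );
        """)

    If each page only ever belongs to exactly one category, a plain category_id foreign key column on Pages would be even simpler; the junction table is only needed when the relationship is genuinely many-to-many.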

    Read the article

  • Database design (MySQL): should we put HTML data in a text field inside the database table, or is it more efficient to store it in a file?

    - by meyosef
    Hi, we are building a big web application with MySQL, and we want to make the database as fast as possible. Some of us think that if we put the HTML message body inside the table, rather than inside a .txt file, it will make the database heavy and slow. Thanks. Part of the main table that holds the message:

        Option 1: hold the HTML message body inside the database
        message {
            id (int)
            subject (varchar)
            body (text)
        }

        Option 2: hold the HTML message body inside body1.txt
        message {
            id (int)
            subject (varchar)
            file_body_path (varchar)
        }

    Read the article

  • Database design: a table with a large number of columns (50+), or many sub-tables with a small number of columns?

    - by Guillaume
    In our project we already have a lot of tables (100+). Some of them contain a lot of columns (50-100), and we face the need to add more columns from time to time. What do you think is best, from a maintenance and performance point of view: to split these huge tables into smaller entities, or to keep the tables the way they are? We are using an ORM tool, so we don't need to write custom queries.
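    If splitting is chosen, one common pattern is a narrow core table plus optional 1:1 "extension" tables that reuse the core table's key. A hedged sqlite3 sketch with made-up table and column names, just to show the shape:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE product (                   -- the frequently-used columns
                id    INTEGER PRIMARY KEY,
                name  TEXT NOT NULL,
                price REAL
            );
            CREATE TABLE product_shipping_details (  -- rarely-used columns, 1:1
                product_id INTEGER PRIMARY KEY REFERENCES product(id),
                weight_kg  REAL,
                width_cm   REAL,
                height_cm  REAL
            );
        """)
        # Reading the extra columns costs one extra join:
        # SELECT p.name, s.weight_kg FROM product p
        # LEFT JOIN product_shipping_details s ON s.product_id = p.id;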

    Read the article

  • Does having to insert a record, then update the same record, warrant a 1:1 relationship design?

    - by dianovich
    Let's say an Order has many Line items and we're storing the total cost of an order (based on the sum of prices on the order lines) in the orders table.

        orders
        ------
        id
        ref
        total_cost

        lines
        ------
        id
        order_id
        price

    In a simple application, the order and its lines are created during the same step of the checkout process. So this means:

        INSERT INTO orders ....
        -- Get ID of inserted order record
        INSERT INTO lines VALUES (null, order_id, ...), ...

    where we get the order ID after creating the order record. The problem I'm having is trying to figure out the best way to store the total cost of an order. I don't want to have to:
    1. create an order
    2. create the lines on the order
    3. calculate the cost of the order based on its lines
    4. then update the record created in 1. in the orders table
    This would mean a nullable total_cost field on orders, for starters. My solution thus far is to have an order_totals table with a 1:1 relationship to the orders table, but I think it's redundant. Ideally, since everything required to calculate the total cost (the lines on an order) is in the database, I would work out the value every time I need it, but this is very expensive. What are your thoughts?
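    One way to avoid both the nullable total_cost and the extra 1:1 table is to compute the total from the line prices (which are known at checkout) before inserting the order, and do the whole thing in a single transaction. A hedged sqlite3 sketch using the table names above, with made-up values:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE orders (id INTEGER PRIMARY KEY, ref TEXT,
                                 total_cost REAL NOT NULL DEFAULT 0);
            CREATE TABLE lines  (id INTEGER PRIMARY KEY,
                                 order_id INTEGER REFERENCES orders(id), price REAL);
        """)

        line_prices = [9.99, 4.50, 20.00]           # known before the order is saved

        with conn:                                  # one transaction
            cur = conn.execute(
                "INSERT INTO orders (ref, total_cost) VALUES (?, ?)",
                ("ABC-123", sum(line_prices)))      # total computed up front
            order_id = cur.lastrowid
            conn.executemany(
                "INSERT INTO lines (order_id, price) VALUES (?, ?)",
                [(order_id, p) for p in line_prices])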

    Read the article

  • How to design the application to conform to the n-tier architecture? (WinForms sample in .NET with LINQ)

    - by AlexRednic
    Rather a simple question, but the implications are vast. Over the last few weeks I've been reading a lot of material about n-tier architecture and its implementation in the .NET world. The problem is I couldn't find a relevant sample for WinForms with LINQ (LINQ is the way to go for the BLL, right?). How did you manage to grasp the n-tier concept? Books, articles, relevant samples, etc.

    Read the article

  • Design guidelines for writing a Typed SQL Statement API?

    - by this. __curious_geek
    Last night I came across something interesting while designing my new project, which brought me to ask this question here. My project is supposed to follow the Table Gateway pattern, using traditional ADO.NET DataSets for data access. I don't want to write plain queries in my data-access classes, so I came up with the idea of writing a parser-like API that exposes objects and methods to generate queries on the fly based on my domain objects. Later I want to hook this API up to my business objects and provide a typed SQL generator API right on the business object instances. Any ideas or references on how I can do this? This seems very broad to start with, so I'm compelled to ask for your opinions here. Does anything like this already exist?
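    Not a recommendation of any existing library, just a tiny Python sketch of what a typed-ish query generator for a table gateway could look like, to make the idea concrete (all class, table and column names are hypothetical):

        class Query:
            """Builds a parameterised SELECT for one table gateway."""
            def __init__(self, table, columns):
                self._table, self._columns = table, columns
                self._where, self._params = [], []

            def where(self, column, value):
                if column not in self._columns:        # "typed" check at build time
                    raise AttributeError(f"{self._table} has no column {column!r}")
                self._where.append(f"{column} = ?")
                self._params.append(value)
                return self                            # allows chaining

            def to_sql(self):
                sql = f"SELECT {', '.join(self._columns)} FROM {self._table}"
                if self._where:
                    sql += " WHERE " + " AND ".join(self._where)
                return sql, tuple(self._params)

        customers = Query("Customer", ["Id", "Name", "City"])
        print(customers.where("City", "Pune").to_sql())
        # ('SELECT Id, Name, City FROM Customer WHERE City = ?', ('Pune',))

    The same approach can be generated per business object (one Query subclass per table gateway), which is roughly the "typed SQL generator on the business object instances" described above.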

    Read the article

  • Is there a standard SQL table design for overriding 'big picture' default values with lower-level details?

    - by RichardHowells
    Here's an example. Suppose we are trying to calculate a service charge. Say sales in the USA attract a 10 dollar charge, and sales in the UK attract a 20 dollar charge. So far it's easy: we are starting to imagine a table that lists charges by country. Now let's assume that Alaska and Hawaii are treated as special cases: they are both 15 dollars. That suggests a table with states; Alaska and Hawaii are charged at 15, but presumably we need 48 (redundant) rows all saying 10. This gives us a maintenance problem: our user only wants to type 10 once, NOT 48 times. It does not sit well with the UK either, since the UK does not have states. Suppose we throw in another couple of cross-cutting rules: if you order by phone there is a 10% supplement on the charge; if you order via the web there is a 10% discount. But, for some reason best known to the owners of the business, the web/phone supplement/discount is not applied in Hawaii. It seems to me that this is quite a common kind of problem, and there is probably a well-known arrangement of tables to store the data. Most cases get handled by broad-brush answers, but there are some very detailed low-level variations that give rise to a huge number of theoretical combinations, most of which are not used.
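    One common table shape for this is a single rules table where the more specific columns are NULL for the broad-brush rows, and the lookup picks the most specific match. A hedged sqlite3 sketch (table name made up, figures taken from the example above, the phone/web channel left out for brevity):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE service_charge_rule (
                country TEXT NOT NULL,
                state   TEXT,             -- NULL means "any state in this country"
                charge  REAL NOT NULL
            );
            INSERT INTO service_charge_rule VALUES
                ('USA', NULL,     10.0),  -- typed once, covers the other 48 states
                ('USA', 'Alaska', 15.0),
                ('USA', 'Hawaii', 15.0),
                ('UK',  NULL,     20.0);
        """)

        def charge_for(country, state=None):
            row = conn.execute("""
                SELECT charge FROM service_charge_rule
                WHERE country = ? AND (state = ? OR state IS NULL)
                ORDER BY state IS NULL          -- exact state match sorts first
                LIMIT 1""", (country, state)).fetchone()
            return row[0] if row else None

        print(charge_for('USA', 'Texas'))   # 10.0 (falls back to the country row)
        print(charge_for('USA', 'Hawaii'))  # 15.0 (state-specific override)

    The phone/web supplement could follow the same idea: another nullable "channel" column, with the Hawaii rows overriding the default channel behaviour.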

    Read the article

  • C++ design question: can I query the base class to find the number of derived-class objects satisfying a condition?

    - by vivekeviv
    I have a piece of code like this:

        class Base {
        public:
            Base(bool _active) { active = _active; }
            void Configure();
            void SetActive(bool _active);
        private:
            bool active;
        };

        class Derived1 : public Base {
        public:
            Derived1(bool active) : Base(active) {}
        };

    and similarly Derived2 and Derived3. Now if I call derived1Object.Configure(), I need to check how many of derived1Obj, derived2Obj and derived3Obj are active. Should I add this to the Base class as a function, say GetNumberOfActive()? And if the implementation is like this:

        class Imp {
        public:
            void Configure() {
                // Code instantiating a particular Derived1/2/3 object
                int numberOfActive = GetNumberOfActiveDerivedObj();
                baseRef.Configure(numberOfActive);
            }
        private:
            Derived1 dObj1{true};
            Derived2 dObj2{false};
            Derived3 dObj3{true};
        };

    should I calculate the number of active derived objects in the Imp class? Thanks

    Read the article

  • How to properly design a simple favorites and blocked table?

    - by Nils Riedemann
    Hey, I am currently writing a webapp in Rails where users can mark items as favorites and also block them. I came up with two ways and wondered which one is more common / the better way.

    1. Separate join tables. Would it be wise to have 2 tables for this? Like:

        users_favorites
        - user_id
        - item_id

        users_blocked
        - user_id
        - item_id

    2. A single table:

        users_marks (or so)
        - user_id
        - item_id
        - type (["fav", "blk"])

    Both ways seem to have advantages. Which one would you use, and why?
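    If option 2 is chosen, a CHECK constraint plus a composite key keeps the single table honest; a hedged sqlite3 sketch of the schema described above (the composite primary key assumes an item cannot be both a favorite and blocked for the same user):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE users_marks (
                user_id INTEGER NOT NULL,
                item_id INTEGER NOT NULL,
                type    TEXT NOT NULL CHECK (type IN ('fav', 'blk')),
                PRIMARY KEY (user_id, item_id)   -- one mark per user/item pair
            )
        """)

    With the two-table form, each association can be modelled in Rails as its own has_many :through; with the single table, one scoped association per type does the same job.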

    Read the article

  • How to design a DB like Facebook where users can update their own status and also post to a page as its admin

    - by Harsha M V
    I am designing a database where users can update their own status messages, and can also create pages/groups (like a Facebook fan page) and post status updates as the admin of the page rather than as themselves.

        user(id, name, ...)
        group(id, name, ...)
        group_admin(group_id, user_id)

    This is my setup. Is this the way to do it? How do I post under the group as an admin? Will I need to check, for every user, whether he is the admin or not?
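    A slightly more general shape than group_admin is a membership table with a role column, so "is this user an admin of this group?" becomes a single indexed lookup before the post is written. A hedged sqlite3 sketch with illustrative names:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE groups (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE group_members (
                group_id INTEGER REFERENCES groups(id),
                user_id  INTEGER REFERENCES users(id),
                role     TEXT NOT NULL DEFAULT 'member',   -- 'member' or 'admin'
                PRIMARY KEY (group_id, user_id)
            );
            CREATE TABLE posts (
                id        INTEGER PRIMARY KEY,
                body      TEXT,
                author_id INTEGER REFERENCES users(id),    -- who actually wrote it
                group_id  INTEGER REFERENCES groups(id)    -- NULL for a personal status
            );
        """)
        # "Posting as the group" is then an authorisation check before the INSERT:
        # SELECT 1 FROM group_members WHERE group_id=? AND user_id=? AND role='admin';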

    Read the article

  • What is the proper design for storing temporary users? [closed]

    - by Mendy
    On the SO site, both real users and temporary users can add new questions. I assume each user type has a different table. My question is: how can I attach the question to the right user? I am assuming the temp users have their own table for the following reasons:
    1. Temp users don't have all the data that real users have, like email, password, and all the other user details.
    2. On the other hand, there are a lot more temp users than real users, so it makes more sense to keep them in their own table.
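    One common alternative to two user tables (a hedged sketch, not necessarily how SO actually does it) is a single users table where the real-account-only columns are nullable and a flag marks temporary users, so every question carries one plain user_id foreign key regardless of user type:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users (
                id            INTEGER PRIMARY KEY,
                display_name  TEXT NOT NULL,
                email         TEXT,              -- NULL for temporary users
                password_hash TEXT,              -- NULL for temporary users
                is_temporary  INTEGER NOT NULL DEFAULT 0
            );
            CREATE TABLE questions (
                id      INTEGER PRIMARY KEY,
                title   TEXT NOT NULL,
                user_id INTEGER NOT NULL REFERENCES users(id)  -- works for both kinds
            );
        """)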

    Read the article

  • How to design a database where the main entity table has 25+ columns but any single entity's columns are mostly empty

    - by thenextwebguy
    The entities to be stored have 25+ properties (table columns). The entities are pretty diverse, meaning that most of the columns are empty. On average, I'd say, fewer than 20% (<5) of the properties have a value for any particular item. So I have a lot of redundant empty columns for most of the table rows. Almost all of the columns are decimal numbers. Given this scenario, would you suggest serializing the columns instead? Or perhaps creating another table named "Property", which would contain all the possible properties, and then yet another table "EntityProperty", which would map a property to an entity using foreign keys? Or would you leave it as it is?
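    The Property / EntityProperty idea described above is the classic entity-attribute-value (EAV) layout; a hedged sqlite3 sketch so the trade-off is concrete (empty properties cost no storage, but every property read becomes a join):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE entity (
                id   INTEGER PRIMARY KEY,
                name TEXT
            );
            CREATE TABLE property (
                id   INTEGER PRIMARY KEY,
                name TEXT UNIQUE NOT NULL          -- the 25+ possible properties
            );
            CREATE TABLE entity_property (
                entity_id   INTEGER REFERENCES entity(id),
                property_id INTEGER REFERENCES property(id),
                value       NUMERIC,               -- almost all values are decimal
                PRIMARY KEY (entity_id, property_id)
            );
        """)
        # Fetch one entity's sparse properties:
        # SELECT p.name, ep.value FROM entity_property ep
        # JOIN property p ON p.id = ep.property_id WHERE ep.entity_id = ?;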

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not simply to find the most popular posts right now, but to find posts that are popular compared to other posts on the same blog.

    For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets way more readership than the other two, so in raw numbers every post on the tech blog will always outnumber views on the other two. So let's say the average tech blog post gets 500 Facebook likes and the other two get an average of 50 likes per post. Then, when there is a sports blog post with 200 FB likes and a gossip blog post with 300 while today's tech blog posts have 500 likes, I want to highlight the sports and gossip posts (more likes than their blog's average, versus the tech blog post with a larger number of likes but only average for its blog).

    The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, e.g. for post 1234:

        after 15 min:   10 FB likes, 4 tweets, 6 G+
        after 30 min:   15 FB likes, 15 tweets, 10 G+
        ...
        after 48 hours: 200 FB likes, 25 tweets, 15 G+

    By keeping a history like this for each blog post I can know the average number of likes/shares/tweets at any given time interval. So, for example, if the average number of FB likes for all blog posts 48 hours after posting is 50 and a particular post has 200, I can mark it as a popular post and feature/highlight it. A consideration in the design is being able to easily query the values (likes/shares) for a specific time frame, i.e. FB likes after 30 min or tweets after 24 hrs, in order to compute the averages to compare against (or should averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question.

    My main question is: what should a database schema for storing this info look like? Assuming that the above approach is taken, I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases; from some basic reading I see that it is advisable to make a 3NF database. I have come up with the following possible schemas.

    Schema 1

        DB Popular Posts
        Table: Post
            post_id (primary key (pk))
            url
            title
        Table: Social Activity
            activity_id (pk)
            url (fk)
            type (i.e. facebook, twitter, g+)
            value
            timestamp

    This was my initial instinct (based on my very limited DB knowledge). As far as I understand, this schema would be 3NF? I searched for designs of similar database models and found this question on Stack Overflow: http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users . The scenario in that question is similar (recording weight/height of users over time). Taking the accepted answer for that question and applying it to my model results in something like:

    Schema 2 (same as above, but break the social activity down into 2 tables)

        DB Popular Posts
        Table: Post
            post_id (pk)
            url
            title
        Table: Social Measurement
            measurement_id (pk)
            post_id (fk)
            timestamp
        Table: Social Stat
            stat_id (pk)
            measurement_id (fk)
            type (i.e. facebook, twitter, g+)
            value

    The advantage I see in schema 2 is that I will likely want to access all the values for a given time: when making a measurement 30 min after a post is published I will simultaneously check the number of FB likes, FB shares, FB comments, tweets, G+, LinkedIn. So with this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x.

    Another thought I had is that, since it doesn't make sense to compare the number of FB likes with the number of tweets or G+ shares, maybe it makes sense to separate each social measurement into its own table:

    Schema 3

        DB Popular Posts
        Table: Post
            post_id (pk)
            url
            title
        Table: fb_likes
            fb_like_id (pk)
            post_id (fk)
            timestamp
            value
        Table: fb_shares
            fb_shares_id (pk)
            post_id (fk)
            timestamp
            value
        Table: tweets
            tweets_id (pk)
            post_id (fk)
            timestamp
            value
        Table: google_plus
            google_plus_id (pk)
            post_id (fk)
            timestamp
            value

    As you can see, I am generally lost/unsure of what approach to take. I'm sure this is a typical type of database problem (storing measurements over time, like temperature statistics) that must have a common solution. Is there a design pattern/model for this? Does it have a name? I tried searching for "database periodic data collection" or "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
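    For what it's worth, schema 2 is essentially the standard "measurement header plus measurement detail" time-series layout. A hedged sqlite3 sketch of it (names follow schema 2 above; the commented query is the "all stats for post 1234 at time x" case mentioned in the post, with a made-up timestamp):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE post (
                post_id INTEGER PRIMARY KEY,
                url     TEXT,
                title   TEXT
            );
            CREATE TABLE social_measurement (          -- one row per post per snapshot
                measurement_id INTEGER PRIMARY KEY,
                post_id        INTEGER REFERENCES post(post_id),
                taken_at       TEXT NOT NULL           -- timestamp of the snapshot
            );
            CREATE TABLE social_stat (                 -- one row per network per snapshot
                stat_id        INTEGER PRIMARY KEY,
                measurement_id INTEGER REFERENCES social_measurement(measurement_id),
                type           TEXT NOT NULL,          -- 'fb_like', 'tweet', 'g+', ...
                value          INTEGER NOT NULL
            );
        """)
        # All stats for post 1234 at a given snapshot time:
        # SELECT s.type, s.value
        # FROM social_measurement m JOIN social_stat s USING (measurement_id)
        # WHERE m.post_id = 1234 AND m.taken_at = '2013-01-01T12:30:00';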

    Read the article
