Search Results

Search found 31000 results on 1240 pages for 'network design'.

  • Relative encapsulation design

    - by taher1992
    Let's say I am building a 2D application with the following design: there is a Level object that manages the world, and there are world objects which are entities inside the Level. A world object has a location and velocity, as well as a size and a texture, but it only exposes get properties; the setters are private (or protected) and available only to inheriting classes. Level, however, is responsible for these world objects and must be able to manipulate at least some of those private setters. As it stands, Level has no access, which means the world objects would have to make their setters public (violating encapsulation). How should I tackle this? Should I just make everything public? Currently I have an inner class inside the game object that does the setting, so when Level needs to update an object's location it goes something like this:

        void ChangeObject(GameObject targetObject, int newX, int newY) {
            // targetObject.SetX and targetObject.SetY cannot be set directly
            var setter = new GameObject.Setter(targetObject);
            setter.SetX(newX);
            setter.SetY(newY);
        }

    This feels like overkill, but it also doesn't feel right to make everything public so that anything can change an object's location, for example.
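
    A minimal sketch of one alternative, not taken from the question (the interface and member names are assumptions): expose world objects to outside code only through a read-only interface, and keep the setters on the concrete type that Level holds, so nothing outside the Level's assembly can move an object.

        using System.Collections.Generic;

        public interface IWorldObject
        {
            float X { get; }
            float Y { get; }
        }

        public class WorldObject : IWorldObject
        {
            public float X { get; private set; }
            public float Y { get; private set; }

            // internal: callable only from the same assembly, which is where Level lives.
            internal void MoveTo(float x, float y)
            {
                X = x;
                Y = y;
            }
        }

        public class Level
        {
            private readonly List<WorldObject> _objects = new();

            // Outside code only ever sees the read-only view.
            public IEnumerable<IWorldObject> Objects => _objects;

            public void Update(WorldObject obj, float newX, float newY) => obj.MoveTo(newX, newY);
        }

    The trade-off is that the protection boundary becomes the assembly rather than the class, which may or may not be acceptable here.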

  • design for supporting entities with images

    - by brainydexter
    I have multiple entities, such as Hotels and Destination Cities, which can contain images. The way my system is set up right now, I think of all images as belonging to one universal set (a table in the DB contains file paths to all the images). When I have to add an image to an entity, I check whether the image already exists in this universal set. If it exists, I attach a reference to that image; otherwise I create a new one. E.g.:

        class ImageEntityHibernateDAO {
            public void addImageToEntity(IContainImage entity, String filePath, String title, String altText) {
                ImageEntity image = this.getImage(filePath);
                if (image == null)
                    image = new ImageEntity(filePath, title, altText);
                getSession().beginTransaction();
                entity.getImages().add(image);
                getSession().getTransaction().commit();
            }
        }

    My question is: earlier I had to write this code for each entity (and each entity would have a Set collection). So, instead of rewriting the same code, I created the following interface:

        public interface IContainImage {
            Set<ImageEntity> getImages();
        }

    Entities that have image collections implement the IContainImage interface. Now, for any entity that needs to support adding images, the call from the DAO looks something like this:

        // in DestinationDAO::addImageToDestination
        imageDao.addImageToEntity(destination, imageFileName, imageTitle, imageAltText);

        // in HotelDAO::addImageToHotel
        imageDao.addImageToEntity(hotel, imageFileName, imageTitle, imageAltText);

    It would be a great help if someone could critique this design. Are there any serious flaws that I'm not seeing right away?

  • How to design a scalable notification system?

    - by Trent
    I need to write a notification system manager. Here are my requirements: I need to be able to send a notification on different platforms, which may be totally different (for example, I need to be able to send either an SMS or an e-mail). Sometimes the notification is the same for all recipients on a given platform, but sometimes it is one notification per recipient (or per group of recipients) per platform. Each notification can contain a platform-specific payload (for example, an MMS can contain a sound or an image). The system needs to be scalable: I need to be able to send a very large number of notifications without crashing either the application or the server. It is a two-step process: first a customer types a message and chooses a platform to send to, and the notification(s) should be created to be processed either in real time or later; then the system needs to send the notification to the platform provider. For now I have ended up with some thoughts, but I don't know how scalable the design will be or whether it is a good one. I've thought of the following objects (in a pseudo-language), starting with a generic Notification object:

        class Notification {
            String $message;
            Payload $payload;
            Collection<Recipient> $recipients;
        }

    The problem with this is: what if I have 1,000,000 recipients? Even if the Recipient object is very small, it will take too much memory. I could also create one Notification per recipient, but some platform providers require me to send in batches, meaning I need to define one Notification with several Recipients. Each created notification could be stored in persistent storage like a DB or Redis. Would it be a good idea to aggregate them later to make sure the system stays scalable? In the second step, I need to process the notifications. But how do I route each notification to the right platform provider? Should I use an object like MMSNotification extending an abstract Notification, or something like Notification.setType('MMS')? To process a lot of notifications at the same time, I think a message queue system like RabbitMQ may be the right tool. Is it? It would allow me to queue a lot of notifications and have several workers pop notifications and process them. But what if I need to batch the recipients as described above? Then I imagine a NotificationProcessor object to which I could add NotificationHandlers; each NotificationHandler would be in charge of connecting to the platform provider and performing the notification. I could also use an EventManager to allow pluggable behavior. Any feedback or ideas? Thanks for giving your time. Note: I usually work in PHP and it is likely the language of my choice.
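
    A minimal sketch of the batching-plus-queue idea (shown in C# purely for illustration, even though the asker prefers PHP; the queue interface, types, and batch size are all assumptions): one logical notification is split into provider-sized batches, and each batch is handed to a queue for workers to process.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public record Recipient(string Address);

        public record NotificationBatch(string Platform, string Message, IReadOnlyList<Recipient> Recipients);

        // Abstraction over the broker (e.g. a RabbitMQ publisher could implement this).
        public interface INotificationQueue
        {
            void Enqueue(NotificationBatch batch);
        }

        public static class NotificationSplitter
        {
            // The batch size is an assumption; real providers publish their own limits.
            public static void EnqueueInBatches(string platform, string message,
                                                IEnumerable<Recipient> recipients,
                                                INotificationQueue queue, int batchSize = 500)
            {
                // Enumerable.Chunk is available from .NET 6 onward.
                foreach (var chunk in recipients.Chunk(batchSize))
                    queue.Enqueue(new NotificationBatch(platform, message, chunk));
            }
        }

    This keeps only one batch of recipients in memory at a time on the producing side, while the workers that pop batches decide how to talk to each platform provider.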

  • Best Design Pattern for Coupling User Interface Components and Data Structures

    - by szahn
    I have a Windows desktop application with a tree view. Due to the lack of a sound data-binding solution for a tree view, I've implemented my own layer of abstraction on it to bind nodes to my own data structure. The requirements are as follows: populate a tree view with nodes that resemble fields in a data structure, and when a node is clicked, display the appropriate control to modify the value of that property in the instance of the data structure. The tree view is populated with instances of custom TreeNode classes that inherit from TreeNode. The responsibility of each custom TreeNode class is to (1) format the node text to represent the name and value of the associated field in my data structure, (2) return the control used to modify the property value, (3) get the value of the field from the control, and (4) set the field's value from the control. My custom TreeNode implementation has a property called "Control" which retrieves the proper custom control in the form of the base control. The control instance is stored in the custom node and instantiated upon first retrieval. So each custom node has an associated custom control which extends a base abstract control class. Example TreeNode implementation:

        //The Tree Node Base Class
        public abstract class TreeViewNodeBase : TreeNode {
            public abstract CustomControlBase Control { get; }

            public TreeViewNodeBase(ExtractionField field) {
                UpdateControl(field);
            }

            public virtual void UpdateControl(ExtractionField field) {
                Control.UpdateControl(field);
                UpdateCaption(FormatValueForCaption());
            }

            public virtual void SaveChanges(ExtractionField field) {
                Control.SaveChanges(field);
                UpdateCaption(FormatValueForCaption());
            }

            public virtual string FormatValueForCaption() {
                return Control.FormatValueForCaption();
            }

            public virtual void UpdateCaption(string newValue) {
                this.Text = Caption;
                this.LongText = newValue;
            }
        }

        //The tree node implementation class
        public class ExtractionTypeNode : TreeViewNodeBase {
            private CustomDropDownControl control;

            public override CustomControlBase Control {
                get {
                    if (control == null) {
                        control = new CustomDropDownControl();
                        control.label1.Text = Caption;
                        control.comboBox1.Items.Clear();
                        control.comboBox1.Items.AddRange(
                            Enum.GetNames(typeof(ExtractionField.ExtractionType)));
                    }
                    return control;
                }
            }

            public ExtractionTypeNode(ExtractionField field) : base(field) { }
        }

        //The custom control base class
        public abstract class CustomControlBase : UserControl {
            public abstract void UpdateControl(ExtractionField field);
            public abstract void SaveChanges(ExtractionField field);
            public abstract string FormatValueForCaption();
        }

        //The custom control generic implementation (view)
        public partial class CustomDropDownControl : CustomControlBase {
            public CustomDropDownControl() {
                InitializeComponent();
            }

            public override void UpdateControl(ExtractionField field) {
                //Nothing to do here
            }

            public override void SaveChanges(ExtractionField field) {
                //Nothing to do here
            }

            public override string FormatValueForCaption() {
                //Nothing to do here
                return string.Empty;
            }
        }

        //The custom control specific implementation
        public class FieldExtractionTypeControl : CustomDropDownControl {
            public override void UpdateControl(ExtractionField field) {
                comboBox1.SelectedIndex = comboBox1.FindStringExact(field.Extraction.ToString());
            }

            public override void SaveChanges(ExtractionField field) {
                field.Extraction = (ExtractionField.ExtractionType)Enum.Parse(
                    typeof(ExtractionField.ExtractionType), comboBox1.SelectedItem.ToString());
            }

            public override string FormatValueForCaption() {
                return string.Empty;
            }
        }

    The problem is that I have "generic" controls which inherit from CustomControlBase. These are just "views" with no logic. Then I have specific controls that inherit from the generic controls. I don't have any functions or business logic in the generic controls because the specific controls should govern how data is associated with the data structure. What is the best design pattern for this?

  • Design for a plugin based application

    - by Varun Naik
    I am working on an application whose details I cannot discuss here. We have a core framework, and the rest is designed as plug-ins. In the core framework we have a domain object, and this domain object is updated by the plugins. I have defined an interface with a function such as:

        DomainObject doProcessing(DomainObject object)

    My intention is that I pass the domain object in, the plugin updates it and returns it, and the updated object is then passed to a different plugin to be updated again. I am not sure this is a good approach; I don't like handing the DomainObject to the plugins. Is there a better way to achieve this? Should I instead just request data from each plugin and update the domain object myself?
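
    A minimal sketch of the second idea mentioned above (all names are hypothetical, not from the question): plugins return a set of requested changes instead of mutating the domain object, and the core framework applies them itself, so it keeps full ownership of the object.

        using System;
        using System.Collections.Generic;

        public class DomainObject
        {
            public Dictionary<string, object> Fields { get; } = new();
        }

        // What a plugin is allowed to see: a read-only view of the domain object.
        public interface IDomainView
        {
            object Get(string field);
        }

        // What a plugin hands back: a list of requested changes, not the object itself.
        public record FieldChange(string Field, object NewValue);

        public interface IPlugin
        {
            IEnumerable<FieldChange> Process(IDomainView view);
        }

        // The core framework applies the changes itself, keeping ownership of the domain object.
        public class PluginRunner
        {
            public void Run(DomainObject domain, IEnumerable<IPlugin> plugins)
            {
                var view = new ReadOnlyView(domain);
                foreach (var plugin in plugins)
                    foreach (var change in plugin.Process(view))
                        domain.Fields[change.Field] = change.NewValue;
            }

            private sealed class ReadOnlyView : IDomainView
            {
                private readonly DomainObject _domain;
                public ReadOnlyView(DomainObject domain) => _domain = domain;
                public object Get(string field) => _domain.Fields[field];
            }
        }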

  • Class Design and Structure Online Web Store

    - by Phorce
    I hope I have asked this in the right forum. Basically, we're designing an online store, and I am working out the class structure for ordering a product; I'd like some clarification on what I have so far. A customer comes, selects their product, chooses the quantity and selects 'Purchase' (I am using the Facade pattern, so the subsystems execute when this action is performed). My classes are Order, Product and Customer. There is no inheritance, just association: Order has Product, and Customer has Order. Does this structure look OK? I've noticed that I don't handle the quantity separately; I was just going to add it to the Product class, but do you think it should be a class of its own? Hope someone can help.
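
    A minimal sketch of one common arrangement (an assumption, not the asker's design): quantity lives on an order-line object that sits between Order and Product, so neither Order nor Product has to carry it directly.

        using System.Collections.Generic;
        using System.Linq;

        public class Product
        {
            public string Name { get; set; } = "";
            public decimal UnitPrice { get; set; }
        }

        // One line of an order: a product plus the quantity ordered.
        public class OrderLine
        {
            public Product Product { get; set; } = new();
            public int Quantity { get; set; }
            public decimal LineTotal => Product.UnitPrice * Quantity;
        }

        public class Order
        {
            public List<OrderLine> Lines { get; } = new();
            public decimal Total => Lines.Sum(l => l.LineTotal);
        }

        public class Customer
        {
            public List<Order> Orders { get; } = new();
        }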

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system. What I'm looking for is a clean, elegant way to: define the function of each dependent variable in the system, and trigger a re-calculation, whenever a variable changes, of the variables that depend on it. A naive way to do this would be to write a single class that implements INotifyPropertyChanged and uses a massive case statement that lists out all the variable names x1, x2, ... xn on which others depend and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependencies. I feel that this naive approach is flawed and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) something like the following:

        public class Model : INotifyPropertyChanged
        {
            private CalculationManager<Model> _calculationManager = new CalculationManager<Model>();

            // each setter triggers a "PropertyChanged" event
            public double? Height { get; set; }
            public double? Weight { get; set; }
            public double? BMI { get; set; }

            public Model()
            {
                _calculationManager.DefineDependency<double?>(
                    forProperty: model => model.BMI,
                    usingCalculation: (height, weight) => weight / Math.Pow(height, 2),
                    withInputs: model => model.Height, model.Weight);
            }

            // INotifyPropertyChanged implementation here
        }

    I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here and that this isn't the right approach: the (mis)use of INotifyPropertyChanged seems to me like a code smell; the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time; and the argument list (weight, height) is redundantly defined in both usingCalculation and withInputs. I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deals with what I'm grasping at here? Would this be a suitable application for a functional language like F#? Edit: more context. The model currently exists in an Excel spreadsheet and is being migrated to a C# application. It is run on demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets and model parameters set by the business.
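
    For what it's worth, a minimal sketch of the underlying idea, often described as a dataflow or reactive dependency graph (everything here, names included, is an assumption rather than an existing library): each calculated variable registers the inputs it reads, and a change walks the dependents and recomputes them.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class DependencyGraph
        {
            private readonly Dictionary<string, double> _values = new();
            private readonly Dictionary<string, string[]> _inputs = new();
            private readonly Dictionary<string, Func<IReadOnlyDictionary<string, double>, double>> _formulas = new();
            private readonly Dictionary<string, List<string>> _dependents = new();

            public void Define(string name, string[] inputs,
                               Func<IReadOnlyDictionary<string, double>, double> formula)
            {
                _formulas[name] = formula;
                _inputs[name] = inputs;
                foreach (var input in inputs)
                {
                    if (!_dependents.TryGetValue(input, out var list))
                        _dependents[input] = list = new List<string>();
                    list.Add(name);
                }
            }

            public void SetInput(string name, double value)
            {
                _values[name] = value;
                Recalculate(name);
            }

            public double Get(string name) => _values[name];

            // Naive propagation: assumes the graph is acyclic and may revisit shared dependents.
            private void Recalculate(string changed)
            {
                if (!_dependents.TryGetValue(changed, out var dependents)) return;
                foreach (var name in dependents)
                {
                    if (!_inputs[name].All(_values.ContainsKey)) continue; // wait until every input has a value
                    _values[name] = _formulas[name](_values);
                    Recalculate(name);
                }
            }
        }

    Usage would be roughly graph.Define("BMI", new[] { "Height", "Weight" }, v => v["Weight"] / Math.Pow(v["Height"], 2)) followed by SetInput calls for Height and Weight; reactive/dataflow frameworks built on this idea add topological ordering so shared dependents are recomputed only once per change.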

  • Design for object with optional and modifiable attributes?

    - by Ikuzen
    I've been using the Builder pattern to create objects with a large number of attributes, most of which are optional. Up until now I've defined them as final, as recommended by Joshua Bloch and other authors, and haven't needed to change their values. What should I do, though, if I need a class with a substantial number of optional but non-final (mutable) attributes? My Builder pattern code looks like this:

        public class Example {
            // All possible parameters (optional or not)
            private final int param1;
            private final int param2;

            // Builder class
            public static class Builder {
                private final int param1; // Required parameters
                private int param2 = 0;   // Optional parameters - initialized to default

                // Builder constructor
                public Builder(int param1) {
                    this.param1 = param1;
                }

                // Setter-like methods for optional parameters
                public Builder param2(int value) {
                    param2 = value;
                    return this;
                }

                // build() method
                public Example build() {
                    return new Example(this);
                }
            }

            // Private constructor
            private Example(Builder builder) {
                param1 = builder.param1;
                param2 = builder.param2;
            }
        }

    Can I just remove the final keyword from the declarations so the attributes can be modified externally (through normal setters, for example)? Or is there a creational pattern that allows optional but non-final attributes and would be better suited to this case?

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

        # module xfactory.py
        import x

        class XFactory:
            _registry = {}

            def get_x(self, arg1, arg2, use_cache=True):
                if use_cache:
                    hash_id = hash((arg1, arg2))
                    if hash_id in self._registry:
                        return self._registry[hash_id]
                obj = x.X(arg1, arg2)
                if use_cache:
                    self._registry[hash_id] = obj
                return obj

        # module x.py
        class X:
            ...

    Is it a good pattern? (I know it's not the actual Factory pattern.) Is there anything I should change? Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose and store in _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving the pickle files filenames that contain hash_id). Except now the validity of a cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created the object. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies and reloads it while the program is running. So far I have ignored this danger since it seems unlikely for my application, but I certainly cannot ignore it when my objects are cached to persistent storage. What can I do? I suppose I could make the hash_id more robust by hashing a tuple that contains arguments arg1 and arg2, as well as the filename and last-modified date of x.py and of every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation. Thus, I figured I might as well give up some safety and invalidate the cache only when there is an obvious mismatch. This means that class X would have a class-level cache-validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry. Since developers may forget to update the validation identifier, or may not realize that they invalidated existing cache entries, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation would include the traits as well. I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this already. Ideally, I'd like to combine in-memory and disk-based caching.

  • Learning how to design knowledge and data flow [closed]

    - by max
    In designing software, I spend a lot of time deciding how the knowledge (algorithms / business logic) and data should be allocated between different entities; that is, which object should know what. I am asking for advice about books, articles, presentations, classes, or other resources that would help me learn how to do it better. I code primarily in Python, but my question is not really language-specific; even if some of the insights I learn don't work in Python, that's fine. I'll give a couple of examples to clarify what I mean.

    Example 1: I want to perform some computation. As a user, I will need to provide parameters to do the computation. I can have all those parameters sent to the "main" object, which then uses them to create other objects as needed. Or I can create one "main" object, as well as several additional objects; the additional objects would then be sent to the "main" object as parameters. What factors should I consider to make this choice?

    Example 2: Let's say I have a few objects of type A that can perform a certain computation. The main computation often involves using an object of type B that performs some interim computation. I can either "teach" A instances what exact parameters to pass to B instances (i.e., make B "dumb"), or I can "teach" B instances to figure out what needs to be done when looking at an A instance (i.e., make B "smart"). What should I think about when I'm making this choice?

  • High-Level Application Architecture Question

    - by Jesse Bunch
    So I really want to improve how I architect the software I write. I want to focus on maintainability and clean code. As you might guess, I've been reading a lot of resources on this topic, and all that's doing is making it harder for me to settle on an architecture, because I can never tell whether my design is the one a more experienced programmer would have chosen. So I have these requirements: I should connect to one vendor and download form submissions from their API (we'll call them CompanyA); I should then map those submissions to a schema fit for submission to another vendor for integration with the email service provider (we'll call them CompanyB); and I should then submit those responses to the ESP (CompanyB) and instruct the ESP to send the submitter an email. So basically, I'm copying data from one web service to another and then performing an action at the latter web service. I've identified a couple of high-level services: the service that downloads data from CompanyA, which I called the CompanyAIntegrator, and the service that submits the data to CompanyB, which I called the CompanyBIntegrator. So my questions are these: Is this a good design? I've tried to separate the concerns and am planning to use the facade pattern to make the integrators interchangeable if the vendors change in the future. Are my naming conventions accurate and meaningful to you (who know nothing specific about the project)? Now that I have these services, where should I do the work of taking the output of the CompanyAIntegrator and getting it into the format for input to the CompanyBIntegrator? Is it OK for this to be done in main()? Do you have any general pointers on how you'd code something like this? I imagine this scenario is common to us engineers, especially those working in agencies. Thanks for any help you can give. Learning how to architect well is really mind-cluttering.
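
    A minimal sketch of the shape being described (all type names and method signatures are assumptions): two integrator interfaces plus a mapper, composed by a thin orchestration class, so the work that would otherwise sit in main() has a home and the vendors stay swappable.

        using System.Collections.Generic;

        // Hypothetical types standing in for the two vendors' data shapes.
        public record CompanyASubmission(string Email, IDictionary<string, string> Fields);
        public record CompanyBContact(string Email, IDictionary<string, string> Attributes);

        public interface ICompanyAIntegrator
        {
            IEnumerable<CompanyASubmission> DownloadSubmissions();
        }

        public interface ICompanyBIntegrator
        {
            void Submit(CompanyBContact contact);
            void TriggerEmail(string email);
        }

        // The mapping lives in its own class so neither integrator knows about the other.
        public class SubmissionMapper
        {
            public CompanyBContact Map(CompanyASubmission s) => new(s.Email, s.Fields);
        }

        // The orchestration step: this is the work that would otherwise sit in main().
        public class SyncJob
        {
            private readonly ICompanyAIntegrator _source;
            private readonly ICompanyBIntegrator _target;
            private readonly SubmissionMapper _mapper = new();

            public SyncJob(ICompanyAIntegrator source, ICompanyBIntegrator target)
            {
                _source = source;
                _target = target;
            }

            public void Run()
            {
                foreach (var submission in _source.DownloadSubmissions())
                {
                    _target.Submit(_mapper.Map(submission));
                    _target.TriggerEmail(submission.Email);
                }
            }
        }

    In this arrangement, main() only constructs the concrete integrators and calls new SyncJob(a, b).Run().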

  • How bad is it to have two methods with the same name but different signatures in two classes?

    - by Super User
    I have a design problem related to a public interface, the names of methods, and the understanding of my API and code. I have two classes like this:

        class A:
            ...
            function collision(self):
                ....
            ...

        class B:
            ....
            function _collision(self, another_object, l, r, t, b):
                ....

    The first class has a public method named collision, and the second has a private method called _collision. The two methods differ in argument type and number. As an example, let's say that _collision checks whether the object is colliding with another object under certain conditions l, r, t, b (collide on the left side, right side, etc.) and returns true or false. The public collision method, on the other hand, resolves all the collisions of the object with other objects. The two methods have the same name because I think it's better to avoid overloading the design with different names for methods that do almost the same thing, but in distinct contexts and classes. Is this clear enough to the reader, or should I change one method's name?

  • Templates for forms, tabs, etc.? - Patterntap alternatives

    - by Marco Demaio
    I used to find http://www.patterntap.com quite useful for design inspiration for forms, tabs, and other web elements. Unfortunately, after the ZURB acquisition of Patterntap, they force you to sign in with your Twitter account simply to view larger images of the patterns provided by the crowd, so in some sense it's not free anymore. Do you know of alternatives to Patterntap that are free and don't require signing in?

  • Representing complex object dependencies

    - by max
    I have several classes with a reasonably complex (but acyclic) dependency graph. All the dependencies are of the form: a class X instance contains an attribute of class Y. All such attributes are set during initialization and never changed again. Each class' constructor has just a couple of parameters, and each object knows the proper parameters to pass to the constructors of the objects it contains. Class Outer is at the top of the dependency hierarchy, i.e., no class depends on it. Currently, the UI layer only creates an Outer instance; the parameters for the Outer constructor are derived from the user input. Outer, in the process of initialization, creates the objects it needs, which in turn create the objects they need, and so on.

    The new development is that a user who knows the dependency graph may want to reach deep into it and set the values of some of the arguments passed to constructors of the inner classes (essentially overriding the values used currently). How should I change the design to support this?

    I could keep the current approach, where all the inner classes are created by the classes that need them. In this case, the information about "user overrides" would need to be passed to the Outer class' constructor in some complex user_overrides structure. Perhaps user_overrides could be the full logical representation of the dependency graph, with the overrides attached to the appropriate edges. Outer would pass user_overrides to every object it creates, and they would do the same. Each object, before initializing lower-level objects, would find its location in that graph and check whether the user requested an override to any of the constructor arguments.

    Alternatively, I could rewrite all the objects' constructors to take as parameters the full objects they require. The creation of all the inner objects would then be moved outside the whole hierarchy, into a new controller layer that lies between Outer and the UI layer. The controller layer would essentially traverse the dependency graph from the bottom, creating all the objects as it goes. It would have to ask the higher-level objects for parameter values for the lower-level objects whenever the relevant parameter isn't provided by the user.

    Neither approach looks terribly simple. Is there any other approach? Has this problem come up enough in the past to have a pattern that I can read about? I'm using Python, but I don't think it matters much at the design level.

  • How to send raw data over a network?

    - by youllknow
    Hi everyone! I have some data stored in a byte array. The data contains an IPv4 packet (which in turn contains a UDP packet). I want to send this array raw over the network using C# (preferred) or C++; I don't want to use C#'s UdpClient, for example. Does anyone know how to do this? Sorry for my bad English, and thanks in advance for your help!
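
    A minimal sketch of the raw-socket route in .NET, under the assumption that the byte array already holds a complete, valid IPv4 header plus UDP datagram (the destination address below is just a placeholder); raw sockets generally require administrator/root privileges, and some operating systems restrict what may be sent this way.

        using System;
        using System.Net;
        using System.Net.Sockets;

        class RawSender
        {
            static void Main()
            {
                // Assumed to contain a full IPv4 header followed by the UDP datagram.
                byte[] packet = Array.Empty<byte>();

                using var socket = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.Udp);

                // Tell the stack that the buffer already includes the IP header.
                socket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.HeaderIncluded, true);

                var destination = new IPEndPoint(IPAddress.Parse("192.0.2.10"), 0);
                socket.SendTo(packet, destination);
            }
        }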

  • In database table design, how do "virtual goods" affect table design -- should we create an instance record for each virtual good?

    - by Jian Lin
    When we design a database table for a DVD rental company, we actually have a movie, which is an abstract idea, and a physical DVD, so for each rental we have a many-to-many table with fields such as:

        TransactionID, UserID, DvdID, RentedDate, RentalDuration, AmountPaid

    But what about virtual goods? For example, if we let a user rent a movie online for 3 days, we don't actually have a DVD, so we may have a table:

        TransactionID, UserID, MovieID, RentedDate, RentalDuration, AmountPaid

    Should we create a record for each instance of the "virtual good"? For example, if this virtual good (the movie) can be authorized to be watched on 3 devices (with 3 device IDs), should we then create a virtual-good record in a VirtualGoods table, each with a VirtualGoodID, and then another table with (VirtualGoodID, DeviceID) to match the movie up with the device IDs? We could also just use the TransactionID as the VirtualGoodID. Are there circumstances where we would want to create a record for this "virtual good" in a VirtualGoods table?
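
    A minimal sketch of the instance-per-purchase idea, with the device authorizations hanging off the virtual-good record (the shapes below are an assumption, written as plain C# records rather than SQL just to keep the sketch compact):

        using System;

        // One row per rental transaction, as in the question.
        public record RentalTransaction(int TransactionId, int UserId, int MovieId,
                                        DateTime RentedDate, int RentalDurationDays, decimal AmountPaid);

        // One row per purchased "instance" of a virtual good (TransactionId could double as its key).
        public record VirtualGood(int VirtualGoodId, int TransactionId, int MovieId);

        // Join table: which devices are authorized to play this particular virtual good.
        public record VirtualGoodDevice(int VirtualGoodId, string DeviceId);

    Whether a separate VirtualGoods table earns its keep mostly depends on whether anything other than device authorization will ever attach to the purchased instance.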

  • What do you do if you reach a design dead-end in evolutionary methods like Agile or XP?

    - by Dipan Mehta
    As I was reading Martin Fowler's famous blog post Is Design Dead?, one of the striking impressions I got is that, since in Agile methodology and Extreme Programming the design as well as the programming is evolutionary, there are always points where things need to be refactored. It may be possible that when a programmer's level is good, and they understand the design implications and don't make critical mistakes, the code continues to evolve. But what is the ground reality in a normal context? In a typical project, once significant development has gone into the product and a critical change in requirements occurs, isn't it a constraint that, however much we wish otherwise, fundamental design aspects cannot be modified (without throwing away a major part of the code)? Is it not quite likely that one reaches a dead end with respect to any further improvement of the design and requirements? I am not advocating any non-Agile practice here, but I want to hear from people who practice agile or iterative or evolutionary development methods about their real experiences. Have you ever reached such dead ends? How have you managed to avoid or escape them? Or are there measures to ensure that the design remains clean and flexible as it evolves?

  • Multiplayer / Networking options for a 2D game with physics

    - by lahmas
    Summary: My 50%-finished 2D sidescroller, with Box2D as the physics engine, should have multiplayer support in the final version. However, the current code is just a singleplayer game. What should I do now? And more importantly, how should I implement multiplayer and combine it with singleplayer? Is it a bad idea to code the singleplayer mode separately from the multiplayer mode (as Notch did with Minecraft)? The performance in singleplayer should be as good as possible (simulating physics while using a loopback server to implement singleplayer would be a problem there).

    Full background / questions: I'm working on a relatively large 2D game project in C++, with physics as a core element of it (I use Box2D for that). The finished game should have full multiplayer support; however, I made the mistake of not planning the networking part properly and have basically worked on a singleplayer game until now. I thought that multiplayer support could be added to the almost-finished singleplayer game in a relatively easy and clean way, but from what I have read, this is apparently wrong. I even read that a multiplayer game should be programmed as one from the beginning, with singleplayer mode actually just consisting of hosting an invisible local server and connecting to it via loopback (I found out that most FPS game engines do it that way; an example would be Source). So here I am, with my half-finished 2D sidescroller, and I don't really know how to go on. Simply continuing to work on the singleplayer / client part seems useless to me now, as I'd have to recode and refactor even more later.

    First, a general question to anybody who has found themselves in a situation like this: how should I proceed? Then the more specific one: I have been trying to work out how to approach the networking part for my game, and these are the possible solutions I see.

    1. An invisible / loopback server for singleplayer. The advantage is that there is basically no difference between singleplayer and multiplayer mode, so not much additional code would be needed. The big disadvantage is performance and other limitations in singleplayer: there would be two physics simulations running, one for the client and one for the loopback server. Even if you work around that by providing a direct path for the data from the loopback server, for example through direct communication between the threads, singleplayer would be limited. This is a problem because players should be allowed to play around with masses of objects at once.

    2. Separate singleplayer and multiplayer modes. No server would be involved in singleplayer mode. I'm not really sure how this would work, but I think there would be a lot of additional work, because all of the singleplayer features would have to be re-implemented or glued onto the multiplayer mode.

    3. Multiplayer mode as a module on top of singleplayer. This is merely a quick thought I had: multiplayer could consist of a singleplayer game with an additional networking module loaded and connected to a server, which sends and receives data and updates the singleplayer world.

    In retrospect, I regret not having planned the multiplayer mode earlier. I'm really stuck at this point and I hope that somebody here is able to help me!
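
    For what it's worth, a minimal sketch of the seam that options 1 and 3 both circle around (the project is C++, but the sketch is written in C# only for brevity, and every name in it is hypothetical): the game talks to a single session interface; singleplayer binds it straight to the local simulation with no sockets, and multiplayer binds it to a network client.

        public record PlayerInput(float MoveX, bool Jump);
        public record WorldSnapshot(/* positions, velocities, ... */);

        public interface IGameSession
        {
            void SendInput(PlayerInput input);
            WorldSnapshot LatestSnapshot();
        }

        public class PhysicsWorld
        {
            public void Apply(PlayerInput input) { /* step the local simulation */ }
            public WorldSnapshot Snapshot() => new();
        }

        // Singleplayer: calls into the physics simulation directly, no sockets involved.
        public class LocalSession : IGameSession
        {
            private readonly PhysicsWorld _world = new();
            public void SendInput(PlayerInput input) => _world.Apply(input);
            public WorldSnapshot LatestSnapshot() => _world.Snapshot();
        }

        // Multiplayer: forwards input to a remote server and caches the snapshots it sends back.
        public class NetworkSession : IGameSession
        {
            private WorldSnapshot _latest = new();
            public void SendInput(PlayerInput input) { /* serialize and send to the server */ }
            public WorldSnapshot LatestSnapshot() => _latest;
        }

    The singleplayer path then runs exactly one physics world, which addresses the two-simulations concern, while the multiplayer path can still follow the client/server model.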

  • Advice on developing a social network [on hold]

    - by Siraj Mansour
    I am researching how to assemble a team, pick the right tools, and estimate the cost of developing a highly responsive social network that is capable of dealing with a lot of users. It is similar in concept to Facebook, but covering only the basics for now: profile, friends, posts, updates, media upload/download, streaming, chat and inbox messaging. We certainly don't expect it to be as popular as Facebook or to handle the same number of users and requests, but in its own game it has to be a monster, and expandable later on. Setting aside the hosting and server side, I am looking for technical advice and opinions: what kind of team do I need? How many developers, and with what expertise? What are the right tools, languages, frameworks and environments? Any ideas about the infrastructure, or quick thoughts on the development process? Please use references if you have any to support your ideas. And a rough estimate of the development cost (neglecting the cost of servers)? I know my question is very broad, but my knowledge is limited and I need detailed help; for any help you can offer, I thank you in advance.

  • Object oriented wrapper around a dll

    - by Tom Davies
    So, I'm writing a C# managed wrapper around a native DLL. The DLL contains several hundred functions. In most cases, the first argument to each function is an opaque handle to a type internal to the DLL. So an obvious starting point for defining classes in the wrapper is to define a class corresponding to each of these opaque types, with each instance holding and managing the opaque handle (passed to its constructor). Things are a little awkward when dealing with callbacks from the DLL. Naturally, the callback handlers in my wrapper have to be static, but the callbacks' arguments invariably contain an opaque handle. In order to get from the static callback back to an object instance, I've created a static dictionary in each class, associating handles with class instances. In the constructor of each class, an entry is put into the dictionary, and this entry is then removed in the destructor. When I receive a callback, I can consult the dictionary to retrieve the class instance corresponding to the opaque handle. Are there any obvious flaws in this? Something that does seem to be a problem is that the existence of the static dictionary means that the garbage collector will not act on my class instances that are otherwise unreachable. As they are never garbage collected, they never get removed from the dictionary, so the dictionary grows. It seems I would have to dispose of my objects manually, which is something I absolutely would like to avoid. Can anyone suggest a good design that allows me to avoid having to do this?
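
    A minimal sketch of one way around the growth problem (the handle type and all names are assumptions): key the map on the native handle but hold the managed instances through WeakReference, so the dictionary no longer keeps them alive, and let the finalizer prune its own entry.

        using System;
        using System.Collections.Concurrent;

        public class Wrapper
        {
            // Maps native handles to managed instances without keeping the instances alive.
            private static readonly ConcurrentDictionary<IntPtr, WeakReference<Wrapper>> Instances = new();

            private readonly IntPtr _handle;

            public Wrapper(IntPtr handle)
            {
                _handle = handle;
                Instances[handle] = new WeakReference<Wrapper>(this);
            }

            ~Wrapper()
            {
                // Best-effort cleanup; dead WeakReferences could also be pruned lazily on lookup.
                Instances.TryRemove(_handle, out _);
            }

            // Called from the static native callback to find the instance for a handle.
            public static Wrapper? FromHandle(IntPtr handle)
                => Instances.TryGetValue(handle, out var weak) && weak.TryGetTarget(out var instance)
                    ? instance
                    : null;
        }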

  • How do we keep dependent data structures up to date?

    - by Geo
    Suppose you have a parse tree, an abstract syntax tree, and a control flow graph, each one logically derived from the one before. In principle it is easy to construct each graph given the parse tree, but how can we manage the complexity of updating the graphs when the parse tree is modified? We know exactly how the tree has been modified, but how can the change be propagated to the other trees in a way that doesn't become difficult to manage? Naturally the dependent graph can be updated by simply reconstructing it from scratch every time the first graph changes, but then there would be no way of knowing the details of the changes in the dependent graph. I currently have four ways to attempt to solve this problem, but each one has difficulties.

    1. Nodes of the dependent tree each observe the relevant nodes of the original tree, updating themselves and the observer lists of original tree nodes as necessary. The conceptual complexity of this can become daunting.

    2. Each node of the original tree has a list of the dependent tree nodes that specifically depend upon it, and when the node changes it sets a flag on the dependent nodes to mark them as dirty, including the parents of the dependent nodes all the way down to the root. After each change we run an algorithm that is much like the algorithm for constructing the dependent graph from scratch, but it skips over any clean node and reconstructs each dirty node, keeping track of whether the reconstructed node is actually different from the dirty node. This can also get tricky.

    3. We can represent the logical connection between the original graph and the dependent graph as a data structure, like a list of constraints, perhaps designed using a declarative language. When the original graph changes we need only scan the list to discover which constraints are violated and how the dependent tree needs to change to correct the violation, all encoded as data.

    4. We can reconstruct the dependent graph from scratch as though there were no existing dependent graph, and then compare the existing graph and the new graph to discover how it has changed. I'm sure this is the easiest way because I know there are algorithms available for detecting differences, but they are all quite computationally expensive and in principle it seems unnecessary, so I'm deliberately avoiding this option.

    What is the right way to deal with these sorts of problems? Surely there must be a design pattern that makes this whole thing almost easy. It would be nice to have a good solution for every problem of this general description. Does this class of problem have a name?
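
    For what it's worth, a minimal sketch of the dirty-flag bookkeeping in option 2 (the node types and names here are assumptions): the source node notifies the derived nodes built from it, each of which marks itself and its ancestors dirty so a later rebuild pass only has to revisit the dirty region.

        using System.Collections.Generic;

        public class DerivedNode
        {
            public bool Dirty { get; private set; }
            public DerivedNode? Parent { get; set; }

            public void MarkDirty()
            {
                // Mark this node and every ancestor, stopping early at an already-dirty one.
                for (DerivedNode? node = this; node != null && !node.Dirty; node = node.Parent)
                    node.Dirty = true;
            }

            public void MarkClean() => Dirty = false;
        }

        public class SourceNode
        {
            // Derived nodes that were built from this source node.
            public List<DerivedNode> Dependents { get; } = new();

            public void NotifyChanged()
            {
                foreach (var dependent in Dependents)
                    dependent.MarkDirty();
            }
        }

    The rebuild pass then walks the derived tree from the root, skips clean subtrees, reconstructs dirty nodes, and calls MarkClean on each one it visits.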

  • Network / Internet diagnostic tool to locate an error?

    - by Jesper
    Hi, I'm facing some difficulties with the Internet / network at my work, and I have trouble locating precisely what the error is and when and how it occurs. The problem is that the client machines in the building sporadically get disconnected from the Internet. I'm somewhat new here, so I don't have much insight into the network, and apparently neither did my predecessor. What I am hoping you know about is whether there exists some kind of network monitoring tool I can install and run that will periodically check the network, the Internet connection, etc., and record the results to logs. Then, if a problem suddenly arises at some time of day in some part of the network or the Internet connection, I can check it, perhaps the next day. I've just downloaded and installed the Microsoft Network Monitor 3.3 application and hope it can give me some answers about where the instability lies, but I would still like a tool that runs different checks and tests at some interval. Does anyone know of such a program, or of another kind of performance / diagnostic tool or method I can use? Sincerely, Jesper

  • Make Network Manager use bridge for PPPoE instead of only working on ethernet?

    - by Azendale
    My ISP uses PPPoE on their DSL connections. I use Network Manager to connect to this using a bridged modem connected to eth0. Often I want to test networking things, so I set up a KVM machine with a tap interface. I can then connect these interfaces to virtual 'switches' by adding them to bridges. (I work for my ISP.) Sometimes I want to test cases where the PPPoE is connected more than once. For this, I would like to be able to add eth0 to my 'switch' (a bridge) so the VMs can have a 'bridged modem' connection to the Internet. But I would still like to be able to run the PPPoE for my own computer at the same time, which means that I need to get Network Manager to run PPPoE over the bridge (or eth0). The problem is that it considers eth0 (and the bridge) 'not managed' by Network Manager, so it refuses to use it. So, how can I have Network Manager dial PPPoE over a bridge?

  • Bridging: Losing WLAN network connection with the 4addr option on - Why?

    - by WitchCraft
    Question: for use with my Xen VM, I need to create a virtual network interface (vif) that is bridged to wlan0. If in /etc/network/interfaces I add

        auto xenbr0
        iface xenbr0 inet dhcp

    and then later do

        brctl addif xenbr0 wlan0

    I get this error message:

        can't add wlan0 to bridge xenbr0: Operation not supported

    I found out that Linux won't let you bridge a wireless interface in managed mode at all unless you enable the 4addr option (I needed to recompile iw for this):

        iw dev wlan0 set 4addr on

    Afterwards, brctl addif xenbr0 wlan0 works, and brctl show shows xenbr0 as bridged to wlan0. Unfortunately, as soon as I execute iw dev wlan0 set 4addr on, my entire network connection is gone (no connection). As soon as I then execute iw dev wlan0 set 4addr off, I reconnect and it works again. If I re-execute 4addr on, it breaks again; if I execute 4addr off, it works again. Unfortunately, I can't just turn 4addr on, activate the bridge and then turn it back off (error: device not ready). Does anybody know why I lose my connection?
