Search Results

Search found 24642 results on 986 pages for 'language design'.


  • Is creating a separate pool for each individual image created from a png appropriate?

    - by Panzercrisis
    I'm still possibly a little green about object pooling, and I want to make sure something like this is a sound design pattern before really embarking upon it. Take the following code (which uses the Starling framework in ActionScript 3):

        [Embed(source = "/../assets/images/game/misc/red_door.png")]
        private const RED_DOOR:Class;
        private const RED_DOOR_TEXTURE:Texture = Texture.fromBitmap(new RED_DOOR());
        private const m_vRedDoorPool:Vector.<Image> = new Vector.<Image>(50, true);
        . . .
        public function produceRedDoor():Image {
            // get a Red Door image
        }

        public function retireRedDoor(pImage:Image):void {
            // retire a Red Door Image
        }

    Except that there are four colors: red, green, blue, and yellow. So now we have a separate pool for each color, a separate produce function for each color, and a separate retire function for each color. Additionally, there are several items in the game that follow this four-color pattern, so for each of them we have four pools, four produce functions, and four retire functions. There are more colors involved in the images themselves than just their predominant one, so trying to throw all the doors, for instance, into a single pool and then changing their color properties around isn't going to work. Also, the absence of the static keyword is because static access is slow in AS3. Is this the right way to do things?
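
    One way this could look (a sketch only; ImagePool and its members are hypothetical, while Image and Texture are Starling's as above): a single pool class keyed by the texture it hands out, so each door color, and each other four-color item, becomes another instance rather than another set of members and methods.

        package game.pools
        {
            import starling.display.Image;
            import starling.textures.Texture;

            // Hypothetical generic pool keyed by a texture; one instance per door
            // color replaces the per-color pools and produce/retire pairs.
            public class ImagePool
            {
                private var m_texture:Texture;
                private var m_vFree:Vector.<Image> = new Vector.<Image>();

                public function ImagePool(pTexture:Texture, pInitialSize:int = 50)
                {
                    m_texture = pTexture;
                    for (var i:int = 0; i < pInitialSize; i++)
                    {
                        m_vFree.push(new Image(pTexture));
                    }
                }

                public function produce():Image
                {
                    // reuse a free Image if one exists, otherwise grow the pool
                    return m_vFree.length > 0 ? m_vFree.pop() : new Image(m_texture);
                }

                public function retire(pImage:Image):void
                {
                    m_vFree.push(pImage);
                }
            }
        }

    Usage would be something like var redDoors:ImagePool = new ImagePool(RED_DOOR_TEXTURE); the per-color pools still exist, but as data rather than as copy-pasted code.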

    Read the article

  • Methodologies for Managing Users and Access?

    - by MadBurn
    This is something I'm having a hard time getting my head around, and I think I might be making it more complicated than it is. What I'm trying to do is develop a method of storing users in a database with varying levels of access across different applications. I know this has been done before, but I don't know where to find how it was done. Here is an example of what I need to accomplish:

        - UserA - Access to App1, App3, and App4, and can add new users to App3, but not to App4 or App1.
        - UserB - Access to App2 only, with read-only access.
        - UserC - Access to App1 and App4, and is able to access the admin settings of both apps.

    In the past I've just used user groups. However, I'm reaching a phase where I need a bit more control over each individual user's access to certain parts of the different applications. I wish this were as cut and dried as being able to give a user a role and letting each role inherit from the last. That is what I need to accomplish, but I don't know of any established methods for doing it. I could easily just design something that works, but I know this has been done, I know this has been studied, and I know this problem has been solved by much better minds than my own. This is for a web application using SQL Server 2008. I don't need to store passwords (LDAP), and the information I need to store is actually very limited: basically just username and access.
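
    A minimal sketch of one common shape for this (all table and column names hypothetical): permissions are defined per application and granted per user, so "UserA can add users to App3 but not App1" becomes a row, not a code path. Roles can be layered on top with a Roles/RolePermissions pair in the same style.

        -- Hypothetical schema: applications, permissions scoped to an application,
        -- and per-user grants.
        CREATE TABLE Users        (UserId INT PRIMARY KEY, UserName NVARCHAR(100) NOT NULL);
        CREATE TABLE Applications (AppId  INT PRIMARY KEY, AppName  NVARCHAR(100) NOT NULL);

        CREATE TABLE Permissions (
            PermissionId   INT PRIMARY KEY,
            AppId          INT NOT NULL REFERENCES Applications(AppId),
            PermissionName NVARCHAR(100) NOT NULL   -- e.g. 'Access', 'ReadOnly', 'AddUsers', 'Admin'
        );

        CREATE TABLE UserPermissions (
            UserId       INT NOT NULL REFERENCES Users(UserId),
            PermissionId INT NOT NULL REFERENCES Permissions(PermissionId),
            PRIMARY KEY (UserId, PermissionId)
        );

        -- "Does this user have AddUsers on this app?" (as a parameterized query)
        SELECT 1
        FROM   UserPermissions up
        JOIN   Permissions p ON p.PermissionId = up.PermissionId
        WHERE  up.UserId = @UserId AND p.AppId = @AppId AND p.PermissionName = 'AddUsers';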

    Read the article

  • Designing business objects and GUI actions

    - by fozz
    Developing a product ordering system using Java SE 6. The previous implementations used combo boxes, text fields, and check boxes, performing validation on action events from the GUI. The validation includes limiting existing combo boxes' items, or even their availability. The issue in the old system was that the action was received and all rules were applied to the entire business object. This resulted in a huge event chain as options were changed multiple times. To be honest, I have no idea how an infinite loop wasn't produced. In the next iteration I stepped in and attempted to limit the chaos by controlling the order in which the selections could be made, making configuration of BOs a top-down approach. I implemented custom box models, action events, beans/binding, and an MVC pattern. However, I still am unable to fully isolate action event chains. I'm thinking that I've approached the whole concept backwards in an attempt to stay closest to what was already in place. So the question becomes: what do I design instead? I'm currently considering an implementation of interfaces, beans, and property change listeners to manage the back and forth. Other thoughts were validation exceptions, dynamic proxies... I'm sure there are a ton of different ways. To say that one way is right is crazy, and I'm sure it will take a blending of multiple patterns. My knowledge of Swing/AWT validation is limited; previously I did backend logic only. Another consideration was some sort of binding (JGoodies or otherwise) to directly bind GUI state to BOs.
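
    A minimal sketch of the property-change-listener direction mentioned above (class names hypothetical): the business object owns its rules and fires changes; the binder reacts to the model rather than to other widgets, and a no-op check plus a guard flag keep event chains from feeding back into themselves.

        import java.beans.PropertyChangeListener;
        import java.beans.PropertyChangeSupport;

        // Business object: owns its state and validation, notifies listeners of changes.
        class ProductOrder {
            private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
            private String color;

            public void addPropertyChangeListener(PropertyChangeListener l) {
                pcs.addPropertyChangeListener(l);
            }

            public void setColor(String newColor) {
                String old = this.color;
                if (old == null ? newColor == null : old.equals(newColor)) return; // no-op: breaks event cycles
                this.color = newColor;
                pcs.firePropertyChange("color", old, newColor);
            }
        }

        // Controller/view side: reacts to model changes instead of to other widgets.
        class OrderPanelBinder {
            private boolean updating = false;

            void bind(ProductOrder order /*, JComboBox<String> colorBox, ... */) {
                order.addPropertyChangeListener(evt -> {
                    if (updating) return;        // guard against re-entrant updates
                    updating = true;
                    try {
                        // push evt.getNewValue() into the relevant widgets, and narrow
                        // the choices offered by dependent combo boxes here
                    } finally {
                        updating = false;
                    }
                });
            }
        }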

    Read the article

  • Using PDO with MVC

    - by mister martin
    I asked this question at Stack Overflow and received no response (it was closed as a duplicate with no answer). I'm experimenting with OOP and I have the following basic MVC layout:

        class Model {
            // do database stuff
        }

        class View {
            public function load($filename, $data = array()) {
                if (!empty($data)) {
                    extract($data);
                }
                require_once('views/header.php');
                require_once("views/$filename");
                require_once('views/footer.php');
            }
        }

        class Controller {
            public $model;
            public $view;

            function __construct() {
                $this->model = new Model();
                $this->view = new View();
                // determine what page we're on
                $page = isset($_GET['view']) ? $_GET['view'] : 'home';
                $this->display($page);
            }

            public function display($page) {
                switch ($page) {
                    case 'home':
                        $this->view->load('home.php');
                        break;
                }
            }
        }

    These classes are brought together in my setup file:

        // start session
        session_start();
        require_once('Model.php');
        require_once('View.php');
        require_once('Controller.php');
        new Controller();

    Now where do I place my database connection code, and how do I pass the connection on to the model?

        try {
            $db = new PDO('mysql:host='.DB_HOST.';dbname='.DB_DATABASE, DB_USERNAME, DB_PASSWORD);
            $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        } catch (PDOException $err) {
            die($err->getMessage());
        }

    I've read about dependency injection, factories, and miscellaneous other design patterns talking about keeping SQL out of the model, but it's all over my head using abstract examples. Can someone please just show me a straightforward practical example?
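
    A minimal sketch of the constructor-injection answer being asked for (assuming the classes above; the revised Controller signature is hypothetical): build the PDO connection once in the setup file and hand it to the Model, so the Model is the only layer that knows about the database.

        <?php
        // Model.php (sketch): only the model knows about PDO.
        class Model {
            private $db;

            public function __construct(PDO $db) {
                $this->db = $db;
            }

            public function getPosts() {
                $stmt = $this->db->query('SELECT id, title FROM posts');
                return $stmt->fetchAll(PDO::FETCH_ASSOC);
            }
        }

        // setup.php (sketch): create the connection once, pass it down.
        try {
            $db = new PDO('mysql:host='.DB_HOST.';dbname='.DB_DATABASE, DB_USERNAME, DB_PASSWORD);
            $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        } catch (PDOException $err) {
            die($err->getMessage());
        }

        $model      = new Model($db);                     // Model receives its dependency
        $controller = new Controller($model, new View()); // Controller no longer news up its own Model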

    Read the article

  • What type of pattern would be used in this case

    - by Admiral Kunkka
    I want to know how to tackle this type of scenario. We are building a person's background from scratch, and I want to know, conceptually, how to proceed with a secure object pattern in both design and execution. I've been reading on Factory patterns, Model-View-Controller types, Dependency Injection, Singleton approaches... and I can't seem to grasp or 'fit' these types of design decisions into what I'm trying to do. First and foremost, I started with a big jack-of-all-trades class; then I read some more, and some tips were to make sure your classes only have a single purpose, which makes sense, so I started breaking certain things down into other classes. Okay, cool. Now I'm looking at dependency injection and don't really know what's going on. An example/insight of the kind of hierarchy I need to accomplish:

        - class Person needs to access and build from a multitude of different classes
        - class Culture needs to access a sub-class for culture benefits
        - class Social needs to access class Culture, and other sub-classes
        - class Birth needs to access Social, Culture, and other sub-classes
        - class Childhood/Adolescence/Adulthood need to access everything

    Also, depending on different roles, this class hierarchy needs to create multiple people as well, such as a Family, and their backgrounds, using some, if not all, of these same classes. Think of it as a people generator, all random, with backgrounds and things that happen to them: ageing, death of loved ones, military careers, etc. Most of the generation is done randomly, making calls to an mt_rand function to pick from most of the selections inside the classes, guaranteeing the data to be absolutely random. I have most of the bulk data down, and was looking for some insight from fellow programmers. What do you think?
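
    A minimal sketch of what composition plus constructor injection could look like here (all names hypothetical): each piece gets only what it is handed, and a small generator function owns the randomness and the wiring order.

        <?php
        // Compose a Person from small single-purpose parts instead of one
        // class reaching into everything.
        class Culture {
            public $name;
            public function __construct() {
                $options = array('Highland', 'Coastal', 'Nomad');   // illustrative values
                $this->name = $options[mt_rand(0, count($options) - 1)];
            }
        }

        class Social {
            public $standing;
            public function __construct(Culture $culture) {
                // social standing can depend on culture without knowing anything else
                $this->standing = mt_rand(1, 10);
            }
        }

        class Person {
            public $culture;
            public $social;
            public function __construct(Culture $culture, Social $social) {
                $this->culture = $culture;
                $this->social  = $social;
            }
        }

        // A generator/factory owns the randomness and the wiring order,
        // and can be called repeatedly to build a whole Family.
        function generatePerson() {
            $culture = new Culture();
            $social  = new Social($culture);
            return new Person($culture, $social);
        }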

    Read the article

  • Are DDD Aggregates really a good idea in a Web Application?

    - by Mystere Man
    I'm diving into Domain-Driven Design, and some of the concepts I'm coming across make a lot of sense on the surface, but when I think about them more I have to wonder whether they're really a good idea. The concept of Aggregates, for instance, makes sense. You create small domains of ownership so that you don't have to deal with the entire domain model. However, when I think about this in the context of a web app, we're frequently hitting the database to pull back small subsets of data. For instance, a page may only list a number of orders, with links to click to open an order and see its details. If I'm understanding Aggregates right, I would typically use the repository pattern to return an OrderAggregate that would contain the members GetAll, GetByID, Delete, and Save. OK, that sounds good. But... if I call GetAll to list all my orders, it would seem that this pattern would require the entire list of aggregate information to be returned (complete orders, order lines, etc.) when I only need a small subset of that information (just header information). Am I missing something? Or is there some level of optimization you would use here? I can't imagine that anyone would advocate returning entire aggregates of information when you don't need it. Certainly, one could create methods on your repository like GetOrderHeaders, but that seems to defeat the purpose of using a pattern like repository in the first place. Can anyone clarify this for me?
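
    One commonly suggested answer is to keep the aggregate repository for commands and add a separate, lightweight read model for listing screens (a CQRS-style split). A minimal sketch with hypothetical names:

        import java.util.List;

        // Command side: the full aggregate, loaded only when behavior on one order is needed.
        class Order { /* order lines, invariants, behavior ... */ }

        interface OrderRepository {
            Order getById(long orderId);
            void save(Order order);
            void delete(Order order);
        }

        // Query side: flat summaries shaped for the listing page; no order lines loaded.
        final class OrderSummary {
            final long id;
            final String customerName;
            final int lineCount;

            OrderSummary(long id, String customerName, int lineCount) {
                this.id = id;
                this.customerName = customerName;
                this.lineCount = lineCount;
            }
        }

        interface OrderQueryService {
            List<OrderSummary> listOrders(int page, int pageSize);
        }

    The listing page queries OrderQueryService and never materializes full aggregates; the aggregate and its invariants are only loaded when a single order is actually being changed.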

    Read the article

  • Creating new games on Android and/or iPhone

    - by James Clifton
    I have a successful Facebook poker game that is running very nicely. Now some people have asked if I can port this to other platforms, mainly mobile devices (and I have been asked to make a tablet version; do I really need a separate version?). I am currently a PHP programmer (and game designer) and I simply don't have the time to learn Android and other languages, so I have decided to pay third parties to program them (if viable). The information I need to know is what programming language is needed for the following four devices: Android mobile phone, iPhone, iPad, and tablets? Can they all run off a central SQL database? If they can't, then I'm not interested :( Do any of these run Flash? Have I covered all my main bases here? For example, if a person programs for an Android mobile phone, is that too much different from an Android tablet? They will have slightly different graphics (because the tablet has a greater screen area, might as well use it), but do they need to be started from scratch? The same goes for iPhone/iPad: do they really need to be programmed differently if the only difference is the graphics?

    Read the article

  • Advice on designing web application with a 40+ year lifetime

    - by user2708395
    Scenario: Currently I am a part of a health care project whose main requirement is to capture data with unknown attributes using user-generated forms created by health care providers. The second requirement is that data integrity is key and that the application will be used for 40+ years. We are currently migrating the client's data from the past 40 years from various sources (paper, Excel, Access, etc.) to the database. Future requirements are:

        - Workflow management of forms
        - Schedule management of forms
        - Security/role-based management
        - Reporting engine
        - Mobile/tablet support

    Situation: Only 6 months in, the current (contracted) architect/senior programmer has taken the "fast" approach and has designed a poor system. The database is not normalized, the code is coupled, the tiers have no dedicated purpose, and data is starting to go missing since he has designed some beans to perform "deletes" on the database. The code base is extremely bloated and there are jobs just to synchronize data, since the database is not normalized. His approach has been to rely on backup jobs to restore missing data, and he doesn't seem to believe in refactoring. Having presented my findings to the PM, the architect will be removed when his contract ends. I have been given the task to re-architect this application. My team consists of me and one junior programmer; we have no other resources. We have been granted a 6-month requirement freeze in which we can focus on rebuilding this system. I suggested using a CMS like Drupal, but for policy reasons at the client's organization the system must be built from scratch. This is the first time that I will be designing a system with a 40+ year lifespan. I have only worked on projects with 3-5 year lifespans, so this situation is very new, yet exciting. Questions:

        - What design considerations will make the system more "future proof"?
        - What experiences have you had in designing such systems, both failures and successes?
        - What questions should be asked of the client/PM to make the system more "future proof"?

    Read the article

  • How to handle fine grained field-based ACL permissions in a RESTful service?

    - by Jason McClellan
    I've been trying to design a RESTful API and have had most of my questions answered, but there is one aspect of permissions that I'm struggling with. Different roles may have different permissions and different representations of a resource. For example, an Admin or the user himself may see more fields in his own User representation than another, less-privileged user. This is achieved simply by changing the representation on the backend, i.e. deciding whether or not to include those fields. Additionally, some actions may be taken on a resource by some users and not by others. This is achieved by deciding whether or not to include those action items as links, e.g. edit and delete links. A user who does not have edit permissions will not have an edit link. That covers nearly all of my permission use cases, but there is one that I've not quite figured out. There are some scenarios whereby, for a given representation of an object, all fields are visible to two or more roles, but only a subset of those roles may edit certain fields. An example:

        {
            "person": {
                "id": 1,
                "name": "Bob",
                "age": 25,
                "occupation": "software developer",
                "phone": "555-555-5555",
                "description": "Could use some sunlight.."
            }
        }

    Given 3 users (an Admin, a regular User, and Bob himself, also a regular User), I need to be able to convey to the front end that Admins may edit all fields, Bob himself may edit all fields, but a regular User, while able to view all fields, can only edit the description field. I certainly don't want the client to have to make the determination (or even, for that matter, to have any notion of the roles involved), but I do need a way for the backend to convey to the client which fields are editable. I can't simply use a combination of representation (the fields returned for viewing) and links (whether or not an edit link is available) in this scenario, since it's more finely grained. Has anyone solved this elegantly without adding the logic directly to the client?
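
    One possibility (purely illustrative; the underscore-prefixed members are hypothetical, not a standard) is for the representation itself to carry per-field edit metadata computed on the backend, so the client renders what it is told without knowing anything about roles:

        {
            "person": {
                "id": 1,
                "name": "Bob",
                "age": 25,
                "occupation": "software developer",
                "phone": "555-555-5555",
                "description": "Could use some sunlight.."
            },
            "_editable": ["description"],
            "_links": {
                "self": { "href": "/people/1" },
                "edit": { "href": "/people/1" }
            }
        }

    For a regular User the _editable list contains only "description"; for an Admin, or for Bob viewing himself, the backend would list every field.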

    Read the article

  • How to Effectively Create Bullet Patterns

    - by SoulBeaver
    I'm currently creating a top-down shooter like Touhou. The most important factor of the game is that there are many diverse patterns and ways in which bullets are generated and shot at the player; see this video: http://www.youtube.com/watch?v=4Nb5Ohbt1Sg#start=0:60;end=9:53; At the moment, I'm using a class "Pattern" which has a series of steps on moving and shooting. However, I feel this method is quite laborious, as I have to create a new Pattern for each attack and perhaps new Bullet classes that will implement a certain behavior. This question received a comment suggesting I should look into BulletML for easy creation and storage of bullets with a specific pattern. It looks decent, but it led me to wonder: what other solutions should I take into consideration? Update: My current design is as follows. An example of an implemented pattern: my GigasPattern first executes a teleport which moves Alice to a certain point (X, Y) on the screen. After this is completed, the pattern starts using the Mover to move the sprite around (whereas teleporting has separate effects and animation). These are of no concern, really, as they are quite simple. The Shooter also creates various Attacks, which are again classes that the Shooter can use to create various patterns of bullets, much like the one in the question I posted. Once the Mover has reached its destination, both it and the Shooter stop and return to an inactive state. The pattern completes, is removed by the AI, and a new one gets chosen.
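
    A minimal sketch of the data-driven direction BulletML points at (all names hypothetical): describe an attack as parameters fed to a small set of reusable emitters, so new patterns are new data rather than new classes.

        #include <cmath>
        #include <vector>

        // A spawned bullet: position is implied by the emitter, so only velocity here.
        struct Bullet {
            float vx, vy;
        };

        // One reusable emitter: fires `count` bullets evenly spread around a circle,
        // rotated by `phase`. Different attacks are different parameter sets,
        // not different classes.
        struct RingEmitter {
            int   count;
            float speed;
            float phase;   // radians; animate this between volleys to get spirals

            std::vector<Bullet> fire() const {
                std::vector<Bullet> volley;
                volley.reserve(count);
                for (int i = 0; i < count; ++i) {
                    float angle = phase + 2.0f * 3.14159265f * i / count;
                    volley.push_back({speed * std::cos(angle), speed * std::sin(angle)});
                }
                return volley;
            }
        };

        // Usage sketch: a "pattern" becomes a list of (time, emitter) pairs loaded
        // from data, e.g. { {0.0f, {24, 120.0f, 0.0f}}, {0.5f, {24, 120.0f, 0.1f}} }.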

    Read the article

  • Necessary Infrastructure for large project with many components communicating through IPCs

    - by jluzwick
    I have a fairly in-depth question which probably doesn't have an exact answer. As a software engineer, I am usually tasked with working on a program or project with minimal understanding of how other components or programs in the project interact with each other. When one program fails in a sea of multiple components and processes, what infrastructure elements are necessary to ensure that the problem can be accurately tracked to the offending application? More specifically, which infrastructure elements are necessary for such a large project, and which are optional but very helpful? One such example I can think of is some form of common logging infrastructure that allows a developer or tester to easily browse through a log containing messages from numerous components, looking for messages that might point to the culprit program along with a "trail" of what happened before the issue occurred. I'm thinking of something similar to Android's alogcat tool. These necessary infrastructure elements should be language-agnostic. While these elements should be understood by all engineers on the team in question, which elements should be understood in great detail by the technical system engineers, and what should the individual software engineers be responsible for adding to their tools to allow such infrastructure to take hold? Please feel free to ask for clarification if something does not make sense, as I understand this question is very broad and needs some refinement. I will refine as necessary from the answers and comments I receive. Thanks for any help!
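
    As a purely illustrative example of the common-logging element (the format and component names are made up): if every component writes to one structured sink and every IPC message carries a correlation id, a cross-process trail can be pulled with a single filter.

        component=gateway      corr=7f3a91  level=INFO   msg="accepted request" user=1042
        component=order-svc    corr=7f3a91  level=INFO   msg="forwarded request to billing"
        component=billing-dmn  corr=7f3a91  level=ERROR  msg="IPC reply timed out"

    Filtering the shared log on corr=7f3a91 reconstructs the chain of events leading up to the failure, regardless of which process emitted each line.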

    Read the article

  • Keeping a domain model consistent with actual data

    - by fstuijt
    Recently domain-driven design got my attention, and while thinking about how this approach could help us I came across the following problem. In DDD the common approach is to retrieve entities (or better, aggregate roots) from a repository which acts as an in-memory collection of these entities. After these entities have been retrieved, they can be updated or deleted by the user; however, after retrieval they are essentially disconnected from the data source, and one must actively inform the repository to update the data source and make it consistent again with our in-memory representation. What is the DDD approach to retrieving entities that should remain connected to the data source? For example, in our situation we retrieve a series of sensors that have a specific measurement at the time of retrieval. Over time, these measurement values may change, and our business logic in the domain model should respond to these changes properly. E.g., domain events may be raised if a sensor value exceeds a predefined threshold. However, using the repository approach, these sensor values are just snapshots and are disconnected from the data source. Does any of you have an idea of how to solve this following the DDD approach?
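
    A minimal sketch of one way to keep the domain in charge (hypothetical names): the aggregate reacts when a new measurement is applied to it and records a domain event, and an application service, subscribed or polling, is responsible for feeding fresh readings through the aggregate. The staleness question then becomes "who feeds the aggregate", not "the repository returned a snapshot".

        import java.util.ArrayList;
        import java.util.List;

        // Aggregate: the domain logic reacts when a new measurement is applied to it.
        class Sensor {
            private final String id;
            private final double threshold;
            private double lastValue;
            private final List<Object> pendingEvents = new ArrayList<>();

            Sensor(String id, double threshold) {
                this.id = id;
                this.threshold = threshold;
            }

            void recordMeasurement(double value) {
                this.lastValue = value;
                if (value > threshold) {
                    pendingEvents.add(new ThresholdExceeded(id, value));
                }
            }

            List<Object> pullEvents() {
                List<Object> out = new ArrayList<>(pendingEvents);
                pendingEvents.clear();
                return out;
            }
        }

        record ThresholdExceeded(String sensorId, double value) {}

        // An application service polls (or subscribes to) the measurement feed,
        // loads the Sensor from the repository, calls recordMeasurement, then
        // saves and dispatches the pulled events.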

    Read the article

  • Is there a common programming term for the problems of adding features to an already-featureful program?

    - by Jeremy Friesner
    I'm looking for a commonly used programming term to describe a software-engineering phenomenon, which (for lack of a better way to describe it) I'll illustrate first with a couple of examples-by-analogy: Scenario 1: We want to build/extend a subway system on the outskirts of a small town in Wyoming. There are the usual subway-problems to solve, of course (hiring the right construction company, choosing the best route, buying the subway cars), but other than that it's pretty straightforward to implement the system because there aren't a huge number of constraints to satisfy. Scenario 2: Same as above, except now we need to build/extend the subway system in downtown Los Angeles. Here we face all of the problems we did in case (1), but also additional problems -- most of the applicable space is already in use, and has a vocal constituency which will protest loudly if we inconvenience them by repurposing, redesigning, or otherwise modifying the infrastructure that they rely on. Because of this, extensions to the system happen either very slowly and expensively, or they don't happen at all. I sometimes see a similar pattern with software development -- adding a new feature to a small/simple program is straightforward, but as the program grows, adding further new features becomes more and more difficult, if only because it is difficult to integrate the new feature without adversely affecting any of the large number of existing use-cases or user-constituencies. (even with a robust, adaptable program design, you run into the problem of the user interface becoming so elaborate that the program becomes difficult to learn or use) Is there a term for this phenomenon?

    Read the article

  • Depending on a fixed version of a library and ignoring its updates

    - by Moataz Elmasry
    I was talking to a technical boss yesterday about a project in C++ that depends on OpenCV. He wanted to include a specific OpenCV version in the svn and keep using that version, ignoring any updates, which I disagreed with. We had a heated discussion about it.

    His arguments:

        - Everything has to be delivered in one package and we can't ask the client to install external libraries.
        - We depend on a fixed version so that new updates of OpenCV won't break our code.
        - We can't guarantee that within a version update, e.g. from 3.2.buildx to 3.2.buildy, the function signatures won't change.

    My arguments:

        - True, everything has to be delivered to the client as one package, but that's what build scripts are for: they download the external libraries and create a bundle.
        - Within updates of the same version, 3.2.buildx to 3.2.buildy, it's impossible that a signature would change, unless it is a really crappy framework, which isn't the case with OpenCV.
        - We deprive ourselves of new updates and features of that library. If there's a bug in the version we took, even if there's a fix later, we won't be able to get it.
        - It's simply inefficient and poor design to depend on a certain version/build of an external library, as it makes it difficult for our project to adapt to new changes in the future.

    So I'd like to know what you guys think. Does it really make sense to include a specific version of an external library in our svn and keep using it, ignoring all updates?
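
    A minimal sketch of the build-script side of the argument (the paths, version, and commands are illustrative; whether a given OpenCV release builds cleanly this way should be verified): the dependency is pinned in one line of the build rather than committed to svn, so the pin is explicit, reproducible, and trivial to bump later.

        #!/bin/sh
        # build_deps.sh (sketch): fetch and build the pinned OpenCV as part of the
        # build, instead of committing it to svn. Bumping the pin is a one-line edit.
        OPENCV_VERSION=3.2.0

        git clone --depth 1 --branch "${OPENCV_VERSION}" \
            https://github.com/opencv/opencv.git third_party/opencv
        cmake -S third_party/opencv -B third_party/opencv/build \
            -DCMAKE_INSTALL_PREFIX="$(pwd)/third_party/install"
        cmake --build third_party/opencv/build --target install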

    Read the article

  • How can we best petition to bring Adobe creative software to Ubuntu?

    - by Sixthlaw
    Now I know it's not as simple as asking Adobe to support their design software on Ubuntu, but is there a way for the community and Canonical to OFFICIALLY make known to Adobe the rapidly growing number of Linux users and their desire for this great set of tools? I know that many of the answers I receive might be of the form "It's not going to happen", "Use the free tools provided", or "Who knows, it might happen in the near future", but this is not what I am looking for. I have noticed, on sites like www.OmgUbuntu.com, there are links to pages where people can "like" the idea. Is there a way to try to get the whole community on board with this one, even Canonical, and, as stated above, put this proposal forward to Adobe? The current requests for an Adobe CS for Linux are in dribs and drabs scattered all over the internet. Now is the best time to come up with productive solutions on how we can best gather statistics on the number of people willing to buy Adobe CS. These are the words of an Adobe employee: "I have forwarded this feedback on to the appropriate team who will consider it for future releases of Adobe software." The larger the number of people we have unified in the ONE community proposal, the greater chance we have of getting the software. How can we make this happen?

    Read the article

  • Finding the best practice for a game simulating tool

    - by Tougheart
    I'm studying Java right now, and I'm thinking of this tool as my practice project. The game is "League of Legends", in case anyone knows it. I'm not actually simulating the game as in simulating gameplay; I'm just trying to create a tool that can compare different champions to each other based on their own abilities and the items bought inside the game. The game basics are: every player has a champion in a team of 5 players playing against another team; each champion has a different set of abilities (usually 4) that s/he uses to do damage to opposing champions; and each champion gets stronger by buying different items, increasing the attack it deals or decreasing the damage received. What I want to do is create a tool to be used outside the game, enabling players to try out different builds for their champions and compare the figures against other champions they usually fight against. The goal is to enable players to get a deeper understanding of the different item combinations (builds) that can be used during games, instead of trying them out in real games, which can be very time-consuming. What I'm stuck on is the best practice I should follow to make this possible using Java. I can't figure out which classes should inherit from which, or whether I should put champion and item specs in the code or extract them from other files, especially since I'm talking about hundreds of items and champions to use in that tool. I'm self-studying Java, and I don't have much practice at it, so I would really appreciate any broad guidelines regarding this. Sorry if my question doesn't fit here; I tried to follow the rules. English isn't my native language, so I'm really sorry if I wasn't clear enough; I would be more than happy to explain anything that's not understood.
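
    A minimal sketch of the data-driven option (the stat values shown are made up): champions and items are instances described by values, which could just as well be loaded from JSON or CSV files, and a build is simply a champion plus a list of items, so hundreds of entries never turn into hundreds of classes.

        import java.util.List;

        // Data, not subclasses: every champion and item is an instance described by
        // values that could be loaded from a file instead of being written in code.
        record Item(String name, double bonusAttack, double bonusArmor) {}

        record Champion(String name, double baseAttack, double baseArmor) {

            // A "build" is just a champion plus chosen items.
            double attackWith(List<Item> build) {
                return baseAttack + build.stream().mapToDouble(Item::bonusAttack).sum();
            }

            double armorWith(List<Item> build) {
                return baseArmor + build.stream().mapToDouble(Item::bonusArmor).sum();
            }
        }

        class BuildComparison {
            public static void main(String[] args) {
                Champion annie = new Champion("Annie", 50.6, 19.2);   // illustrative numbers
                List<Item> build = List.of(new Item("Long Sword", 10, 0),
                                           new Item("Cloth Armor", 0, 15));
                System.out.printf("%s: attack=%.1f armor=%.1f%n",
                        annie.name(), annie.attackWith(build), annie.armorWith(build));
            }
        }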

    Read the article

  • Material System

    - by Towelie
    I'm designing a Material/Shader System (target APIs are DX10+ and maybe OpenGL 3+; for now only DX10). I know there have been a lot of topics about this, but I can't find what I need. I don't want to do any kind of compilation/parsing of scripts at run time. So there is some artist-created material, written in some analog of Cg. It is compiled to HLSL code and then to the final shader. There are also some hard-coded constant buffers, like:

        cbuffer EveryFrameChanging
        {
            float4x4 matView;
            float time;
            float delta;
        }

    Shaders use shared constant buffers to get parameters. For each mesh in the scene, we look at what it needs and what it can provide (normals, binormals, etc.) and find the corresponding shader permutation, or calculate the missing parts. Also, during the build we calculate render states and the permutations, or a hash for each shader, which will later be used for sorting; we can even give each shader an ID from 0 to ShaderCount, without gaps, for sorting. A FinalShader has only one technique and one pass. After that, each mesh is assigned a shader and is ready to render. Some pseudo code:

        SetConstantBuffer(ConstantBuffer::PerFrame);
        foreach (shader in FinalShaders)
            SetConstantBuffer(ConstantBuffer::PerShader, shader);
            SetRenderState(shader);
            foreach (mesh in shader.GetAllMeshes)
                SetConstantBuffer(ConstantBuffer::PerMesh, mesh);
                SetBuffers(mesh);
                Draw();

        class FinalShader
        {
        public:
            UUID m_ID;
            RenderState m_RenderState;
            CBufferBindings m_BufferBindings;
        }

    But I have no idea how to create this Cg-like language, and do I really need it?
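
    Not from the question, but a small illustration of how the dense shader IDs and baked render states can be used for sorting (the layout and field sizes are hypothetical):

        #include <cstdint>

        // Because every FinalShader got a dense ID in [0, ShaderCount), it can be
        // packed into a sort key together with other state, and draw calls sorted
        // by this single integer batch naturally.
        struct DrawCall {
            uint16_t shaderId;     // dense ID assigned at build time
            uint16_t renderState;  // hash/index of the baked render state
            uint32_t meshId;
        };

        inline uint64_t makeSortKey(const DrawCall& dc) {
            return (uint64_t(dc.shaderId)    << 48) |
                   (uint64_t(dc.renderState) << 32) |
                    uint64_t(dc.meshId);
        }

        // Sorting draw calls by makeSortKey() groups meshes by shader, then by
        // render state, which is exactly the loop order in the pseudo code above.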

    Read the article

  • Is wrapping third-party code the only solution to unit test its consumers? [closed]

    - by Songo
    I'm doing unit testing, and in one of my classes I need to send a mail from one of the methods, so using constructor injection I inject an instance of the Zend_Mail class, which is in the Zend Framework. Now some people argue that if a library is stable enough and won't change often, then there is no need to wrap it. So assuming that Zend_Mail is stable, won't change, and fits my needs entirely, I won't need a wrapper for it. Now take a look at my class Logger, which depends on Zend_Mail:

        class Logger {
            private $mail;

            function __construct(Zend_Mail $mail) {
                $this->mail = $mail;
            }

            function toBeTestedFunction() {
                // Some code
                $this->mail->setTo('some value');
                $this->mail->setSubject('some value');
                $this->mail->setBody('some value');
                $this->mail->send();
                // Some code
            }
        }

    However, unit testing demands that I test one component at a time, so I need to mock the Zend_Mail class. In addition, I'm violating the Dependency Inversion Principle, as my Logger class now depends on a concretion, not an abstraction. Now, is wrapping Zend_Mail the only solution, or is there a better approach to this problem? The code is in PHP, but answers don't have to be. This is more of a design issue than a language-specific one.
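
    A minimal sketch of the thin-wrapper alternative (interface and class names hypothetical; the adapter simply repeats the calls from the question): Logger depends on an interface it owns, and the adapter is the only place that knows about Zend_Mail, so tests mock the interface instead of the framework class.

        <?php
        // The abstraction the Logger owns:
        interface MailerInterface {
            public function send($to, $subject, $body);
        }

        // Thin adapter over the third-party class; nothing here is worth unit testing.
        class ZendMailer implements MailerInterface {
            private $mail;

            public function __construct(Zend_Mail $mail) {
                $this->mail = $mail;
            }

            public function send($to, $subject, $body) {
                $this->mail->setTo($to);
                $this->mail->setSubject($subject);
                $this->mail->setBody($body);
                $this->mail->send();
            }
        }

        class Logger {
            private $mailer;

            public function __construct(MailerInterface $mailer) {
                $this->mailer = $mailer;
            }

            public function toBeTestedFunction() {
                // Some code
                $this->mailer->send('some value', 'some value', 'some value');
                // Some code
            }
        }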

    Read the article

  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display the results in a paged manner (twenty per page, with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links to navigate through the results. Re-executing the search on each page request to get the appropriate results for that page of data can be too expensive (up to 15s per search). Also, since the underlying data can change frequently (e.g. addition of new results), re-executing could cause the next/previous functionality to behave inconsistently (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements?

    EDIT: I thought caching the query search results was an obvious necessity. The question is really asking about where to cache the result set and what tradeoffs might exist for each choice: for example, storing the IDs of the entities in the result set on the client, or storing the IDs in the user's session on the web server, or in a temporary table in the database. I'm not looking specifically for a single solution, as different scenarios may result in different approaches (and such a question would be more suited to stackoverflow.com rather than here), but more for a design comparison between the possible approaches.
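
    A sketch of the "temporary table in the database" option mentioned above (table and column names hypothetical; paging syntax varies by database): run the expensive search once, snapshot the ordered IDs under a token, and serve pages plus next/previous links from the snapshot. The tradeoff is extra storage and a cleanup job, in exchange for stable ordering and cheap page requests.

        -- Snapshot of one executed search, keyed by a token handed back to the client.
        CREATE TABLE search_snapshot (
            search_token  CHAR(36)  NOT NULL,   -- identifies the executed search
            position      INT       NOT NULL,   -- preserves the original ordering
            entity_id     BIGINT    NOT NULL,
            created_at    TIMESTAMP NOT NULL,   -- lets a cleanup job expire old snapshots
            PRIMARY KEY (search_token, position)
        );

        -- Page 3 (20 per page) of a previously executed search:
        SELECT entity_id
        FROM   search_snapshot
        WHERE  search_token = :token
        ORDER  BY position
        LIMIT  20 OFFSET 40;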

    Read the article

  • Is the Entity Component System architecture object oriented by definition?

    - by tieTYT
    Is the Entity Component System architecture object oriented, by definition? It seems more procedural or functional to me. My opinion is that it doesn't prevent you from implementing it in an OO language, but it would not be idiomatic to do so in a staunchly OO way. It seems like ECS separates data (E & C) from behavior (S). As evidence:

        - "The idea is to have no game methods embedded in the entity."
        - "The component consists of a minimal set of data needed for a specific purpose."
        - "Systems are single purpose functions that take a set of entities which have a specific component."

    I think this is not object oriented because a big part of being object oriented is combining your data and behavior together. As evidence: "In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are bundled in with the data." ECS, on the other hand, seems to be all about separating your data from your behavior.
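
    A tiny illustrative sketch (not from the question) of that separation: components are bare data, the system is a free function, and an entity is nothing but a bag of components.

        # Components are bare data; the system is a free function with no state of
        # its own; data and behavior live apart.
        from dataclasses import dataclass

        @dataclass
        class Position:      # Component: data only
            x: float
            y: float

        @dataclass
        class Velocity:      # Component: data only
            dx: float
            dy: float

        def movement_system(entities, dt):
            """System: behavior only, applied to every entity with the right components."""
            for components in entities:
                if "Position" in components and "Velocity" in components:
                    pos, vel = components["Position"], components["Velocity"]
                    pos.x += vel.dx * dt
                    pos.y += vel.dy * dt

        # An "entity" is nothing but a mapping from component type to component data:
        player = {
            "Position": Position(0.0, 0.0),
            "Velocity": Velocity(1.0, 0.5),
        }
        movement_system([player], dt=1 / 60)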

    Read the article

  • Directx vs XNA - Which is better for me? [closed]

    - by tristo
    Recently I moved to Visual Studio 2012 from Visual Studio 2010, although I did not expect Visual Studio 2012 to be designed the way it was. Anyway, I am pleased with some of the VS 2012 technology and have moved all of my projects to it. Since I got VS 2012 I have been making Windows applications and doing other non-game activities, although I have recently gotten back into the spirit of game development and am planning to make a 3D comical game: shader effects, not too complicated meshes, but it requires a lot of lighting effects to emphasise certain parts of the game. When I was using VS 2010 I had a great time making 2D games with XNA; it uses a great language and has a very awesome system. But I no longer have XNA with me, and the workarounds described on Stack Overflow always give me errors when using XNA. Anyway, it seems that Microsoft has stuffed itself up with XNA, given the weirdness of Windows 8 and XNA being only available on PC and Xbox. For these reasons I have decided to work with DirectX and Direct3D to produce my new game, although the overflowing credits after each DirectX game give me the shivers, and the low-level coding of DirectX also puts me on thin ice with my games, leaving me in a confused mess about what decision I should make. I don't know anything about DirectX or Direct3D. I am an indie developer, but I am planning to take on a lot of professional aspects of games. I don't have heaps of time (2-3 hours a day). I don't mind the complexity of how DirectX works, as long as I can learn how to make the fundamentals of a game in a week. I am also unsure whether DirectX is really right for my situation, or whether to keep with XNA game development. If anyone can tell me the best technology for my situation, that would be great.

    Read the article

  • Static / Shared Helper Functions vs Built-In Methods

    - by Nathan
    This is a simple question, but a design consideration that I often run across in my day-to-day development work. Let's say that you have a class that represents some kind of collection:

        Public Class ModifiedCustomerOrders
            Public Property Orders As List(Of ModifiedOrders)
        End Class

    Within this class you do all kinds of important work, such as combining many different information sources and, eventually, building the Modified Customer Orders. Now, you have different processes that consume this class, each of which needs a slightly different slice of the ModifiedCustomerOrders items. To enable this, you want to add filtering functionality. How do you go about this? Do you:

        1. Add filtering calls to the ModifiedCustomerOrders class so that you can say: MyOrdersClass.RemoveCanceledOrders()
        2. Create a Static / Shared "tooling" class that allows you to call: OrdersFilters.RemoveCanceledOrders(MyOrders)
        3. Create an extension method to accomplish the same feat as #2 but with less typing: MyOrders.RemoveCanceledOrders()
        4. Create a "Service" method that handles getting the Orders as appropriate for the calling function, while using one of the previous approaches "under the hood": OrdersService.GetOrdersForProcessA()
        5. Others?

    I tend to prefer the tooling / extension method approaches, as they make testing a little bit simpler. Although I dependency-inject all my sourcing data into the ModifiedCustomerOrders, having the filtering as part of the class makes it a little bit more complicated to test. Typically, I choose to use extension methods where I am doing parameterless transformations / filters. As they get more complex, I will move them into a static class instead. Thoughts on this approach? How would you approach it?
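
    A minimal sketch of option 3 in VB.NET (assuming ModifiedOrders exposes an IsCanceled flag, which is hypothetical):

        Imports System.Collections.Generic
        Imports System.Linq
        Imports System.Runtime.CompilerServices

        Public Module OrderFilters

            ' Extension method: callable as MyOrders.RemoveCanceledOrders()
            <Extension()>
            Public Function RemoveCanceledOrders(orders As List(Of ModifiedOrders)) As List(Of ModifiedOrders)
                ' Assumes ModifiedOrders exposes an IsCanceled flag (hypothetical)
                Return orders.Where(Function(o) Not o.IsCanceled).ToList()
            End Function

        End Module

    The same body could sit in a plain Shared method on a tooling class (option 2); the extension attribute only changes how the call site reads.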

    Read the article

  • Should I think about switching to another platform as a .Net developer? [closed]

    - by A. Karimi
    I've been a developer for about 10 years, and I've worked almost entirely on the Microsoft stack. Over the last several years I've been introduced to some good practices such as IoC and other primary design patterns. Now I feel very comfortable using these patterns and concepts, and I'm angry that we didn't adopt them earlier! They have existed and been used by many developers for more than 5 years, so why did I and many of my colleagues only begin using them later? As you may know, Java developers are further ahead in these fields (concepts, patterns, and so on) than .NET developers. Am I right? Now the question is: why weren't we (as .NET developers) further ahead? Is it because we are using the Microsoft stack? I know ALT.NET, but why are we trying to make a closed ecosystem open and find alternatives to the Microsoft echo chamber, while there are natively open ecosystems like Java? I've always liked most of Microsoft's work very much, but I'm worried about this issue. I even ask myself whether I should move to another platform.

    Read the article

  • Correct way to inject dependencies in Business logic service?

    - by Sri Harsha Velicheti
    Currently the structure of my application is as below:

        Web App -- WCF Service (just a facade) -- Business Logic Services -- Repository -- Entity Framework DataContext

    Now each of my business logic services depends on more than 5 repositories (I have interfaces defined for all the repos) and I am doing constructor injection right now (poor man's DI instead of a proper IoC container, as it was determined that it would be overkill for our project). Repositories have references to EF data contexts. Some of the methods in a business logic service require only one of the 5 repositories, so if I need to call that method I end up instantiating a service which instantiates all 5 repositories, which is a waste. An example:

        public class SomeService : ISomeService
        {
            public SomeService(IFirstRepository repo1, ISecondRepository repo2, IThirdRepository repo3) {}

            // My DoSomething method depends only on repo1 and doesn't use repo2 and repo3
            public void DoSomething()
            {
                // uses repo1 to do some stuff, doesn't use repo2 and repo3
            }

            public void DoSomething2()
            {
                // uses repo2 and repo3 to do something, doesn't require repo1
            }

            public void DoSomething3()
            {
                // uses repo3 to do something, doesn't require repo1 and repo2
            }
        }

    Now if I have to use the DoSomething method on SomeService, I end up creating IFirstRepository, ISecondRepository, and IThirdRepository, but using only IFirstRepository. This is bugging me; I can't seem to accept that I am unnecessarily creating repositories and not using them. Is this a correct design? Are there any better alternatives? Should I be looking at lazy instantiation with Lazy<T>?
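
    A minimal sketch of the Lazy<T> direction (hypothetical types; a container such as Autofac can supply Lazy<T> automatically, or the caller can construct them by hand): nothing heavier than a small wrapper is created until a method actually touches the repository.

        using System;

        public class SomeService : ISomeService
        {
            private readonly Lazy<IFirstRepository> _repo1;
            private readonly Lazy<ISecondRepository> _repo2;
            private readonly Lazy<IThirdRepository> _repo3;

            public SomeService(Lazy<IFirstRepository> repo1,
                               Lazy<ISecondRepository> repo2,
                               Lazy<IThirdRepository> repo3)
            {
                _repo1 = repo1;
                _repo2 = repo2;
                _repo3 = repo3;
            }

            public void DoSomething()
            {
                // Only now is IFirstRepository (and its data context) actually created.
                var repo = _repo1.Value;
                // ... use repo ...
            }
        }

        // Without a container, the caller defers construction itself:
        // var service = new SomeService(
        //     new Lazy<IFirstRepository>(() => new FirstRepository(context)),
        //     new Lazy<ISecondRepository>(() => new SecondRepository(context)),
        //     new Lazy<IThirdRepository>(() => new ThirdRepository(context)));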

    Read the article
