Search Results

Search found 53262 results on 2131 pages for 'application architecture'.

  • Mapping between 4+1 architectural view model & UML

    - by Sadeq Dousti
    I'm a bit confused about how the 4+1 architectural view model maps to UML. Wikipedia gives the following mapping:

      - Logical view: class diagram, communication diagram, sequence diagram
      - Development view: component diagram, package diagram
      - Process view: activity diagram
      - Physical view: deployment diagram
      - Scenarios: use-case diagram

    The paper "Role of UML Sequence Diagram Constructs in Object Lifecycle Concept" gives the following mapping:

      - Logical view: class diagram (CD), object diagram (OD), sequence diagram (SD), collaboration diagram (COD), state chart diagram (SCD), activity diagram (AD)
      - Development view: package diagram, component diagram
      - Process view: use case diagram, CD, OD, SD, COD, SCD, AD
      - Physical view: deployment diagram
      - Use case view: use case diagram, OD, SD, COD, SCD, AD (this view combines the four mentioned above)

    The web page "UML 4+1 View Materials" presents yet another mapping. Finally, the white paper "Applying 4+1 View Architecture with UML 2" gives yet another:

      - Logical view: class diagrams, object diagrams, state charts, composite structures
      - Process view: sequence diagrams, communication diagrams, activity diagrams, timing diagrams, interaction overview diagrams
      - Development view: component diagrams
      - Physical view: deployment diagrams
      - Use case view: use case diagrams, activity diagrams

    I'm sure further searching will reveal other mappings as well. While different people usually have different perspectives, I don't see why that should be the case here. Specifically, each UML diagram describes the system from a particular aspect. So why, for instance, is the sequence diagram considered to describe the "logical view" of the system by one author, while another author considers it to describe the "process view"? Could you please help me clarify the confusion?

  • A Primer on Migrating Oracle Applications to a New Platform

    - by Nick Quarmby
    In Support we field a lot of questions about migrating Oracle Applications to different platforms. This article describes the techniques available for migrating an Oracle Applications environment to a new platform and discusses some of the common questions that arise during migration. The subject has been covered in previous blog articles, but there still seems to be a gap around the kinds of questions we are frequently asked in Service Requests. Some of the questions we see are quite abstract: customers simply want to get a grip on how to approach a migration. Others want to know whether a particular architecture is viable. Still others ask about mixing different platforms within a single Oracle Applications environment. Just to clarify: throughout this article, the term 'platform' refers specifically to operating systems, not to the underlying hardware. For a clear definition of 'platform' in the context of Oracle Applications Support, see Terri's very timely article: Oracle E-Business Suite Platform Smörgåsbord. The migration process is very similar for both 11i and R12, so this article only mentions release-specific differences where relevant.

  • Git-based storage and publishing, infrastructure advice

    - by Joel Martinez
    I wanted to get some advice on moving a system to "the cloud". Specifically, I'm looking to move into some of Windows Azure's managed services, as right now I'm managing a VM. Basically, the system operates on data stored in a GitHub git repository. The current architecture (all hosted on a single server):

      - GitHub, configured with a webhook pointing at ...
      - an ASP.NET MVC application, which accepts the webhook from GitHub and pushes a message onto ...
      - an Azure Service Bus queue, which is drained by ...
      - a Windows service, which pulls the message from the queue, fetches the latest data from the git repository (using LibGit2Sharp) onto the local disk, and finally ...
      - operates on the data in git to produce a static HTML website hosted/served by IIS.

    The system works really well, actually, but I would like to get out of the business of managing the VM and move to some combination of Azure web and worker roles. Because the system relies so heavily on the git repository on the local filesystem, though, I'm finding it difficult to figure out how to architect this in the cloud. I know you can get filesystem access, so in theory I could just fetch the repository if there's nothing on disk, but the performance/responsiveness of the system depends on the repository being available so that only diffs have to be fetched, which is relatively quick, as opposed to periodically having to fetch the entire (somewhat large) repository whenever a web or worker role is recycled. So I would love some advice on how you would architect such a system :) Ultimately, the only real requirement is to be able to serve HTML content produced from the contents of a git repository, in a relatively responsive manner from a publishing perspective. Please feel free to ask clarifying questions if there's something I omitted. Thanks!
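
    For what it's worth, the warm-cache logic described above ("fetch diffs if the repository is already on disk, otherwise clone") is only a few lines. Here is a minimal sketch in Java with JGit standing in for the C#/LibGit2Sharp code from the post; the path and URL handling are illustrative:

        import org.eclipse.jgit.api.Git;
        import java.io.File;

        public class RepoCache {
            // Reuse the on-disk clone if a role recycle left it intact; otherwise re-clone.
            static Git openOrClone(String url, File workDir) throws Exception {
                if (new File(workDir, ".git").isDirectory()) {
                    Git git = Git.open(workDir);
                    git.fetch().call();          // cheap path: transfers only missing objects
                    return git;
                }
                return Git.cloneRepository()     // cold path: full clone, only after a recycle
                          .setURI(url)
                          .setDirectory(workDir)
                          .call();
            }
        }

    On a web or worker role, workDir would point at local resource storage, accepting that a re-image still forces the cold-start clone; the design question is then only how often that happens, not whether it can.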

  • Is Test Driven Development viable in game development?

    - by Will Marcouiller
    Being Scrum certified, I tend to lean toward Agile methodologies when developing a system, and I even use some elements of the Scrum framework to manage my day-to-day work. I am wondering whether TDD is a viable option in game development. If I believe this GD question, TDD is not of much use in game development: Why are MVC & TDD not employed more in game architecture? I come from industrial programming, where big projects with big budgets need to work flawlessly, because catastrophic scenarios could result if the code wasn't thoroughly tested inside and out. Plus, Scrum encourages meeting the due dates of your work, since every single action in Scrum is time-boxed! So I agree when, in the question linked above, they say to stop trying to build a system and start writing the game. That is quite what Scrum says: don't try to build the perfect system first; make it work by the end of the Sprint, then refactor the code while working in the second Sprint if needed! I understand that if not all departments responsible for the game's development use Scrum, Scrum becomes useless. But let's consider for a moment that all the departments do use Scrum... I think TDD would be good for writing bug-free code, even though you do not want to write the "perfect" system/game. So my question is the following: is TDD viable in game development at all?

  • Tidbits of goodness - Podcasts, REST, JSON

    - by jeff.x.davies
    I've been quiet for a while, busy with a variety of projects, but I did want to let you all know about a couple of things going on. First, I have been participating in architectural podcasts with Bob Rhubart. If you are interested in hearing these short recordings (about 10 minutes each) where a group of us discuss enterprise architecture and its future, check out http://blogs.oracle.com/archbeat/2010/05/podcast_show_notes_evolving_en.html. Next, I have been working on the public sample code for the Oracle Service Bus 11g release. I'm now expanding my samples to include SCA, BPEL and the Oracle Adapters. This is a really great experience for me, because I have been learning these other tools to a deeper level, and that provides insight into developing better solutions. You know the old saying: "If the only tool you have is a hammer, you tend to approach every problem as if it were a nail." However, I'm not the only one working on these samples. We have a lot of our best and brightest working on sample code for the 11g release. Take a look at https://soasamples.samplecode.oracle.com/ to see all of the samples for SOA Suite 11g. A reader wrote to me and asked about using OSB to return information in JSON format. I don't have a sample posted for this yet, but I am working on getting one packaged up. In the meantime, I can tell you that it is dead simple to do in OSB. Follow the instructions I gave in an earlier blog entry on creating REST services using OSB, specifying Messaging Service as the service type that takes a text message and returns a text message. Then have the OSB proxy service return a JSON-formatted string (by replacing the contents of the $body variable with the JSON text) and you're done! This approach allows you to use OSB services from within JavaScript/AJAX seamlessly. As I get more samples posted to the OTN site, I'll let you know. I have lots of interesting stuff on the way.

  • Decoupling software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives for refactoring a driver-management subsystem. In my multi-tier architecture I have:

      - Base class: DAL.Device (my entity)
      - Interfaces:
        - BL.IDriver (handles the data processing between application and device)
        - BL.IDriverCreator (creates an IDriver from a Device)
        - BL.IDriverFactory (handles the driver creation requests)

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs (a) changed code within the DriverFactory and (b) a reference to the new IDriver implementation/assembly. From a customer's point of view, this means that every new driver, used or not, requires a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like naming convention (see Caliburn.Micro: Xaml Made Easy):

      - BL.RestDriver
      - BL.RestDriverCreator
      - DAL.RestDevice

    After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do name splitting/comparing (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also comes down to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?
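
    As a point of reference, the convention-based lookup itself is straightforward with reflection. The post is .NET; the following minimal Java sketch shows the same idea, with the package name and the type stubs invented for illustration:

        interface IDriver { }
        interface IDriverCreator { IDriver create(Device device); }
        class Device { }

        public final class ConventionDriverFactory {
            private static final String CREATOR_PACKAGE = "bl.drivers."; // assumed layout

            public IDriver createDriver(Device device) throws Exception {
                String name = device.getClass().getSimpleName();     // e.g. "RestDevice"
                String prefix = name.replaceAll("Device$", "");      // -> "Rest"
                Class<?> creatorType =
                        Class.forName(CREATOR_PACKAGE + prefix + "DriverCreator");
                IDriverCreator creator =
                        (IDriverCreator) creatorType.getDeclaredConstructor().newInstance();
                return creator.create(device);
            }
        }

    The string comparison happens once per device type and can be cached in a map, and a failed lookup surfaces as one clear error at the layer boundary rather than as a silent fall-through in a type-check chain.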

  • Architectural Composition Languages

    - by C. Lawrence Wenham
    I recently stumbled upon this paper (PDF) about ACLs, or Architectural Composition Languages. They're a fusion of two earlier lines of research: architectural definition languages (such as UML) and object composition languages (such as XAML, WWF, or scripting languages). The goal of an ACL is to have a high-level description of a program's architecture that can also be compiled into a runnable program. The high-level description assists automated analysis, while the 'executability' means changes can be tested immediately. You would still author the components of the program in a conventional programming language (C, Java, Python, etc.), but they would be composed into a complete program by the ACL. One of the expected benefits is that a program can be ported to a different platform by swapping in "similar but different" components. I've been hankering for something like this for a long time (see this answer I gave on a StackOverflow question a few years ago). The paper mentions that the researchers were working on a language called ACL/1 that initially targeted Java but would be ported to support .NET as well. However, I can't find any further mention of ACL/1 anywhere. Has any more work been done on this? Are there any other implementations of the ACL concept that are available for use or experimentation?

  • Microkernel architectural pattern and applicability for business applications

    - by Pangea
    We are in the business of building customizable web applications. We have a core team that provides what we call the core platform (services like security, billing, etc.), on top of which core products are built. These core products are industry-specific solutions for telecom, utilities and so on. The core products are in turn used by other teams to build customer-specific solutions in a particular industry. Until now we have had only a loose separation between platform and core product; customer-specific solutions are built by customizing 20-40% of the core offering and re-packaging it. The core platform and core products are released together as monolithic apps (EARs). I am looking to improve the current situation so that there is a cleaner separation between these three layers, which would allow each of them to evolve separately. I've read up on the microkernel architecture and felt that I could apply its principles in my context, but most of what I've read about this pattern is in the context of operating systems, application servers and the like. I am wondering if there are any examples of how the pattern has been used for architecting business applications, or if you could provide some insight on how to apply it to my problem.
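
    One concrete JVM-flavored reading of the microkernel idea for business applications is plain java.util.ServiceLoader: the core platform owns the plug-in contract, and each core product ships as a separately packaged module that the platform discovers at boot. A minimal sketch, with the contract names made up for illustration:

        import java.util.ServiceLoader;

        // Contract owned by the core platform; each core product (telecom, utility, ...)
        // ships an implementation in its own jar plus a META-INF/services entry.
        interface IndustryModule {
            String industry();
            void register(ServiceRegistry core);  // hook into billing, security, etc.
        }

        interface ServiceRegistry { <T> void publish(Class<T> api, T impl); }

        class Platform {
            static void boot(ServiceRegistry core) {
                for (IndustryModule m : ServiceLoader.load(IndustryModule.class)) {
                    System.out.println("loading module: " + m.industry());
                    m.register(core);
                }
            }
        }

    Customer-specific solutions then become a third ring of modules that decorate or override what the core product registered, which gives the cleaner three-way separation without the monolithic EAR.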

  • The best way to structure/design game code

    - by Edward
    My question is quite broad and concerns 2D game code design/architecture/structure. Usually the main game consists of a main loop in which you update and render your world state. However, for many reasons it's recommended to separate rendering from the game logic. I am kind of confused about the whole situation. Many game engines/libs/SDKs don't follow this separation: they promote a design where you define scenes/stages containing objects, and the scene/stage handles user input and so on. For example, in cocos2d(-x) and libgdx (scene2d), games are usually written so that the update logic happens at the same time and place as the rendering. The promoted approach is also to have a structure where an object knows how to draw itself, which is not a separation of updating and rendering. It's the same with Flash-based games: an object (class) typically contains a SWF or a texture plus some data, holds some update logic itself or is updated from the main scene, and already knows how to draw itself via "addChild". Some people also recommend the MVC pattern, which does not sit comfortably with the structure of those engines/libs/SDKs. Maybe I am overthinking everything, but I am totally confused. I would be grateful if somebody could point me in the right direction regarding game code structure. What is your way of doing things in libgdx/cocos2d/Flash?
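
    For the update/render separation itself, one widely used shape is the fixed-timestep loop: logic advances in fixed increments and the draw pass only reads state. A minimal Java sketch (the interface names are illustrative):

        interface World { boolean isRunning(); void update(double dt); }
        interface Renderer { void draw(World world, double alpha); }

        public class GameLoop {
            static final double DT = 1.0 / 60.0;    // fixed logic step, in seconds

            public static void run(World world, Renderer renderer) {
                double previous = System.nanoTime() / 1e9;
                double lag = 0.0;
                while (world.isRunning()) {
                    double now = System.nanoTime() / 1e9;
                    lag += now - previous;
                    previous = now;
                    while (lag >= DT) {             // zero or more logic steps, no drawing
                        world.update(DT);
                        lag -= DT;
                    }
                    renderer.draw(world, lag / DT); // draw only; alpha interpolates states
                }
            }
        }

    Objects can still "know how to draw themselves" below renderer.draw; the separation only demands that nothing mutates game state during the draw pass.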

  • Securing credentials passed to web service

    - by Greg Smith
    I'm attempting to design a single sign-on system for use in a distributed architecture. Specifically, I must provide a way for a client website (that is, a website on a different domain/server/network) to let users register accounts on my central system. When a user takes an action on a client website and that action is deemed to require an account, the client will produce a page (on their site/domain) where the user can register for a new account by providing an email address and password. The client must then send this information to a web service, which will register the account and return some session-token-type value. The client will need to hash the password before sending it across the wire, and the web service will require HTTPS, but this doesn't feel safe enough, and I need some advice on how to implement this in the most secure way possible. A few other bits of relevant information:

      - Ideally we'd prefer not to share any code with the client.
      - We've considered just redirecting the user to a secure page on the same server as the web service, but this is likely to be rejected for non-technical reasons.
      - We almost certainly need to salt the password before hashing and passing it over, but that requires the client to either (a) generate the salt and communicate it to us, or (b) come and ask us for the salt; both feel dirty.

    Any help or advice is most appreciated.
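
    For reference, the usual resolution of the salt dilemma is to let HTTPS protect the password in transit and do the salted, slow hashing only on the service side, so no salt ever crosses the wire and no code has to be shared with the client. A minimal Java sketch using the JDK's built-in PBKDF2 (the iteration count and sizes are illustrative):

        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.PBEKeySpec;
        import java.security.SecureRandom;

        public class PasswordHasher {
            // Returns {salt, hash}; store both, never the password itself.
            static byte[][] hash(char[] password) throws Exception {
                byte[] salt = new byte[16];
                new SecureRandom().nextBytes(salt);  // per-user salt, generated server-side
                PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
                byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                              .generateSecret(spec).getEncoded();
                return new byte[][] { salt, hash };
            }
        }

    Client-side hashing adds less than it seems: whatever value the client sends, hashed or not, is exactly what an attacker would need to replay, so the effort is usually better spent on TLS configuration and a slow server-side hash.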

  • UML Diagrams of Multi-Threaded Applications

    - by PersonalNexus
    For single-threaded applications I like to use class diagrams to get an overview of the architecture of the application. This type of diagram hasn't been very helpful for understanding heavily multi-threaded/concurrent applications, however, for instance because different instances of a class "live" on different threads (meaning accessing an instance is safe only from the one thread it lives on). Consequently, associations between classes don't necessarily mean that I can call methods on those objects; instead I have to make the call on the target object's thread. Most literature I have dug up on the topic, such as Designing Concurrent, Distributed, and Real-Time Applications with UML by Hassan Gomaa, had some nice ideas, such as drawing thread boundaries into object diagrams, but overall seemed a bit too academic and wordy to be really useful. I don't want to use these diagrams as a high-level view of the problem domain, but rather as a detailed description of my classes/objects, their interactions, and the limitations due to the thread boundaries mentioned above. I would therefore like to know:

      - What types of diagrams have you found most helpful for understanding multi-threaded applications?
      - Are there any extensions to classic UML that take the peculiarities of multi-threaded applications into account, e.g. annotations illustrating that some objects might live in a certain thread while others have no thread affinity; that some fields of an object may be read from any thread but written to only from one; or that some methods are synchronous and return a result while others are asynchronous, queueing up requests and returning results, for instance, via a callback on a different thread?

  • Processing component pools problem - Entity Subsystem

    - by mani3xis
    Architecture description: I'm creating (designing) an entity system and I have run into many problems. I'm trying to keep it data-oriented and as efficient as possible. My components are POD structures (arrays of bytes, to be precise) allocated in homogeneous pools. Each pool has a ComponentDescriptor, which just contains the component name, field types and field names. An entity is just a pointer to an array of components (where the address acts as the entity ID). An EntityPrototype contains the entity name and an array of component names. Finally, a Subsystem (System or Processor) works on the component pools.

    The actual problem: some components depend on others (Model, Sprite, PhysicalBody and Animation all depend on the Transform component), which causes a lot of problems when it comes to processing them. For example, let's define some entities using [T]ransform, [S]prite, [P]hysicalBody and [H]ealth:

      - Tank: Transform, Sprite, PhysicalBody
      - BgTree: Transform, Sprite
      - House: Transform, Sprite, Health

    If I create 4 Tanks, 5 BgTrees and 2 Houses, my pools will look like this:

        TTTTTTTTTTT // Transform pool
        SSSSSSSSSSS // Sprite pool
        PPPP        // PhysicalBody pool
        HH          // Health pool

    There is no way to process them using plain indices. I have spent three days working on this and I still don't have any ideas. In a previous design the TransformComponent was bound to the entity, but that wasn't a good idea either. Can you give me some advice on how to process these pools? Or maybe I should change the overall design? Maybe I should create pools of entities (pools of component pools), but I guess that would be a nightmare for CPU caches. Thanks.
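
    One common answer is to give every pool its own entity-to-slot index (a sparse set), so a system walks the smallest relevant pool and joins sibling components by entity ID in O(1). The engine above is C++; here is a hedged Java sketch of the idea with the types simplified:

        import java.util.ArrayList;
        import java.util.HashMap;

        final class Pool<T> {
            final ArrayList<T> dense = new ArrayList<>();             // packed components
            final ArrayList<Integer> entityOf = new ArrayList<>();    // slot -> entity ID
            final HashMap<Integer, Integer> slotOf = new HashMap<>(); // entity ID -> slot

            void add(int entity, T component) {
                slotOf.put(entity, dense.size());
                entityOf.add(entity);
                dense.add(component);
            }
            T get(int entity) {                    // O(1) sibling lookup for joins
                Integer slot = slotOf.get(entity);
                return slot == null ? null : dense.get(slot);
            }
        }
        // A physics pass iterates the small PhysicalBody pool and joins Transform by ID:
        //   for (int slot = 0; slot < bodies.dense.size(); slot++) {
        //       int e = bodies.entityOf.get(slot);
        //       integrate(bodies.dense.get(slot), transforms.get(e));
        //   }

    Removal keeps the pool packed with the usual swap-with-last trick (move the last element into the vacated slot and fix both indices), so iteration stays cache-friendly.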

  • How should I host our scalable worker processes?

    - by Pieter Breed
    We are designing a new architecture for an enterprise business. The principle we've followed so far is not to develop what you can (possibly buy and) deploy, i.e., don't reinvent any wheels. In this spirit we've decided on CQRS, RabbitMQ, Riak and a bunch of other things. We still need to write some business code, though, and it will take the form of worker processes that consume commands from a message queue and, after performing any side-effects, produce events onto another message queue. The idea is that the competing-consumers design gives us a scalable architecture right out of the box. One option is to write a management infrastructure that knows how to:

      - deploy code
      - instantiate processes
      - kill processes
      - update configuration
      - etc.

    i.e., provide fault tolerance and scalability. This is exactly what something like GAE or Heroku does for you, but in a public setting, and in our organization public is bad. My question is: is there an out-of-the-box solution we can use to host our consumers in? Something like a private cloud or a private platform-as-a-service, a private Heroku or GAE. Is there some kind of software or software product with which we can do all of these things and thereby get scalability and fault tolerance for our consumers?
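
    Whichever host wins, the worker itself stays small under the competing-consumers design already chosen. A minimal sketch with the RabbitMQ Java client (the queue names and broker address are illustrative):

        import com.rabbitmq.client.Channel;
        import com.rabbitmq.client.Connection;
        import com.rabbitmq.client.ConnectionFactory;

        public class Worker {
            public static void main(String[] args) throws Exception {
                ConnectionFactory factory = new ConnectionFactory();
                factory.setHost("localhost");                  // assumed broker address
                Connection conn = factory.newConnection();
                Channel ch = conn.createChannel();
                ch.queueDeclare("commands", true, false, false, null);
                ch.queueDeclare("events", true, false, false, null);
                ch.basicQos(1);                                // one unacked message per worker
                ch.basicConsume("commands", false, (tag, delivery) -> {
                    byte[] event = handle(delivery.getBody()); // business code + side-effects
                    ch.basicPublish("", "events", null, event);
                    ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                }, tag -> { });
            }
            static byte[] handle(byte[] command) { return command; } // placeholder
        }

    Scaling out is then just running more copies of this process; whatever platform can keep N such processes alive and restart the dead ones is doing the only job that remains.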

  • CQRS applicability when some commands need to block the UI

    - by regularfry
    I am working on an app which I would dearly love to transition from a fairly traditional layered architecture to CQRS, for a number of reasons, not least of which is that having a robust event log will make a couple of feature requests I can see barrelling towards me trivial to accommodate. Now I have a conceptual problem: of the roughly 40 commands the user can initiate, there are three which the user needs to be sure have successfully completed before the UI lets them do anything else. Everything else fits the "submit a request, query for success later" model, except for these three commands. How is this handled in CQRS-land?

      - Do I separate the three blocking commands into effectively a third service, so that I have Commands, Queries, and BlockingCommands?
      - Do I have a two-stage event processor with an in-request blocking first stage that only gets used for the blocking commands?
      - Does the existence of these three commands mean the whole idea of applying CQRS is invalid?
      - Should I just pretend they aren't blocking and poll for success in the UI?

    I'm sure this must come up on other projects. How is it usually handled?
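
    A common middle road for the three special commands is to keep them ordinary commands on the bus and make the blocking purely a client-side affair: the UI parks a completion handle under the command's correlation ID and releases it when the matching event arrives. A minimal Java sketch (the names are illustrative):

        import java.util.UUID;
        import java.util.concurrent.CompletableFuture;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.TimeUnit;

        public class BlockingCommandGateway {
            interface CommandBus { void send(Object command); }

            private final ConcurrentHashMap<UUID, CompletableFuture<Void>> pending =
                    new ConcurrentHashMap<>();

            // UI thread: send like any other command, then wait for its event.
            void sendAndAwait(CommandBus bus, Object command, UUID correlationId)
                    throws Exception {
                CompletableFuture<Void> done = new CompletableFuture<>();
                pending.put(correlationId, done);
                bus.send(command);
                done.get(10, TimeUnit.SECONDS);   // only these three interactions block
            }

            // Event-handler thread: release the waiter when the success event shows up.
            void onCompleted(UUID correlationId) {
                CompletableFuture<Void> done = pending.remove(correlationId);
                if (done != null) done.complete(null);
            }
        }

    The write side stays uniform, "blocking" becomes a presentation decision, and a timeout degrades gracefully into the existing poll-for-success model.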

  • Tips about how to spread Object Oriented practices

    - by Augusto
    I work for a medium-sized company that has around 250 developers. Unfortunately, a lot of them are stuck in a procedural way of thinking, and some teams constantly deliver big Transactional Script applications even when the application contains rich logic. They also fail to manage design dependencies and end up with services that depend on a large number of other services (a clean example of a Big Ball of Mud). My question is: can you suggest how to spread this type of knowledge? I know that the surface of the problem is that these applications have poor architecture and design. Another issue is that some developers are against writing any kind of test. A few things I'm doing to change this (but I'm either failing or the change is too small):

      - running presentations about design principles (SOLID, clean code, etc.)
      - workshops about TDD and BDD
      - coaching teams (this includes using Sonar, FindBugs, JDepend and other tools)
      - IDE & refactoring talks

    A few things I'm thinking of doing in the future (though I'm concerned they might not be good):

      - forming a team of OO evangelists who disseminate an OO way of thinking in different teams (these people would need to change teams every few months)
      - running design review sessions to critique the design and suggest improvements (even if the improvements are not made because of time constraints, I think this might be useful)

    Something I've found with the teams I coach is that as soon as I leave them, they revert to their old practices. I know I don't spend a lot of time with them, usually just one month, so whatever I'm doing, it doesn't stick. I'm sorry this question is spattered with frustration, but the alternative to writing this was to hit my head against the wall until I pass out.

  • Loadbalancing Questions

    - by Van Holtz
    I have been learning networking for about four months. I wrote a single standalone multiplayer server and succeeded with an authoritative approach. Now I want to extend it by splitting the single server into clusters, to allow even more players to log in and to avoid latency issues. I have now prototyped the load-balancing server, and it's running pretty well so far. This is my architecture: I have a master server which acts as a proxy, and every sub-server (chat, login, game) connects to the master server, as do all the clients. When a client makes a request:

      - the client sends the request to the MS (master), which decides which SS (sub-server) to forward it to;
      - the MS forwards the request to that SS, which analyzes the message and sends a response back to the MS;
      - the MS decides which client the response belongs to and forwards it on.

    It looks like a message goes through a lot of stages; it takes twice as long to process a message as the single-server approach did. I feel like my model isn't the best, or I may be doing something wrong. Is there a better model, or one that professional games use? I still want a master/sub-server approach. I just want to make sure I'm going in the right direction before writing all my code. Thanks for any answer :)

  • How do you keep SOA DRY?

    - by TaylorOtwell
    In our organization, we've shifted to a more "service-oriented architecture". To give an example, let's assume we need to retrieve a "Quote" object. A quote has a shipper, a consignee, phone numbers, contacts, email addresses, and other location information; in other words, a Quote object is made up of many other objects. So it seems like it would make sense to create a "Quote Retrieval Service". In our situation, we've accomplished this by creating a .NET solution and writing the service. The service API looks something like this (in pseudo-code):

        Function GetQuote(String ID) Returns Quote

    So far so good. Now, when this service is consumed, to keep things "decoupled" we essentially create a duplicate of the Quote object and map from the QuoteService version of the Quote onto the consumer's version of the Quote. In many cases, these classes have exactly the same properties. So if the Quote service is consumed by five other applications, we have six definitions of what a "Quote" is: one for each consumer, and one for the service. This feels wrong. I thought code was supposed to be DRY, but it seems like our approach to SOA is forcing us to create tons of duplicated class definitions. What are we doing wrong, or is the code duplication just a "necessary evil" of SOA?
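
    The standard dodge for the sixth, seventh, eighth Quote is to treat the wire-level shape as a published contract that ships as its own artifact, referenced by the service and all five consumers; a consumer-specific class then exists only where that consumer genuinely diverges. A hedged Java sketch (the package name is invented, and records are just a compact stand-in for DTO classes):

        // Shared "contracts" artifact: the one place the wire-level Quote is defined.
        package com.example.quoting.contracts;

        import java.util.List;

        public record Quote(String id,
                            Party shipper,
                            Party consignee,
                            List<Contact> contacts) { }

        record Party(String name, String phone) { }
        record Contact(String name, String email) { }

    Versioning that artifact replaces the ad-hoc drift: a consumer that cannot upgrade simply pins the old contract version, which makes the coupling honest instead of hiding it in six hand-maintained copies.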

  • Avoid overwriting all the methods in the child class

    - by Heckel
    The context: I am making a game in C++ using SFML. I have a class that controls what is displayed on the screen (the manager in the description below). It has a list of all the things to draw: images, text, etc. To be able to store them in one list I created a Drawable class from which all the other drawable classes inherit. Drawable has a virtual method draw that is called by the manager; Image and Text override this method. My problem is that I would like the Image::draw method to also work for Circle, Polygon, etc., since sf::CircleShape and sf::ConvexShape inherit from sf::Shape. I have thought of two ways to do that. My first idea is for Image to hold a pointer to an sf::Shape, which the subclasses would point at their own sf::CircleShape or sf::ConvexShape members (as in the description above). In the Polygon constructor I would write something like

        ptr_shape = &polygon_shape;

    This doesn't look very elegant, because I have two variables that are, in fact, just one. My second idea is to store the sf::CircleShape or sf::ConvexShape through ptr_shape itself, like

        ptr_shape = new sf::ConvexShape(...);

    and, to use a function that exists only on ConvexShape, cast it like so:

        ((sf::ConvexShape*)ptr_shape)->convex_method();

    But that doesn't look very elegant either; I am not even sure I am allowed to do that. My question: I added details about the whole thing because I thought that maybe my whole architecture was wrong. I would like to know how I could design my program to be safe without overriding all the Image methods. I apologize if this question has already been asked; I have no idea what to google.
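
    For the record, the first idea is sound once the two variables are collapsed into one: the base class owns the single pointer typed as the abstraction, and each subclass hands its concrete shape up through the constructor while keeping a precisely typed alias, so drawing needs no cast. The question is C++/SFML; the following Java sketch uses stand-in types just to show the shape of the pattern:

        interface Shape { void render(); }                  // stands in for sf::Shape
        class ConvexShape implements Shape {                // stands in for sf::ConvexShape
            public void render() { /* draw the vertices */ }
            public void setPointCount(int n) { /* convex-only API */ }
        }

        abstract class Drawable { abstract void draw(); }

        class Image extends Drawable {
            protected final Shape shape;                    // the one and only reference
            Image(Shape shape) { this.shape = shape; }
            @Override void draw() { shape.render(); }       // works for every subclass
        }

        class Polygon extends Image {
            private final ConvexShape convex;               // typed alias of the same object
            Polygon(ConvexShape c) { super(c); this.convex = c; }
            void setPointCount(int n) { convex.setPointCount(n); }  // no downcast
        }

    In the C++ original this is the ptr_shape = &polygon_shape idea minus the duplication: the subclass member and the base pointer are set to the same object by construction, so there are two typed views of one variable rather than two variables.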

  • Is there really anything to gain with complex design? [duplicate]

    - by SB2055
    This question already has an answer here: What is enterprise software, exactly? (8 answers)

    I've been working for a consulting firm for some time, with clients of various sizes, and I've seen web applications ranging in complexity from really simple:

      - MVC
      - Service Layer
      - EF
      - DB

    to really complex:

      - MVC
      - UoW
      - DI / IoC
      - Repository
      - Service
      - UI Tests
      - Unit Tests
      - Integration Tests

    But at both ends of the spectrum, the quality requirements are about the same. On simple projects, new devs/consultants can hop on, make changes, and contribute immediately, without having to wade through six layers of abstraction to understand what's going on, or risking misunderstanding some complex abstraction and incurring costs down the line. In all cases, there was never a need to actually make code swappable or reusable, and the tests were never actually maintained past the first iteration, because requirements changed, it was too time-consuming, deadlines, business pressure, etc. So if, in the end:

      - testing and interfaces aren't used,
      - rapid development (read: cost savings) is a priority, and
      - the project's requirements will be changing a lot during development,

    would it be wrong to recommend a super-simple architecture, even for a complex problem, to an enterprise client? Is it complexity that defines enterprise solutions, or is it reliability, the number of concurrent users, ease of maintenance, or all of the above? I know this is a very vague question, and any answer won't apply to all cases, but I'm interested in hearing from devs/consultants who have been in the business for a while and have worked with these varying degrees of complexity, to hear whether the cool-but-expensive abstractions are worth the overall cost, at least while the project is in development.

  • Use a SQL Database for a Desktop Game

    - by sharethis
    Developing a game engine: I am planning a computer game and its engine. There will be a three-dimensional world with a first-person view, and it will be single-player for now. The programming language is C++ and it uses OpenGL.

    Data-centered design decision: my design decision is to use a data-centered architecture with a global event manager and a global data manager. There are many components: physics, input, sound, renderer, AI, and so on. Each component can trigger and listen to events. Moreover, each component can read, edit, create and remove data. The question is about the data manager.

    Whether to use a relational database: should I use a SQL database, e.g. SQLite or MySQL, to store the game data? This would contain virtually all game content: items, characters, inventories, etc., except meshes and textures, which are even more performance-critical, so I will keep them in memory. Is a SQL database fast enough for realtime reading and writing of game information, like the position of a moving character? I also need to consider cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have?

    The advantages of a relational database like MySQL would be the data-oriented structure, which allows fast computation, and I would not need objects to represent entities. I could easily query the data for objects near the player that are needed for rendering, and I wouldn't have to take care of the data for objects far away. Moreover, there would be no need for savegames, since the whole game state is stored in the database. Last but not least, expanding the game into an online game would be relatively easy, because there would already be a single place where the whole game state is stored.
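
    If the relational route is taken, the usual desktop choice is an embedded engine rather than a client/server one: SQLite runs in-process, needs no installation, and is as cross-platform as the game itself. A minimal Java/JDBC sketch of the "query what is near the player" idea (this assumes the xerial sqlite-jdbc driver on the classpath; the schema is invented):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class SaveDb {
            public static void main(String[] args) throws Exception {
                try (Connection db = DriverManager.getConnection("jdbc:sqlite:game.db")) {
                    db.createStatement().execute(
                        "CREATE TABLE IF NOT EXISTS character (" +
                        "  id INTEGER PRIMARY KEY, x REAL, y REAL, z REAL)");
                    // The database doubles as the save game; query only nearby objects.
                    PreparedStatement near = db.prepareStatement(
                        "SELECT id, x, y, z FROM character " +
                        "WHERE abs(x - ?) < ? AND abs(y - ?) < ?");
                    near.setDouble(1, 10.0); near.setDouble(2, 50.0);
                    near.setDouble(3, 20.0); near.setDouble(4, 50.0);
                    try (ResultSet rs = near.executeQuery()) {
                        while (rs.next()) { /* hydrate or update entities */ }
                    }
                }
            }
        }

    The honest caveat on the realtime question: per-frame data such as a moving character's position is usually kept in memory and flushed to the database at checkpoints, because even an embedded engine pays statement and transaction overhead that a frame budget rarely tolerates.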

  • How to avoid the GameManager god object?

    - by lorancou
    I just read an answer to a question about structuring game code, and it made me wonder about the ubiquitous GameManager class and how it often becomes an issue in a production environment. Let me describe this. First, there's prototyping. Nobody cares about writing great code; we just try to get something running to see if the gameplay adds up. Then there's a green light, and in an effort to clean things up, somebody writes a GameManager. Probably to hold a bunch of GameStates, maybe to store a few GameObjects. Nothing big, really; a cute little manager. In the peaceful realm of pre-production, the game is shaping up nicely. Coders have proper nights of sleep and plenty of ideas for architecting the thing with Great Design Patterns. Then production starts, and soon, of course, there is crunch time. The balanced diet is long gone, the bug tracker is creaking under its issues, people are stressed, and the game has to be released yesterday. At that point, usually, the GameManager is a real big mess (to stay polite). The reason for that is simple: after all, when writing a game, well... all the source code is actually there to manage the game. It's easy to just add this little extra feature or bugfix to the GameManager, where everything else is already stored anyway. When time becomes an issue, there's no way to write a separate class or to split this giant manager into sub-managers. Of course this is a classic anti-pattern: the god object. It's a bad thing; a pain to merge, a pain to maintain, a pain to understand, a pain to transform. What would you suggest to prevent this from happening?

  • MVVM and service pattern

    - by alfa-alfa
    I'm building a WPF application using the MVVM pattern. Right now my viewmodels call the service layer to retrieve models (how is not relevant to the viewmodel) and convert them to viewmodels. I'm using constructor injection to pass the required services to the viewmodel. This is easily testable and works well for viewmodels with few dependencies, but as soon as I try to create viewmodels for complex models, I end up with a constructor that has a LOT of services injected into it (one to retrieve each dependency, plus things like the list of all available values to bind to an ItemsSource, for example). I'm wondering how to handle multiple services like that and still have a viewmodel that I can unit-test easily. I'm thinking of a few solutions:

      - Creating a services singleton (IServices) containing all the available services as interfaces, e.g. Services.Current.XXXService.Retrieve(), Services.Current.YYYService.Retrieve(). That way I don't have a huge constructor with a ton of service parameters.
      - Creating a facade for the services used by each viewmodel and passing this object into the viewmodel's constructor. But then I'd have to create a facade for each of my complex viewmodels, and that might be a bit much...

    What do you think is the "right" way to implement this kind of architecture?
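
    Of the two, the facade usually ages better because the dependency list stays explicit and mockable; the trick against facade-per-viewmodel explosion is to cut facades along use cases rather than viewmodels, so several related viewmodels can share one. The question is C#/WPF; this hedged Java sketch (all names invented) shows the intent:

        // One role-based facade per use case; a complex view model takes just this.
        interface QuoteEditingServices {
            Quote load(String id);
            java.util.List<Currency> availableCurrencies(); // ItemsSource-style lookup
            void save(Quote quote);
        }

        class QuoteEditorViewModel {
            private final QuoteEditingServices services;    // single, mockable seam
            QuoteEditorViewModel(QuoteEditingServices services) { this.services = services; }
            // commands and bindings call services.load/save; a unit test passes a stub
        }

        record Quote(String id) { }
        record Currency(String code) { }

    The Services.Current singleton also works, but it hides exactly the dependency list that the fat constructor was at least honestly reporting, which tends to hurt the unit tests first.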

  • Implementing Explosions

    - by Xkynar
    I want to add explosions to my 2D game, but I'm having a hard time with the architecture. Several game elements might be responsible for explosions, like, let's say, explosive barrels and bullets (and there might be chain reactions between nearby barrels). The only options I can come up with are:

      1 - Having an array of explosions and treating them as a game element as important as any other. Pros: having a single array which is updated and drawn along with all the other game element arrays keeps things organized and simple to update, and explosive barrels would at first glance be easy to create, simply by passing the explosion array as a pointer to each explosive barrel constructor. Cons: it might be hard for bullets to add an explosion to the array, since bullets are shot by a Weapon class which lives in every mob. So, let's say, if I create a new enemy and add it to the enemy array, that enemy has a weapon and functions to use it, and if I want the weapon (a rocket launcher in this case) to have access to the explosion array so it can add a new explosion, I'd have to pass the explosion array as a pointer to the enemy, which would then pass it to the weapon, which would pass it to the bullets (an ugly chain). Another problem I can think of is a little more subtle: if I'm checking collisions between explosions and barrels (to create a chain reaction) and I detect an explosion colliding with a barrel, adding a new explosion while I'm iterating over the explosions will make Java throw an exception. So this is kind of annoying: I can't iterate through the explosions and add a new explosion at the same time; I must do it another way...

      2 - The other option, which isn't really well thought out yet, is to just add an explosive component to every element that might explode, so that when it dies, it explodes or something. But I don't have a good way of implementing this theory either.

    Honestly, I don't like either solution much, so I'd like to know how actual game developers usually do this. Sorry if my problem seems trivial and dumb.
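
    On the concrete Java pain point (adding to the explosion list while iterating it), the usual fix is to defer spawns into a scratch list during the update pass and merge afterwards; the same queue also spares the weapon/bullet chain from carrying a pointer to the master array. A minimal sketch with simplified types:

        import java.util.ArrayList;
        import java.util.List;

        public class ExplosionSystem {
            record Explosion(float x, float y) { }

            final List<Explosion> explosions = new ArrayList<>();
            final List<Explosion> spawned = new ArrayList<>();  // scratch buffer

            void update(List<Barrel> barrels) {
                for (Explosion e : explosions) {                // safe: list not modified here
                    for (Barrel b : barrels) {
                        if (!b.detonated && b.overlaps(e.x(), e.y())) {
                            b.detonated = true;
                            spawned.add(new Explosion(b.x, b.y)); // deferred, no exception
                        }
                    }
                }
                explosions.addAll(spawned);                     // merge after iteration
                spawned.clear();
            }

            static class Barrel {
                float x, y; boolean detonated;
                boolean overlaps(float ex, float ey) { return Math.hypot(x - ex, y - ey) < 2; }
            }
        }

    Anything else that explodes (a rocket, a dying enemy) appends to the same spawn queue owned by this system, which answers the ownership question without threading the array through every constructor chain.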

  • How do OSes work on multiple CPUs? [on hold]

    - by user3691093
    Assumption: "OS es (atleast in some part) should be written in assembly.Assembly programs are CPU specefic." If so how can one os run on different CPUs ? For example: how is that I can load Ubuntu on different systems having different CPUs (like intel i3,i5,i7, amd a8,a6,etc) from the same bootable disk? Does the disk contain seporate assembly programs for each CPU? Are these CPUs 'similar' enough to run the same assembly program? Is my assumption wrong? Something else.... Thanks for responding. I tried to find out in what way are the CPUs that I mentioned 'similar'. I came across the concepts of Instruction Set Architecture and Microarchitecture of CPUs.A CPU will understand a program if it is combatible with its ISA. Even if CPUs are 'wired up' differently (different microarchitecture) , as long as the ISA implemented on top is same ,the program will work. ARM and x86 have different ISA ( that why there are 2 windows 8 versions, right?). And if an app program is written in an HLL with compilers for both platforms we will saved from wasting time writing 2 programs. Did I understand anything wrong? Are there programs that can take a compiled program as input and produce a program executable on another CPU as output? Is it possible? (Virtualisation?) 32 bit windows programs do install on 64 bit windows ,dont they? Arent 64 bit CPUs 'differerent' from 32 bit CPUs? They do get seporate OS versions, right? Is this backward combatibility achieved using programes mentioned in (3) ?
