Search Results

Search found 69140 results on 2766 pages for 'design time'.

  • Advice on designing a web application with a 40+ year lifetime

    - by user2708395
    Scenario: Currently, I am a part of a health care project whose main requirement is to capture data with unknown attributes using user-generated forms built by health care providers. The second requirement is that data integrity is key and that the application will be used for 40+ years. We are currently migrating the client's data from the past 40 years from various sources (paper, Excel, Access, etc.) to the database. Future requirements are:

      - Workflow management of forms
      - Schedule management of forms
      - Security/role-based management
      - Reporting engine
      - Mobile/tablet support

    Situation: Only 6 months in, the current (contracted) architect/senior programmer has taken the "fast" approach and has designed a poor system. The database is not normalized, the code is coupled, the tiers have no dedicated purpose, and data is starting to go missing since he has designed some beans to perform "deletes" on the database. The code base is extremely bloated, and there are jobs just to synchronize data because the database is not normalized. His approach has been to rely on backup jobs to restore missing data, and he doesn't seem to believe in refactoring. Having presented my findings to the PM, the architect will be removed when his contract ends. I have been given the task of re-architecting this application. My team consists of me and one junior programmer; we have no other resources. We have been granted a 6-month requirement freeze in which we can focus on rebuilding this system. I suggested using a CMS like Drupal, but for policy reasons at the client's organization, the system must be built from scratch. This is the first time that I will be designing a system with a 40+ year lifespan. I have only worked on projects with 3-5 year lifespans, so this situation is very new, yet exciting. Questions:

      - What design considerations will make the system more "future proof"?
      - What experiences have you had in designing such systems - both failures and successes?
      - What questions should be asked of the client/PM to make the system more "future proof"?
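
    For what it's worth, the direction I am leaning for the "unknown attributes" requirement is to keep form definitions as data, separate from the captured values, so providers can create new forms without schema changes. A rough, untested C# sketch of that separation (all names here are illustrative, not from the project):

        using System;
        using System.Collections.Generic;

        // Form *definitions* are data, so providers can add forms and
        // fields without any database schema change.
        public class FormDefinition
        {
            public Guid Id { get; set; }
            public string Name { get; set; }
            public List<FieldDefinition> Fields { get; } = new List<FieldDefinition>();
        }

        public class FieldDefinition
        {
            public Guid Id { get; set; }
            public string Label { get; set; }
            public string DataType { get; set; }   // e.g. "text", "date", "decimal"
            public bool Required { get; set; }
        }

        // Captured values reference the field they answered, so old
        // submissions stay intact when a form is later edited.
        public class FormSubmission
        {
            public Guid Id { get; set; }
            public Guid FormDefinitionId { get; set; }
            public DateTime CapturedAtUtc { get; set; }
            public List<FieldValue> Values { get; } = new List<FieldValue>();
        }

        public class FieldValue
        {
            public Guid FieldDefinitionId { get; set; }
            public string RawValue { get; set; }
        }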

  • How to Effectively Create Bullet Patterns

    - by SoulBeaver
    I'm currently creating a top-down shooter like Touhou. The most important factor of the game is that there are many diverse patterns and ways in which bullets are generated and shot at the player; see this video: http://www.youtube.com/watch?v=4Nb5Ohbt1Sg#start=0:60;end=9:53; At the moment, I'm using a class "Pattern" which has a series of steps on moving and shooting. However, I feel this method is quite laborious, as I have to create a new Pattern for each attack and perhaps new Bullet classes that will implement a certain behavior. This question received a comment suggesting I should look into BulletML for easy creation and storage of bullets with a specific pattern. It looks decent, but it led me to wonder: what other solutions do you have that I should take into consideration?

    Update: My current design is as follows. An example of an implemented pattern: my GigasPattern first executes a teleport, which moves Alice to a certain point (X, Y) on the screen. After this is completed, the pattern starts using the Mover to move the sprite around (whereas teleporting has separate effects and animation). These are of no concern, really, as they are quite simple. The Shooter also creates various Attacks, which are again classes that the Shooter can use to create various patterns of bullets, much like the one in the question I posted. Once the Mover has reached its destination, both it and the Shooter stop and return to an inactive state. The pattern completes, is removed by the AI, and a new one gets chosen.
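
    In the BulletML spirit, one alternative I am considering is to make each pattern a list of timed steps interpreted by a single generic runner, so new attacks become data rather than new classes. A rough, untested C# sketch (IEmitter and the step types are hypothetical stand-ins for the game's own types):

        using System;
        using System.Collections.Generic;

        public interface IEmitter { void Spawn(float vx, float vy); }

        public abstract class PatternStep
        {
            public float StartTime;                      // seconds from pattern start
            public abstract void Execute(IEmitter emitter);
        }

        public class FireRingStep : PatternStep
        {
            public int BulletCount;
            public float Speed;

            public override void Execute(IEmitter emitter)
            {
                // Spawn BulletCount bullets evenly spaced around a circle.
                for (int i = 0; i < BulletCount; i++)
                {
                    double a = 2 * Math.PI * i / BulletCount;
                    emitter.Spawn((float)(Math.Cos(a) * Speed), (float)(Math.Sin(a) * Speed));
                }
            }
        }

        public class TimedPattern
        {
            private readonly List<PatternStep> steps;    // assumed sorted by StartTime
            private int next;
            private float elapsed;

            public TimedPattern(List<PatternStep> steps) { this.steps = steps; }

            // Called once per frame; fires every step whose time has come.
            public void Update(float dt, IEmitter emitter)
            {
                elapsed += dt;
                while (next < steps.Count && steps[next].StartTime <= elapsed)
                    steps[next++].Execute(emitter);
            }
        }

    With this shape, a whole attack is just a list of FireRingStep-like entries, which could eventually be loaded from files the way BulletML does.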

  • Keeping a domain model consistent with actual data

    - by fstuijt
    Recently domain-driven design got my attention, and while thinking about how this approach could help us, I came across the following problem. In DDD the common approach is to retrieve entities (or better, aggregate roots) from a repository which acts as an in-memory collection of these entities. After these entities have been retrieved, they can be updated or deleted by the user; however, after retrieval they are essentially disconnected from the data source, and one must actively inform the repository to update the data source and make it consistent again with our in-memory representation. What is the DDD approach to retrieving entities that should remain connected to the data source? For example, in our situation we retrieve a series of sensors that have a specific measurement at the time of retrieval. Over time, these measurement values may change, and our business logic in the domain model should respond to these changes properly. E.g., domain events may be raised if a sensor value exceeds a predefined threshold. However, using the repository approach, these sensor values are just snapshots and are disconnected from the data source. Does any of you have an idea on how to solve this following the DDD approach?
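
    One shape I can imagine (a sketch of one option, not necessarily the canonical DDD answer): keep the aggregate disconnected, but push each new reading *through* it and let it raise domain events, while an application service subscribed to the measurement feed does the load-apply-save cycle. A rough C# sketch with invented names:

        using System;
        using System.Collections.Generic;

        public class SensorThresholdExceeded          // domain event
        {
            public Guid SensorId { get; }
            public double Value { get; }
            public SensorThresholdExceeded(Guid sensorId, double value)
            {
                SensorId = sensorId;
                Value = value;
            }
        }

        public class Sensor                           // aggregate root
        {
            public Guid Id { get; private set; }
            public double Threshold { get; private set; }
            public double LastValue { get; private set; }
            public List<object> PendingEvents { get; } = new List<object>();

            // New measurements are applied *to* the model, so the business
            // rule lives here even though the object itself is disconnected.
            public void RecordMeasurement(double value)
            {
                LastValue = value;
                if (value > Threshold)
                    PendingEvents.Add(new SensorThresholdExceeded(Id, value));
            }
        }

    The application service would subscribe to (or poll) the data source, load the Sensor from the repository, call RecordMeasurement, save it, and dispatch PendingEvents - so the repository stays a plain collection and nothing has to remain "connected".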

  • How can we best petition to bring Adobe creative software to Ubuntu?

    - by Sixthlaw
    Now I know it's not as simple as asking Adobe to support their design software on Ubuntu, but is there a way for the community and Canonical to OFFICIALLY make known to Adobe the rapidly growing number of Linux users and their desire for this great set of tools? I know that many of the answers I receive might be of the fashion "It's not going to happen", "Use the free tools provided" or "Who knows, it might happen in the near future", but this is NOT what I am looking for. I have noticed, on sites like www.OmgUbuntu.com, there are links to pages where people can "like" the idea. Is there a way to try to get the whole community on board with this one, even Canonical, and, as stated above, put forward this proposal to Adobe? The current requests for an Adobe CS for Linux are in dribs and drabs, scattered all over the internet. Now is the best time to come up with productive solutions for how we can best gather statistics on the number of people willing to buy the Adobe CS. These are the words of an Adobe employee: "I have forwarded this feedback on to the appropriate team who will consider it for future releases of Adobe software." The more people we have unified in the ONE community proposal, the greater the chance we have of getting the software. How can we make this happen?

  • Is wrapping third-party code the only solution to unit testing its consumers? [closed]

    - by Songo
    I'm doing unit testing, and in one of my classes I need to send a mail from one of the methods, so using constructor injection I inject an instance of the Zend_Mail class, which is in the Zend framework. Now some people argue that if a library is stable enough and won't change often, then there is no need to wrap it. So assuming that Zend_Mail is stable and won't change, and that it fits my needs entirely, I won't need a wrapper for it. Now take a look at my class Logger, which depends on Zend_Mail:

        class Logger {
            private $mail;

            function __construct(Zend_Mail $mail) {
                $this->mail = $mail;
            }

            function toBeTestedFunction() {
                // Some code
                $this->mail->setTo('some value');
                $this->mail->setSubject('some value');
                $this->mail->setBody('some value');
                $this->mail->send();
                // Some code
            }
        }

    However, unit testing demands that I test one component at a time, so I need to mock the Zend_Mail class. In addition, I'm violating the Dependency Inversion principle, as my Logger class now depends on a concretion, not an abstraction. Now, is wrapping Zend_Mail the only solution, or is there a better approach to this problem? The code is in PHP, but answers don't have to be; this is more of a design issue than a language-specific one.
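
    For reference, the wrapper version I have in mind would look roughly like this - a one-method abstraction owned by the application, plus a thin adapter that is the only code touching the mail library. Sketched in C# since the idea is language-agnostic (IMailer, SmtpMailer, and the addresses are all illustrative):

        using System.Net.Mail;

        // The consumer depends on an abstraction the application owns...
        public interface IMailer
        {
            void Send(string to, string subject, string body);
        }

        // ...and this thin adapter is the only code that knows the library.
        public class SmtpMailer : IMailer
        {
            private readonly SmtpClient client = new SmtpClient("localhost");

            public void Send(string to, string subject, string body)
            {
                client.Send(new MailMessage("noreply@example.com", to, subject, body));
            }
        }

        public class Logger
        {
            private readonly IMailer mailer;

            public Logger(IMailer mailer) { this.mailer = mailer; }

            public void ToBeTestedFunction()
            {
                mailer.Send("some value", "some value", "some value");
            }
        }

    In the unit test, a fake IMailer just records the call; the adapter itself gets one narrow integration test, so a library change touches exactly one class.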

  • Is there a common programming term for the problems of adding features to an already-featureful program?

    - by Jeremy Friesner
    I'm looking for a commonly used programming term to describe a software-engineering phenomenon, which (for lack of a better way to describe it) I'll illustrate first with a couple of examples-by-analogy:

    Scenario 1: We want to build/extend a subway system on the outskirts of a small town in Wyoming. There are the usual subway-problems to solve, of course (hiring the right construction company, choosing the best route, buying the subway cars), but other than that it's pretty straightforward to implement the system because there aren't a huge number of constraints to satisfy.

    Scenario 2: Same as above, except now we need to build/extend the subway system in downtown Los Angeles. Here we face all of the problems we did in case (1), but also additional problems -- most of the applicable space is already in use, and has a vocal constituency which will protest loudly if we inconvenience them by repurposing, redesigning, or otherwise modifying the infrastructure that they rely on. Because of this, extensions to the system happen either very slowly and expensively, or they don't happen at all.

    I sometimes see a similar pattern with software development -- adding a new feature to a small/simple program is straightforward, but as the program grows, adding further new features becomes more and more difficult, if only because it is difficult to integrate the new feature without adversely affecting any of the large number of existing use-cases or user-constituencies. (even with a robust, adaptable program design, you run into the problem of the user interface becoming so elaborate that the program becomes difficult to learn or use)

    Is there a term for this phenomenon?

  • Advice on reconciling discordant data

    - by Justin
    Let me support my question with a quick scenario. We're writing an app for family meal planning. We'll produce daily plans with a target calorie goal and meals to achieve it for our nuclear family. Our calorie goal will be calculated for each person from their attributes (gender, age, weight, activity level). The weight attribute is the simplest example here.

    When Dad (the fascist nerd who is inflicting this on his family) first uses the application, he throws approximate values into it for Daughter. He thinks she is 5'2" (157 cm) and 125 lbs (56 kg). The next day Mom sits down to generate the menu, looks back over what the bumbling Dad did, quietly fumes that he can never recall anything about the family, and says the value is really 118 lbs! This is the first introduction of the discord. It seems, in this scenario, Mom is probably more correct than Dad, though both are only an approximation of the actual value. The next day the dear Daughter decides to use the program and sees her weight listed. With the vanity only a teenager could muster, she changes the weight to 110 lbs. Later that day the Mom returns home from a doctor's visit the Daughter needed and decides that it would be a good idea to update her Daughter's weight in the program. Hooray, another value, this time 117 lbs.

    Now how do you reconcile these data points? Measurement error, confidence in parties, bias, and more all confound the data. In some idealized world we'd have a weight authority of some nature providing the one and only truth. How about in our world, though? And the icing on the cake is that this single data point changes over time. How have you guys solved or managed this conflict?
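
    One policy I have been toying with (a sketch of one option, not the answer): store every report as a timestamped observation with its source and a trust level, and derive the "current" value by an explicit rule, so the conflict is kept rather than overwritten. A rough C# sketch (names and trust weights invented):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public record Observation(
            double ValueLbs,
            DateTime ReportedAtUtc,
            string Source,       // "Dad", "Mom", "Daughter", "Doctor"
            int SourceTrust);    // e.g. doctor 3, parent 2, self-report 1

        public static class WeightResolver
        {
            // Policy: most trusted source wins; ties broken by recency.
            // Keeping all observations means the policy can change later.
            public static Observation Current(IEnumerable<Observation> history) =>
                history.OrderByDescending(o => o.SourceTrust)
                       .ThenByDescending(o => o.ReportedAtUtc)
                       .First();
        }

    Because the raw reports survive, a later decision (say, decaying trust with age, or letting a doctor's value expire after six months) becomes a one-line policy change instead of a data migration.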

  • A few questions about how JavaScript works

    - by KayoticSully
    I originally posted on Stack Overflow and was told I might get some better answers here. I have been looking deeply into JavaScript lately to fully understand the language, and I have a few nagging questions that I cannot seem to find answers to. (Specifically, they deal with object-oriented programming. I know JavaScript is meant to be used in an OOP manner; I just want to understand it for the sake of completeness.) Assume the following code:

        function TestObject() {
            this.fA = function() {
                // do stuff
            };

            this.fB = testB;

            function testB() {
                // do stuff
            }
        }

        TestObject.prototype = {
            fC: function() {
                // do stuff
            }
        };

    What is the difference between the functions fA and fB? Do they behave exactly the same in scope and potential ability? Is it just convention, or is one way technically better or more proper? If there is only ever going to be one instance of an object at any given time, would adding a function to the prototype, such as fC, even be worthwhile? Is there any benefit to doing so? Is the prototype only really useful when dealing with many instances of an object, or with inheritance? And what is technically the "proper" way to add methods to the prototype - the way I have above, or by calling TestObject.prototype.functionName = function() {} every time? I am looking to keep my JavaScript code as clean and readable as possible, but I am also very interested in what the proper conventions for objects are in the language. I come from a Java and PHP background and am trying not to make any assumptions about how JavaScript works, since I know it is very different, being prototype-based. Also, are there any definitive JavaScript style guides or documentation about how JavaScript operates at a low level? Thanks!

  • Depending on a fixed version of a library and ignoring its updates

    - by Moataz Elmasry
    I was talking to a technical boss yesterday. It's about a project in C++ that depends on OpenCV, and he wanted to include a specific OpenCV version in the svn and keep using that version, ignoring any updates, which I disagreed with. We had a heated discussion about that.

    His arguments:
      - Everything has to be delivered in one package, and we can't ask the client to install external libraries.
      - We depend on a fixed version so that new updates of OpenCV won't screw up our code.
      - We can't guarantee that within a version update, e.g. from 3.2.buildx to 3.2.buildy, the function signatures won't change.

    My arguments:
      - True, everything has to be delivered to the client as one package, but that's what build scripts are for: they download the external libraries and create a bundle.
      - Within updates of the same version, from 3.2.buildx to 3.2.buildy, it's impossible that a signature changes, unless it is a really crappy framework, which isn't the case with OpenCV.
      - We deprive ourselves of new updates and features of that library. If there's a bug in the version we took, then even if there's a bug fix later, we won't be able to get that fix.
      - It's simply inefficient and poor design to depend on a certain version/build of an external library, as it makes our project difficult to adapt to new changes in the future.

    So I'd like to know what you guys think. Does it really make sense to include a specific version of an external library in our svn and keep using it, ignoring all updates?

  • Material System

    - by Towelie
    I'm designing a material/shader system (target API DX10+, and maybe OpenGL 3+; for now, only DX10). I know there were a lot of topics about this, but I can't find what I need. I don't want to do any kind of compilation/parsing of scripts in real time. So there is some artist-created material, written in some analog of Cg. It is compiled to HLSL code, and after that to the final shader. Also there are some hard-coded constant buffers, like:

        cbuffer EveryFrameChanging
        {
            float4x4 matView;
            float time;
            float delta;
        }

    Shaders use shared constant buffers to get their parameters. For each mesh in the scene, I gather what it needs and what it can provide (normals, binormals, etc.) and find the corresponding shader permutation, or calculate the missing parts. Also, during the build I calculate render states and a permutation hash for each shader, which will later be used for sorting - or I even give it an ID from 0 to ShaderCount, without gaps, for sorting. The final shader has only one technique and one pass. After that, each mesh has some shader set on it and is ready to render. Some pseudo code:

        SetConstantBuffer(ConstantBuffer::PerFrame);
        foreach (shader in FinalShaders)
        {
            SetConstantBuffer(ConstantBuffer::PerShader, shader);
            SetRenderState(shader);
            foreach (mesh in shader.GetAllMeshes)
            {
                SetConstantBuffer(ConstantBuffer::PerMesh, mesh);
                SetBuffers(mesh);
                Draw();
            }
        }

        class FinalShader
        {
        public:
            UUID m_ID;
            RenderState m_RenderState;
            CBufferBindings m_BufferBindings;
        };

    But I have no idea how to create this Cg-like language, and do I really need it?

  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display the results in a paged manner (twenty per page, with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links to navigate through the results. Re-executing the search on each page request to get the appropriate results for that page of data can be too expensive (up to 15s per search). Also, since the underlying data can change frequently (e.g. with the addition of new results), re-executing could cause the next/previous functionality to behave inconsistently (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements?

    EDIT: I thought caching the query's search results was an obvious necessity. The question is really asking where to cache the result set and what tradeoffs each choice might have - for example, storing the IDs of the entities in the result set on the client, or storing them in the user's session on the web server, or in a temporary table in the database. I'm not looking specifically for a single solution, as different scenarios may result in different approaches (and such a question would be more suited for stackoverflow.com rather than here), but more for a design comparison between the possible approaches.
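
    To make the "cache the ordered IDs" option concrete, here is a rough, untested C# sketch of what I imagine: run the expensive search once, keep only the ordered entity IDs under a token (in session, cache, or a temp table), and serve paging and next/previous from that frozen snapshot:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class SearchSnapshot
        {
            public Guid Token { get; } = Guid.NewGuid();   // lookup key in session/cache
            private readonly List<long> ids;               // ordered result IDs only

            public SearchSnapshot(List<long> orderedIds) { ids = orderedIds; }

            // Page N is a slice of the frozen ordering - no re-execution.
            public IEnumerable<long> Page(int page, int pageSize) =>
                ids.Skip(page * pageSize).Take(pageSize);

            // Next/previous stay consistent even if new results appear later.
            public long? Next(long current)
            {
                int i = ids.IndexOf(current);
                return i >= 0 && i + 1 < ids.Count ? ids[i + 1] : (long?)null;
            }
        }

    The tradeoff then shows up in where the list lives: a few hundred thousand longs is only a few MB, but per concurrent user that pressures web-server memory (session), while a temp table shifts the cost to database storage plus one extra round trip per page.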

  • Should I build a multi-threaded system that handles events from a game and sorts them, independently, into different threads based on priority?

    - by JonathonG
    Can I build a multi-threaded system that handles events from a game and sorts them, independently, into different threads based on priority, and is it a good idea? Here's more info: I am about to begin work on porting a mid-sized game from Flash/AS3 to Java so that I can continue development with multi-threading capabilities. Here's a small bit of background about the game: the game contains numerous asynchronous activities, such as "world updating" (the game environment is constantly changing based on a set of natural laws and forces), procedural generation of terrain, NPCs, quests, items, etc., and on top of that, the effects of all of the player's interactions with his environment are programmatically calculated in real time, based on a set of constantly changing "stats" and, once again, natural laws and forces. All of these things going on at once, in an asynchronous manner, seem to lend themselves to multi-threading very well. My question is: can I build some kind of central engine that handles the "stacking" of all of these events as they are triggered, dynamically sorts them out amongst the available threads, and would it be a good idea? As an example: every time something happens (i.e., a magic missile being generated by a spell, or a bunch of plants needing to grow to their next stage), instead of just processing that task right then and adding the new object(s) to a list of managed objects, send a reference to that event to a core "event handler" that throws it into a stack of all other currently queued events, which then sorts them out and orders them according to urgency, and splits them between a number of available threads for as-fast-as-possible multithreaded execution.
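
    The "core event handler" half seems buildable with a small prioritized dispatcher. A rough, untested sketch in C# (the java.util.concurrent equivalents - PriorityBlockingQueue, ExecutorService - map directly; all names are invented):

        using System;
        using System.Collections.Generic;
        using System.Threading;

        public enum Priority { High = 0, Normal = 1, Low = 2 }

        public sealed class EventDispatcher
        {
            private readonly Queue<Action>[] queues =
                { new Queue<Action>(), new Queue<Action>(), new Queue<Action>() };
            private readonly object gate = new object();

            public EventDispatcher(int workerCount)
            {
                for (int i = 0; i < workerCount; i++)
                    new Thread(WorkerLoop) { IsBackground = true }.Start();
            }

            // Producers (spell casts, plant growth ticks, ...) enqueue work.
            public void Post(Priority p, Action work)
            {
                lock (gate) { queues[(int)p].Enqueue(work); Monitor.Pulse(gate); }
            }

            private void WorkerLoop()
            {
                while (true)
                {
                    Action work = null;
                    lock (gate)
                    {
                        while (work == null)
                        {
                            foreach (var q in queues)        // highest priority first
                                if (q.Count > 0) { work = q.Dequeue(); break; }
                            if (work == null) Monitor.Wait(gate);
                        }
                    }
                    work();                                  // run outside the lock
                }
            }
        }

        // Usage sketch (SpawnMagicMissile is hypothetical):
        // var dispatcher = new EventDispatcher(workerCount: 4);
        // dispatcher.Post(Priority.High, () => SpawnMagicMissile());

    The dispatching itself is the easy half, though: every piece of game state a posted task touches must be synchronized or confined to one thread, and that constraint, not the queue, is probably what decides whether the design is a good idea.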

  • Static / Shared Helper Functions vs Built-In Methods

    - by Nathan
    This is a simple question, but a design consideration that I often run across in my day-to-day development work. Let's say that you have a class that represents some kind of collection:

        Public Class ModifiedCustomerOrders
            Public Property Orders As List(Of ModifiedOrders)
        End Class

    Within this class you do all kinds of important work, such as combining many different information sources, and eventually you build the modified customer orders. Now, you have different processes that consume this class, each of which needs a slightly different slice of the ModifiedCustomerOrders items. To enable this, you want to add filtering functionality. How do you go about this? Do you:

      1. Add filtering calls to the ModifiedCustomerOrders class so that you can say: MyOrdersClass.RemoveCanceledOrders()
      2. Create a Static/Shared "tooling" class that allows you to call: OrdersFilters.RemoveCanceledOrders(MyOrders)
      3. Create an extension method to accomplish the same feat as #2 but with less typing: MyOrders.RemoveCanceledOrders()
      4. Create a "service" method that handles the getting of orders as appropriate to the calling function, while using one of the previous approaches "under the hood": OrdersService.GetOrdersForProcessA()
      5. Others?

    I tend to prefer the tooling/extension method approaches, as they make testing a little bit simpler. Although I dependency-inject all my sourcing data into the ModifiedCustomerOrders, having it as part of the class makes it a little bit more complicated to test. Typically, I choose to use extension methods where I am doing parameterless transformations/filters. As they get more complex, I will move them into a static class instead. Thoughts on this approach? How would you approach it?
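
    A minimal C# illustration of option 3, to show why it tests and composes so cheaply (ModifiedOrder and its fields are invented stand-ins for the real type):

        using System.Collections.Generic;
        using System.Linq;

        public class ModifiedOrder            // stand-in for the real type
        {
            public bool IsCanceled { get; set; }
            public decimal Total { get; set; }
        }

        public static class OrderFilters
        {
            // A pure function over a sequence: trivial to unit test in isolation.
            public static IEnumerable<ModifiedOrder> RemoveCanceledOrders(
                this IEnumerable<ModifiedOrder> orders) =>
                orders.Where(o => !o.IsCanceled);

            // Filters compose, which keeps per-process "slices" declarative.
            public static IEnumerable<ModifiedOrder> ForProcessA(
                this IEnumerable<ModifiedOrder> orders) =>
                orders.RemoveCanceledOrders().Where(o => o.Total > 0m);
        }

        // Usage: var slice = myOrders.Orders.ForProcessA().ToList();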

  • Search multiple tables

    - by gilden
    I have developed a web application that is used mainly for archiving all sorts of textual material (documents, references to articles, books, magazines, etc.). There can be any given number of archive tables in my system, each with its own schema. The schema can be changed by a moderator through the application (imagine something similar to a really dumbed-down version of phpMyAdmin). Users can search for anything from all of the tables. By using FULLTEXT indexes together with substring searching (for fields which do not support FULLTEXT indexing), the script inserts the results of a search into a single table, and by ordering these results by the similarity measure I can fairly easily return the paginated results. However, this approach has a few problems:

      - substring searching can only count exact results
      - the 50% rule applies to all tables separately, and thus MySQL may not return important matches or may too naively discard common words
      - it is quite expensive in terms of query numbers and execution time (not an issue right now, as there's not a lot of data in the tables yet)
      - normalized data is not even searched (I have different tables for categories, languages and file attachments)

    My planned solution: create a single table having columns similar to id, table_id, row_id, data. Every time a row is created/modified/deleted in any of the data tables, this central table also gets updated, with the data column containing a concatenation of all the fields in the row. I could then create a single index for Sphinx and use it for doing searches instead. Are there any more efficient solutions or best practices for how to approach this? Thanks.

  • Should I think about switching to another platform as a .Net developer? [closed]

    - by A. Karimi
    I've been a developer for about 10 years, and I've worked almost entirely on the Microsoft stack. Over the last several years I was introduced to some good practices such as IoC and other primary design patterns. Now I feel very comfortable using these patterns and concepts, and I'm rather angry that we didn't adopt them earlier! They have existed and been used by many developers for more than 5 years, so why did I and many of my colleagues begin using them only so much later? As you may know, Java developers are further ahead in these fields (concepts, patterns, etc.) than .NET developers. Am I right? Now the question is: why weren't we (as .NET developers) further ahead? Isn't it because we are using the Microsoft stack? I know about ALT.NET, but why are we trying to make a closed ecosystem open and find alternatives to the Microsoft echo chamber, while there are natively open ecosystems like Java? I've always liked most of Microsoft's work very much, but I'm worried about this issue. I even ask myself whether I should move to another platform.

  • Correct way to inject dependencies in Business logic service?

    - by Sri Harsha Velicheti
    Currently the structure of my application is as below:

        Web App -- WCF Service (just a facade) -- Business Logic Services -- Repository -- Entity Framework DataContext

    Now each of my business logic services depends on more than 5 repositories (I have interfaces defined for all the repos), and I am doing constructor injection right now (poor man's DI instead of a proper IoC container, as it was determined that that would be overkill for our project). Repositories have references to EF datacontexts. Now, some of the methods in a business logic service require only one of the 5 repositories, so if I need to call such a method, I end up instantiating a service which instantiates all 5 repositories, which is a waste. An example:

        public class SomeService : ISomeService
        {
            public SomeService(IFirstRepository repo1, ISecondRepository repo2, IThirdRepository repo3) { }

            // My DoSomething method depends only on repo1 and doesn't use repo2 and repo3
            public void DoSomething()
            {
                // uses repo1 to do some stuff, doesn't use repo2 and repo3
            }

            public void DoSomething2()
            {
                // uses repo2 and repo3 to do something, doesn't require repo1
            }

            public void DoSomething3()
            {
                // uses repo3 to do something, doesn't require repo1 and repo2
            }
        }

    Now if I have to use the DoSomething method on SomeService, I end up creating IFirstRepository, ISecondRepository and IThirdRepository but using only IFirstRepository. This is bugging me; I can't seem to accept that I am unnecessarily creating repositories and not using them. Is this a correct design? Are there any better alternatives? Should I be looking at lazy instantiation with Lazy<T>?
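
    Here is roughly what I imagine the Lazy<T> version would look like with poor man's DI (a rough, untested sketch; the interfaces are reduced to one method, and FirstRepository/context in the trailing comment are placeholders):

        using System;

        public interface IFirstRepository  { void Load(); }
        public interface ISecondRepository { void Load(); }

        public class SomeService
        {
            private readonly Lazy<IFirstRepository> repo1;
            private readonly Lazy<ISecondRepository> repo2;

            // Nothing is constructed here; each repository is built on first use.
            public SomeService(Lazy<IFirstRepository> repo1, Lazy<ISecondRepository> repo2)
            {
                this.repo1 = repo1;
                this.repo2 = repo2;
            }

            public void DoSomething()  => repo1.Value.Load();   // repo2 is never created
            public void DoSomething2() => repo2.Value.Load();   // repo1 is never created
        }

        // Hand-rolled composition root, in the poor man's DI spirit:
        // var service = new SomeService(
        //     new Lazy<IFirstRepository>(() => new FirstRepository(context)),
        //     new Lazy<ISecondRepository>(() => new SecondRepository(context)));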

  • Motivation for a service layer (instead of just copying DLLs)?

    - by BornToCode
    I'm creating an application which has 2 different UIs, so I'm making it with a service layer, which I understood is appropriate for such a case. However, I found myself just creating web methods for every single method I have in the BL layer, so the services are basically built from methods that look like this:

        return customers_bl.Get_Customer_Prices(customer_id);

    I understood that a main point of the service layer is to prevent duplication of code, so I asked myself: well, why not just import the BL.dll (and the DAL.dll) into the other UI, and whenever I make a change, re-copy the dll files? It might not be so 'neat', but is the whole purpose of the service layer to prevent this? (I know something is wrong in my approach; I'm probably missing the importance of the service layer. I'd like more motivation to create another layer, especially because, as it is, I found that many of my BL functions ALREADY look like:

        return customers_dal.Get_Customer_Prices(cust_id)

    which led me to ask: was it really necessary to create the BL just because several functions actually have LOGIC inside the BL?) So I'm looking for more motivation to create ONE MORE layer. I'm sure it's not just to make it more convenient so that I won't have to re-copy the dlls on changes? Am I grasping it wrong? Any simple guidelines on how to design a service layer (corresponding to all the BL layer functions or not? any simple example?), or any enlightenment on the subject?

  • How to implement isValid correctly?

    - by Songo
    I'm trying to provide a mechanism for validating my object like this:

        class SomeObject
        {
            private $_inputString;
            private $_errors = array();

            public function __construct($inputString)
            {
                $this->_inputString = $inputString;
            }

            public function getErrors()
            {
                return $this->_errors;
            }

            public function isValid()
            {
                $isValid = preg_match("/Some regular expression here/", $this->_inputString);
                if ($isValid == 0) {
                    $this->_errors[] = 'Error was found in the input';
                }
                return $isValid == 1;
            }
        }

    Then when I'm testing my code, I'm doing it like this:

        $obj = new SomeObject('an INVALID input string');
        $isValid = $obj->isValid();
        $errors = $obj->getErrors();
        $this->assertFalse($isValid);
        $this->assertNotEmpty($errors);

    Now the test passes correctly, but I noticed a design problem here. What if the user called $obj->getErrors() before calling $obj->isValid()? The test would fail, because the user has to validate the object first before checking the errors resulting from validation. I think this way the user depends on a sequence of actions to work properly, which I think is a bad thing, because it exposes the internal behaviour of the class. How do I solve this problem? Should I tell the user explicitly to validate first? Where do I mention that? Should I change the way I validate? Is there a better solution for this?

    UPDATE: I'm still developing the class, so changes are easy; renaming functions and refactoring them is possible.
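
    One way out of the ordering trap that I can imagine is to validate once, eagerly, so the object never has a "not yet validated" state. Sketched in C# since the issue isn't PHP-specific (untested; the regex is a placeholder, as in the question):

        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        public class SomeObject
        {
            private readonly List<string> errors = new List<string>();

            // Validation happens exactly once, at construction, so IsValid
            // and Errors agree no matter which one the caller reads first.
            public SomeObject(string input)
            {
                if (!Regex.IsMatch(input, "some regular expression here"))
                    errors.Add("Error was found in the input");
            }

            public bool IsValid => errors.Count == 0;
            public IReadOnlyList<string> Errors => errors;
        }

    The other common shape would be to move validation out entirely: a separate validator whose single Validate(input) call returns a result object carrying both the flag and the errors, so the two can never be read out of sync.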

  • Override methods should call base method?

    - by Trevor Pilley
    I'm just running NDepend against some code that I have written, and one of the warnings is "Overrides of Method() should call base.Method()". The places where this occurs are where I have a base class with virtual properties and methods with default behaviour, which can be overridden by a class that inherits from the base class and doesn't call the overridden method. For example, in the base class I have a property defined like this:

        protected virtual char CloseQuote
        {
            get { return '"'; }
        }

    And then in an inheriting class which uses a different close quote:

        protected override char CloseQuote
        {
            get { return ']'; }
        }

    Not all classes which inherit from the base class use different quote characters, hence my initial design. The alternatives I thought of were get/set properties in the base class with the defaults set in the constructor:

        protected BaseClass()
        {
            this.CloseQuote = '"';
        }

        protected char CloseQuote { get; set; }

        public InheritingClass()
        {
            this.CloseQuote = ']';
        }

    Or making the base class require the values as constructor args:

        protected BaseClass(char closeQuote, ...)
        {
            this.CloseQuote = closeQuote;
        }

        protected char CloseQuote { get; private set; }

        public InheritingClass() : base(closeQuote: ']', ...)
        {
        }

    Should I use virtual in a scenario where the base implementation may be replaced instead of extended, or should I opt for one of the alternatives I thought of? If so, which would be preferable, and why?

  • Did I Inadvertently Create a Mediator in my MVC?

    - by SoulBeaver
    I'm currently working on my first biggish project. It's a frontend Facebook application that has, since last Tuesday, grown to some 6000-8000 LOC. I say this because I'm using MVC, an architecture I have never rigidly enforced in any of my hobby projects. I read part of the PureMVC book, but I didn't quite grasp the concept of the Mediator. Since I didn't understand it and didn't see the need for it, my project has yet to use a single mediator. Yesterday I went back to the drawing board because of some requirement changes and noticed that I could move all UI elements out of the View and into their own class. The View essentially only managed the lifetime of the UI and all events from the UI or Model. Technically, the View has now become a 'Mediator' between the Model and the UI. Therefore, I realized today, I could just move all my UI stuff back into the View and create a mediator class that handles all events from the View and Model. Is my understanding correct in thinking that I have devolved my View as it currently is (handling events from the Model and UI) into a Mediator, and that the UI class is what should be the View?

  • How to pass dynamic information between a form and a service? [closed]

    - by qminator
    I have a design problem, and hopefully the braintrust which is Stack Exchange can help. I have a generic form which loads a dataset and displays it. It never has direct knowledge of what the dataset contains, but it can pass the data to a service for manipulation (via an OnClick event, for example). However, the form might need to alter its behaviour based on the manipulation by the service. Example: the service realises this dataset requires the sending of an email by the user, and needs to send an instruction to the form to open up a mail form. My idea is this: I'm thinking about passing back some type of key/name dictionary, filled with the commands which the service requires. They could then be interpreted by the form without it needing to reference anything specific. Example: if the service decides that the dataset needs to refresh, it would send back a key/name pair, and I might even be able to chain commands - refreshing the dataset and sending a mail:

        Refresh / "Foo"
        Mail / "[email protected]"

    The form would reference an action explicitly (Refresh or Mail) but not the instructions themselves. Is this a valid idea, or am I wasting time?
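
    A rough, untested C# sketch of the contract I have in mind (the verbs, the record shape, and the handler names are all invented for illustration); using an enum instead of raw strings turns a mistyped command into a compile error:

        using System.Collections.Generic;

        public enum FormCommand { Refresh, Mail }

        public record Instruction(FormCommand Command, string Argument);

        public class GenericForm
        {
            // The service hands back instructions; the form only knows the verbs.
            public void Apply(IEnumerable<Instruction> instructions)
            {
                foreach (var i in instructions)
                {
                    switch (i.Command)
                    {
                        case FormCommand.Refresh: ReloadDataset(i.Argument); break;
                        case FormCommand.Mail:    OpenMailForm(i.Argument);  break;
                    }
                }
            }

            private void ReloadDataset(string name) { /* re-query and rebind */ }
            private void OpenMailForm(string to)    { /* show the mail dialog */ }
        }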

  • How to avoid the GameManager god object?

    - by lorancou
    I just read an answer to a question about structuring game code. It made me wonder about the ubiquitous GameManager class, and how it often becomes an issue in a production environment. Let me describe this. First, there's prototyping. Nobody cares about writing great code; we just try to get something running to see if the gameplay adds up. Then there's a greenlight, and in an effort to clean things up, somebody writes a GameManager. Probably to hold a bunch of GameStates, maybe to store a few GameObjects - nothing big, really. A cute, little manager. In the peaceful realm of pre-production, the game is shaping up nicely. Coders have proper nights of sleep and plenty of ideas to architect the thing with Great Design Patterns. Then production starts and soon, of course, there is crunch time. The balanced diet is long gone, the bug tracker is cracking with issues, people are stressed, and the game has to be released yesterday. At that point, usually, the GameManager is a real big mess (to stay polite). The reason for that is simple: after all, when writing a game, well... all the source code is actually there to manage the game. It's easy to just add this little extra feature or bugfix to the GameManager, where everything else is already stored anyway. When time becomes an issue, there's no way to write a separate class or to split this giant manager into sub-managers. Of course this is a classical anti-pattern: the god object. It's a bad thing, a pain to merge, a pain to maintain, a pain to understand, a pain to transform. What would you suggest to prevent this from happening?
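
    One structural countermeasure I can think of (a sketch of one option, not a cure): give features a way to react to the game that isn't "add another field to the GameManager" - for example, a minimal event bus, so a crunch-time fix attaches a subscriber instead of growing the god object. A rough, untested C# sketch with invented event types:

        using System;
        using System.Collections.Generic;

        // Features publish and subscribe by event type; nobody needs a
        // reference to a central manager to react to something happening.
        public static class EventBus
        {
            private static readonly Dictionary<Type, List<Delegate>> handlers =
                new Dictionary<Type, List<Delegate>>();

            public static void Subscribe<T>(Action<T> handler)
            {
                if (!handlers.TryGetValue(typeof(T), out var list))
                    handlers[typeof(T)] = list = new List<Delegate>();
                list.Add(handler);
            }

            public static void Publish<T>(T evt)
            {
                if (handlers.TryGetValue(typeof(T), out var list))
                    foreach (Action<T> h in list) h(evt);
            }
        }

        public record PlayerDied(int PlayerId);

        // Crunch-time feature: attach a subscriber, don't touch a god object.
        // EventBus.Subscribe<PlayerDied>(e => ShowGameOverScreen(e.PlayerId));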

  • Which are the best ways to organize view hierarchies in GUI interfaces?

    - by none
    I'm currently trying to figure out the best techniques for organizing GUI view hierarchies, that is, dividing a window into several panels which are in turn divided into other components. I've taken a look at the Composite design pattern, but I don't know if I can find better alternatives, so I'd appreciate knowing whether using the Composite is a good idea, or whether it would be better to look at some other techniques. I'm currently developing in Java Swing, but I don't think the framework or the language has a great impact on this. Any help will be appreciated.

    EDIT: I was developing a frame containing three labels, one button and a text field. When the button is pressed, the content of the text field is searched, and the results are written into the three labels. One of my typical structures would be the following:

        MainWindow
            Main panel
                Panel with text field and labels
                Panel with search button

    Now, as the title explains, I was looking for a suitable way of organizing both the MainPanel and the other two panels. But here came the problems, since I'm not sure whether to organize them as attributes or store them inside some data structure (e.g., a LinkedList or something like that). Anyway, I don't really think either of my solutions is really good, so I'm wondering if there are better approaches for facing this kind of problem. Hope this clarifies things.

  • Should sanity be a property of a programmer or a program?

    - by toplel32
    I design and implement languages that can range from object notations to markup languages. In many cases I have considered restrictions in favor of sanity (common knowledge), as in the case of control characters in identifiers. There are two consequences to consider before doing this:

      - It takes extra computation.
      - It narrows liberty.

    I'm interested to learn how developers think about decisions like this. As you may know, Microsoft C# is, on the contrary, very open. If you really want to suffix your integer literal with 'l' instead of 'L' to mark it as Long, and so risk other developers confusing '1' and 'l': no problem. If you want to name your variables in a non-Latin script so they will contrast with C#'s Latin keywords: no problem. Or if you want to distribute a string over multiple lines and so break a series of indentation: no problem. It is cheap to ensure consistency with restrictions, and this makes it tempting to implement them. But in the case of disallowing non-Latin characters (concerning the second example), it would mean discrediting Unicode, because one would not take full advantage of its capacity.
