Search Results

Search found 37841 results on 1514 pages for 'object state'.


  • Why is an anemic domain model considered bad in C#/OOP, but very important in F#/FP?

    - by Danny Tuppeny
    In a blog post on F# for fun and profit, it says: "In a functional design, it is very important to separate behavior from data. The data types are simple and 'dumb'. And then separately, you have a number of functions that act on those data types. This is the exact opposite of an object-oriented design, where behavior and data are meant to be combined. After all, that's exactly what a class is. In a truly object-oriented design in fact, you should have nothing but behavior -- the data is private and can only be accessed via methods. In fact, in OOD, not having enough behavior around a data type is considered a Bad Thing, and even has a name: the 'anemic domain model'." Given that in C# we seem to keep borrowing from F# and trying to write more functional-style code, how come we're not borrowing the idea of separating data and behavior, and instead even consider it bad? Is it simply that the definition doesn't fit with OOP, or is there a concrete reason that it's bad in C# which for some reason doesn't apply in F# (and in fact, is reversed)? (Note: I'm specifically interested in the differences between C# and F# that could change the opinion of what is good/bad, rather than in individuals who may disagree with either opinion in the blog post.)
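
    For readers who want the contrast in code: a minimal Java sketch of the two styles the post describes -- a "rich" object that combines data and behavior versus a dumb data type plus separate functions. The Account example and its names are mine, not the blog post's.

        // Object-oriented style: data and behavior combined.
        final class Account {
            private long balanceCents;                 // data is private
            Account(long balanceCents) { this.balanceCents = balanceCents; }
            void deposit(long cents) {                 // behavior lives on the type
                if (cents <= 0) throw new IllegalArgumentException("amount must be positive");
                balanceCents += cents;
            }
            long balanceCents() { return balanceCents; }
        }

        // FP-leaning style: a dumb data type plus separate functions acting on it.
        record AccountData(long balanceCents) {}
        final class AccountFunctions {
            static AccountData deposit(AccountData a, long cents) {
                if (cents <= 0) throw new IllegalArgumentException("amount must be positive");
                return new AccountData(a.balanceCents() + cents);   // returns a new value
            }
        }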

    Read the article

  • Should I use an interface when methods are only similar?

    - by Joshua Harris
    I was posed with the idea of creating an object that checks if a point will collide with a line:

        public class PointAndLineSegmentCollisionDetector {
            public void Collides(Point p, LineSegment s) {
                // ...
            }
        }

    This made me think that if I decided to create a Box object, then I would need a PointAndBoxCollisionDetector and a LineSegmentAndBoxCollisionDetector. I might even realize that I should have a BoxAndBoxCollisionDetector and a LineSegmentAndLineSegmentCollisionDetector. And when I add new objects that can collide, I would need to add even more of these. But they all have a Collides method, so everything I learned about abstraction is telling me, "Make an interface."

        public interface CollisionDetector {
            public void Collides(Spatial s1, Spatial s2);
        }

    But now I have a method that only accepts some abstract class or interface used by Point, LineSegment, Box, etc. So if I did this, each implementation would have to do a type check to make sure that the arguments are the appropriate types, because the collision algorithm is different for each type match-up. Another solution could be this:

        public class CollisionDetector {
            public void Collides(Point p, LineSegment s) { ... }
            public void Collides(LineSegment s, Box b) { ... }
            public void Collides(Point p, Box b) { ... }
            // ...
        }

    But this could end up being a huge class that seems unwieldy, although it has the virtue of simplicity: it is only a bunch of Collides methods. This is similar to C#'s Convert class, which is nice because although it is large, it is simple to understand how it works. This seems to be the better solution, but I thought I should open it for discussion as a wiki to get other opinions.
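
    One technique the question doesn't mention, but which is commonly used for exactly this pair-of-types problem, is double dispatch: each shape forwards to a method that knows both concrete types, so no type checks are needed. A minimal Java sketch with hypothetical types; the trade-off is that adding a new shape means touching every existing one.

        interface Shape {
            boolean collidesWith(Shape other);    // first dispatch on the receiver
            boolean collidesWithPoint(Point p);   // second dispatch on the argument
            boolean collidesWithBox(Box b);
        }

        final class Point implements Shape {
            public boolean collidesWith(Shape other)  { return other.collidesWithPoint(this); }
            public boolean collidesWithPoint(Point p) { return false; /* point-vs-point test */ }
            public boolean collidesWithBox(Box b)     { return b.collidesWithPoint(this); }
        }

        final class Box implements Shape {
            public boolean collidesWith(Shape other)  { return other.collidesWithBox(this); }
            public boolean collidesWithPoint(Point p) { return false; /* box-vs-point test */ }
            public boolean collidesWithBox(Box b)     { return false; /* box-vs-box test */ }
        }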

    Read the article

  • From a DDD perspective is a report generating service a domain service or an infrastructure service?

    - by Songo
    Let's assume we have the following service whose responsibility is to generate Excel reports:

        class ExcelReportService {
            public String generateReport(String fileFormatFilePath, ResultSet data) {
                ReportFormat reportFormat = new ReportFormat(fileFormatFilePath);
                ExcelDataFormatterService excelDataFormatterService = new ExcelDataFormatterService();
                FormattedData formattedData = excelDataFormatterService.format(data);
                ExcelFileService excelFileService = new ExcelFileService();
                String reportPath = excelFileService.generateReport(reportFormat, formattedData);
                return reportPath;
            }
        }

    This is pseudo code for the service I want to design, where:
    - fileFormatFilePath: path to a configuration file where I'll keep the format of my Excel file (headers, column widths, number of columns, etc.)
    - data: the actual records returned from the database. This data can't be used directly because I might need to make further calculations on the data before inserting it into the Excel file.
    - ReportFormat: value object to hold the report format; has methods like getHeaders(), getColumnWidth(), etc.
    - ExcelDataFormatterService: a service to hold any logic that needs to be applied to the data returned from the database before inserting it into the file.
    - FormattedData: value object that represents the formatted data to be inserted.
    - ExcelFileService: a wrapper on top of the 3rd party library that generates the Excel file.

    Now how do you determine whether a service is an infrastructure service or a domain service? I have the following 3 services here: ExcelReportService, ExcelDataFormatterService and ExcelFileService.

    Read the article

  • Benefits of classic OOP over Go-like language

    - by tylerl
    I've been thinking a lot about language design and what elements would be necessary for an "ideal" programming language, and studying Google's Go has led me to question a lot of otherwise common knowledge. Specifically, Go seems to have all of the interesting benefits from object oriented programming without actually having any of the structure of an object oriented language. There are no classes, only structures; there is no class/structure inheritance -- only structure embedding. There aren't any hierarchies, no parent classes, no explicit interface implementations. Instead, type casting rules are based on a loose system similar to duck-typing, such that if a struct implements the necessary elements of a "Reader" or a "Request" or an "Encoding", then you can cast it and use it as one. Does such a system obsolete the concept of OOP? Or is there something about OOP as implemented in C++ and Java and C# that is inherently more capable, more maintainable, somehow more powerful that you have to give up when moving to a language like Go? What benefit do you have to give up to gain the simplicity that this new paradigm represents?

    Read the article

  • Composing programs from small simple pieces: OOP vs Functional Programming

    - by Jay Godse
    I started programming when imperative programming languages such as C were virtually the only game in town for paid gigs. I'm not a computer scientist by training, so I was only exposed to Assembler and Pascal in school, and not Lisp or Prolog. Over the 1990s, Object-Oriented Programming (OOP) became more popular, because one of the marketing memes for OOP was that complex programs could be composed of loosely coupled but well-defined, well-tested, cohesive, and reusable classes and objects. And in many cases that is quite true. Once I learned object-oriented programming, my C programs became better because I structured them more like classes and objects. In the last few years (2008-2014) I have programmed in Ruby, an OOP language. However, Ruby has many functional programming (FP) features such as lambdas and procs, which enable a different style of programming using recursion, currying, lazy evaluation and the like. (Through ignorance I am at a loss to explain why these techniques are so great.) Very recently, I have written code to use methods from the Ruby Enumerable library, such as map(), reduce(), and select(). Apparently this is a functional style of programming. I have found that using these methods significantly reduces code volume and makes my code easier to debug. Upon reading more about FP, one of the marketing claims made by advocates is that FP enables developers to compose programs out of small well-defined, well-tested, and reusable functions, which leads to less buggy code and low code volume. QUESTIONS:
    - Is the composition of a complex program using FP techniques contradictory to or complementary to composition of a complex program using OOP techniques?
    - In which situations is OOP more effective, and when is FP more effective?
    - Is it possible to use both techniques in the same complex program?
    - Do the techniques overlap or contradict each other?
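
    The map/select/reduce style described above translates almost directly to Java's Stream API; a small sketch with made-up data, showing the hand-rolled loop next to the functional pipeline:

        import java.util.List;

        public class TotalOfLongNames {
            public static void main(String[] args) {
                List<String> names = List.of("Ada", "Grace", "Alan", "Barbara");

                // Loop style: accumulate state by hand.
                int total = 0;
                for (String n : names) {
                    if (n.length() > 3) total += n.length();
                }

                // FP style: filter (Ruby's select), map, reduce -- no mutable accumulator.
                int totalFp = names.stream()
                                   .filter(n -> n.length() > 3)
                                   .mapToInt(String::length)
                                   .sum();

                System.out.println(total + " " + totalFp);   // both print 16
            }
        }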

    Read the article

  • An ideal way to decode JSON documents in C?

    - by AzizAG
    Assuming I have an API to consume that uses JSON as a data transmission method, what is an ideal way to decode the JSON returned by each API resource? For example, in Java I'd create a class for each API resource, then initiate an object of that class and consume data from it. For example:

        class UserJson extends JsonParser {
            public UserJson(String document) {
                /* Initial document parsing goes here... */
            }
            // A bunch of getter methods...
        }

    Then I'd probably do something like this:

        UserJson userJson = new UserJson(jsonString); // Initial parsing goes in the constructor
        String username = userJson.getName();         // Parse the JSON name property and return it as a String

    Or, when using a programming language with associative arrays (i.e., hash tables), the decoding process doesn't require creating a class (PHP):

        $userJson = json_decode($jsonString, true); // Decode JSON as key => value pairs
        $username = $userJson['name'];

    But when I'm programming in a procedural programming language (C), I can't go with either method, since C is neither OOP nor supports associative arrays (by default, at least). What is the "correct" method of parsing pre-defined JSON strings (i.e., JSON documents specified by the API provider via examples or documentation)? The method I'm currently using is creating a file for each API resource to parse. The problem with this method is that it's basically a lousy version of the OOP method: it looks exactly like the OOP method but doesn't provide any OOP benefits (e.g., I can't pass an object of the parser, etc.). I've been thinking about encapsulating each API resource parser file in a publicly accessible structure (pointing all functions/publicly usable variables to the structure), then accessing the parser file code from within the structure (parser.parse(), parser.getName(), etc.). As this way looks a bit better than my current method, it's still just a rip-off of the OOP way, isn't it? Any suggestions for methods to parse JSON documents in procedural programming languages? Any comments on the methods I'm currently using (all 3 of them)?

    Read the article

  • Mocking concrete class - Not recommended

    - by Mik378
    I've just read an excerpt of the "Growing Object-Oriented Software" book which explains some reasons why mocking a concrete class is not recommended. Here is some sample code of a unit test for the MusicCentre class:

        public class MusicCentreTest {
            @Test
            public void startsCdPlayerAtTimeRequested() {
                final MutableTime scheduledTime = new MutableTime();
                CdPlayer player = new CdPlayer() {
                    @Override
                    public void scheduleToStartAt(Time startTime) {
                        scheduledTime.set(startTime);
                    }
                };
                MusicCentre centre = new MusicCentre(player);
                centre.startMediaAt(LATER);
                assertEquals(LATER, scheduledTime.get());
            }
        }

    And his first explanation: "The problem with this approach is that it leaves the relationship between the objects implicit. I hope we've made clear by now that the intention of Test-Driven Development with Mock Objects is to discover relationships between objects. If I subclass, there's nothing in the domain code to make such a relationship visible, just methods on an object. This makes it harder to see if the service that supports this relationship might be relevant elsewhere and I'll have to do the analysis again next time I work with the class." I can't figure out exactly what he means when he says: "This makes it harder to see if the service that supports this relationship might be relevant elsewhere and I'll have to do the analysis again next time I work with the class." I understand that the service corresponds to MusicCentre's method called startMediaAt. What does he mean by "elsewhere"? The complete excerpt is here: http://www.mockobjects.com/2007/04/test-smell-mocking-concrete-classes.html
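
    For context, the fix the book argues for is to make the relationship explicit by introducing an interface that names the role MusicCentre depends on, and to mock that instead of the concrete CdPlayer. A rough sketch of that shape, with illustrative names rather than the book's exact code:

        record Time(long millis) {}   // stand-in for the book's Time type

        // The relationship MusicCentre needs is now a named concept in the domain...
        interface ScheduledDevice {
            void scheduleToStartAt(Time startTime);
        }

        // ...and CdPlayer is just one implementation of that role.
        class CdPlayer implements ScheduledDevice {
            @Override
            public void scheduleToStartAt(Time startTime) { /* talk to the hardware */ }
        }

        // MusicCentre depends on the role, not the concrete class, so a test can
        // mock ScheduledDevice directly, and the role can be reused "elsewhere".
        class MusicCentre {
            private final ScheduledDevice device;
            MusicCentre(ScheduledDevice device) { this.device = device; }
            void startMediaAt(Time time) { device.scheduleToStartAt(time); }
        }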

    Read the article

  • Data classes: getters and setters or different method design

    - by Frog
    I've been trying to design an interface for a data class I'm writing. This class stores styles for characters, for example whether the character is bold, italic or underlined, but also the font size and the font family. So it has member variables of different types. The easiest way to implement this would be to add getters and setters for every member variable, but this just feels wrong to me. It feels way more logical (and more OOP) to call style.format(BOLD, true) instead of style.setBold(true) -- so to use logical methods instead of getters/setters. But I am facing two problems while implementing these methods: I would need a big switch statement over all member variables, since you can't access a variable by the contents of a string in C++. Moreover, you can't overload by return type, which means you can't write one getter like style.getFormatting(BOLD) (I know there are some tricks to do this, but they don't allow for parameters, which I would obviously need). However, if I implemented getters and setters, there are also issues. I would have to duplicate quite some code, because styles can also have parent styles, which means the getters have to look not only at the member variables of this style, but also at the variables of the parent styles. Because I wasn't able to figure out how to do this, I asked a question a couple of weeks ago: see Object Oriented Programming: getters/setters or logical names. But in that question I didn't stress that it would be just a data object and that I'm not making a text rendering engine, which is why one of the people who answered suggested I ask another question while making that clear (because his solution, the decorator pattern, isn't suitable for my problem). So please note that I'm not creating my own text rendering engine; I just use these classes to store data. Because I still haven't been able to find a solution to this problem, I'd like to ask this question again: how would you design a styles class like this? And why would you do that? Thanks in advance!
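
    One shape that avoids both the big switch and the duplicated parent-lookup code is to key the attributes with an enum and let lookups fall back to the parent style. The question is about C++, but the idea is language-neutral; a minimal sketch in Java with hypothetical names:

        import java.util.EnumMap;
        import java.util.Map;

        enum StyleKey { BOLD, ITALIC, UNDERLINE, FONT_SIZE, FONT_FAMILY }

        class CharacterStyle {
            private final CharacterStyle parent;                        // may be null
            private final Map<StyleKey, Object> values = new EnumMap<>(StyleKey.class);

            CharacterStyle(CharacterStyle parent) { this.parent = parent; }

            void format(StyleKey key, Object value) { values.put(key, value); }

            // One lookup implements the parent-fallback rule for every attribute.
            Object get(StyleKey key) {
                Object v = values.get(key);
                if (v != null) return v;
                return parent != null ? parent.get(key) : null;
            }
        }

    The obvious cost is that the stored values are untyped (Object here), which is roughly the same compromise a single getFormatting(BOLD) getter would force.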

    Read the article

  • How to REALLY start thinking in terms of objects?

    - by Mr Grieves
    I work with a team of developers who all have several years of experience with languages such as C# and Java. Most of them are young enough to have been shown OOP as a standard way to develop software in university and are very comfortable with concepts such as inheritance, abstraction, encapsulation and polymorphism. Yet many of them, and I have to include myself, still tend to create classes which are meant to be used in a very functional fashion. The resulting software is often several smaller classes which correctly represent business objects, which get passed through larger classes which only supply ways to modify and use those objects (functions). Large, complex, difficult-to-maintain classes named Manager are usually the result of such behaviour. I can see two theoretical reasons why people might write this type of code:
    - It's easy to start thinking of everything in terms of the database.
    - Deep down, for me, a computer handling a web request feels more like a functional operation than an object-oriented operation when you think about request handlers, threads, processes, CPU cores and CPU operations...

    I want source code which is easy to read and easy to modify. I have seen excellent examples of OO code which meet these objectives. How can I start writing code like this? How can I really start thinking in an object-oriented fashion? How can I share such a mentality with my colleagues?

    Read the article

  • How to implement isValid correctly?

    - by Songo
    I'm trying to provide a mechanism for validating my object like this:

        class SomeObject {
            private $_inputString;
            private $_errors = array();

            public function __construct($inputString) {
                $this->_inputString = $inputString;
            }

            public function getErrors() {
                return $this->_errors;
            }

            public function isValid() {
                $isValid = preg_match("/Some regular expression here/", $this->_inputString);
                if ($isValid == 0) {
                    $this->_errors[] = 'Error was found in the input';
                }
                return $isValid == 1;
            }
        }

    Then when I'm testing my code I'm doing it like this:

        $obj = new SomeObject('an INVALID input string');
        $isValid = $obj->isValid();
        $errors = $obj->getErrors();
        $this->assertFalse($isValid);
        $this->assertNotEmpty($errors);

    Now the test passes correctly, but I noticed a design problem here. What if the user called $obj->getErrors() before calling $obj->isValid()? The test will fail because the user has to validate the object first before checking the errors resulting from validation. I think this way the user depends on a sequence of actions to work properly, which I think is a bad thing because it exposes the internal behaviour of the class. How do I solve this problem? Should I tell the user explicitly to validate first? Where do I mention that? Should I change the way I validate? Is there a better solution for this? UPDATE: I'm still developing the class, so changes are easy and renaming functions and refactoring them is possible.
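
    One common way to remove the ordering dependency is to make validation return a result object, so the validity flag and the errors always travel together and there is no stored error state to read too early. A sketch of that idea in Java (the original is PHP; names are hypothetical):

        import java.util.List;
        import java.util.regex.Pattern;

        record ValidationResult(boolean valid, List<String> errors) {}

        class SomeObject {
            private static final Pattern RULE = Pattern.compile("some regular expression here");
            private final String inputString;

            SomeObject(String inputString) { this.inputString = inputString; }

            // No stored error state: callers cannot observe errors before validating.
            ValidationResult validate() {
                boolean ok = RULE.matcher(inputString).matches();
                return new ValidationResult(ok, ok ? List.of() : List.of("Error was found in the input"));
            }
        }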

    Read the article

  • How to model an address type in DDD?

    - by Songo
    I have a User entity that has a Set of Address, where Address is a value object:

        class User {
            ...
            private Set<Address> addresses;
            ...
            public void setAddresses(Set<Address> addresses) {
                // set all addresses as a batch
            }
            ...
        }

    A User can have a home address and a work address, so I should have something that acts as a lookup in the database:

        tbl_address_type
        -------------------------------------
        | address_type_id | address_type    |
        -------------------------------------
        | 1               | work            |
        | 2               | home            |
        -------------------------------------

    and correspondingly:

        tbl_address
        ------------------------------------------------------------------
        | address_id | address_description | address_type_id | user_id    |
        ------------------------------------------------------------------
        | 1          | 123 main street     | 1               | 100        |
        | 2          | 456 another street  | 1               | 100        |
        | 3          | 789 long street     | 2               | 200        |
        | 4          | 023 short street    | 2               | 200        |
        ------------------------------------------------------------------

    - Should the address type be modeled as an Entity or a Value type? And why?
    - Is it OK for the Address value object to hold a reference to the entity AddressType (in case it was modeled as an entity)? Is this something feasible using Hibernate/NHibernate?
    - If a user can change his home address, should I expose a User.updateHomeAddress(Address homeAddress) function on the User entity itself? How can I enforce that the client passes a home address and not a work address in this case? (A sample implementation is most welcome.)
    - If I want to get the User's home address via a User.getHomeAddress() function, must I load the whole addresses collection, then loop over it and check each address for its type until I find the correct one and return it? Is there a more efficient way than this?
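
    For the third question, a sketch of how the entity itself could enforce the address type, assuming AddressType is modeled as a simple enum carried by the Address value object (hypothetical names, and deliberately sidestepping the entity-vs-value question the first bullet asks):

        import java.util.HashSet;
        import java.util.Optional;
        import java.util.Set;

        enum AddressType { WORK, HOME }

        record Address(String description, AddressType type) {}

        class User {
            private final Set<Address> addresses = new HashSet<>();

            // The entity guards its own invariant: only a HOME address is accepted here.
            public void updateHomeAddress(Address homeAddress) {
                if (homeAddress.type() != AddressType.HOME) {
                    throw new IllegalArgumentException("expected a HOME address");
                }
                addresses.removeIf(a -> a.type() == AddressType.HOME);
                addresses.add(homeAddress);
            }

            public Optional<Address> getHomeAddress() {
                return addresses.stream().filter(a -> a.type() == AddressType.HOME).findFirst();
            }
        }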

    Read the article

  • SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

    - by Pinal Dave
    The month of June till the middle of July has been the fever of sports. First it was Wimbledon tennis and then the soccer fever was all over. There is a huge number of fan followers and it is great to see the level at which people sometimes worship these sports. Being an Indian, I cannot forget to mention the India tour of England in the later part of July. Following these sports, as the events unfold towards the finals there are a number of ways the statisticians can slice and dice the numbers. Taking a cue from soccer, there is team performance against another team, and then there is how an individual player fares against a particular opponent. Such statistics give us a fair idea of how teams have fared against each other in the past or the recent past: head-to-head stats during a World Cup and during other neutral-venue games. All these statistics are just pointers. In reality, they don't reflect the calibre of the current team, because the individuals who performed in each of those games are totally different (a typical example being the Brazil vs Germany semi-final in FIFA 2014). So at times these numbers are misleading. It is worth investigating and getting the next level of information.

    Similar to these statistics, SQL Server Management Studio is also equipped with a number of reports, such as a) the Object Execution Statistics report and b) the Batch Execution Statistics report. As in the example above, the team scorecard is like the Batch Execution Statistics and the individual stats are like the Object Execution Statistics. The analogy can be taken only this far; trust me, there is no correlation between SQL Server functioning and playing sports -- it is like I think about diet all the time except while I am eating.

    Performance – Batch Execution Statistics

    Let us view the first report, which can be invoked from Server Node -> Reports -> Standard Reports -> Performance – Batch Execution Statistics. Most of the values that are displayed in this report come from the DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text(sql_handle). This report contains 3 distinct sections, outlined below.
    - Section 1: a bar-graph representation of Average CPU Time, Average Logical Reads and Average Logical Writes for individual batches. The batch numbers are indicative, and the details of each batch are available in Section 3 (detailed below).
    - Section 2: a pie chart of all the batches by Total CPU Time (%) and Total Logical IO (%). This graphical representation tells us which batch consumed the highest CPU and IO since the server started, provided the plan is available in the cache.
    - Section 3: the section where we can find the SQL statements associated with each of the batch numbers. It also gives us the Average CPU, Average Logical Reads and Average Logical Writes in the system for the given batch, with object details. Expanding the rows, I will also get the # Executions and # Plans Generated for each of the queries.

    Performance – Object Execution Statistics

    The second report worth a look is Object Execution Statistics. This is a similar report to the previous one, but turned on its head by SQL Server objects. The report has 3 areas to look at, as above. Section 1 gives the Average CPU and Average IO bar charts for specific objects. Section 2 is a graphical representation of Total CPU by objects and Total Logical IO by objects. The final section details the various objects with the Avg. CPU, IO and other details, which are self-explanatory.

    At a high level both reports are based on queries over two DMVs (sys.dm_exec_query_stats and sys.dm_exec_sql_text) and build their values with calculations over the columns in them:

        SELECT *
        FROM sys.dm_exec_query_stats s1
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
        WHERE s2.objectid IS NOT NULL
          AND DB_NAME(s2.dbid) IS NOT NULL
        ORDER BY s1.sql_handle;

    This is one of the simplest forms of report, and in future blogs we will look at more complex reports. I truly hope that these reports can give DBAs and developers a hint about possible performance tuning areas. As a closing point I must emphasize that all the above reports pick up data from the plan cache. If a particular query consumed a lot of resources earlier, but its plan is no longer in the cache, none of the above reports will show that bad query. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • design of 'game engine' for small javascript games?

    - by Matt Ball
    I'm making a group of two or three simple javascript games for fun. After someone finishes one game, they'll be presented with a harder or easier version of another game depending on whether the original game was won or lost. I have a high-level question about the design of things: So far I've created a class for one game type that manages the interaction with the UI and the state of the game itself. But for tracking how many of the subgames have been won, or for understanding whether the next game presented should be more or less difficult, are there arguments to be made for making a 'game engine' class? How does the engine communicate to the games? For instance, when a game is won, how is that information relayed to the engine? Is there a better or more common design? (If you want to see what I have so far, the games are slowly taking shape here: https://github.com/yosemitebandit/candela and can be viewed at http://yosemitebandit.com/candela)

    Read the article

  • Sprite sheets, Clamp or Wrap?

    - by David
    I'm using a combination of sprite sheets for, well, sprites, and individual textures for infinite tiling. For the tiling textures I'm obviously using Wrap to draw the entire surface in one call, but up until now I've been making a separate batch using Clamp for drawing sprites from the sprite sheets. The sprite sheets include a border (repeating the edge pixels of each sprite) and my code uses the correct source coordinates for sprites. But since I'm never giving coordinates outside of the texture when drawing sprites (and indeed the border exists to prevent bleed-over when filtering), it's struck me that I'd be better off just using Wrap so that I can combine everything into one batch. I just want to be sure that I haven't overlooked something obvious. Is there any reason that Wrap would be harmful when used with a sprite sheet?

    Read the article

  • How to decide whether to implement an operation as Entity operation vs Service operation in Domain Driven Design?

    - by Louis Rhys
    I am reading Evans's Domain Driven Design. The book says that there are entities and there are services. If I were to implement an operation, how do I decide whether I should add it as a method on an entity or do it in a service class? E.g. myEntity.DoStuff() or myService.DoStuffOn(myEntity)? Does it depend on whether other entities are involved -- if it involves other entities, implement it as a service operation? But entities can have associations and can traverse them from there too, right? Does it depend on whether it is stateless or not? But a service can also access entities' variables, right? For instance, myService.DoStuffOn can have code like if (myEntity.IsX) doSomething(); which means that it will depend on the state. Or does it depend on complexity? How do you define complex operations?
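
    A common rule of thumb (my paraphrase, not Evans's wording) is: an operation that only reads or changes one entity's own state belongs on that entity, while a stateless operation that coordinates several entities belongs in a domain service. A small illustrative Java sketch with made-up names:

        class Account {
            private long balance;

            // Entity operation: touches only this entity's own state and invariants.
            void withdraw(long amount) {
                if (amount > balance) throw new IllegalStateException("insufficient funds");
                balance -= amount;
            }

            void deposit(long amount) { balance += amount; }
        }

        // Domain service: stateless, coordinates more than one entity.
        class TransferService {
            void transfer(Account from, Account to, long amount) {
                from.withdraw(amount);
                to.deposit(amount);
            }
        }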

    Read the article

  • Side effect-free interface on top of a stateful library

    - by beta
    In an interview with John Hughes where he talks about Erlang and Haskell, he has the following to say about using stateful libraries in Erlang: "If I want to use a stateful library, I usually build a side effect-free interface on top of it so that I can then use it safely in the rest of my code." What does he mean by this? I am trying to think of an example of how this would look, but my imagination and/or knowledge is failing me.
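
    One way to picture what Hughes describes: keep the mutation inside the wrapper and expose a function that takes the current state in and hands back the result together with the new state, so callers never share mutable state. A tiny Java sketch of that idea (my own example, not from the interview):

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.List;

        // The result and the new state travel back together as plain values.
        record PopResult(int value, List<Integer> remaining) {}

        final class PureStack {
            // Side effect-free facade over a mutable stack (ArrayDeque): the effectful
            // calls happen on a private copy, so the caller's input is never changed.
            static PopResult pop(List<Integer> stack) {
                Deque<Integer> working = new ArrayDeque<>(stack);   // private mutable copy
                int value = working.pop();                          // effect stays inside
                return new PopResult(value, List.copyOf(working));  // immutable outputs
            }
        }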

    Read the article

  • Why does F. Wagner consider "NOT (AI_LARGER_THAN_8.1)" to be ambiguous?

    - by oosterwal
    In his article on Virtual Environments (a part of his VFSM specification method) Ferdinand Wagner describes some new ways of thinking about Boolean Algebra as a software design tool. On page 4 of this PDF article, when describing operators in his system he says this: Control statements need Boolean values. Hence, the names must be used to produce Boolean results. To achieve this we want to combine them together using Boolean operators. There is nothing wrong with usage of AND and OR operators with their Boolean meaning. For instance, we may write: DI_ON OR AI_LARGER_THAN_8.1 AND TIMER_OVER to express the control situation: digital input is on or analog input is larger than 8.1 and timer is over. We cannot use the NOT operator, because the result of the Boolean negation makes sense only for true Boolean values. The result of, for instance, NOT (AI_LARGER_THAN_8.1) would be ambiguous. If "AI_LARGER_THAN_8.1" is acceptable, why would he consider "NOT (AI_LARGER_THAN_8.1)" to be ambiguous?

    Read the article

  • How to test issues in a local development environment that can only be introduced by clustering in production?

    - by Brian Reindel
    We recently clustered an application, and it came to light that because of how we're doing SSL offloading via the load balancer in production it didn't work right. I had to mimic this functionality on my local machine by SSL offloading Apache with a proxy, but it still isn't a 1-to-1 comparison. Similar issues can arise when dealing with stateful applications and sticky sessions. What would be the industry standard for testing this kind of production "black box" scenario in a local environment, especially as it relates to clustering?

    Read the article

  • Why does a proportional controller have a steady state error?

    - by Qantas 94 Heavy
    I've read about feedback loops, how large this steady state error is for a given gain, and what to do to remove the steady state error (add integral and/or derivative gains to the controller), but I don't understand at all why this steady state error occurs in the first place. If I understand how a proportional controller works correctly, the output is equal to the current output plus the error, multiplied by the proportional gain (Kp). However, wouldn't the error then slowly diminish over time as it is added (reaching 0 at infinite time), rather than leaving a steady state error? From my confusion, it seems I'm completely misunderstanding how it works -- a proper explanation of how this steady state error arises would be fantastic.
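
    A worked steady-state calculation for the standard unity-feedback loop -- assuming a pure proportional controller u = Kp*e and a type-0 plant with DC gain G, which the question doesn't spell out -- shows where the residual error comes from:

        % Proportional control: the output is proportional to the *current* error,
        % it is not accumulated over time. At steady state, with reference r:
        %   u_ss = K_p e_ss,   y_ss = G u_ss = G K_p e_ss,   e_ss = r - y_ss.
        % Substituting:
        \begin{align*}
          e_{ss} &= r - G K_p e_{ss} \\
          e_{ss}\,(1 + G K_p) &= r \\
          e_{ss} &= \frac{r}{1 + G K_p} \neq 0 \quad \text{for finite } K_p .
        \end{align*}
        % A nonzero output needs a nonzero error to sustain it, so the error only
        % shrinks as K_p grows; an integral term removes it by accumulating the error.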

    Read the article

  • How to verify the Liskov substitution principle in an inheritance hierarchy?

    - by Songo
    Inspired by this answer: the Liskov Substitution Principle requires that
    - Preconditions cannot be strengthened in a subtype.
    - Postconditions cannot be weakened in a subtype.
    - Invariants of the supertype must be preserved in a subtype.
    - History constraint (the "history rule"): objects are regarded as being modifiable only through their methods (encapsulation). Since subtypes may introduce methods that are not present in the supertype, the introduction of these methods may allow state changes in the subtype that are not permissible in the supertype. The history constraint prohibits this.

    I was hoping someone would post a class hierarchy that violates these 4 points and show how to fix the violations accordingly. I'm looking for an elaborate explanation, for educational purposes, of how to identify each of the 4 points in the hierarchy and the best way to fix it. Note: I was hoping to post a code sample for people to work on, but the question itself is about how to identify the faulty hierarchies :)
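
    Purely as an illustration of what the first two bullets look like in code (my own example, not a hierarchy from the question), here is a Java subtype that strengthens a precondition and weakens a postcondition:

        class FileStore {
            /** Precondition: name is non-null. Postcondition: returns a non-null path. */
            String save(String name, byte[] data) {
                return "/data/" + name;
            }
        }

        class PickyFileStore extends FileStore {
            @Override
            String save(String name, byte[] data) {
                // Strengthened precondition: FileStore callers never had to keep names this short.
                if (name.length() > 8) throw new IllegalArgumentException("name too long");
                // Weakened postcondition: may now return null, which substituted callers don't expect.
                return data.length == 0 ? null : "/data/" + name;
            }
        }

    Code written against FileStore can break when handed a PickyFileStore, which is exactly what those two rules forbid.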

    Read the article

  • HLSL: An array of textures and sampler states

    - by nate142
    The shader must switch between multiple textures depending on the alpha value of the original texture for each pixel. Now this would work fine if I didn't have to worry about sampler states. I have created my array of textures and can select a texture based on the alpha value of the pixel. But how do I create an array of SamplerStates and link it to my array of textures? I attempted to treat the SamplerState as a function by adding the (int i), but that didn't work. Also, I can't use Texture.Sample since this is shader model 2.0.

        // shader model 2.0 (DX9)
        texture subTextures[255];

        SamplerState MeshTextureSampler(int i)
        {
            Texture = (subTextures[i]);
        };

        float4 SampleCompoundTexture(float2 texCoord, float4 diffuse)
        {
            float4 SelectedColor = SAMPLE_TEXTURE(Texture, texCoord);
            int i = SelectedColor.a;
            texture SelectedTx = subTextures[i];
            return tex2D(MeshTextureSampler(i), texCoord) * diffuse;
        }

    Read the article

  • Is it wrong to use a boolean parameter to determine behavior?

    - by Ray
    I have seen a practice from time to time that "feels" wrong, but I can't quite articulate what is wrong about it. Or maybe it's just my prejudice. Here goes: A developer defines a method with a boolean as one of its parameters, and that method calls another, and so on, and eventually that boolean is used, solely to determine whether or not to take a certain action. This might be used, for example, to allow the action only if the user has certain rights, or perhaps if we are (or aren't) in test mode or batch mode or live mode, or perhaps only when the system is in a certain state. Well there is always another way to do it, whether by querying when it is time to take the action (rather than passing the parameter), or by having multiple versions of the method, or multiple implementations of the class, etc. My question isn't so much how to improve this, but rather whether or not it really is wrong (as I suspect), and if it is, what is wrong about it.
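
    A tiny Java illustration of the pattern being questioned, followed by the most common refactoring (splitting the flag into two intention-revealing methods); the names are made up:

        class ReportService {
            // The flag travels down the call chain and only matters at the bottom.
            void generate(boolean isDryRun) {
                render(isDryRun);
            }
            private void render(boolean isDryRun) {
                if (!isDryRun) {
                    publish();
                }
            }

            // One common alternative: two methods that say what they do, no flag to trace.
            void generateAndPublish() { render(false); }
            void generateDryRun()     { render(true); }

            private void publish() { /* write the report somewhere */ }
        }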

    Read the article

  • processing gamestate with a window of commands across time?

    - by rook2pawn
    I have clients sending client updates at 100ms intervals. I pool the command inputs and create a client command frame. The commands come into the server in these windows and I tag them across time as they come in. When I do a server tick I intend to process this list of commands, i.e.:

        [ {command:'duck',  timestamp:350, player:'a'},
          {command:'shoot', timestamp:395, player:'b'},
          {command:'move',  timestamp:410, player:'c'},
          {command:'cover', timestamp:420, player:'a'} ]

    How would I efficiently update the game state based on this list? The two solutions I see are: 1) simulate time via a direct equation to figure out how far everyone would move or change, as if the real game update were ticking on the world tick -- but then unforeseen events that would normally trigger during a real update, such as powerups or collisions, would not get triggered; or 2) prepare to run the world update multiple times and figure out which commands get sent to which world update. This seems better but a little more costly. Is there a canonical way to do this?

    Read the article

  • Advice on approaching a significant rearrangement/refactoring?

    - by Prog
    I'm working on an application (hobby project, solo programmer, small-medium size), and I have recently redesigned a significant part of it. The program already works in it's current state, but I decided to reimplement things to improve the OO design. I'm about to implement this new design by refactoring a big part of the application. Thing is I'm not sure where to start. Obviously, by the nature of a rearrangement, the moment you change one part of the program several other parts (at least temporarily) break. So it's a little 'scary' to rearrange something in a piece of software that already works. I'm asking for advice or some general guidelines: how should I approach a significant refactoring? When you approach rearranging large parts of your application, where do you start? Note that I'm interested only in re-arranging the high-level structure of the app. I have no intention of rewriting local algorithms.

    Read the article
