Search Results

Search found 8692 results on 348 pages for 'patterns and practices'.

Page 157/348 | < Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >

  • Does it ever make sense to make a fundamental (non-pointer) parameter const?

    - by Scott Smith
    I recently had an exchange with another C++ developer about the following use of const: void Foo(const int bar); He felt that using const in this way was good practice. I argued that it does nothing for the caller of the function (since a copy of the argument is passed, there is no additional guarantee of safety against overwriting). In addition, doing this prevents the implementer of Foo from modifying their private copy of the argument, so it both mandates and advertises an implementation detail. Not the end of the world, but certainly not something to be recommended as good practice. I'm curious what others think on this issue. Edit: OK, I didn't realize that the const-ness of the arguments doesn't factor into the signature of the function. So it is possible to mark the arguments as const in the implementation (.cpp) and not in the header (.h), and the compiler is fine with that. That being the case, I guess the policy should be the same as for making local variables const. One could argue that having different-looking signatures in the header and source file would confuse others (as it would have confused me). While I try to follow the Principle of Least Astonishment in whatever I write, I guess it's reasonable to expect developers to recognize this as legal and useful.

    Read the article

  • What design pattern to use for one big method calling many private methods

    - by Jeune
    I have a class that has a big method that calls a lot of private methods. I want to extract those private methods into their own classes, partly because they contain business logic and I think they should be public so they can be unit tested. Here's a sample of the code:

        public void handleRow(Object arg0) {
            if (continueRunning) {
                hashData = (HashMap<String, Object>) arg0;
                Long stdReportId = null;
                Date effDate = null;
                if (stdReportIds != null) {
                    stdReportId = stdReportIds[index];
                }
                if (effDates != null) {
                    effDate = effDates[index];
                }
                initAndPutPriceBrackets(hashData, stdReportId, effDate);
                putBrand(hashData, stdReportId, formHandlerFor == 0 ? true : useLiveForFirst);
                putMultiLangDescriptions(hashData, stdReportId);
                index++;
                if (stdReportIds != null && stdReportIds[0].equals(stdReportIds[1])) {
                    continueRunning = false;
                }
                if (formHandlerFor == REPORTS) {
                    putBeginDate(hashData, effDate, custId);
                }
                // handle logic that is related to pricemaps.
                lstOfData.add(hashData);
            }
        }

    What design pattern should I apply to this problem?
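
    One direction that fits the stated goal (extract the private methods so they can be unit tested) is to turn each step into its own small class and have the handler run them as a pipeline; depending on taste this gets called Strategy, Chain of Responsibility, or simply composition. A hedged C# sketch of that shape (the original code is Java; IRowStep, PriceBracketStep and RowHandler are illustrative names, not from the question):

        using System;
        using System.Collections.Generic;

        // Each extracted step becomes a small public class that can be unit tested alone.
        public interface IRowStep
        {
            void Apply(IDictionary<string, object> row, long? reportId, DateTime? effectiveDate);
        }

        public class PriceBracketStep : IRowStep    // hypothetical, mirrors initAndPutPriceBrackets
        {
            public void Apply(IDictionary<string, object> row, long? reportId, DateTime? effectiveDate)
            {
                // business logic formerly hidden in the private method goes here
            }
        }

        public class RowHandler
        {
            private readonly IReadOnlyList<IRowStep> steps;

            public RowHandler(IReadOnlyList<IRowStep> steps)
            {
                this.steps = steps;
            }

            public void HandleRow(IDictionary<string, object> row, long? reportId, DateTime? effectiveDate)
            {
                // The big method shrinks to a pipeline over the injected steps.
                foreach (var step in steps)
                {
                    step.Apply(row, reportId, effectiveDate);
                }
            }
        }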

    Read the article

  • Should old/legacy/unused code be deleted from source control repository?

    - by Checkers
    I've encountered this in multiple projects. As the code base evolves, some libraries, applications, and components get abandoned and/or deprecated. Most people prefer to keep them in. The usual argument is that the code does not really take any space, it can be left alone until needed again. So a repository slowly turns into a cesspool of legacy code, where it's hard to find anything. Some people delete old code, since it creates clutter, raises more questions for new people, and you can restore any old snapshot of the code base anyway. However you can't always find the old code if you don't know where to look, as none of the (common) VCS I know offer search over the entire repository including all historical revisions, and the only way to search the old files is to check out the revision where the deleted file exists. What would be a good approach to repository management?

    Read the article

  • Javascript clarity of purpose

    - by JesDaw
    Javascript usage has gotten remarkably more sophisticated and powerful in the past five years. One aspect of this sort of functional programming I struggle with, especially with Javascript’s peculiarities, is how to make clear, either through comments or code, just what is happening. Often this sort of code takes a while to decipher, even if you understand the prototypal, first-class-functional Javascript way. Any thoughts or techniques for making it perfectly clear what your code does, and how, in Javascript? I've asked this question elsewhere, but haven't gotten much response.

    Read the article

  • Scrum backlog sizing is taking forever

    - by zachary
    I work on a huge project. While we program we end up meeting for endless backlog sizing sessions where all the developers sit down with the team and size user stories. Scrum doubters are saying that this process is taking too long and development time is being wasted. My question is how long should it take to size a user story on average? And does anyone have any tips to make these sizing sessions go quicker?

    Read the article

  • Preprocessor "macro function" vs. function pointer - best practice?

    - by Dustin
    I recently started a small personal project (an RGB-to-BGR value conversion program) in C, and I realised that a function that converts from RGB to BGR can not only perform the conversion but also the inversion. Obviously that means I don't really need two functions rgb2bgr and bgr2rgb. However, does it matter whether I use a function pointer instead of a macro? For example:

        int rgb2bgr (const int rgb);

        /*
         * Should I do this because it allows the compiler to issue
         * appropriate error messages using the proper function name,
         * not to mention possible debugging benefits?
         */
        int (*bgr2rgb) (const int bgr) = rgb2bgr;

        /*
         * Or should I do this since it is merely a convenience
         * and they're really the same function anyway?
         */
        #define bgr2rgb(bgr) (rgb2bgr (bgr))

    I'm not necessarily looking for a change in execution efficiency as it's more of a subjective question out of curiosity. I am well aware of the fact that type safety is neither lost nor gained using either method. Would the function pointer merely be a convenience, or are there more practical benefits to be gained of which I am unaware?

    Read the article

  • True or False: Good design calls for every table to have a primary key, if nothing else, a running identity column?

    - by Velika
    Consider a grocery store scenario (I'm making this up) where you have FACT records that represent a sale transaction, where the columns of the Fact table include:

        SaleItemFact Table
        ------------------
        CustomerID
        ProductID
        Price
        DistributorID
        DateOfSale
        Etc Etc Etc

    Even if there are duplicates in the table when you consider ALL the keys, I would contend that a surrogate running numeric key (i.e. an identity column) should be made up, e.g. TransactionNumber of type Integer. I can see someone arguing that a Fact table might not have a unique key (though I'd invent one and waste the 4 bytes), but how about a dimension table?

    Read the article

  • Decimal rounding strategies in enterprise applications

    - by Sapphire
    Well, I am wondering about a thing with rounding decimals and storing them in the DB. The problem is this: let's say we have a customer and an invoice. The invoice has a total price of $100.495 (due to a discount percentage which is not an integer), but it is shown as $100.50 (rounded, just for printing on the invoice). It is stored in the DB with the price of $100.495, which means that when the customer makes a deposit of $100.50 there will be $0.005 extra on the account. If this is rounded, it will appear as $0, but after a couple of invoices it would keep accumulating, which would appear wrong (although it actually is not). What is best to do in this case: store the value of $100.50, or leave everything as-is?
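
    A common compromise (offered as a hedged sketch, not from the question): persist the full-precision amount and round only at presentation or payment boundaries, using decimal rather than floating point and an explicit rounding rule. A minimal C# illustration:

        using System;

        class InvoiceRoundingDemo
        {
            static void Main()
            {
                // Full-precision amount as computed from the discount; stored as-is.
                decimal invoiceTotal = 100.495m;

                // Rounded only for display / payment, with a deliberate midpoint rule.
                decimal printedTotal = Math.Round(invoiceTotal, 2, MidpointRounding.AwayFromZero);

                decimal deposit = 100.50m;
                decimal accountResidual = deposit - invoiceTotal;   // the 0.005 stays visible in the books

                Console.WriteLine($"stored: {invoiceTotal}, printed: {printedTotal}, residual: {accountResidual}");
            }
        }

    Whether that residual is written off, carried forward, or invoiced later is then a business rule rather than a storage question.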

    Read the article

  • Is "for(;;)" faster than "while (TRUE)"? If not, why do people use it?

    - by Chris Cooper
    for (;;) { //Something to be done repeatedly } I have seen this sort of thing used a lot, but I think it is rather strange... Wouldn't it be much clearer to say while (TRUE), or something along those lines? I'm guessing that (as is the reason for many-a-programmer to resort to cryptic code) this is a tiny margin faster? Why, and is it REALLY worth it? If so, why not just define it this way: #DEFINE while(TRUE) for(;;)

    Read the article

  • Visitor Pattern can be replaced with Callback functions?

    - by getit
    Is there any significant benefit to using either technique? In case there are variations, the Visitor Pattern I mean is this: http://en.wikipedia.org/wiki/Visitor_pattern And below is an example of using a delegate to achieve the same effect (at least I think it is the same). Say there is a collection of nested elements: Schools contain Departments, which contain Students. Instead of using the Visitor pattern to perform something on each collection item, why not use a simple callback (an Action delegate in C#)? Say something like this:

        class Department {
            List<Student> Students;
        }

        class School {
            List<Department> Departments;

            void VisitStudents(Action<Student> actionDelegate) {
                foreach (var dep in this.Departments) {
                    foreach (var stu in dep.Students) {
                        actionDelegate(stu);
                    }
                }
            }
        }

        School A = new School();
        ... // populate collections
        A.VisitStudents(student => {
            ... // Do Something with student ...
        });

    EDIT: an example with a delegate accepting multiple parameters. Say I wanted to pass both the student and the department; I could modify the Action definition like so:

        class School {
            List<Department> Departments;

            void VisitStudents(Action<Student, Department> actionDelegate, Action<Department> d2) {
                foreach (var dep in this.Departments) {
                    d2(dep); // This performs a different process.
                             // Using the Visitor pattern would avoid having to keep adding new delegates.
                             // This looks like the main benefit so far.
                    foreach (var stu in dep.Students) {
                        actionDelegate(stu, dep);
                    }
                }
            }
        }
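
    For comparison, a minimal sketch of the classic Visitor shape the question refers to (hedged; the interface and class names are illustrative, not from the original):

        using System;
        using System.Collections.Generic;

        interface ISchoolVisitor
        {
            void Visit(Department department);
            void Visit(Student student);
        }

        class Student
        {
            public void Accept(ISchoolVisitor visitor) => visitor.Visit(this);
        }

        class Department
        {
            public List<Student> Students = new List<Student>();

            public void Accept(ISchoolVisitor visitor)
            {
                visitor.Visit(this);
                foreach (var student in Students)
                    student.Accept(visitor);
            }
        }

        // A new operation is a new visitor class; the element classes stay unchanged.
        class ScoreReportVisitor : ISchoolVisitor   // illustrative operation
        {
            public void Visit(Department department) { /* per-department work */ }
            public void Visit(Student student) { /* per-student work */ }
        }

    The practical difference is mostly where new operations live: the delegate version keeps adding parameters or overloads to VisitStudents, while the visitor version adds one class per operation and dispatches by element type.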

    Read the article

  • Pattern for creating a database schema using JDBC

    - by Space_C0wb0y
    I have a Java application that loads data from a legacy file format into an SQLite database using JDBC. If the database file specified does not exist, it is supposed to create a new one. Currently the schema for the database is hardcoded in the application. I would much rather have it in a separate file as an SQL script, but apparently there is no easy way to execute an SQL script through JDBC. Is there any other way, or a pattern, to achieve something like this?

    Read the article

  • In .NET, which loop runs faster: for or foreach?

    - by Binoj Antony
    In C#/VB.NET/.NET, which loop runs faster: for or foreach? Ever since I read, a long time ago, that a for loop works faster than foreach, I assumed it held true for all collections: generic collections, arrays, etc. I scoured Google and found a few articles, but most of them are inconclusive (read the comments on the articles) and open-ended. What would be ideal is to have each scenario listed with the best choice for it, e.g. (just an example of how it should look):

        for iterating an array of 1000+ strings - for is better than foreach
        for iterating over an IList (non-generic) of strings - foreach is better than for

    A few references found on the web for the same:

        Original grand old article by Emmanuel Schanzer
        CodeProject: FOREACH Vs. FOR
        Blog: To foreach or not to foreach, that is the question
        asp.net forum: .NET 1.1 C# for vs foreach

    [Edit] Apart from the readability aspect, I am really interested in facts and figures; there are applications where the last mile of squeezed-out performance optimization does matter.
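
    If facts and figures are the goal, measuring on the actual collection types in a Release build is usually more reliable than general rules. A rough C# sketch of such a measurement (results vary by runtime, collection type, and workload; this is illustrative only):

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        class LoopTimingSketch
        {
            static void Main()
            {
                var items = new List<int>(new int[10000000]);
                long sum = 0;

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < items.Count; i++)   // indexer access
                    sum += items[i];
                sw.Stop();
                Console.WriteLine($"for:     {sw.ElapsedMilliseconds} ms");

                sw.Restart();
                foreach (var item in items)             // List<T> enumerator
                    sum += item;
                sw.Stop();
                Console.WriteLine($"foreach: {sw.ElapsedMilliseconds} ms (checksum {sum})");
            }
        }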

    Read the article

  • Atomic operations on several transactionless external systems

    - by simendsjo
    Say you have an application connecting to 3 different external systems. You need to update something in all 3. In case of a failure, you need to roll back the operations. This is not a hard thing to implement, but say operation 3 fails, and when rolling back, the rollback for operation 1 fails! Now the first external system is in an invalid state... I'm thinking a possible solution is to shut down the application and force a manual fix of the external system, but then again... it might already have used this information (and perhaps that's why it failed), or we might not have sufficient access. Or it might not even be a good way to roll back the action! Are there some good ways of handling such cases? EDIT: Some application details. It's a multi-user web application. Most of the work is done by scheduled jobs (through Quartz.Net), so most operations run in their own threads. Some user actions should trigger jobs that update several systems, though. The external systems are somewhat unstable. I was thinking of changing the application to use the Command and Unit of Work patterns.
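
    One way to structure this (a hedged sketch, not from the question): record a compensating action for each completed step and run them in reverse on failure; any step whose compensation also fails goes to a list for retry or manual repair, so the inconsistent system is at least tracked. A minimal C# shape:

        using System;
        using System.Collections.Generic;

        class CompensatingWorkflow
        {
            private readonly Stack<(string Name, Action Undo)> completed = new Stack<(string, Action)>();
            public readonly List<string> NeedsManualRepair = new List<string>();

            public void Run(string name, Action doIt, Action undoIt)
            {
                doIt();                              // e.g. call external system 1, 2, 3...
                completed.Push((name, undoIt));      // remember how to compensate it
            }

            public void RollBack()
            {
                while (completed.Count > 0)
                {
                    var (name, undo) = completed.Pop();
                    try { undo(); }
                    catch (Exception)
                    {
                        // Compensation itself failed: park it for retry/manual fix
                        // instead of losing track of the inconsistent system.
                        NeedsManualRepair.Add(name);
                    }
                }
            }
        }

    Retrying compensations with backoff, or making each external call idempotent so it can be safely re-applied, often matters more here than the exact pattern name.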

    Read the article

  • What is the best design/way to keep users connected?

    - by Fasih Hansmukh
    I am working on a proof of concept, for self-learning, in which I want to keep my users connected in a LIVE fashion. For example, in a game in which 4 users can play at a time, I need to keep those users connected to the game. I'm not good at socket-style programming and would prefer to do this with services. What I want to know is: what is the best way of doing this? From my initial brainstorming, I have decided to use Silverlight (in-browser or out-of-browser) as the front end [I have no issue with that]. I am more concerned about the back end: should I write a handler, a WCF service, or a full-duplex service, and use a polling mechanism? As a rough idea, I came up with timer-based logic that fires every 10 seconds at the client's end and fetches status such as: whether it is now the user's turn to roll a dice; how many users are left (in case some of them have quit); and the connected users' status in the game, like their scores/points, and then updates the game view accordingly at the client's end. Kindly post your best answers here; they will help me learn this. Regards and thanks in advance. EDIT: Starting a bounty as I need more feedback. FH
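
    If the polling route is chosen, the client-side shape can be as small as a timer that asks a service for the current game state every few seconds. A hedged C# sketch (IGameService and GameState are hypothetical names, not from the question; the service could be a WCF proxy or any HTTP endpoint):

        using System;
        using System.Timers;

        // Hypothetical contract the client polls.
        public interface IGameService
        {
            GameState GetState(Guid gameId, Guid playerId);
        }

        public class GameState
        {
            public bool IsMyTurn { get; set; }
            public int PlayersRemaining { get; set; }
            public string ScoreSummary { get; set; }
        }

        public class GameStatePoller
        {
            private readonly Timer timer = new Timer(10000);   // poll every 10 seconds

            public GameStatePoller(IGameService service, Guid gameId, Guid playerId, Action<GameState> onUpdate)
            {
                timer.Elapsed += (s, e) => onUpdate(service.GetState(gameId, playerId));
            }

            public void Start() => timer.Start();
            public void Stop() => timer.Stop();
        }

    Long polling or a duplex binding would avoid the fixed 10-second lag, but a plain timer like this is often enough for a proof of concept.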

    Read the article

  • DDD: Enum-like entities

    - by Chris
    Hi all, I have the following DB model:

        Person table
        ID | Name  | StateId
        --------------------
        1  | Joe   | 1
        2  | Peter | 1
        3  | John  | 2

        State table
        ID | Desc
        -------------
        1  | Working
        2  | Vacation

    and the domain model would be (simplified):

        public class Person
        {
            public int Id { get; }
            public string Name { get; set; }
            public State State { get; set; }
        }

        public class State
        {
            private int id;
            public string Name { get; set; }
        }

    The state might be used in the domain logic, e.g.:

        if (person.State == State.Working)
            // some logic

    So from my understanding, the State acts like a value object which is used for domain logic checks. But it also needs to be present in the DB model to represent a clean ERM. So State might be extended to:

        public class State
        {
            private int id;
            public string Name { get; set; }
            public static State New
            {
                get { return new State([hardCodedIdHere?], [hardCodeNameHere?]); }
            }
        }

    But using this approach the name of the state would be hardcoded into the domain. Do you know what I mean? Is there a standard approach for such a thing? From my point of view, what I am trying to do is use an object (which is persisted, from the ERM design perspective) as a sort of value object within my domain. What do you think? Question update: Probably my question wasn't clear enough. What I need to know is how I would use an entity (like the State example) that is stored in a database within my domain logic, while avoiding things like:

        if (person.State.Id == State.Working.Id)
            // some logic

    or

        if (person.State.Id == WORKING_ID)
            // some logic
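
    One common answer (offered as a hedged sketch, not necessarily "the" standard approach): keep State as a small enumeration-like class whose well-known instances hardcode only the stable ID, compare by ID, and leave the display name to be loaded from the State table. Roughly:

        using System;

        public class State
        {
            // Only the stable IDs are hardcoded; they are the contract with the State table.
            // Display names can stay in (and be loaded from) the database.
            public static readonly State Working  = new State(1);
            public static readonly State Vacation = new State(2);

            public int Id { get; }
            public string Name { get; set; }   // hydrated by the data mapper when loading

            public State(int id) { Id = id; }

            public override bool Equals(object obj) => obj is State other && other.Id == Id;
            public override int GetHashCode() => Id;
            public static bool operator ==(State a, State b) => Equals(a, b);
            public static bool operator !=(State a, State b) => !Equals(a, b);
        }

        // Domain logic keeps the readable form from the question:
        // if (person.State == State.Working) { /* some logic */ }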

    Read the article

  • Are there any downsides in using C++ for network daemons?

    - by badcat
    Hey guys! I've been writing a number of network daemons in different languages over the past years, and now I'm about to start a new project which requires a new custom implementation of a proprietary network protocol. The protocol is pretty simple - some basic JSON-formatted messages which are transmitted in some basic frame wrapping so that clients know a message has arrived completely and is ready to be parsed. The daemon will need to handle a number of connections (about 200 at the same time), do some management of them, and pass messages along, like in a chat room. In the past I've been using mostly C++ to write my daemons, often with the Qt4 framework (the network parts, not the GUI parts!), because that's what I also used for the rest of the projects and it was simple to do and very portable. This usually worked just fine, and I didn't have much trouble. Being a Linux administrator for a good while now, I've noticed that most of the network daemons in the wild are written in plain C (of course some are written in other languages too, but I get the feeling that 80% of the daemons are written in plain C). Now I wonder why that is. Is this due to a purely historic UNIX background (like KISS), or for plain portability, or reduction of bloat? What are the reasons not to use C++ or any "higher level" languages for things like daemons? Thanks in advance! Update 1: For me, using C++ is usually more convenient because I have objects with getter and setter methods and such. Plain C's "context" objects can be a real pain at some point - especially when you are used to object-oriented programming. Yes, I'm aware that C++ is a superset of C, and that C code is basically C++. But that's not the point. ;)

    Read the article

  • SQL Server error handling: exceptions and the database-client contract

    - by gbn
    We’re a team of SQL Server database developers. Our clients are a mixed bag of C#/ASP.NET, C# and Java web services, Java/Unix services, and some Excel. Our client developers only use stored procedures that we provide, and we expect that (where sensible, of course) they treat them like web service methods. Some of our client developers don’t like SQL exceptions. They understand them in their own languages, but they don’t appreciate that SQL is limited in how we can communicate issues. I don’t just mean SQL errors, such as trying to insert “bob” into an int column. I also mean exceptions such as telling them that a reference value is wrong, or that data has already changed, or that they can’t do this because the aggregate is not zero. They don’t really have any concrete alternatives: they’ve mentioned that we should use output parameters, but we assume an exception means “processing stopped/rolled back”. How do folks here handle the database-client contract? Either generally, or where there is separation between the DB and client code monkeys. Edits:

        we use SQL Server 2005 TRY/CATCH exclusively
        we log all errors after the rollback to an exception table already
        we're concerned that some of our clients won't check output parameters and assume everything is OK; we need errors flagged up for support to look at
        everything is an exception... the clients are expected to do some message parsing to separate information vs errors
        to separate our exceptions from DB engine and calling errors, they should use the error number (ours are all 50,000 of course)
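
    On the client side of that contract, one hedged sketch of how a C# caller could separate the database's own business errors (custom RAISERROR numbers of 50,000 and up) from engine and plumbing errors; the names here are illustrative, not from the question:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class ProcCaller
        {
            // Runs a stored procedure and translates custom error numbers (>= 50000)
            // into a business-rule exception, leaving engine/connection errors to bubble up.
            public static void Execute(string connectionString, string procName)
            {
                try
                {
                    using (var conn = new SqlConnection(connectionString))
                    using (var cmd = new SqlCommand(procName, conn) { CommandType = CommandType.StoredProcedure })
                    {
                        conn.Open();
                        cmd.ExecuteNonQuery();
                    }
                }
                catch (SqlException ex) when (ex.Number >= 50000)
                {
                    // Message text comes from the RAISERROR in the procedure's CATCH block.
                    throw new InvalidOperationException($"Business rule violated by {procName}: {ex.Message}", ex);
                }
            }
        }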

    Read the article

  • Should I make sure arguments aren't null before using them in a function?

    - by Nathan W
    The title may not really explain what I'm trying to get at; I couldn't really think of a way to describe what I mean. I was wondering whether it is good practice to check that the arguments a function accepts are not null or empty before using them. I have this function which just wraps some hash creation, like so:

        Public Shared Function GenerateHash(ByVal FilePath As IO.FileInfo) As String
            If (FilePath Is Nothing) Then
                Throw New ArgumentNullException("FilePath")
            End If

            Dim _sha As New Security.Cryptography.MD5CryptoServiceProvider
            Dim _Hash = Convert.ToBase64String(_sha.ComputeHash(New IO.FileStream(FilePath.FullName, IO.FileMode.Open, IO.FileAccess.Read)))
            Return _Hash
        End Function

    As you can see, it just takes an IO.FileInfo as an argument, and at the start of the function I check to make sure that it is not Nothing. I'm wondering: is this good practice, or should I just let it get to the actual hasher and throw the exception there because the argument is null? Thanks.
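
    The practical difference is mostly in what the caller sees when passing null: an explicit guard reports the offending parameter at the public boundary, while letting it through surfaces later as a NullReferenceException (or a less obvious failure) somewhere inside. A C# sketch of the same guard, offered only as an illustration of that trade-off:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        static class Hasher
        {
            public static string GenerateHash(FileInfo filePath)
            {
                // Fail fast at the public boundary, naming the parameter for the caller.
                if (filePath == null)
                    throw new ArgumentNullException(nameof(filePath));

                using (var md5 = MD5.Create())
                using (var stream = filePath.OpenRead())
                {
                    return Convert.ToBase64String(md5.ComputeHash(stream));
                }
            }
        }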

    Read the article

  • Representing xml through a single class

    - by Charles
    I am trying to abstract away the difficulties of configuring an application that we use. This application takes an XML configuration file, and it can be a bit bothersome to edit this file manually, especially when we are trying to set up some automatic testing scenarios. I am finding that reading XML is nice and pretty easy: you get a network of element nodes that you can just walk through and build your structures quite nicely. However, I am slowly finding that the reverse is not quite so nice. I want to be able to build an XML configuration file through a single, easy-to-use interface, and because XML is composed of a system of nodes I am having a lot of trouble trying to maintain the 'easy' part. Does anyone know of any examples or samples that easily and intuitively build XML files without declaring a bunch of element-type classes and expecting the user to build the network themselves? For example, if my desired XML output is like so:

        <cook version="1.1">
          <recipe name="chocolate chip cookie">
            <ingredients>
              <ingredient name="flour" amount="2" units="cups"/>
              <ingredient name="eggs" amount="2" units="" />
              <ingredient name="cooking chocolate" amount="5" units="cups" />
            </ingredients>
            <directions>
              <direction name="step 1">Preheat oven</direction>
              <direction name="step 2">Mix flour, egg, and chocolate</direction>
              <direction name="step 2">bake</direction>
            </directions>
          </recipe>
          <recipe name="hot dog">
          ...

    How would I go about designing a class to build that network of elements and make one easy-to-use interface for creating recipes? Right now I have a recipe object, an ingredient object, and a direction object. The user must make each one, set the attributes in the class, and attach them to the root object, which assembles the XML elements and outputs the formatted XML. It's not very pretty, and I just know there has to be a better way. I am using Python, so bonus points for Pythonic solutions.

    Read the article

  • jQuery: Stopping a periodic ajax call?

    - by Legend
    I am writing a small jQuery plugin to update a set of divs with content obtained using Ajax calls. Initially, let's assume we have 4 divs. I am doing something like this:

        (function($) {
            ....
            ....
            // main function
            $.fn.jDIV = {
                init: function() {
                    ...
                    ...
                    for (var i = 0; i < numDivs; i++) {
                        this.query(i);
                    }
                    this.handlers();
                },
                query: function(divNum) {
                    // Makes the relevant ajax call
                },
                handlers: function() {
                    for (var i = 0; i < numDivs; i++) {
                        setInterval("$.fn.jDIV.query(" + i + ")", 5000);
                    }
                }
            };
        })(jQuery);

    I would like to be able to enable and disable a particular ajax query. I was thinking of adding a "start" and a "stop" instead of the "handlers" function, and subsequently storing the setInterval handle like this:

        start: function(divNum) {
            divs[divNum] = setInterval("$.fn.jDIV.query(" + divNum + ")", 5000);
        },
        stop: function(divNum) {
            clearInterval(divs[divNum]);
        }

    I did not use jQuery to set up and destroy the event handlers. Is there a better approach (perhaps using more of jQuery) to achieve this?

    Read the article
