Search Results

  • What is the Proper approach for Constructing a PhysicalAddress object from Byte Array

    - by Paul Farry
    I'm trying to understand the correct approach for a constructor that accepts a byte array, with regard to how it stores its data (specifically with PhysicalAddress). I have an array of 6 bytes (theAddress) that is constructed once, and a source array of 18 bytes (theAddresses) that is loaded from a TCP connection. I copy 6 bytes from theAddresses+offset into theAddress and construct the PhysicalAddress from it. The problem is that PhysicalAddress just stores the reference to the array that was passed in, so if you subsequently check the addresses they only ever point to the last address that was copied in. When I took a look inside PhysicalAddress with Reflector it was easy to see what's going on:

        public PhysicalAddress(byte[] address)
        {
            this.changed = true;
            this.address = address;
        }

    Now I know this can be solved by creating the theAddress array on each pass, but I wanted to find out what the best practice really is. Should the constructor of an object that accepts a byte array create its own private variable to hold the data and copy it in from the original? Should it just hold the reference to what was passed in? Or should I just create theAddress on each pass in the loop?
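
    For what it's worth, a defensive copy on the caller's side works around the aliasing; a minimal sketch (AddressAt is a hypothetical helper name):

        // Copy the 6 address bytes into a fresh array before constructing,
        // so each PhysicalAddress ends up owning independent storage.
        private static PhysicalAddress AddressAt(byte[] theAddresses, int offset)
        {
            byte[] copy = new byte[6];
            Buffer.BlockCopy(theAddresses, offset, copy, 0, 6);
            return new PhysicalAddress(copy); // the ctor keeps this reference, but nothing else does
        }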

  • Passing arguments and values from HTML to jQuery (events)

    - by Jaroslav Moravec
    What is the accepted practice for passing arguments from HTML to a jQuery event handler? For example, getting the id of a row from the database:

        <tr class="jq_killMe" id="thisItemId-id"> ... </tr>

    and the jQuery:

        $(".jq_killMe").click(function () {
            var tmp = $(this).attr('id').split("-");
            var id = tmp[0];
            // ...
        });

    What's the best practice if I want to pass more than one argument? Is it better not to use jQuery? For example:

        <tr onclick="killMe('id')"> ... </tr>

    I didn't find the answer to my question; I would be glad even for links. Thanks.

    Edit (pre-solution): So you suggested these methods to do that:

    1. Add custom attributes to the element (XHTML)
    2. Use the id attribute and parse it with a regex
    3. Use data-* attributes in HTML5
    4. Use hidden child elements

    I like the first solution, but I would like to (in fact, my employer requires that I) produce valid code. Here is a nice question with answers on that point: http://stackoverflow.com/questions/994856/so-what-if-custom-html-attributes-arent-valid-xhtml And the second is not as pretty as the first, but it is valid, so that is the compromise. The third is the solution for the future, but there are a lot of CMSes where we have to use XHTML or HTML4 (and HTML5 adoption is a long process).
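
    A hedged sketch of the data-* route (jQuery 1.4.3+ exposes these through .data(); the attribute names here are made up):

        <tr class="jq_killMe" data-item-id="42" data-kind="article"> ... </tr>

        $(".jq_killMe").click(function () {
            var id = $(this).data("itemId");    // reads data-item-id, no string parsing
            var kind = $(this).data("kind");    // a second argument comes along for free
            // ...
        });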

  • Achieving Thread-Safety

    - by Smasher
    Question: How can I make sure my application is thread-safe? Are there any common practices, testing methods, things to avoid, things to look for?

    Background: I'm currently developing a server application that performs a number of background tasks in different threads and communicates with clients using Indy (which uses another bunch of automatically generated threads for the communication). Since the application should be highly available, a program crash is a very bad thing and I want to make sure that the application is thread-safe. No matter what, from time to time I discover a piece of code that throws an exception that never occurred before, and in most cases I realize that it is some kind of synchronization bug, where I forgot to synchronize my objects properly. Hence my question concerning best practices, testing of thread-safety and things like that.

    mghie: Thanks for the answer! I should perhaps be a little more precise. Just to be clear, I know the principles of multithreading, I use synchronization (monitors) throughout my program, and I know how to differentiate threading problems from other implementation problems. But nevertheless, I keep forgetting to add proper synchronization from time to time. Just to give an example, I used the RTL sort function in my code. It looked something like:

        FKeyList.Sort(CompareKeysFunc);

    It turns out that I had to synchronize FKeyList while sorting. It just didn't come to mind when I initially wrote that simple line of code. It's these things I want to talk about. What are the places where one easily forgets to add synchronization code? How do YOU make sure that you added sync code in all the important places?
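
    For the FKeyList example, one conventional Delphi guard is a TCriticalSection (from SyncObjs) owned by the same class as the list; a minimal sketch, assuming FLock is created alongside FKeyList and used around every access to it:

        FLock.Acquire;
        try
          FKeyList.Sort(CompareKeysFunc);
        finally
          FLock.Release;  // the finally block keeps the lock exception-safe
        end;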

  • Coding Practices which enable the compiler/optimizer to make a faster program.

    - by EvilTeach
    Many years ago, C compilers were not particularly smart. As a workaround K&R invented the register keyword, to hint to the compiler that maybe it would be a good idea to keep this variable in an internal register. They also made the ternary operator to help generate better code. As time passed, the compilers matured. Their flow analysis became smart enough that they could make better decisions about what values to hold in registers than you possibly could, and the register keyword became unimportant. FORTRAN can still be faster than C for some sorts of operations, due to aliasing issues; in theory, with careful coding, one can get around this restriction and enable the optimizer to generate faster code. What coding practices are available that may enable the compiler/optimizer to generate faster code? Identifying the platform and compiler you use would be appreciated. Why does the technique seem to work? Sample code is encouraged. Here is a related question. [Edit] This question is not about the overall process of profiling and optimizing. Assume that the program has been written correctly, compiled with full optimization, tested and put into production. There may be constructs in your code that prohibit the optimizer from doing the best job that it can. What can you do to refactor that will remove these prohibitions, and allow the optimizer to generate even faster code? [Edit] Offset related link
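
    One concrete instance of the aliasing point: C99's restrict promises the compiler that two pointers do not overlap, recovering Fortran-like freedom. A hedged sketch (GCC/Clang with -O2 or higher):

        /* Without restrict, the compiler must assume dst and src might alias,
           forcing it to reload src[i] after every store. With restrict it is
           free to keep values in registers and vectorize the loop. */
        void scale(float *restrict dst, const float *restrict src, int n)
        {
            for (int i = 0; i < n; i++)
                dst[i] = 2.0f * src[i];
        }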

  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on Google App Engine returns content items (just text) and comments on them. It works like this (pseudo-ish code):

        query: get keys of latest content    # query to datastore
        for each item in content:
            if item_dict in memcache:
                use item_dict
            else:
                build_item_dict(item)        # by fetching from datastore
                store item_dict in memcache
        send all item_dicts to template

    I get all of the content dictionaries and send them to the template, which uses them to create the webpage. My problem is that if the memcache has expired, then for each item I want to display I have to (1) look the item up in memcache, (2) since no memcache entry exists, fetch the item from the datastore, and (3) store the item in memcache. These calls build up quickly. I don't set an expiry time for the memcache entries, so this really only happens once in the morning, but the page then takes long enough to load (~1 sec) that the browser reports it as not existing. Normally, my pages take about 50 ms to load. This approach works decently for frequent visits, but it has its flaws as shown above. How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request. Thanks in advance.
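
    The per-item round trips can be batched: memcache.get_multi fetches every cached item in one RPC, and set_multi writes all the misses back in one more. A sketch under those assumptions (build_item_dict as in the pseudocode above, here assumed to take a key):

        from google.appengine.api import memcache

        def get_item_dicts(keys):
            ids = [str(k) for k in keys]
            cached = memcache.get_multi(ids)        # one RPC for all the hits
            fresh = dict((str(k), build_item_dict(k))
                         for k in keys if str(k) not in cached)
            if fresh:
                memcache.set_multi(fresh)           # one RPC to store all the misses
            return [cached.get(str(k), fresh.get(str(k))) for k in keys]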

  • To what extent should code try to explain fatal exceptions?

    - by Andrzej Doyle
    I suspect that all non-trivial software is likely to experience situations where it hits an external problem it cannot work around and thus needs to fail. This might be due to bad configuration, an external server being down, a full disk, etc. In these situations, especially if the software is running in non-interactive mode, I expect that all one can really do is log an error and wait for the admin to read the logs and fix the problem. If someone happens to interact with the software in the meantime, e.g. a request comes in to a server that failed to initialize properly, then perhaps an appropriate hint can be given to check the logs, and maybe even the error can be echoed (depending on whether you can tell if they're a technical person as opposed to a business user). For the moment, though, let's not think too hard about this part.

    My question is: to what extent should the software be responsible for trying to explain the meaning of the fatal error? In general, how much competence/knowledge are you allowed to presume on the part of administrators of the software, and how much troubleshooting information and how many potential resolution steps should you include when logging fatal errors? Of course if there's something that's unique to the runtime context, this should definitely be logged; but let's assume your software needs to talk to Active Directory via LDAP and gets back an error "[LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece]". Is it reasonable to assume that the maintainers will be able to Google the error code and work out what it means, or should the software try to parse the error code and log that this is caused by an incorrect user DN in the LDAP config? I don't know if there is a definitive best-practices answer for this, so I'm keen to hear a variety of views.
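
    If you do decide to explain the well-known cases, the AD-specific "data NNN" sub-codes in an error-49 message are stable enough to map to hints. A hedged Java sketch (the code-to-hint table reflects Microsoft's documented AcceptSecurityContext values; LdapHints and hintFor are made-up names):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        final class LdapHints {
            // Translate the AD sub-code into a log hint; fall back to the raw text.
            static String hintFor(String ldapError) {
                Matcher m = Pattern.compile("data (\\w+),").matcher(ldapError);
                if (!m.find()) return ldapError;         // unknown shape: log it raw
                String sub = m.group(1);
                if (sub.equals("525")) return "user not found - check the user DN in the LDAP config";
                if (sub.equals("52e")) return "invalid credentials - check the bind password";
                if (sub.equals("532")) return "password expired";
                if (sub.equals("533")) return "account disabled";
                if (sub.equals("775")) return "account locked out";
                return ldapError;                        // unmapped sub-code: log it raw
            }
        }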

  • Should I return IEnumerable<T> or IQueryable<T> from my DAL?

    - by Gary '-'
    I know this could be opinion, but I'm looking for best practices. As I understand it, IQueryable implements IEnumerable, so in my DAL I currently have method signatures like the following:

        IEnumerable<Product> GetProducts();
        IEnumerable<Product> GetProductsByCategory(int categoryId);
        Product GetProduct(int productId);

    Should I be using IQueryable here? What are the pros and cons of either approach? Note that I am planning on using the Repository pattern, so I will have a class like so:

        public class ProductRepository
        {
            DBDataContext db = new DBDataContext(/* connection string */);

            public IEnumerable<Product> GetProductsNew(int daysOld)
            {
                return db.GetProducts()
                         .Where(p => p.AddedDateTime > DateTime.Now.AddDays(-daysOld));
            }
        }

    Should I change my IEnumerable<T> to IQueryable<T>? What advantages/disadvantages are there to one or the other?
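
    The practical difference shows up in where the Where clause runs; a hedged LINQ to SQL sketch (db.Products stands in for the context's product table):

        DateTime cutoff = DateTime.Now.AddDays(-daysOld);

        // IQueryable: the predicate composes into the SQL that is sent to the
        // database, so only matching rows ever come over the wire.
        IQueryable<Product> fresh = db.Products.Where(p => p.AddedDateTime > cutoff);

        // IEnumerable: enumeration starts first, then the filter runs in memory,
        // so every product row is pulled from the database before filtering.
        IEnumerable<Product> all = db.Products.AsEnumerable();
        IEnumerable<Product> freshInMemory = all.Where(p => p.AddedDateTime > cutoff);

    Returning IQueryable<T> keeps that composability available to callers; returning IEnumerable<T> keeps the repository in control of the final query shape.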

  • Mercurial repository usage with binary files for building setup files

    - by Ryan
    I have an existing Mercurial repository for a C++ application in a small corporate environment. I asked a co-worker to add the setup script to the repository, and he added all of the dependency binaries, PDFs, and the executable to the repository under an Install directory. I dislike having the binaries and dependencies in the same repository, but I'd like recommendations on best practices. Here are the options I am considering:

    1. Create a separate repository for the installer and related files
    2. Create a subrepository for the installer and related files
    3. Use a (yet to be identified) build dependency manager

    I am wary of using a subrepository with Mercurial based on what I've read so far and the (apparently) incomplete implementation. I would like to get a project dependency system, e.g. Ivy, but I don't know all of the options and haven't had time yet to try any of them out. I thought I'd use TortoiseHg as a model: it does not keep the TortoiseHg binaries in the repository (although it does have some binaries such as kdiff3.exe); instead it uses setup.py to clone multiple repositories and build the apps. This seems reasonable for OSS, but not so much for corporate environments. Recommendations?
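
    For reference, option 2 is wired up with a .hgsub file at the root of the parent repository; a minimal sketch (the path and URL are hypothetical):

        # .hgsub: working-directory path = source repository
        Install = https://hg.example.com/product-installer

    Committing in the parent then records the exact installer revision in .hgsubstate, so checkouts of the application pin a matching installer.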

  • "Special case" records for foreign key constraints

    - by keithjgrant
    Let's say I have a MySQL table called foo with a foreign key option_id constrained to the option table. When I create a foo record, the user may or may not have selected an option, and 'no option' is a viable selection. What is the best way to differentiate between null (i.e. the user hasn't made a selection yet) and 'no option' (i.e. the user selected 'no option')?

    Right now, my plan is to insert a special record into the option table. Let's say that winds up with an id of 227 (this table already has a number of records at this point, so '1' isn't available). I have no need to access this record at a database level, and it would act as nothing more than a placeholder that the foreign key in the foo table can reference. So do I just hard-code 227 in my codebase when I'm creating foo records where the user has selected 'no option'? The hard-coded id seems sloppy and leaves room for error as the code is maintained down the road, but I'm not really sure of another approach.
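
    One way to avoid the magic number is to give the sentinel row a stable unique code and resolve it by that code instead of by id; a hedged sketch (assumes the option table has a name column; the code column is new):

        -- Identify the sentinel by a stable code rather than a hard-coded id
        ALTER TABLE `option` ADD COLUMN code VARCHAR(32) NULL UNIQUE;
        INSERT INTO `option` (name, code) VALUES ('No option', 'NO_OPTION');

        -- At foo-creation time, look the sentinel up without knowing its id
        INSERT INTO foo (option_id)
        SELECT id FROM `option` WHERE code = 'NO_OPTION';

    The application can cache that lookup once at startup, so the codebase carries the string 'NO_OPTION' instead of the number 227.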

  • Creating a method for wrapping a loaded image in a UIImageView

    - by eco_bach
    I have the following in my applicationDidFinishLaunching method:

        UIImage *image2 = [UIImage imageWithContentsOfFile:
            [[NSBundle mainBundle] pathForResource:@"image2.jpg" ofType:nil]];
        view2 = [[UIImageView alloc] initWithImage:image2];
        view2.hidden = YES;
        [containerView addSubview:view2];

    I'm simply adding an image to a view. But because I need to add 30-40 images, I need to wrap the above in a method that returns a UIImageView and then call it from a loop. This is my first attempt at the method:

        - (UIImageView *)wrapImageInView:(NSString *)imagePath
        {
            UIImage *image = [UIImage imageWithContentsOfFile:
                [[NSBundle mainBundle] pathForResource:imagePath ofType:nil]];
            UIImageView *view = [[UIImageView alloc] initWithImage:image];
            view.hidden = YES;
            return view;
        }

    And then to invoke it I have the following so far; for simplicity I am only wrapping 3 images:

        // Create an array and add elements to it
        NSMutableArray *anArray = [[NSMutableArray alloc] init];
        [anArray addObject:@"image1.jpg"];
        [anArray addObject:@"image2.jpg"];
        [anArray addObject:@"image3.jpg"];

        // Use a for-each loop to iterate through the array
        for (NSString *s in anArray) {
            UIImageView *wrappedImgInView = [self wrapImageInView:s];
            [containerView addSubview:wrappedImgInView];
            NSLog(@"%@", s);
        }

        // Release the array
        [anArray release];

    I have 2 general questions: 1. Is my approach correct (i.e. following best practices) for what I am trying to do, namely loading multiple images (jpgs, pngs, etc.) and adding them to a container view? 2. For this to work properly with a large number of images, do I need to keep my array creation separate from my method invocation? Any other suggestions welcome!
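
    If the filenames follow a pattern, the intermediate array isn't needed at all; a hedged sketch (and note that addSubview: retains the view, so pre-ARC the wrapper could return an autoreleased instance to avoid leaking one retain per image):

        for (int i = 1; i <= 40; i++) {
            NSString *name = [NSString stringWithFormat:@"image%d.jpg", i];
            [containerView addSubview:[self wrapImageInView:name]];
        }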

  • ASP.NET: How to repeat HTML parts with minor differences on a page?

    - by tinky05
    It's a really simple problem. I've got HTML code like this:

        <div>
            <img src="image1.jpg" alt="test1" />
        </div>
        <div>
            <img src="image2.jpg" alt="test2" />
        </div>
        <div>
            <img src="image3.jpg" alt="test3" />
        </div>
        ...

    The data is coming from a DB (image name, alt text). In Java, I would do something like this: save the info in an array in the back end, then for the presentation loop through it with JSTL:

        <c:forEach items="${data}" var="item">
            <div>
                <img src="${item.image}" alt="${item.alt}" />
            </div>
        </c:forEach>

    What's the best practice in ASP.NET? I just don't want to build a string of HTML in the code-behind; that's ugly IMO.
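
    The conventional WebForms answer is a data-bound Repeater; a hedged sketch (Src and Alt are assumed property names on the items loaded from the DB):

        <asp:Repeater ID="ImageRepeater" runat="server">
            <ItemTemplate>
                <div>
                    <img src='<%# Eval("Src") %>' alt='<%# Eval("Alt") %>' />
                </div>
            </ItemTemplate>
        </asp:Repeater>

    and in the code-behind:

        ImageRepeater.DataSource = images;  // the collection loaded from the DB
        ImageRepeater.DataBind();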

  • PHP form submit and the resend information screen

    - by Para
    Hello, I want to ask a best practice question. Suppose I have a form in PHP with 3 fields, say name, email and comment. I submit the form via POST, and in PHP I try to insert the data into the database. Suppose the insertion fails. I should now show the user an error and display the form filled in with the data he previously entered so he can correct the mistake; showing the form in its initial state won't do. So I display the form with the 3 fields now filled in from PHP with echo or some such. But now if I click refresh I get a message saying "Are you sure you want to resend information?"

    OK. Suppose that after I insert the data I don't carry on, but instead redirect to the same page with the necessary parameters in the query string. This makes the message go away, but I have to carry 3 parameters in the query string. So my question is: how is it better to do this? I want to avoid carrying lots of parameters in the query string but also not get that prompt. How can this be done? Should I use cookies to store the form information?
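
    The usual shape is POST/Redirect/GET with the submitted values parked in the session rather than the query string; a minimal sketch (insert_into_db is a hypothetical helper):

        <?php
        session_start();

        if ($_SERVER['REQUEST_METHOD'] === 'POST') {
            if (!insert_into_db($_POST)) {
                $_SESSION['old'] = $_POST;      // keep the user's input for redisplay
                $_SESSION['error'] = 'Could not save - please try again.';
            }
            header('Location: form.php');       // the redirect defeats the re-POST prompt
            exit;
        }

        $old = isset($_SESSION['old']) ? $_SESSION['old'] : array();
        unset($_SESSION['old']);                // one-shot "flash" data
        // ...render the form, echoing $old['name'], $old['email'], $old['comment']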

  • Program structure in a long-running data-processing Python script

    - by fmark
    For my current job I am writing some long-running (think hours to days) scripts that do CPU-intensive data processing. The program flow is very simple: it proceeds into the main loop, completes the main loop, saves output and terminates. The basic structure of my programs tends to be like so:

        <import statements>
        <constant declarations>
        <misc function declarations>

        def main():
            for blah in blahs():
                <lots of local variables>
                <lots of tightly coupled computation>
                for something in somethings():
                    <lots more local variables>
                    <lots more computation>
            <etc., etc.>
            <save results>

        if __name__ == "__main__":
            main()

    This gets unmanageable quickly, so I want to refactor it into something more maintainable, without sacrificing execution speed. Each chunk of code relies on a large number of variables, however, so refactoring parts of the computation out into functions would make the parameter lists grow out of hand very quickly. Should I put this sort of code into a Python class and change the local variables into instance variables? It doesn't make a great deal of sense to me conceptually to turn the program into a class, as the class would never be reused and only one instance would ever be created per run. What is the best-practice structure for this kind of program? I am using Python, but the question is relatively language-agnostic, assuming modern object-oriented language features.
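
    One common refactoring, sketched under the question's assumptions: make each stage a method on a class whose attributes replace the shared locals, so parameter lists stay flat and each piece can be tested alone (all names below are placeholders):

        class Pipeline(object):
            """Holds the state the stages share, instead of threading it
            through ever-growing parameter lists."""

            def __init__(self, config):
                self.config = config
                self.results = []

            def run(self):
                for blah in self.load_inputs():
                    self.process(blah)
                self.save_results()

            def load_inputs(self):
                return []            # whatever blahs() used to produce

            def process(self, blah):
                pass                 # the former loop body, split further as needed

            def save_results(self):
                pass

        if __name__ == "__main__":
            Pipeline(config=None).run()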

  • What makes an effective UI for displaying versioning of structured hierarchical data

    - by Fadrian Sudaman
    Traditional version control systems display versioning information by grouping projects, folders and files with a tree view on the left and a details view on the right; you then click each item to look at the revision history for that configuration item. Assuming that I have all the historical versioning information available for a project from an object-oriented model perspective (e.g. classes, methods, parameters and so on), what do you think would be the most effective way to present such information in a UI, so that you can easily navigate both a snapshot view of the project and the historical versioning information?

    Put yourself in the position of using a tool like this every day in your job, the way you currently use SVN, SourceSafe, Perforce or any other VCS: what would contribute to the usability, productivity and effectiveness of the tool? I personally find the classical way of displaying folders and files described above very restrictive and less effective for displaying deeply nested logical models. Assuming this is a greenfield project not restricted to a specific technology, how do you think I should best approach this? I am looking for ideas and input here to add value to my research project, so feel free to make any suggestions that you think are valuable. Thanks to anyone who shares their thoughts.

  • Generics vs inheritance (when no collection classes are involved)

    - by Ram
    This is an extension of this question, and probably might even be a duplicate of some other question (if so, please forgive me). I see from MSDN that generics are usually used with collections:

        The most common use for generic classes is with collections like linked
        lists, hash tables, stacks, queues, trees and so on where operations such
        as adding and removing items from the collection are performed in much
        the same way regardless of the type of data being stored.

    The examples I have seen also validate the above statement. Can someone give a valid use of generics in a real-life scenario which does not involve any collections? Pedantically, I was thinking about making an example which does not involve collections:

        public class Animal<T>
        {
            public void Speak()
            {
                Console.WriteLine("I am an Animal and my type is " + typeof(T).ToString());
            }

            public void Eat()
            {
                // Eat food
            }
        }

        public class Dog
        {
            public void WhoAmI()
            {
                Console.WriteLine(this.GetType().ToString());
            }
        }

    and "an Animal of type Dog" would be:

        Animal<Dog> magic = new Animal<Dog>();

    It is entirely possible to have Dog inherit from Animal (assuming a non-generic version of Animal), i.e. Dog : Animal, and therefore a Dog is an Animal. Another example I was thinking of was a BankAccount: it could be BankAccount<Checking> and BankAccount<Savings>, but it could just as well be Checking : BankAccount and Savings : BankAccount. Are there any best practices to determine if we should go with generics or with inheritance?
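
    A non-collection case where generics do something inheritance can't: the type parameter preserves the concrete type in the signature. A hedged sketch:

        // The constraint gives compile-time safety and no casting; an
        // inheritance-based Max(IComparable, IComparable) would lose the
        // static type of the result.
        public static T Max<T>(T a, T b) where T : IComparable<T>
        {
            return a.CompareTo(b) >= 0 ? a : b;
        }

        int biggest  = Max(3, 7);             // T inferred as int
        string later = Max("apple", "pear");  // same code, any comparable type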

  • When should we use Views, Temporary Tables and Direct Queries? What are the performance issues in a Stored Procedure?

    - by Shantanu Gupta
    I want to know the performance implications of using views, temp tables and direct queries in a stored procedure. I have a table that gets created every time a certain trigger fires. I know this trigger will fire very rarely, and only once at setup time. Now I have to use that trigger-created table in many places for fetching data, and I can confirm that no one makes any changes to it, i.e. it is a read-only table. I have to use this table's data, joined with multiple other tables, to produce results for further queries, say:

        SELECT * FROM triggertable

    By using a temp table:

        SELECT ... INTO #tx FROM triggertable JOIN t2 ... JOIN t3 ...  -- and so on

        SELECT a, b, c FROM #tx   -- do something
        SELECT d, e, f FROM #tx   -- do something else
        -- and so on: around 6-7 queries in a row in a stored procedure

    By using a view:

        CREATE VIEW viewname AS
        SELECT ... FROM triggertable JOIN t2 ... JOIN t3 ...  -- and so on

        SELECT a, b, c FROM viewname   -- do something
        SELECT d, e, f FROM viewname   -- do something else
        -- and so on: around 6-7 queries in a row in a stored procedure

    This view could be used in other places as well, so I would create it in the database rather than in the stored procedure.

    By using direct queries:

        SELECT a, b, c FROM triggertable JOIN t2 ... JOIN t3 ...   -- do something
        SELECT d, e, f FROM triggertable JOIN t2 ... JOIN t3 ...   -- do something else
        -- and so on: around 6-7 queries in a row in a stored procedure

    Which would be the best to use in this case?

  • C++ Matrix class hierarchy

    - by bpw1621
    Should a matrix software library have a root class (e.g., MatrixBase) from which more specialized (or more constrained) matrix classes (e.g., SparseMatrix, UpperTriangularMatrix, etc.) derive? If so, should the derived classes derive publicly, protectedly or privately? If not, should they be composed with an implementation class encapsulating common functionality and be otherwise unrelated? Something else?

    I was having a conversation about this with a software developer colleague (which I am not, per se), who mentioned that it is a common design mistake to derive a more restricted class from a more general one (he used the example of how it is not a good idea to derive a Circle class from an Ellipse class, which is analogous to the matrix design issue), even when it is true that a SparseMatrix "IS A" MatrixBase. The interface presented by both the base and derived classes should be the same for basic operations; for specialized operations, a derived class would have additional functionality that might not be possible to implement for an arbitrary MatrixBase object. For example, we can compute the Cholesky decomposition only for a PositiveDefiniteMatrix class object; however, multiplication by a scalar should work the same way for both the base and derived classes. Also, even if the underlying data storage implementation differs, operator()(int,int) should work as expected for any type of matrix class.

    I have started looking at a few open-source matrix libraries and it appears this is something of a mixed bag (or maybe I'm looking at a mixed bag of libraries). I am planning on helping with a refactoring of a math library where this has been a point of contention, and I'd like opinions (unless there really is an objective right answer to this question) as to which design philosophy would be best, and the pros and cons of any reasonable approach.
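
    For what it's worth, several linear-algebra libraries (Eigen among them) share an interface without virtual dispatch by using the CRTP: the base is parameterized on the derived type, which sidesteps the Circle/Ellipse substitutability trap because a MatrixBase<SparseMatrix> is never used interchangeably with a MatrixBase<DenseMatrix>. A hedged sketch:

        #include <cstddef>

        // CRTP base: one interface, statically dispatched to the derived class.
        template <typename Derived>
        class MatrixBase {
        public:
            double operator()(std::size_t i, std::size_t j) const {
                return static_cast<const Derived&>(*this).at(i, j);
            }
            void scale(double k) {  // common operation, storage-agnostic
                static_cast<Derived&>(*this).scaleImpl(k);
            }
        };

        class SparseMatrix : public MatrixBase<SparseMatrix> {
        public:
            double at(std::size_t i, std::size_t j) const { return 0.0; /* sparse lookup */ }
            void scaleImpl(double k) { /* touch only stored non-zeros */ }
        };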

  • Handling close-to-impossible collisions on should-be-unique values

    - by balpha
    There are many systems that depend on the uniqueness of some particular value. Anything that uses GUIDs comes to mind (e.g. the Windows registry or other databases), but also things that create a hash from an object to identify it and thus need this hash to be unique. A hash table usually doesn't mind if two objects have the same hash, because the hashing is just used to break the objects down into categories, so that on lookup not all objects in the table, but only those objects in the same category (bucket), have to be compared for identity to the searched object. Other implementations, however, (seem to) depend on uniqueness. My example (which is what led me to asking this) is Mercurial's revision IDs. An entry on the Mercurial mailing list correctly states:

        The odds of the changeset hash colliding by accident in your first
        billion commits is basically zero. But we will notice if it happens.
        And you'll get to be famous as the guy who broke SHA1 by accident.

    But even the tiniest probability doesn't mean impossible. Now, I don't want an explanation of why it's totally okay to rely on the uniqueness (this has been discussed here, for example). This is very clear to me. Rather, I'd like to know (maybe by means of examples from your own work): are there any best practices for covering these improbable cases anyway? Should they be ignored, because it's more likely that particularly strong solar winds would lead to faulty hard disk reads? Should they at least be tested for, if only to fail with an "I give up, you have done the impossible" message to the user? Or should even these cases be handled gracefully?

    For me, the following are especially interesting, although they are somewhat touchy-feely: If you don't handle these cases, what do you do against the gut feeling that doesn't listen to probabilities? If you do handle them, how do you justify the work (to yourself and others), considering there are more probable cases you don't handle, like a supernova?
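
    The "test for it, fail loudly" option is cheap in a content-addressed store: on a key hit, compare the stored bytes before trusting the hash. A hedged Python sketch:

        import hashlib

        class ContentStore(object):
            def __init__(self):
                self._objects = {}

            def put(self, data):
                key = hashlib.sha1(data).hexdigest()
                existing = self._objects.get(key)
                if existing is not None and existing != data:
                    # The "you broke SHA1 by accident" branch: never overwrite
                    # silently; surface the impossible case instead.
                    raise RuntimeError("hash collision on %s" % key)
                self._objects[key] = data
                return key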

  • Removing a pattern from the beginning and end of a string in Ruby

    - by seaneshbaugh
    So I found myself needing to remove <br /> tags from the beginning and end of strings in a project I'm working on. I made a quick little method that does what I need it to do, but I'm not convinced it's the best way to go about this sort of thing. I suspect there's probably a handy regular expression I can use to do it in only a couple of lines. Here's what I've got:

        def remove_breaks(text)
          if text != nil and text != ""
            text.strip!
            index = text.rindex("<br />")
            while index != nil and index == text.length - 6
              text = text[0, text.length - 6]
              text.strip!
              index = text.rindex("<br />")
            end
            text.strip!
            index = text.index("<br />")
            while index != nil and index == 0
              text = text[6, text.length]
              text.strip!
              index = text.index("<br />")
            end
          end
          return text
        end

    Now, the "<br />" could really be anything, and it'd probably be more useful to make a general-purpose function that takes the string to be stripped from the beginning and end as an argument. I'm open to any suggestions on how to make this cleaner, because it seems like it can be improved.
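
    A regex version that generalizes to any literal pattern; a hedged sketch (strip_pattern is a made-up name, and Regexp.escape guards against metacharacters in the pattern):

        # Strips any run of the pattern, plus surrounding whitespace,
        # from both ends of the string.
        def strip_pattern(text, pattern)
          return text if text.nil? || text.empty?
          p = Regexp.escape(pattern)
          text.gsub(/\A(?:\s*#{p})+\s*|\s*(?:#{p}\s*)+\z/, "")
        end

        strip_pattern("<br /> <br />hello<br />", "<br />")  # => "hello"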

  • Code promotion: Enforcing the rules

    - by jbarker7
    So here is our problem: we have a small team of developers, each with their own way of doing things. I am trying to formalize a process in which we are required to promote our code in the following order:

    1. Local sandbox
    2. Dev
    3. UAT
    4. Staging
    5. Live

    Developers develop/test as they go on their own sandbox. Dev is its own box that we would use for continuous integration. UAT is another site in IIS on the dev box, which uses our dev database. We then promote to staging, which is a site in IIS on the live box using live data (just like live, hence "staging"). Then, finally, we promote to live. Here are a few of my questions:

    1. Does this seem to be best practice? If not, what needs to be done differently?
    2. How do I enforce the rules on the developers? Developers often skip steps in order to save time; this should not be tolerated, and it would be great if it could be physically enforced.
    3. How do I enforce these rules with the business group? The business group just wants to get features out FAST. Do we promote only on certain days?

    Thanks! Josh

  • Testing When Correctness is Poorly Defined?

    - by dsimcha
    I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions.

    However, a lot of the code I write is data mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e. the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is near impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic.

    Usually, I do the best I can by using asserts for sanity checks and creating a small toy test case with a known pattern and informally seeing if the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?
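
    One mechanizable version of the "looks reasonable" check is property-style testing: fix the RNG seed for reproducibility and assert invariants that any correct answer must satisfy, rather than an exact expected value. A hedged Python sketch (find_patterns and its result attributes stand in for the algorithm under test):

        import random

        def test_invariants():
            rng = random.Random(42)                 # fixed seed: the test is deterministic
            data = [rng.gauss(0, 1) for _ in range(10000)]
            patterns = find_patterns(data)

            # Properties any correct output must have, even with no known answer:
            assert all(p.score >= 0 for p in patterns)   # scores are non-negative
            assert len(patterns) <= len(data)            # no more patterns than points
            assert patterns == find_patterns(data)       # same input, same output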

  • MVVM and avoiding Monolithic God object

    - by bufferz
    I am in the completion stage of a large project (an automation project) that has several major components: image acquisition, image processing, data storage, factory I/O and several others. Each of these components is reasonably independent, but for the project to run as a whole I need at least one instance of each component. Each component also has a ViewModel and View (WPF) for monitoring status and changing things.

    My question concerns the safest, most efficient and most maintainable method of instantiating all of these objects, subscribing one class to an event in another, and having a common ViewModel and View for all of this. Would it be best to have a class called God that holds private instances of all of these objects? I've done this in the past and regretted it. Or would it be better if God relied on singleton instances of these objects to get the ball rolling? Alternatively, should Program.cs (or wherever Main(...) is) instantiate all of these components and pass them to God as parameters, and then let Him (snicker) and His ViewModel deal with the particulars of running the project? Any other suggestions I would love to hear. Thank you!
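
    The Main-wires-everything alternative is usually called a composition root; a hedged C# sketch in which every class and event name is a placeholder for the question's components:

        public static class Program
        {
            [STAThread]
            public static void Main()
            {
                // Main owns construction order and lifetime: no singletons,
                // and no one object holds private instances of everything.
                var acquisition = new ImageAcquisition();
                var processing  = new ImageProcessing();
                var storage     = new DataStorage();

                // Wire events explicitly where components collaborate.
                acquisition.FrameCaptured += processing.OnFrame;
                processing.ResultReady    += storage.Save;

                var shell = new ShellViewModel(acquisition, processing, storage);
                new App().Run(new MainWindow { DataContext = shell });
            }
        }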

  • Haskell dot (.) and dollar ($) composition: correct use.

    - by Robert Massaioli
    I have been reading Real World Haskell and I am nearing the end, but a matter of style has been niggling at me to do with the (.) and ($) operators. When you write a function that is a composition of other functions, you write it like:

        f = g . h

    But when you apply something to the end of those functions, I write it like this:

        k = a $ b $ c $ value

    But the book would write it like this:

        k = a . b . c $ value

    Now, to me they look functionally equivalent; they do the exact same thing in my eyes. However, the more I look, the more I see people writing their functions in the manner that the book does: compose with (.) first and then only at the end use ($) to apply a value to the lot (nobody does it with many dollar applications). Is there a reason for using the book's way that makes it much better than using all ($) symbols? Or is there some best practice here that I am not getting? Or is it superfluous and I shouldn't be worrying about it at all? Thanks.
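
    One concrete argument for the book's style: a . b . c is a value in its own right, so the pipeline can be named, reused or passed around, while the all-($) version only exists at its application site. A small sketch:

        import Data.Char (toLower)

        -- The composed pipeline is a first-class function...
        countAs :: String -> Int
        countAs = length . filter (== 'a') . map toLower

        -- ...so it can be applied directly, or mapped over many inputs.
        k  = countAs "Abracadabra"            -- 5
        ks = map countAs ["Alpha", "Gamma"]   -- [2,2]

        -- With ($) alone, the pipeline is welded to its argument:
        k' = length $ filter (== 'a') $ map toLower $ "Abracadabra"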

  • How do I correct feature envy in this case?

    - by RMorrisey
    I have some code that looks like:

        class Parent {
            private Intermediate intermediateContainer;
            public Intermediate getIntermediate() {...}
        }

        class Intermediate {
            private Child child;
            public Child getChild() {...}
            public void intermediateOp() {...}
        }

        class Child {
            public void something() {...}
            public void somethingElse() {...}
        }

        class Client {
            private Parent parent;

            public void something() {
                parent.getIntermediate().getChild().something();
            }

            public void somethingElse() {
                parent.getIntermediate().getChild().somethingElse();
            }

            public void intermediate() {
                parent.getIntermediate().intermediateOp();
            }
        }

    I understand that this is an example of the "feature envy" code smell. The question is, what's the best way to fix it? My first instinct is to put the three methods on Parent:

        parent.something();
        parent.somethingElse();
        parent.intermediateOp();

    ...but I feel like this duplicates code and clutters the API of the Parent class (which is already rather busy). Do I want to store the result of getIntermediate(), and/or getChild(), and keep my own references to these objects?
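
    A middle path that keeps Parent's API small is to delegate one hop at a time, so each class only talks to its direct collaborator (the Law of Demeter); a hedged sketch:

        class Parent {
            private Intermediate intermediateContainer;

            // Parent exposes behavior, not its internals.
            public void something()      { intermediateContainer.something(); }
            public void intermediateOp() { intermediateContainer.intermediateOp(); }
        }

        class Intermediate {
            private Child child;

            public void something()      { child.something(); }  // forwards the child op
            public void intermediateOp() { /* own behavior */ }
        }

        class Client {
            private Parent parent;

            public void something() { parent.something(); }  // no more train wreck
        }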

  • Is it "right" to translate error messages?

    - by Iraklis
    This is somewhat subjective, depending on the target translation language, but bear with me for a sec. I have recently been involved in a translation project. The goal was to translate the strings of an MVC framework into the Greek language. 70% of the framework's language strings were translated; however, 30% were intentionally left out. The decision was that we would not translate error messages aimed at the developer of the application. The reasoning behind this (in short) was that they:

    1. are aimed at designers/programmers. Programmers (and even designers :) ) should have a basic understanding of English, at least enough so they can search for the message on Google if they do not know what it means. (Racist?)
    2. are aimed at the developer and in a perfect world should not be displayed to the end user of the application, as they concern the inner workings of the web application itself, e.g. "You must set the database name in your database config file."
    3. and perhaps most importantly, make the life of the developer harder when he tries to get more information/help regarding the error. For example, the above error yields 8 results in Google (in quotes), whereas its Greek translation yields exactly 0.

    I know that this depends on the popularity of the target translation language and the application itself. For example, I'm guessing that there is a vast amount of documentation regarding German SAP error messages (I know, I know, SAP is German, but you get the point), as opposed to Greek error-message documentation for random application X, which has about 500 installations worldwide. So, to summarize: when you develop language translation packs for your applications, do you translate error messages? Do you do so only for predominant languages like English/Spanish/German/French? Or do you leave them intact? I'm not looking for the "right" or "correct" answer; I'm looking for a "best practices" answer, or whether this problem is defined in any "official" standard/policy that you have had experience with.
