Search Results

Search found 13692 results on 548 pages for 'bad practices'.

  • Is it bad to have multiple return statements?

    - by scot
    Hi, I have code something like the below:

        int method(string a, int b, int c) {
            if (cond1) return -1;
            if (cond2 || cond3) return 3;
            if (cond1 && cond2) return 0; // unreachable: if cond1 were true, the first return already fired
            else return -999;
        }

    Does it perform badly compared to using a chain of if/else branches with a single return?
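
    For comparison, a minimal sketch of the single-return shape of the same logic (the conditions are kept as opaque placeholders, as in the question):

        int method(string a, int b, int c) {
            int result = -999;           // default, mirrors the final else
            if (cond1)               result = -1;
            else if (cond2 || cond3) result = 3;
            else if (cond1 && cond2) result = 0; // still unreachable, as noted above
            return result;
        }

    Any mainstream compiler will typically generate essentially the same code for both shapes, so the choice is usually about readability rather than performance.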

  • What should I name my files with generic class definitions?

    - by Tomas Lycken
    I'm writing a couple of classes that all have generic type arguments, but I need to overload the classes because I need a different number of arguments in different scenarios. Basically, I have:

        public class MyGenericClass<T> { ... }
        public class MyGenericClass<T, K> { ... }
        public class MyGenericClass<T, K, L> { ... }
        // it could go on forever, but it won't...

    I want them all in the same namespace, but in one source file per class. What should I name the files? Is there a best practice?
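
    One convention sometimes seen (borrowed from the CLR's own metadata naming, where generic arity is written with a backtick) is to encode the number of type parameters in the file name:

        MyGenericClass`1.cs   // MyGenericClass<T>
        MyGenericClass`2.cs   // MyGenericClass<T, K>
        MyGenericClass`3.cs   // MyGenericClass<T, K, L>

    Some teams instead spell out the parameters, e.g. MyGenericClass.T.cs or MyGenericClass{T}.cs; either way, the point is that the arity, not the parameter names, distinguishes the files.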

  • Every Flash uploader giving bad progress values

    - by Mike Boers
    The file upload script I wrote early last year for an internal website has been misbehaving oddly on a number of machines. On some machines it consistently works fine; on others it consistently misbehaves. I am having exactly the same problem with YUI Uploader, SWFUpload (2.2 and 2.5a), and Uploadify.

    On the misbehaving machines, the progress event (or callback, as the case may be) reports the upload going far too quickly: around 9 or 10 MB/s instead of the 50 or 60 KB/s that is actually happening. The progress bar fills up very quickly, and then no more progress events are triggered. A few minutes later, the completion event fires when the upload is actually done. I must emphasize that the file upload does proceed normally, even though the progress being reported is very wrong. The progress events report a correct file size, but the reported amount uploaded is usually far too high, and it appears that it is always a multiple of 2^16 (65536).

    I'm only having this problem with Firefox 3.5 on Windows XP, on machines with various subversions of Flash 10. Has anyone heard of this happening, or have any idea what is going on? (I'm off to file a number of bug reports, but hopefully someone here has previous experience with this.)

  • Random directions, with no repeats (bad description)

    - by Neurofluxation
    Hey there, so I'm knocking together a random pattern generation thing. My code so far:

        int permutes = 100;
        int y = 31;
        int x = 63;
        while (permutes > 0) {
            int rndTurn = random(1, 5); // Arduino-style random(min, max) excludes max, so this gives 1..4
            if (rndTurn == 1) { y = y - 1; } // go up
            if (rndTurn == 2) { y = y + 1; } // go down
            if (rndTurn == 3) { x = x - 1; } // go left
            if (rndTurn == 4) { x = x + 1; } // go right
            setP(x, y, 1);
            delay(250);
            permutes--; // without this the loop never terminates
        }

    My question is: how would I go about getting the code to not go back on itself? E.g. the code says "go left" but on the next loop through it says "go right"; how can I stop this? NOTE: setP() turns a specific pixel on. Cheers, people!
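
    A minimal sketch of one way to do it (assuming Arduino-style random()): remember the last move, and re-roll whenever the new one would undo it.

        int lastTurn = 0; // 0 = no move made yet

        // Opposite pairs: 1 (up) <-> 2 (down), 3 (left) <-> 4 (right)
        int opposite(int dir) {
            if (dir == 1) return 2;
            if (dir == 2) return 1;
            if (dir == 3) return 4;
            if (dir == 4) return 3;
            return 0;
        }

        // Inside the loop, instead of a single random() call:
        int rndTurn;
        do {
            rndTurn = random(1, 5);
        } while (rndTurn == opposite(lastTurn));
        lastTurn = rndTurn;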

  • Best practice for using database field max length as HTML input maxlength (ASP.NET MVC)

    - by Andrew Florko
    Hello everybody, there are string length limitations in the database structure (email is declared as nvarchar(30), for instance), and there are lots of HTML forms with input textbox fields that should be limited in length for that reason. What is the best practice for keeping the database field and HTML input field length limitations synchronized? Can it be done automatically (HTML input fields declared with the same max length as the database columns they represent)? Thank you in advance.
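
    A sketch of one common approach in ASP.NET MVC (class and method names are illustrative, not from the original post): keep the limit in one place as a Data Annotation on the view model, and let a small helper copy it onto the rendered input's maxlength attribute. It assumes a simple property-access expression.

        using System;
        using System.ComponentModel.DataAnnotations;
        using System.Linq.Expressions;
        using System.Web.Mvc;
        using System.Web.Mvc.Html;
        using System.Web.Routing;

        public class ContactModel
        {
            [StringLength(30)] // mirrors nvarchar(30) in the database
            public string Email { get; set; }
        }

        public static class InputExtensions
        {
            // Renders a textbox whose maxlength follows the [StringLength] attribute.
            public static MvcHtmlString TextBoxWithMaxLengthFor<TModel, TProp>(
                this HtmlHelper<TModel> html,
                Expression<Func<TModel, TProp>> expression)
            {
                var member = (MemberExpression)expression.Body; // assumes m => m.SomeProperty
                var attr = (StringLengthAttribute)Attribute.GetCustomAttribute(
                    member.Member, typeof(StringLengthAttribute));

                var htmlAttributes = new RouteValueDictionary();
                if (attr != null)
                {
                    htmlAttributes["maxlength"] = attr.MaximumLength;
                }
                return html.TextBoxFor(expression, htmlAttributes);
            }
        }

    Usage would be Html.TextBoxWithMaxLengthFor(m => m.Email). The annotation still has to track the column; generating the models from the schema is the usual way to remove even that duplication.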

  • Good tools for keeping the content in test/staging/live environments synchronized

    - by David Stratton
    I'm looking for recommendations on automated folder synchronization tools to keep the content in our three environments synchronized automatically. Specifically, we have several applications where a user can upload content (via a File Upload page or a similar mechanism), such as images, pdf files, word documents, etc. In the past, we had the user doing this on our live server, and as a result our test and staging servers had to be manually synchronized. Going forward, we will have them upload content to the staging server, and we would like software to automatically copy the files off to the test and live servers, EITHER on a scheduled basis OR as the files get uploaded.

    I was planning on writing my own component, and either setting it up as a scheduled task or using a FileSystemWatcher, but it occurred to me that this has probably already been done, and I might be better off with some sort of synchronization tool that already exists.

    On our web site, there are a limited number of folders that we want to keep synchronized. For these folders it is all or nothing: we want to make sure the folders are EXACT duplicates. This should make it fairly straightforward, and I would think that any software that can synchronize folders would be OK, except that we would also like the software to log changes. (This rules out simple BATCH files.)

    So I'm curious: if you have a similar environment, how did you solve the challenge of keeping everything synchronized? Are you aware of a tool that is reliable and will meet our needs? If not, do you have a recommendation for something that comes close, or better yet, an open source solution where we can get the code and modify it as needed (preferably .NET)?

    Added: Also, I DID google this first, but there are so many options that I am interested mostly in knowing what actually works well vs. what they SAY works, which is why I'm asking here.
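
    For reference, a minimal sketch of the roll-your-own FileSystemWatcher route mentioned above. The paths, targets and log file are placeholders, and it glosses over the classic caveat that Created/Changed can fire while a large file is still being written:

        using System;
        using System.IO;

        class StagingMirror
        {
            static readonly string[] Targets = { @"\\test\uploads", @"\\live\uploads" };

            static void Main()
            {
                var watcher = new FileSystemWatcher(@"\\staging\uploads")
                {
                    EnableRaisingEvents = true
                };
                watcher.Created += (s, e) => Mirror(e.FullPath);
                watcher.Changed += (s, e) => Mirror(e.FullPath);
                Console.ReadLine(); // keep the watcher alive
            }

            static void Mirror(string sourcePath)
            {
                foreach (var target in Targets)
                {
                    var dest = Path.Combine(target, Path.GetFileName(sourcePath));
                    File.Copy(sourcePath, dest, true); // overwrite, to keep exact duplicates
                    File.AppendAllText("sync.log", string.Format(
                        "{0:u} copied {1} -> {2}{3}",
                        DateTime.Now, sourcePath, dest, Environment.NewLine));
                }
            }
        }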

  • Fast read of certain bytes of multiple files in C/C++

    - by Alejandro Cámara
    I've been searching the web about this and, although there are many similar questions about read/write in C/C++, I haven't found anything about this specific task. I want to read, from each of 256x256 files, only sizeof(double) bytes located at a certain position in the file. Right now my solution is, for each file:

        // Open the file for reading in binary mode
        fstream fTest("current_file", ios_base::in | ios_base::binary);
        // Seek to the position I want to read
        fTest.seekg(position * sizeof(test_value), ios_base::beg);
        // Read the bytes
        fTest.read((char *) &(output[i][j]), sizeof(test_value));
        // And close the file
        fTest.close();

    This takes about 350 ms to run inside a nested for loop with 256x256 iterations (one per file). Q: Do you think there is a better way to implement this operation? How would you do it?

  • Turning Floats into Their Closest (UTF-8 Character) Fraction.

    - by Mark Tomlin
    I want to take any real number and return the closest number, with the closest fraction available in the UTF-8 character set appended, as appropriate:

        0/4 = 0.00 =      # < .125
        1/4 = 0.25 = ¼    # > .125 & < .375
        2/4 = 0.50 = ½    # > .375 & < .625
        3/4 = 0.75 = ¾    # > .625 & < .875
        4/4 = 1.00 =      # > .875

    I made this function to do that task:

        function displayFraction($realNumber)
        {
            if (!is_float($realNumber)) {
                return $realNumber;
            }
            list($number, $decimal) = explode('.', $realNumber);
            $decimal = '.' . $decimal;
            switch (true) { // switch(true) so each case expression acts as a condition
                case $decimal < 0.125:
                    return $number;
                case $decimal > 0.125 && $decimal < 0.375:
                    return $number . '¼'; # 188 ¼ &#188;
                case $decimal > 0.375 && $decimal < 0.625:
                    return $number . '½'; # 189 ½ &#189;
                case $decimal > 0.625 && $decimal < 0.875:
                    return $number . '¾'; # 190 ¾ &#190;
                case $decimal > 0.875:
                    return ++$number;
            }
        }

    What are some better / different ways to do this?

        echo displayFraction(3.1) . PHP_EOL;      # Outputs: 3
        echo displayFraction(3.141593) . PHP_EOL; # Outputs: 3¼
        echo displayFraction(3.267432) . PHP_EOL; # Outputs: 3¼
        echo displayFraction(3.38) . PHP_EOL;     # Outputs: 3½

    Expand my mind!
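
    A sketch of one alternative that stays numeric the whole way through, with no string splitting: round the fractional part to the nearest quarter and map it to a glyph.

        function displayFraction($realNumber)
        {
            if (!is_float($realNumber)) {
                return $realNumber;
            }
            $whole = (int) floor($realNumber);
            $quarters = (int) round(($realNumber - $whole) * 4); // 0..4
            if ($quarters === 4) {
                return (string) ($whole + 1); // rounded up to the next whole number
            }
            $glyphs = array('', '¼', '½', '¾');
            return $whole . $glyphs[$quarters];
        }

    This gives the same outputs as the examples above and sidesteps the string/float comparison pitfalls of the switch version.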

  • Any difference in compiler behavior for each of these snippets?

    - by HotHead
    Please consider the following code:

        // 1.
        uint16 a = 0x0001;
        if (a < 0x0002) { /* do something */ }

        // 2.
        uint16 a = 0x0001;
        if (a < uint16(0x0002)) { /* do something */ }

        // 3.
        uint16 a = 0x0001;
        if (a < static_cast<uint16>(0x0002)) { /* do something */ }

        // 4.
        uint16 a = 0x0001;
        uint16 b = 0x0002;
        if (a < b) { /* do something */ }

    What does the compiler do in the background, and what is the best (and correct) way to do the above test? P.S. Sorry, but I couldn't find a better title :) EDIT: the values 0x0001 and 0x0002 are only examples; any 2-byte value could appear instead. Thank you in advance!

  • Why should I use a container div in HTML?

    - by lara.robertson
    I am currently learning HTML/CSS, and have noticed a common technique is to place a generic container div at the root of the body tag:

        <html>
        <head>
            ...
        </head>
        <body>
            <div id="container">
                ...
            </div>
        </body>
        </html>

    Is there a valid reason for doing this? Why can't the CSS just reference the body tag?
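
    One common reason, sketched below: a fixed-width page centered inside a full-bleed backdrop. The body carries the backdrop while the container carries the page width, which is awkward to do with the body alone (the width and colors here are just illustrative):

        body {
            background: #444;   /* full-bleed backdrop */
        }
        #container {
            width: 960px;
            margin: 0 auto;     /* center the fixed-width page */
            background: #fff;
        }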

  • How to wrap a function that only takes individual elements to make it take a list

    - by stevejb
    Hello, say I have a function handed to me that I cannot change and must use as-is. This function takes several objects in the form oldFunction(object1, object2, object3, ...), where ... are other arguments. I want to write a wrapper that takes a list of objects. My idea was this:

        sjb.ListWrapper <- function(myList, ...) {
          lLen <- length(myList)
          myStr <- ""
          for (i in 1:lLen) {
            myStr <- paste(myStr, "myList[[", i, "]],", sep = "")
          }
          myCode <- paste("oldFunction(", myStr, "...)")
          eval(parse(text = myCode)) # eval() needs parsed code, not a bare string
        }

    However, the issue is that I want to use this from Sweave and I need the output of oldFunction to be printed. What is the right way to do this? Thanks.
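
    A sketch of a simpler route that sidesteps string building entirely: do.call() applies a function to a list of arguments directly, and an explicit print() makes the result show up in Sweave output.

        sjb.ListWrapper <- function(myList, ...) {
          result <- do.call(oldFunction, c(myList, list(...)))
          print(result)      # explicit print so Sweave chunks show the output
          invisible(result)  # still return the value for further use
        }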

  • What are some good ways to write a PHP application with module support?

    - by Gabriel
    Hi, I'm starting to write an application in PHP with one of my friends and was wondering if you have any advice on how to implement module support in our application. Is there a way to automatically load modules written in PHP from a PHP application? Or should I just rely on the __autoload() function? We are not using any kind of framework, for now at least.
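
    A minimal sketch of the autoload route, using spl_autoload_register() (PHP 5.3+ for the closure), which stacks cleanly instead of claiming the single global __autoload(). The directory layout is just an assumption for illustration: map class names onto a modules/ directory so dropping a new module folder in is enough to make its classes loadable.

        spl_autoload_register(function ($class) {
            // e.g. new Forum_Controller() loads modules/Forum/Controller.php
            $file = __DIR__ . '/modules/' . str_replace('_', '/', $class) . '.php';
            if (is_file($file)) {
                require $file;
            }
        });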

  • Bad event on Java panel

    - by LucaB
    Hi, I have a Java panel with 4 buttons. When I click one of these buttons, a new frame appears and the first is hidden with setVisible(false). On that new window I have another button, but when I click it, I get the event corresponding to the fourth button of the first window. Clicking the button again does the trick, but of course this is not acceptable. Am I missing something? I just show the frames with nameOfTheFrame.setVisible(true); and I have MouseListeners on every button. The code of the last button is simply System.exit(0);

    EDIT: Sample code:

        private void btn_joinGamePressed(java.awt.event.MouseEvent evt) {
            GraphicsTools.getInstance().getCreateGame().setVisible(false);
            GraphicsTools.getInstance().getMainPanel().setVisible(false);
            GraphicsTools.getInstance().getRegistration().setVisible(true);
        }

    GraphicsTools is a Singleton.
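
    For what it's worth, the usual Swing advice here, sketched with an illustrative button name: register an ActionListener rather than a MouseListener on JButtons. Raw mouse events fire on whatever component is under the pointer and are a common source of stray or doubled events when windows are swapped mid-click; actionPerformed only fires on a completed button activation.

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;

        // btnJoinGame is assumed to be the JButton in question
        btnJoinGame.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent evt) {
                GraphicsTools.getInstance().getCreateGame().setVisible(false);
                GraphicsTools.getInstance().getMainPanel().setVisible(false);
                GraphicsTools.getInstance().getRegistration().setVisible(true);
            }
        });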

  • What is the best practice in regards to building composite dtos off of an aggregate root with domain

    - by Chance
    I'm trying to figure out the best approach/practice for assembling a composite data transfer object off of an aggregate root, and would love to hear people's thoughts on this. For example, let's say I have a root that has a few domain objects as children. I want to assemble a specific view DTO, based on some business logic, that has either attributes or full DTOs of its child objects. What I'm struggling with is figuring out where that assembly should happen.

    I can see it going on the domain object of the aggregate root, as there is some business logic associated with it. The benefit of this approach, from what I've deduced thus far, is that it should keep the inevitable business logic from bleeding outside of the domain object. It also allows for private methods that take care of tasks that could become more complex with an external builder. The downsides are that the domain object becomes much more entrenched in the application's workflow and represents much more than just the domain object; it could also become very large in the scenario where you need multiple composite DTOs.

    Alternatively, I could see it belonging to some form of transfer object assembler, where there is a builder for each domain object. The domain objects would still be responsible for GetDto() and UpdateFromDto(dto); outside of that, the builder would handle the construction and deconstruction of composite DTOs. The downside is kind of mentioned above: I fear this will easily lead to developers unfamiliar with DDD bleeding a ton of business logic into the assembler, which is what I desperately want to avoid.

    Any thoughts would be greatly appreciated.

  • What are the virtues of using XML comments in .NET?

    - by Michal Czardybon
    I can't understand the virtues of using XML comments. I know they can be converted into nice documentation external to the code, but the same can be achieved with the much more concise Doxygen syntax. In my opinion XML comments are wrong because:

    1. They obfuscate the comments and the code in general (they are more difficult for humans to read).
    2. Less code fits on a single screen, because <summary> and </summary> take additional lines.
    3. They suggest that all method parameters have to be commented, whereas 90% of them are obvious and SHOULD be left uncommented.

    The only problem I have with this is that my point of view seems to be in the minority. Why?
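
    For illustration, a sketch of the same one-line summary in both styles (the method is hypothetical):

        /// <summary>
        /// Returns the n-th Fibonacci number.
        /// </summary>
        /// <param name="n">Zero-based index into the sequence.</param>
        int Fib(int n) { ... }

        //! Returns the n-th Fibonacci number (Doxygen style).
        int Fib(int n) { ... }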

  • How atomic *should* I make an Ajax form?

    - by b. e. hollenbeck
    I have some web forms that I'm bringing over with AJAX, and as I was dealing with the database on the back end, I thought that it might be easier to just handle each input on the form atomically with AJAX, saving the form in 'real time' as the user edits it. The forms are ~20 fields of administrative settings. Would this create massive overhead with the app, cause it to be error-prone, or is this a feasible idea? Of course, contingent operations (like a checkbox that then requires a text entry) would be held until the textbox gained and lost focus. Comments?
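
    A sketch of the per-field autosave idea in jQuery-era JavaScript (the endpoint and markup are illustrative): each setting saves itself when its input loses focus, so a ~20-field form becomes ~20 tiny writes instead of one big POST.

        $('#settings :input').blur(function () {
            var field = $(this);
            $.post('/admin/settings/save', {
                name:  field.attr('name'),
                value: field.val()
            }, function () {
                field.addClass('saved'); // cheap visual confirmation
            });
        });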

  • Best practice with respect to NPE and multiple expressions on single line

    - by JRL
    I'm wondering whether it is accepted practice to avoid multiple calls on the same line with respect to possible NPEs, and if so, in what circumstances. For example:

        getThis().doThat();

    vs

        Object o = getThis();
        o.doThat();

    The latter is more verbose, but if there is an NPE you immediately know what is null. However, it also requires creating a name for the variable and more import statements. So my questions around this are:

    1. Is this problem something worth designing around?
    2. Is it better to go for the first or second possibility?
    3. Does the creation of a variable name have any effect performance-wise?
    4. Is there a proposal to change the exception message so you can determine which object is null, in future versions of Java?

  • C# Property Access vs Interface Implementation

    - by ehdv
    I'm writing a class to represent a Pivot Collection, the root object recognized by Pivot. A Collection has several attributes, a list of facet categories (each represented by a FacetCategory object) and a list of items (each represented by a PivotItem object). Therefore, an extremely simplified Collection reads:

        public class Collection
        {
            private List<FacetCategory> categories;
            private List<PivotItem> items;
            // other attributes
        }

    What I'm unsure of is how to properly grant access to those two lists. Because the declaration order of both facet categories and items is visible to the user, I can't use sets, but the class also shouldn't allow duplicate categories or items. Furthermore, I'd like to make the Collection object as easy to use as possible. So my choices are:

    1. Have Collection implement IList<PivotItem> and have accessor methods for FacetCategory: in this case, one would add an item to Collection foo by writing foo.Add(bar). This works, but since a Collection is equally both kinds of list, making it pass as a list for only one type (category or item) seems like a subpar solution.

    2. Create nested wrapper classes for List (CategoryList and ItemList). This has the advantage of a consistent interface, but the downside is that these properties would no longer be able to serve as lists: because I need to override the non-virtual Add method, I have to implement IList rather than subclass List, and implicit casting wouldn't work because that would return the Add method to its normal behavior. Also, for reasons I can't figure out, IList is missing an AddRange method...

        public class Collection
        {
            private class CategoryList : IList<FacetCategory>
            {
                // ...
            }

            private readonly CategoryList categories = new CategoryList();
            private readonly ItemList items = new ItemList();

            public CategoryList FacetCategories
            {
                get { return categories; }
                set
                {
                    categories.Clear();
                    categories.AddRange(value);
                }
            }

            public ItemList Items
            {
                get { return items; }
                set
                {
                    items.Clear();
                    items.AddRange(value);
                }
            }
        }

    3. Finally, the third option is to combine options one and two, so that Collection implements IList<PivotItem> and has a property FacetCategories.

    Question: Which of these three is most appropriate, and why?
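
    Tangentially, a sketch of a fourth shape this kind of design often settles into (not one of the three options above; the duplicate check shown is purely illustrative, and FacetCategory/PivotItem are the poster's types): keep both lists private, expose read-only views that preserve ordering, and funnel all mutation through methods that can enforce the no-duplicates rule in one place.

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public class Collection
        {
            private readonly List<FacetCategory> categories = new List<FacetCategory>();
            private readonly List<PivotItem> items = new List<PivotItem>();

            // Read-only views: ordering is visible, uncontrolled Adds are not possible.
            public ReadOnlyCollection<FacetCategory> FacetCategories
            {
                get { return categories.AsReadOnly(); }
            }

            public ReadOnlyCollection<PivotItem> Items
            {
                get { return items.AsReadOnly(); }
            }

            // All mutation goes through here, so each rule lives in exactly one place.
            public void AddFacetCategory(FacetCategory category)
            {
                if (!categories.Contains(category)) categories.Add(category);
            }

            public void AddItem(PivotItem item)
            {
                if (!items.Contains(item)) items.Add(item);
            }
        }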

  • Is there any reason to use C instead of C++ for embedded development?

    - by Piotr Czapla
    Question: I have two compilers for my hardware, C++ and C89. I'm thinking about using C++ with classes but without polymorphism (to avoid vtables). The main reasons I'd like to use C++ are:

    - I prefer to use "inline" functions instead of macro definitions.
    - I'd like to use namespaces, as prefixes clutter the code.
    - I see C++ as a bit more type-safe, mainly because of templates and verbose casting.
    - I really like overloaded functions and constructors (used for automatic casting).

    Do you see any reason to stick with C89 when developing for very limited hardware (4 KB of RAM)?

    Conclusion: Thank you for your answers, they were really helpful! I thought the subject through and I will stick with C, mainly because:

    - It is easier to predict the actual generated code in C, and this is really important if you have only 4 KB of RAM.
    - My team consists mainly of C developers, so the advanced features of C++ wouldn't be used frequently.
    - I've found a way to inline functions in my C compiler (C89).

    It is hard to accept one answer as you provided so many good answers. Unfortunately I can't create a wiki and accept that, so I will choose the one answer that made me think most.

  • Where to store selected language on multilingual site: session/cookies or url?

    - by tig
    I have a site that has all its content translated into multiple languages and has no accounts (where a preferred language could be set). I can detect the preferred language using Accept-Language, IP address, or anything else. I have 3 ways to store the user's language selection:

    1. Detect the language and store it in a cookie/session, and allow switching languages (also storing the choice in the cookie/session).
    2. Use the detected language if no language is specified in the URL, and show links to the URL in different languages.
    3. Use the default site language and show links to the other languages.

    Storing the language in the URL can take any form: a different domain, a subdomain, or somewhere in the URL path. I lean towards the first option, as it lets me send one URL to anyone and it will be presented in their preferred language. But another opinion is that a different language means different data, so it must have a different link.

  • In a PHP project, how do you organize and access your helper objects?

    - by Pekka
    How do you organize and manage your helper objects like the database engine, user notification, error handling and so on in a PHP-based, object-oriented project?

    Say I have a large PHP CMS. The CMS is organized in various classes. A few examples:

    - the database object
    - user management
    - an API to create/modify/delete items
    - a messaging object to display messages to the end user
    - a context handler that takes you to the right page
    - a navigation bar class that shows buttons
    - a logging object
    - possibly, custom error handling
    - etc.

    I am dealing with the eternal question of how to best make these objects accessible to each part of the system that needs them. My first approach, many years ago, was to have an $application global that contained initialized instances of these classes:

        global $application;
        $application->messageHandler->addMessage("Item successfully inserted");

    I then changed over to the Singleton pattern and a factory function:

        $mh =& factory("messageHandler");
        $mh->addMessage("Item successfully inserted");

    but I'm not happy with that either. Unit tests and encapsulation become more and more important to me, and in my understanding the logic behind globals/singletons destroys the basic idea of OOP. Then there is of course the possibility of giving each object a number of pointers to the helper objects it needs, probably the very cleanest, most resource-saving and testing-friendly way, but I have doubts about the maintainability of this in the long run.

    Most PHP frameworks I have looked into use either the singleton pattern or functions that access the initialized objects. Both are fine approaches, but as I said, I'm happy with neither. I would like to broaden my horizon on what is possible here and what others have done. I am looking for examples, additional ideas and pointers towards resources that discuss this from a long-term, real-world perspective. Also, I'm interested to hear about specialized, niche or plain weird approaches to the issue.

    Bounty: I am following the popular vote in awarding the bounty, to the answer which is probably also going to give me the most. Thank you for all your answers!
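
    A sketch of the constructor-injection option the post calls "probably the very cleanest" (class names are illustrative): each class declares the helpers it needs, and a single bootstrap wires the shared instances together, so tests can pass in stubs instead of reaching for globals.

        class Database {
            public function __construct(array $config) { /* connect... */ }
            public function insert($table, array $row) { /* ... */ }
        }

        class MessageHandler {
            public function addMessage($text) { /* queue for display */ }
        }

        class ItemApi {
            private $db;
            private $messageHandler;

            // The class states exactly which helpers it depends on.
            public function __construct(Database $db, MessageHandler $messageHandler) {
                $this->db = $db;
                $this->messageHandler = $messageHandler;
            }

            public function insertItem(array $item) {
                $this->db->insert('items', $item);
                $this->messageHandler->addMessage("Item successfully inserted");
            }
        }

        // Bootstrap: the one place that knows how everything is wired together.
        $db  = new Database($config);
        $api = new ItemApi($db, new MessageHandler());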

  • Code Design Process?

    - by user156814
    I am going to be working on a project, a web application. I was reading 37signals' Getting Real pamphlet online (http://gettingreal.37signals.com/), and I understand the recommended process for building the entire website: brainstorm, sketch, HTML, code. They touch on each step lightly, but they never really talk much about the coding process itself (all they say is to keep code lean). I've been reading about different ways to go about it (top-down, bottom-up) but I don't know much about either. I even read somewhere that one should write tests for the code before they actually write the code??? WHAT? What coding process should one follow when building an application? If it's relevant, I'm using PHP and a framework.
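
    That last idea is test-driven development, and it is less strange than it sounds. A sketch in PHPUnit-style PHP (the Slug class is hypothetical): the test is written first and fails, then just enough code is written to make it pass.

        class SlugTest extends PHPUnit_Framework_TestCase
        {
            public function testTitleIsLowercasedAndHyphenated()
            {
                $this->assertEquals('hello-world', Slug::fromTitle('Hello World'));
            }
        }

        // Written second, to satisfy the test above.
        class Slug
        {
            public static function fromTitle($title)
            {
                return strtolower(str_replace(' ', '-', trim($title)));
            }
        }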

  • A better way to delete a list of elements from multiple tables

    - by manyxcxi
    I know this looks like a 'please write the code' request, but some basic pointers/principles for doing this the right way should be enough to get me going. I have the following stored procedure:

        CREATE PROCEDURE `TAA`.`runClean` (IN idlist varchar(1000))
        BEGIN
            DECLARE EXIT HANDLER FOR NOT FOUND ROLLBACK;
            DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;
            DECLARE EXIT HANDLER FOR SQLWARNING ROLLBACK;
            START TRANSACTION;
            DELETE FROM RunningReports WHERE run_id IN (idlist);
            DELETE FROM TMD_INDATA_INVOICE WHERE run_id IN (idlist);
            DELETE FROM TMD_INDATA_LINE WHERE run_id IN (idlist);
            DELETE FROM TMD_OUTDATA_INVOICE WHERE run_id IN (idlist);
            DELETE FROM TMD_OUTDATA_LINE WHERE run_id IN (idlist);
            DELETE FROM TMD_TEST WHERE run_id IN (idlist);
            DELETE FROM RunHistory WHERE id IN (idlist);
            COMMIT;
        END $$

    It is called by a PHP script to clean out old run history. It is not particularly efficient, as you can see, and I would like to speed it up. The PHP script gathers the ids to remove from the tables with the following query:

        $query = "SELECT id, stop_time FROM RunHistory
                  WHERE config_id = $configId AND save = 0
                  AND NOT(stop_time IS NULL)
                  ORDER BY stop_time";

    It keeps the last five run entries and deletes all the rest. So, using this query to bring back all the IDs, it determines which ones to delete, keeps the 'newest' five, and sends the rest to the stored procedure to remove them from the associated tables.

    I'm not very good with SQL, but I ASSUME that using an IN statement and not joining these tables together is probably the least efficient way I can do this; I just don't know enough to ask anything but "how do I do this better?" If possible, I would like to do this all in my stored procedure, using a query to gather all the IDs except for the five 'newest', then delete them. Another twist: run entries can be marked save (save = 1) and should not be deleted. The RunHistory table looks like this:

        CREATE TABLE `TAA`.`RunHistory` (
            `id` int(11) NOT NULL auto_increment,
            `start_time` datetime default NULL,
            `stop_time` datetime default NULL,
            `config_id` int(11) NOT NULL,
            [...]
            `save` tinyint(1) NOT NULL default '0',
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
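
    A sketch of how the selection could move inside the procedure (MySQL 5.x syntax; p_config_id is an assumed parameter replacing the id list). A derived table gathers every finished, unsaved run for the config except the five newest, and a multi-table DELETE joins against it; the same join would be repeated for each child table. This also sidesteps the gotcha that run_id IN (idlist), with idlist being one comma-separated string, compares run_id against the whole string rather than the individual values.

        -- Everything except the five newest finished, unsaved runs.
        -- LIMIT 5, 18446744073709551615 means "skip 5, take the rest";
        -- the huge literal is MySQL's documented idiom for "no upper bound".
        DELETE rr
        FROM RunningReports AS rr
        JOIN (
            SELECT id
            FROM RunHistory
            WHERE config_id = p_config_id
              AND save = 0
              AND stop_time IS NOT NULL
            ORDER BY stop_time DESC
            LIMIT 5, 18446744073709551615
        ) AS old ON old.id = rr.run_id;

    The derived table matters because MySQL of this era rejects a LIMIT directly inside an IN (...) subquery.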
