Search Results

Search found 7312 results on 293 pages for 'render quality'.


  • Is micro-optimisation important when coding?

    - by BozKay
    I recently asked a question on stackoverflow.com to find out why isset() was faster than strlen() in PHP. This raised questions about the importance of readable code and whether performance improvements of microseconds are worth even considering.

    My father is a retired programmer, and I showed him the responses. He was absolutely certain that if a coder does not consider performance in their code, even at the micro level, they are not a good programmer. I'm not so sure - perhaps the increase in computing power means we no longer have to consider these kinds of micro-performance improvements? Perhaps this kind of consideration is up to the people who write the actual language implementation (of PHP, in the above case)? The environmental factors could also be important - the internet consumes 10% of the world's energy, and I wonder how wasteful a few microseconds of code is when replicated trillions of times on millions of websites. I'd like answers based, preferably, on facts about programming. Is micro-optimisation important when coding?

    EDIT: My personal summary of the 25 answers - thanks to all. Sometimes we really do need to worry about micro-optimisations, but only in very rare circumstances. Reliability and readability are far more important in the majority of cases. Still, considering micro-optimisation from time to time doesn't hurt. A basic understanding can help us avoid obviously bad choices when coding, such as

        if (expensiveFunction() && counter < X)

    which should be

        if (counter < X && expensiveFunction())

    (example from @zidarsk8). This could be an inexpensive function, in which case changing the code would be micro-optimisation; but with a basic understanding you would not have to change it, because you would have written it correctly in the first place.
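    The condition-ordering example above relies on short-circuit evaluation. Here is a minimal Python sketch of the same idea (the expensive_check function, the data and the limit are hypothetical stand-ins, not from the original post): with the cheap comparison first, the costly call is skipped entirely once the counter reaches the limit.

        import time

        def expensive_check(item):
            # Stand-in for a costly test (e.g. a slow regex or a lookup).
            time.sleep(0.001)
            return item % 7 == 0

        def count_matches(items, limit):
            counter = 0
            for item in items:
                # Cheap condition first: once counter == limit, short-circuiting
                # means expensive_check() is never called again.
                if counter < limit and expensive_check(item):
                    counter += 1
            return counter

        print(count_matches(range(1000), limit=5))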

    Read the article

  • Getting out of my head

    - by BenCole
    (I put this on SO, but it got a couple of close votes saying it belonged here instead...) I've spent the last year as a single-person team developing a rich-client application (35,000+ LoC, for what it's worth). It's currently stable and in production. However, I know that my skills were rusty at the beginning of the project, so without a doubt there are major issues in the code. At this point, most of those issues are in architecture, structure, or interactions - the easy problems, even the easy architecture/design problems, have already been weeded out. Unfortunately, I've spent so much time with this project that I'm having a hard time thinking outside of it - approaching it from a new perspective to see the flaws deeply buried in, or inherent to, the design. How do I step outside my head and outside my code to get a fresh look and make the code better? Is this less of an issue than I think it is, or is it a problem for other people as well?

    Read the article

  • Getting solutions off the internet. Bad or Good? [closed]

    - by Prometheus87
    I was looking on the internet for common interview questions and came upon one about finding the occurrences of certain characters in an array. The solution written right below it was, in my opinion, very elegant - and then it struck me that this solution is far better than anything that would have come to my mind. So even though I now know the solution (which most probably someone with a better IQ provided), my IQ is still the same. In other words, even though I may know the answer, it still isn't mine; if that question were asked in an interview and I were hired on the strength of my answer, I couldn't reproduce the same elegance in my other work within the organization. My question is: what do you think about such "borrowed intelligence"? Is it good? Do you feel that finding solutions on the internet teaches you to think in that same, more elegant way?

    Read the article

  • Method flags as arguments or as member variables?

    - by Martin
    I think the title "Method flags as arguments or as member variables?" may be suboptimal, but as I'm missing any better terminology at the moment, here goes: I'm currently trying to get my head around whether flags for a given class's (private) method should be passed as function arguments or via a member variable, and/or whether there is some pattern or name that covers this aspect, and/or whether this hints at some other design problem. By example (the language could be C++, Java, C#; it doesn't really matter IMHO):

        class Thingamajig {
            private ResultType DoInternalStuff(FlagType calcSelect) {
                ResultType res;
                for (... some loop condition ...) {
                    ...
                    if (calcSelect == typeA) {
                        ...
                    } else if (calcSelect == typeX) {
                        ...
                    } else if ...
                }
                ...
                return res;
            }

            private void InternalStuffInvoker(FlagType calcSelect) {
                ...
                DoInternalStuff(calcSelect);
                ...
            }

            public void DoThisStuff() {
                ... some code ...
                InternalStuffInvoker(typeA);
                ... some more code ...
            }

            public ResultType DoThatStuff() {
                ... some code ...
                ResultType x = DoInternalStuff(typeX);
                ... some more code ... further process x ...
                return x;
            }
        }

    What we see above is that the method InternalStuffInvoker takes an argument that is not used inside the function at all, but is only forwarded to the other private method DoInternalStuff. (DoInternalStuff will also be used privately at other places in this class, e.g. in the DoThatStuff (public) method.) An alternative solution would be to add a member variable that carries this information:

        class Thingamajig {
            private ResultType DoInternalStuff() {
                ResultType res;
                for (... some loop condition ...) {
                    ...
                    if (m_calcSelect == typeA) {
                        ...
                    }
                    ...
                }
                ...
                return res;
            }

            private void InternalStuffInvoker() {
                ...
                DoInternalStuff();
                ...
            }

            public void DoThisStuff() {
                ... some code ...
                m_calcSelect = typeA;
                InternalStuffInvoker();
                ... some more code ...
            }

            public ResultType DoThatStuff() {
                ... some code ...
                m_calcSelect = typeX;
                ResultType x = DoInternalStuff();
                ... some more code ... further process x ...
                return x;
            }
        }

    Especially for deep call chains where the selector flag for the inner method is chosen outside, using a member variable can make the intermediate functions cleaner, as they don't need to carry a pass-through parameter. On the other hand, this member variable isn't really representing any object state (it is neither set nor available outside), but is really a hidden additional argument for the "inner" private method. What are the pros and cons of each approach?

    Read the article

  • How can I sell a legacy program rewrite to the business?

    - by Wil
    We have a legacy classic ASP application that's been around since 2001. It badly needs to be rewritten, but it's working fine from an end-user perspective. The reason I feel a rewrite is necessary is that when we need to update it (which is admittedly not that often), it takes forever to wade through all the spaghetti code and fix problems. Adding new features is also a pain, since the application was architected and coded badly. I've run a cost analysis of the maintenance for them, but they are willing to spend more on the small maintenance jobs than on a rewrite. Any suggestions on convincing them otherwise?

    Read the article

  • Is it OK to use dynamic typing to reduce the amount of variables in scope?

    - by missingno
    Often, when I am initializing something, I have to use a temporary variable, for example:

        file_str = "path/to/file"
        file_file = open(file_str)

    or

        regexp_parts = ['foo', 'bar']
        regexp = new RegExp( regexp_parts.join('|') )

    However, I like to reduce the scope of my variables to the smallest scope possible, so there are fewer places where they can be (mis-)used. For example, I try to use for(var i ...) in C++ so the loop variable is confined to the loop body. In these initialization cases, if I am using a dynamic language, I am often tempted to reuse the same variable in order to prevent the initial (and now useless) value from being used later in the function:

        file = "path/to/file"
        file = open(file)

        regexp = ['...', '...']
        regexp = new RegExp( regexp.join('|') )

    The idea is that by reducing the number of variables in scope, I reduce the chances of misusing them. However, this sometimes makes the variable names look a little weird, as in the first example, where "file" refers to a "filename". I think perhaps this would be a non-issue if I could use non-nested scopes:

        begin scope1
            filename = ...
            begin scope2
                file = open(filename)
            end scope1
            // use file here
            // can't use filename by accident
        end scope2

    but I can't think of any programming language that supports this. What rules of thumb should I use in this situation? When is it best to reuse the variable? When is it best to create an extra variable? What other ways do we solve this scope problem?
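    One common way to get the effect of that smaller scope without reusing names is to wrap the initialization in a tiny helper function, so the temporaries live only inside it. A minimal Python sketch (the helper name and the sample parts are mine, purely illustrative):

        import re

        def build_pattern(parts):
            # The intermediate list of parts is confined to this helper;
            # the calling scope only ever sees the finished pattern.
            return re.compile("|".join(parts))

        pattern = build_pattern(["foo", "bar"])
        print(pattern.search("some bar text"))

    The same trick works for the filename example: a small open_config()-style helper keeps the path string out of the caller's scope.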

    Read the article

  • Is 'Protection' an acceptable Java class name?

    - by jonny
    This comes from a closed thread at Stack Overflow, where there are already some useful answers, though a commenter suggested I post here. I hope this is ok! I'm trying my best to write good, readable code, but I often have doubts about my work! I'm creating some code to check the status of some protected software, and have created a class with methods to check whether the software in use is licensed (there is a separate Licensing class). I've named the class 'Protection', which is currently accessed via the creation of an appProtect object. The methods in the class allow checking a number of things about the application, in order to confirm that it is in fact licensed for use. Is 'Protection' an acceptable name for such a class? I read somewhere that if you have to think too long about the names of methods, classes, objects, etc., then perhaps you are not coding in an object-oriented way. I've spent a lot of time thinking about this before making this post, which has led me to doubt the suitability of the name! In creating (and proofreading) this post, I'm starting to seriously doubt my work so far. I'm also thinking I should probably rename the object to applicationProtection rather than appProtect (though I am open to any comments on this too). I'm posting nonetheless, in the hope that I'll learn something from others' views/opinions, even if they're simply confirming I've "done it wrong"!

    Read the article

  • Verification of requirements question

    - by user970696
    Having done a lot of reading about V&V, I need to clarify the following. A lot of definitions (the less formal ones found in books) define verification like this: Verification: the software should conform to its specification. But then they speak about requirements verification, design verification, etc. If I say that these items are "software" for the purpose of applying the definition, what should I check them against? What specification should the requirements - which are the most basic information - conform to? And one more thing: shouldn't requirements also be validated, to make sure they meet the customer's needs? All the texts I have speak only about software validation at the end of the development process.

    Read the article

  • What is the politically correct way of refactoring others' code?

    - by dukeofgaming
    I'm currently working in a geographically distributed team in a big company. Everybody is focused on today's tasks and getting things done; however, this means things sometimes have to be done the quick way, and that causes problems... you know, same old, same old. I'm bumping into code with several smells, such as: big functions, pointless utility functions/methods (essentially just to save writing a word), overcomplicated algorithms, extremely big files that should be broken down into different files/classes (1,500+ lines), etc. What would be the best way of improving the code without making the other developers feel bad or wrong about any proposed improvements?

    Read the article

  • After how many lines of code should a function be broken down?

    - by Sumeet
    While working on an existing code base, I usually come across procedures that abuse if and switch statements. These procedures consist of overwhelming code which, I think, badly needs refactoring. The situation gets worse when I discover that some of them are recursive as well. But this is always a matter of debate: the code works fine, and no one wants to wake the dragon - even though everyone accepts it is very expensive code to maintain. I am wondering whether there are any recommendations for determining if a particular method is a culprit and needs a revisit or rewrite, so that it can be broken down or polymorphised effectively. Are there any metrics (like the number of lines in a procedure) that can be used to identify such segments of code? A checklist, or advice to convince everyone, would be great!
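    As an illustration of the metrics idea, here is a small Python sketch that flags functions by line count and by a rough branch count (a crude stand-in for cyclomatic complexity). The thresholds and the sample function are invented for the example and are not from the post; the point is only that such numbers can be computed mechanically and used as a trigger for review.

        import ast
        import textwrap

        SOURCE = textwrap.dedent('''
            def messy(x):
                if x > 0:
                    if x % 2:
                        return "odd"
                    return "even"
                elif x < 0:
                    return "negative"
                return "zero"
        ''')

        MAX_LINES = 30      # illustrative thresholds only
        MAX_BRANCHES = 8

        def report(source):
            tree = ast.parse(source)
            for node in ast.walk(tree):
                if isinstance(node, ast.FunctionDef):
                    lines = node.end_lineno - node.lineno + 1
                    branches = sum(isinstance(n, (ast.If, ast.For, ast.While))
                                   for n in ast.walk(node))
                    verdict = "review" if lines > MAX_LINES or branches > MAX_BRANCHES else "ok"
                    print(f"{node.name}: {lines} lines, {branches} branches -> {verdict}")

        report(SOURCE)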

    Read the article

  • How come verification does not include actual testing?

    - by user970696
    Having read a lot about this topic, I still don't get it. Verification should show that you are building the product right, while validation should show that you are building the right product. Yet only static techniques are mentioned as verification methods (code reviews, requirements checks...). But how can you say whether something is implemented correctly if you do not test it? It is said that verification checks, e.g., the code for its correctness. Verification: ensure that the product meets the specified requirements. Again, if a function is specified to work in a certain way, only by testing it can I say that it does. Could anyone explain this to me, please? EDIT: As the wiki says - Verification: preparing the test cases (based on analysis of the requirements). Validation: running the test cases.

    Read the article

  • Functional testing in the verification

    - by user970696
    Yesterday my question "How come verification does not include actual testing?" created a lot of controversy, yet it did not reveal the answer to a related and very important question: does black-box functional testing done by testers belong to verification or to validation? ISO 12207:12208 mentions testing explicitly only as a validation activity; however, it speaks about validating the requirements of the intended use. To me that sounds more high-level, like UAT test cases written by business users. The ISO standard mentioned above does not mention any specific verification (7.2.4.3.2) except for requirements verification, design verification, and document and code & integration verification. The last two can probably be thought of as unit and integration testing. But where, then, is the regular testing done by testers at the end of the phase? The book I mentioned in the original question says that verification is done by static techniques, yet on its V-model graph it describes system testing against the high-level description as verification, mentioning that it includes all kinds of testing - functional, load, etc. In the IEEE standard for V&V you can read this: "Even though the tests and evaluations are not part of the V&V processes, the techniques described in this standard may be useful in performing them." That is different from the ISO standard, where testing is mentioned as the validation activity. Not to mention a lot of contradictory information on the net. I would really appreciate an answer with a reference to, e.g., a standard, or an explanation of what I missed in the ISO text. As it stands, I am unable to tell where the testers' work belongs.

    Read the article

  • Which reference provides your definition of "elegant" or "beautiful" code?

    - by Donnied
    This question is phrased in a very specific way - it asks for references. There was a similar question posted which was closed because it was considered a duplicate of a "good code" question. The Programmers FAQ points out that answers should have references - otherwise it's just an unproductive sharing of (seemingly) baseless opinions. There is a difference between the shortest code and the most elegant code. This becomes clear in several seminal texts:

    Dijkstra, E. W. (1972). The humble programmer. Communications of the ACM, 15(10), 859–866.
    Kernighan, B. W., & Plauger, P. J. (1974). Programming style: Examples and counterexamples. ACM Comput. Surv., 6(4), 303–319.
    Knuth, D. E. (1984). Literate programming. The Computer Journal, 27(2), 97–111. doi:10.1093/comjnl/27.2.97

    They all note the importance of clarity over brevity. Kernighan & Plauger (1974) provide descriptions of "good" code, but "good code" is certainly not synonymous with "elegant". Knuth (1984) describes the importance of exposition and "excellence of style" to elegant programs. He cites Hoare, who says that code should be self-documenting. Dijkstra (1972) indicates that beautiful programs optimize efficiency but are not opaque. This sort of conversation is qualitatively different from a random sharing of opinions. Therefore, the question: which reference provides your definition of "elegant" or "beautiful" code? "Which *reference*" is not subjective - anything else will most likely shut the thread down, so please supply *references*, not opinions.

    Read the article

  • Are flag variables an absolute evil?

    - by dukeofgaming
    I remember doing a couple of projects where I totally neglected using flags and ended up with better architecture and code; however, flags are common practice in other projects I work on, and as the code grows and flags are added, IMHO code-spaghetti grows with them. Would you say there are any cases where using flags is good practice, or even necessary? Or would you agree that flags in code are... red flags, and should be avoided/refactored? Personally, I just get by with functions/methods that check the state in real time instead. Edit: I'm not talking about compiler flags.
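    As a minimal sketch of the "check the state in real time" alternative (the shopping-cart class and its fields are hypothetical, chosen only to illustrate the point): a stored flag duplicates information that already lives in the object and can silently go stale, while a derived property cannot.

        class CartWithFlag:
            # The is_empty flag must be kept in sync by hand.
            def __init__(self):
                self.items = []
                self.is_empty = True

            def add(self, item):
                self.items.append(item)
                self.is_empty = False   # forget this line and the flag lies


        class CartDerived:
            # The same answer, derived from the actual state on demand.
            def __init__(self):
                self.items = []

            def add(self, item):
                self.items.append(item)

            @property
            def is_empty(self):
                return not self.items


        cart = CartDerived()
        cart.add("book")
        print(cart.is_empty)   # False, and it can never drift out of sync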

    Read the article

  • System testing - making sure the system conforms to specification. Validation?

    - by user970696
    After weeks of research I have nearly completed my thesis, yet I am still unable to clear up the confusion running through all my previous threads here (and through many books). During system testing, we check the system's functions against the system analysis (the functional system design) - but that fits the definition of verification according to many books. However, I follow ISO 12207, which considers all testing to be validation (making sure the work product meets the requirements for its intended use). How can I justify that unit testing or system testing is validation when I am checking against a specification - which fulfils the definition of verification? When I test that, e.g., a "Save" button works, is that validation? This picture shows my understanding of V&V, which differs from many other sources, including the ISTQB, etc. The essential problem I have is that a book using the same picture also states elsewhere that "test activities in the area of validation are usability, alpha and beta testing. For verification, testable system requirements are defined whose correct implementation can be tested through system tests." Isn't that the opposite of what the picture says? Most books present the following picture, where validation is just making sure that customer needs are satisfied. Mind you, according to the ISO standard, the validation activity is testing.

    Read the article

  • C: What is a good source to teach standard/basic code conventions to someone newly learning the language?

    - by shan23
    I'm tutoring someone who can be described as a rank newcomer to C. Understandably, she does not know much about commonly practiced coding conventions, and hence all her programs tend to use single-letter variables, mismatched spacing/indentation, and the like, making her work very difficult to read and debug. My question is: is there a link, or a set of guidelines and examples, that she can use to adopt basic code conventions? It should not be so arcane as to scare her off, yet inclusive enough to cover the basics (so that no one would wince looking at the code). Any suggestions?

    Read the article

  • How can I convince management to deal with technical debt?

    - by Desolate Planet
    This is a question I often ask myself when working with developers. I've worked at four companies so far, and I've become aware of a lack of attention to keeping code clean and to dealing with the technical debt that hinders future progress in a software app. For example, the first company I worked for had written a database from scratch rather than using something like MySQL, and that created hell for the team when refactoring or extending the application. I've always tried to be honest and clear with my manager when he discusses projections, but management doesn't seem interested in fixing what's already there, and it's horrible to see the impact that has on team morale. What are your thoughts on the best way to tackle this problem? What I've seen is people packing up and leaving: the company becomes a revolving door, with developers coming in and out and making the code worse. How do you communicate this to management and get them interested in sorting out technical debt?

    Read the article

  • Inspection, code review - is it really testing?

    - by user970696
    The ISTQB, Wikipedia and other sources classify verification activities (reviews, etc.) as static testing, yet others do not. If we say that peer reviews and inspections are actually a kind of testing, then a lot of standards stop making sense (consider, e.g., the ISO standard, which says that validation is done by testing, while verification is done by checking work products) - they should at least say "dynamic testing" for validation, shouldn't they? I am completing a master's thesis dealing with QA, and I must admit that I have never seen worse, more ambiguous or more contradictory literature than in this field. Do you think (and if so, why) that "static testing" is a good and justifiable term, or should we stick to "testing" and "static checks/analysis"?

    Read the article

  • Checking form input data on submit with pure PHP [migrated]

    - by Leron
    I have some experience with PHP, but I have never even tried to do this with pure PHP. Now a friend of mine asked me to help him with this task, so I sat down and wrote some code. What I'm asking for is an opinion on whether this is the right way to do it when you want to use only PHP, and whether there is anything I can change to make the code better. Apart from that, I think the code is working, at least with the few tests I ran. Here it is:

        <?php
        session_start();

        // define variables and initialize with empty values
        $name = $address = $email = "";
        $nameErr = $addrErr = $emailErr = "";
        $_SESSION["name"] = $_SESSION["address"] = $_SESSION["email"] = "";
        $_SESSION["first_page"] = false;

        if ($_SERVER["REQUEST_METHOD"] == "POST") {
            if (empty($_POST["name"])) {
                $nameErr = "Missing";
            } else {
                $_SESSION["name"] = $_POST["name"];
                $name = $_POST["name"];
            }

            if (empty($_POST["address"])) {
                $addrErr = "Missing";
            } else {
                $_SESSION["address"] = $_POST["address"];
                $address = $_POST["address"];
            }

            if (empty($_POST["email"])) {
                $emailErr = "Missing";
            } else {
                $_SESSION["email"] = $_POST["email"];
                $email = $_POST["email"];
            }
        }

        if ($_SESSION["name"] != "" && $_SESSION["address"] != "" && $_SESSION["email"] != "") {
            $_SESSION["first_page"] = true;
            header('Location: http://localhost/formProcessing2.php');
            //echo $_SESSION["name"]. " " .$_SESSION["address"]. " " .$_SESSION["email"];
        }
        ?>
        <DCTYPE! html>
        <head>
            <style>
                .error { color: #FF0000; }
            </style>
        </head>
        <body>
            <form method="POST" action="<?php echo htmlspecialchars($_SERVER["PHP_SELF"]);?>">
                Name <input type="text" name="name" value="<?php echo htmlspecialchars($name);?>">
                <span class="error"><?php echo $nameErr;?></span>
                <br />
                Address <input type="text" name="address" value="<?php echo htmlspecialchars($address);?>">
                <span class="error"><?php echo $addrErr;?></span>
                <br />
                Email <input type="text" name="email" value="<?php echo htmlspecialchars($email);?>">
                <span class="error"><?php echo $emailErr;?></span>
                <br />
                <input type="submit" name="submit" value="Submit">
            </form>
        </body>
        </html>

    Read the article

  • Should comments say WHY the program is doing what it is doing? (opinion on a dictum by the inventor of Forth)

    - by AKE
    The often provocative Chuck Moore (inventor of the Forth language) gave the following advice (paraphrasing): "Use comments sparingly. Programs are self-documenting, with a modicum of help from mnemonics. Comments should say WHAT the program is doing, not HOW." My question: Should comments say WHY the program is doing what it is doing? Update: In addition to the answers below, these two provide additional insight. Beginner's guide to writing comments? http://programmers.stackexchange.com/a/98609/62203
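    A tiny Python illustration of the distinction (the scenario and the numbers are invented for the example): a "what" comment merely restates the statement, while a "why" comment records the reasoning that the code cannot express on its own.

        timeout = 5  # seconds

        # "What" comment - restates the code and adds nothing:
        timeout = timeout * 2   # double the timeout

        # "Why" comment - explains the reasoning behind the line:
        # the (hypothetical) upstream service rejects clients that retry at a
        # fixed rate, so we back off by doubling the wait between attempts.
        timeout = timeout * 2

        print(timeout)  # 20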

    Read the article

  • Single or multiple return statements in a function [on hold]

    - by Juan Carlos Coto
    When writing a function that can have several different return values, particularly when different branches of code return different values, what is the cleanest or sanest way of returning? Please note the following are really contrived examples meant only to illustrate different styles.

    Example 1: Single return

        def my_function():
            if some_condition:
                return_value = 1
            elif another_condition:
                return_value = 2
            else:
                return_value = 3
            return return_value

    Example 2: Multiple returns

        def my_function():
            if some_condition:
                return 1
            elif another_condition:
                return 2
            else:
                return 3

    The second example seems simpler and is perhaps more readable. The first one, however, might describe the overall logic a bit better (the conditions affect the assignment of the value, not whether it's returned or not). Is the second way preferable to the first? Why?

    Read the article

  • Are "Compile to JavaScript" Frameworks Hostile to Continuous Integration?

    - by joshin4colours
    Lately we've been looking at ways to improve the automated testing and related tooling of our enterprise-level GWT web app. I've realized that in some ways GWT is a bit hostile to automated testing, mainly because of the long GWT compile times from Java to JS. This makes unit testing somewhat challenging, and it also puts up roadblocks for testing in a CI environment. I've also found that some of our build and deployment processes are complicated by the nature of GWT's compile step. Is this a general problem for "compile to JavaScript" frameworks for web apps? I don't have much experience with them, but I can see some potential problems for automated testing and for continuous integration and deployment. Some issues I see:

    - Long build and compile times prevent quick deployments.
    - The language the app is developed in != JS, preventing good unit testing.
    - Obfuscated JS in the shipped app makes it more like an executable than a web app.

    Are these issues present in other similar frameworks, or is this more of a GWT issue?

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I develop high-volume processing systems: mathematical models that calculate various parameters based on millions of records, derived fields computed over millions of records, huge transaction files to process, and so on. I am well aware of unit-testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes, etc.), or some SAS process. What approach do you use when developing such systems? I usually develop several tests as stored procedures in a designated schema (TEST), run them automatically overnight and check the results. But this only covers the T-SQL - and continuous integration is hard. The real problem is testing SSIS packages: how do you test them? What is your preferred approach for stubbing data into tables (especially when you need a lot of data initialization)? I have an approach I've derived over the years, but maybe I'm just not reading enough articles. So, banking, telecom and risk developers out there: how do you test your mission-critical apps that process millions of records at end of day, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct as you develop it? How do you achieve continuous integration in such an environment (personally, I never got there)? How do you test your map-reduce jobs, for example (I don't use Hadoop, but it is quite similar)? I hope this is not too open-ended a question. luke
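    One pattern that keeps coming up for this kind of system is the fixture-based data test: seed a small, known set of rows, run the transformation, and assert on the aggregated output. The sketch below uses Python with an in-memory SQLite database purely for illustration - the table names and the aggregation step are hypothetical stand-ins for the real T-SQL/SSIS logic, not an SSIS API.

        import sqlite3

        def run_nightly_aggregation(conn):
            # Hypothetical stand-in for the real T-SQL/SSIS step:
            # summarise transactions per account into a results table.
            conn.executescript("""
                DROP TABLE IF EXISTS account_totals;
                CREATE TABLE account_totals AS
                SELECT account_id, SUM(amount) AS total
                FROM transactions GROUP BY account_id;
            """)

        def test_aggregation_with_known_fixture():
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE transactions (account_id INTEGER, amount REAL)")
            conn.executemany("INSERT INTO transactions VALUES (?, ?)",
                             [(1, 10.0), (1, 2.5), (2, 7.0)])

            run_nightly_aggregation(conn)

            totals = dict(conn.execute("SELECT account_id, total FROM account_totals"))
            assert totals == {1: 12.5, 2: 7.0}
            print("aggregation fixture test passed")

        test_aggregation_with_known_fixture()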

    Read the article

  • How to apply verification and validation to the following example

    - by user970696
    I have been following the verification and validation questions here with my colleagues, yet we are unable to see the subtle differences - probably because of a language barrier in technical English. An example:

    Requirement specification: the user wants to control the lights in 4 rooms by remote command sent from the UI, for each room separately.

    Functional specification: the UI will contain 4 checkboxes, labelled according to the rooms they control. When a checkbox is checked, a signal is sent to the corresponding light and a green dot appears next to the checkbox. When a checkbox is unchecked, a turn-off signal is sent to the corresponding light and a red dot appears next to the checkbox.

    Let me start with what I have learned here. Verification, according to many great answers here, ensures that the product reflects the specified requirements: as the functional spec is written by the producer based on the customer's requirements, it will be verified for completeness and correctness; then the design document will be checked against the functional spec (does it design the 4 checkboxes?), and the source code against the design (is there code for 4 checkboxes, functions to send the signals, etc. - is it traceable to the requirements?). Okay, the product is built and we need to test it - validate it. Here comes our trouble in understanding: validation should ensure the product meets the requirements for its specific intended use, which is basically the business requirement (does it work? can I control the lights from the UI?). But testers will definitely work with the functional spec, making sure the checkboxes are there, working, labelled, etc. They are basically checking whether the requirements in the functional spec were met in the final product - isn't that verification? (It should not be, if we stick to ISO 12207, where only validation is the actual testing.)
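    To make the distinction concrete in code, here is a minimal Python sketch: the panel model, its field names and both tests are hypothetical and only illustrate the two viewpoints - one test checks a detail from the functional spec (the green dot), the other checks the user-level requirement (the light can actually be switched on from the UI).

        class RoomLightPanel:
            # Tiny made-up model of one room's checkbox and light.
            def __init__(self):
                self.light_on = False
                self.indicator = "red"

            def set_checkbox(self, checked):
                self.light_on = checked
                self.indicator = "green" if checked else "red"

        def test_green_dot_matches_functional_spec():
            # Verification-flavoured: the build matches the functional spec detail.
            panel = RoomLightPanel()
            panel.set_checkbox(True)
            assert panel.indicator == "green"

        def test_user_can_turn_light_on():
            # Validation-flavoured: the product serves the intended use.
            panel = RoomLightPanel()
            panel.set_checkbox(True)
            assert panel.light_on

        test_green_dot_matches_functional_spec()
        test_user_can_turn_light_on()
        print("both checks passed")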

    Read the article

  • Copy-and-Pasted Test Code: How Bad is This?

    - by joshin4colours
    My current job is mostly writing GUI test code for the various applications we work on. However, I find that I tend to copy and paste a lot of code within tests. The reason is that the areas I'm testing tend to be similar enough to need repetition, but not quite similar enough to encapsulate the code into methods or objects. I find that when I try to use classes or methods more extensively, the tests become more cumbersome to maintain and sometimes outright difficult to write in the first place. Instead, I usually copy a big chunk of test code from one section, paste it into another, and make whatever minor changes I need. I don't use more structured ways of coding, such as applying more OO principles or functions. Do other coders feel this way when writing test code? Obviously I want to follow the DRY and YAGNI principles, but I find that test code (automated test code for GUI testing, anyway) can make these principles tough to follow. Or do I just need more coding practice and a better overall system of doing things? EDIT: The tool I'm using is SilkTest, which uses a proprietary language called 4Test. These tests are mostly for Windows desktop applications, but I have also tested web apps with this setup.
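    One middle ground between wholesale copy-paste and heavy OO structure is to keep a single flat "drive the screen" helper and vary only the data. The sketch below is Python rather than 4Test, and the form fields, the helper and the fake result logic are all invented for the illustration - the point is the shape of the test, not a SilkTest API.

        # One helper keeps the repeated steps (open dialog, type values,
        # press OK, read the status label); only the data varies per case.
        def run_scenario(values, expected_label):
            # Hypothetical stand-in for the GUI steps the real tool would drive.
            label = "saved" if all(values.values()) else "error"
            assert label == expected_label, (values, label, expected_label)

        SCENARIOS = [
            ({"name": "Ada", "email": "ada@example.com"}, "saved"),
            ({"name": "",    "email": "ada@example.com"}, "error"),
            ({"name": "Ada", "email": ""},                "error"),
        ]

        for values, expected in SCENARIOS:
            run_scenario(values, expected)
        print(f"{len(SCENARIOS)} scenarios passed")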

    Read the article
