Search Results

Search found 4835 results on 194 pages for 'coding hero'.

Page 5 of 194

  • Objective PHP and key value coding

    - by Lukasz
    Hi guys. In some part of my code I need something like this: $product_type = $product->type; $price_field = 'field_'.$product_type.'_price'; $price = $product->$$price_field; In other words, I need a kind of KVC (key-value coding): get an object field by a field name produced at runtime. I simply need to extend an existing system and keep its field naming convention, so please do not advise me to change the field names instead. I know something like this works for arrays, where you can easily do $price = $product[$price_field_key], so I can produce the array key dynamically. But how do I do that for objects? Please help me; Google gives me a river of results for arrays, etc. Thank you

    Read the article

  • Coding Conventions - Naming Enums

    - by Walter White
    Hi all, Is there a document describing how to name enumerations? My preference is that an enum is a type. So, for instance, you would have an enum Fruit{Apple,Orange,Banana,Pear, ... } or NetworkConnectionType{LAN,Data_3g,Data_4g, ... } I am opposed to naming them FruitEnum or NetworkConnectionTypeEnum. I understand the suffix makes it easy to pick out which files are enums, but by that logic you would also have NetworkConnectionClass and FruitClass. Also, is there a good document describing the same for constants, where to declare them, etc.? Walter
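
    A minimal Java sketch of the naming point above (hypothetical types, not from any published standard): when the enum is named as a plain type, the call sites read naturally and the Enum suffix adds nothing.

        // Hypothetical example: the same enum under both naming schemes.
        public class EnumNaming {

            // Preferred: the enum is simply the type.
            enum Fruit { APPLE, ORANGE, BANANA, PEAR }

            // Discouraged: the suffix repeats what the declaration already says.
            enum FruitEnum { APPLE, ORANGE, BANANA, PEAR }

            public static void main(String[] args) {
                Fruit snack = Fruit.APPLE;          // reads as "a Fruit"
                FruitEnum other = FruitEnum.APPLE;  // "Enum" is noise at the call site
                System.out.println(snack + " " + other);
            }
        }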

    Read the article

  • In Java it seems Public constructors are always a bad coding practice

    - by Adam Gent
    This may be a controversial question and may not be suited for this forum (so I will not be insulted if you choose to close this question). It seems that, given the current capabilities of Java, there is no reason to make constructors public ... ever. Package-private ("friendly"), private, and protected are OK, but not public. It seems that it is almost always a better idea to provide a public static method for creating objects. Every Java Bean serialization technology (JAXB, Jackson, Spring, etc.) can call a protected or private no-arg constructor. My questions are: (1) I have never seen this practice decreed or written down anywhere; is it documented somewhere? Maybe Bloch mentions it, but I don't own his book. (2) Is there a use case, other than perhaps not being super DRY, that I missed? EDIT: Here is why I think static methods are better: (1) you get better type inference (for an example, see Guava's http://code.google.com/p/guava-libraries/wiki/CollectionUtilitiesExplained); (2) as the designer of the class, you can later change what a static method returns; (3) dealing with constructor inheritance is painful, especially if you have to pre-calculate something.
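
    A minimal Java sketch of the pattern the post argues for (hypothetical Order class, not taken from the question): constructors are non-public and callers go through a public static factory method, while frameworks that need a no-arg constructor (JAXB, Jackson, Spring, ...) can still reach the protected one.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        public class Order {

            private final List<String> lines;

            // For serialization frameworks and subclasses only.
            protected Order() {
                this.lines = new ArrayList<>();
            }

            private Order(List<String> lines) {
                this.lines = lines;
            }

            // Point (2) from the post: this method can later return a cached,
            // validated, or more specialized instance without touching call sites.
            public static Order of(List<String> lines) {
                return new Order(new ArrayList<>(lines));
            }

            public List<String> getLines() {
                return new ArrayList<>(lines);
            }

            public static void main(String[] args) {
                Order order = Order.of(Arrays.asList("widget", "gadget")); // no 'new' at the call site
                System.out.println(order.getLines());
            }
        }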

    Read the article

  • Huffman coding two characters as one

    - by Adomas
    Hi, I need Huffman coding (ideally in Python or Java) that encodes text not one character at a time (a = 10, b = 11) but two characters at a time (ab = 11, ag = 10). Is this possible, and if so, where can I find it? Maybe it's somewhere on the internet and I just can't find it.
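
    There is no canonical library for this, but the standard Huffman construction works unchanged if the alphabet is two-character chunks instead of single characters. Below is a minimal Java sketch under that assumption (hypothetical class name; even-length input assumed, so a real version would pad or special-case a trailing odd character).

        import java.util.*;

        public class DigramHuffman {

            // Tree node: a leaf holds a two-character symbol, internal nodes hold null.
            static class Node implements Comparable<Node> {
                final String symbol;
                final int freq;
                final Node left, right;

                Node(String symbol, int freq, Node left, Node right) {
                    this.symbol = symbol; this.freq = freq; this.left = left; this.right = right;
                }

                public int compareTo(Node other) { return Integer.compare(freq, other.freq); }
            }

            // Count how often each two-character chunk occurs.
            static Map<String, Integer> digramFrequencies(String text) {
                Map<String, Integer> freq = new HashMap<>();
                for (int i = 0; i + 1 < text.length(); i += 2) {
                    freq.merge(text.substring(i, i + 2), 1, Integer::sum);
                }
                return freq;
            }

            // Standard Huffman tree construction, just over the digram alphabet.
            static Node buildTree(Map<String, Integer> freq) {
                PriorityQueue<Node> queue = new PriorityQueue<>();
                for (Map.Entry<String, Integer> e : freq.entrySet()) {
                    queue.add(new Node(e.getKey(), e.getValue(), null, null));
                }
                while (queue.size() > 1) {
                    Node a = queue.poll(), b = queue.poll();
                    queue.add(new Node(null, a.freq + b.freq, a, b));
                }
                return queue.poll();
            }

            // Walk the tree and record the bit string for each digram.
            static void buildCodes(Node node, String prefix, Map<String, String> codes) {
                if (node == null) return;
                if (node.symbol != null) {
                    codes.put(node.symbol, prefix.isEmpty() ? "0" : prefix); // single-symbol edge case
                } else {
                    buildCodes(node.left, prefix + "0", codes);
                    buildCodes(node.right, prefix + "1", codes);
                }
            }

            public static void main(String[] args) {
                String text = "abagabab";
                Map<String, String> codes = new HashMap<>();
                buildCodes(buildTree(digramFrequencies(text)), "", codes);
                StringBuilder encoded = new StringBuilder();
                for (int i = 0; i + 1 < text.length(); i += 2) {
                    encoded.append(codes.get(text.substring(i, i + 2)));
                }
                System.out.println("codes = " + codes + "  encoded = " + encoded);
            }
        }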

    Read the article

  • on coding style

    - by user12607414
    I vastly prefer coding to discussing coding style, just as I would prefer to write poetry instead of talking about how it should be written. Sometimes the topic cannot be put off, either because some individual coder is messing up a shared code base and needs to be corrected, or (worse) because some officious soul has decided, "what we really need around here are some strongly enforced style rules!" Neither is the case at the moment, and yet I will venture a post on the subject. The following are not rules, but suggested etiquette. The idea is to allow a coherent style of coding to flourish safely and sanely, as a humane, inductive, social process.
    Maxim M1: Observe, respect, and imitate the largest-scale precedents available. (Preserve styles of whitespace, capitalization, punctuation, abbreviation, name choice, code block size, factorization, type of comments, class organization, file naming, etc., etc., etc.)
    Maxim M2: Don't add weight to small-scale variations. (Realize that Maxim M1 has been broken many times, but don't take that as license to create further irregularities.)
    Maxim M3: Listen to and rely on your reviewers to help you perceive your own coding quirks. (When you review, help the coder do this.)
    Maxim M4: When you touch some code, try to leave it more readable than you found it. (When you review such changes, thank the coder for the cleanup. When you plan changes, plan for cleanups.)
    On the Hotspot project, which is almost 1.5 decades old, we have often practiced and benefited from such etiquette. The process is, and should be, inductive, not prescriptive. An ounce of neighborliness is better than a pound of police-work.
    Reality check: If you actually look at (or live in) the Hotspot code base, you will find we have accumulated many annoying irregularities in our source base. I suppose this is the normal condition of a lived-in space. Unless you want to spend all your time polishing and tidying, you can't live without some smudge and clutter, can you?
    Final digression: Grammars and dictionaries and other prescriptive rule books are sometimes useful, but we humans learn and maintain our language by example not grammar. The same applies to style rules. Actually, I think the process of maintaining a clean and pleasant working code base is an instance of a community maintaining its common linguistic identity. BTW, I've been reading and listening to John McWhorter lately with great pleasure. (If you end with a digression, is it a tail-digression?)

    Read the article

  • I can't program because the code I am using uses old coding styles. Is this normal for programmers? [closed]

    - by Renato Dinhani Conceição
    I'm in my first real job as a programmer, but I can't solve any problems because of the coding style used. The code here:
      - Does not have comments
      - Does not have functions (50, 100, 200, 300 or more lines executed in sequence)
      - Uses a lot of if statements with a lot of paths
      - Has variables that make no sense (e.g. cf_cfop, CF_Natop, lnom, r_procod)
      - Uses an old language (Visual FoxPro 8 from 2002), even though there are newer releases from 2007.
    I feel like I have gone back to 1970. Is it normal for a programmer familiar with OOP, clean code, design patterns, etc. to have trouble coding in this old-fashioned way? EDIT: All the answers are very good. To my dismay, it appears there are a lot of code bases like this around the world. A point made in all the answers is to refactor the code. Yeah, I would really like to do that; in my personal projects I always do. But here I can't refactor the code: programmers are only allowed to change the files involved in the task they are assigned to, and every change to old code must be kept commented in the code (even though we use Subversion for version control), plus meta information (date, programmer, task) related to that change. This has become a mess: there is code with 3 live lines and 50 old lines commented out. I'm starting to think this is not only a code problem, but a software development management problem.

    Read the article

  • What is the best approach for coding in a slow compilation environment

    - by Andrew
    I used to code in C# in a TDD style: write or change a small chunk of code, re-compile the whole solution in 10 seconds, re-run the tests, and repeat. Easy... That development methodology worked very well for me for a few years, until last year, when I had to go back to C++ coding, and it really feels like my productivity has dramatically decreased since then. C++ as a language is not the problem - I have quite a lot of C++ dev experience... but in the past. My productivity is still OK for small projects, but it gets worse as the project size increases, and once compilation time hits 10+ minutes it gets really bad. And if I find an error I have to start compilation again, etc. That is just purely frustrating. Thus I concluded that coding in small chunks (as before) is not acceptable - any recommendations on how I can get myself back into the long-gone habit of coding for an hour or so, reviewing the code manually (without relying on a fast C# compiler), and only recompiling/re-running unit tests once every couple of hours? With C# and TDD it was very easy to write code in an evolutionary way - after a dozen iterations whatever crap I started with ended up as good code - but that just does not work for me anymore (in a slow compilation environment). Would really appreciate your input and recommendations. p.s. not sure how to tag the question - anyone is welcome to re-tag it appropriately. Cheers.

    Read the article

  • Most effective work habit for coding? [on hold]

    - by Cris
    Working on a big solo project (~15,000 LOC), I am encountering the following phenomenon: I seem to work best when I program in short bursts of 10-15 minutes. Right now I am working on a section which is a complete first for me architecturally, and when architectural issues emerge during the implementation, I seem to handle them best by taking a total break and, later, sketching out the ideas on paper. When I feel I have sufficient clarity, I go back to the code. This iterates until the architectural issue for that section is resolved. It seems quite counterintuitive that I can progress more quickly by coding less and taking more breaks. I am nearing the end of the sections which are "firsts" for me and about to dive into stuff with which I am much more familiar, and I am wondering whether this counterintuitive efficiency will continue. So my question is: even for regular coding of sections one is familiar with, which don't require constant re-clarification of the best architecture, is more progress to be attained by taking more breaks and coding in bursts?

    Read the article

  • Why do these Chinese programmers indent code so differently?

    - by winston
    Currently I am working with a programmer from Shanghai, and I notice they write code indented like this: if(1==1 && 2==2) { a = 3; } else { b = 4; } However, I am accustomed to: if (1==1 && 2==2) { a = 3; } else { b = 4; } What do you think? How can I get rid of different coding styles within a single program file?

    Read the article

  • HTML coding style: attribute starts on a new line

    - by Matty
    sublvl's front end developer seems to have a strange coding style that I've never seen before. Every time they begin a new element, immediately after the element name they insert a line break. The first thing that appears on the next line is the first attribute of the element. For example:
        id="player-container"><div
        id="player-bar"><div
        id="player-controls-wrapper"><div
        id="player-controls"><div
        id ="player-controls-buttons">
        <a
    The above code was found here. I've never seen this kind of coding style before. What's going on here? Is this just a quirky style or is there some reasoning behind it?

    Read the article

  • How to organize a Coding Dojo?

    - by Stephan
    Over on Stack Overflow it was asked how to organize a coding dojo (http://stackoverflow.com/questions/4338567/how-to-organize-a-coding-dojo-event). I believe that may have been the wrong forum... I wonder the same thing: how is a Coding Dojo organized? What is the structure of a meeting? How would one pick katas? What do you plan ahead of time? I am interested in any ideas on this, as well as links to any resources that outline it.

    Read the article

  • Can coding style cause or influence memory fragmentation?

    - by Robert Dailey
    As the title states, I'd like to know if coding style can cause or influence memory fragmentation in a native application, specifically one written using C++. If it does, I'd like to know how. An example of what I mean by coding style is using std::string to represent strings (even static strings) and performing operations on them, instead of using the C library (strcmp, strlen, and so on), which works on both dynamic and static strings (the latter is beneficial since it does not require an additional allocation to access string functions, which is not the case with std::string). A "forward-looking" attitude I have with C++ is to not use the CRT, since to do so would, in a way, be a step backwards. However, such a style results in more dynamic allocations, and especially for a long-lived application like a server, this leads to some speculation that memory fragmentation might become a problem.

    Read the article

  • What's the most productive coding environment?

    - by Ubiguchi
    I was speaking with an ex-colleague the other day about the most productive way to write code, and he said he found it best "to CIMP, or Code In My Pants". When I asked him exactly what he meant, he explained he found it best to work at home, coding at his own pace, dressed comfortably (in his pants), and communicating with his team through email, IM, or the telephone. Digesting his approach (which he describes to clients as the Complete Integrated Method of Programming), I realised my coding is also more productive when working in an isolated environment, which made me wonder whether the software industry has got it all wrong: should development really be done by dispersed teams of individuals, or are there advantages to geographical herding that make up for the added interruptions it brings? So has business got it wrong? Should development occur predominantly across geographically isolated individuals to increase productivity, or are there real reasons why herding developers together makes sense?

    Read the article

  • Using "Google Guava" in coding interviews

    - by kbgn27
    I attended an in-person interview recently and performed well. But surprisingly I was rejected. When I asked HR for the reason, they contacted the technical interviewer and told me that my coding was syntactically wrong. I had used Google Guava in my code, so it looked like this: List<String> items = Lists.newArrayList(); instead of List<String> items = new ArrayList<String>(); I know that the code will compile and work as expected. Is it OK to use third-party libraries like Google Guava in interviews?
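
    For context, a small sketch of the two spellings in question plus the Java 7+ diamond form. The Guava call needs com.google.common.collect.Lists on the classpath; everything else is plain JDK.

        import com.google.common.collect.Lists;
        import java.util.ArrayList;
        import java.util.List;

        public class ListCreation {
            public static void main(String[] args) {
                List<String> guavaStyle = Lists.newArrayList();   // type argument inferred by Guava's factory
                List<String> preJava7 = new ArrayList<String>();  // what the interviewer apparently expected
                List<String> diamond = new ArrayList<>();         // Java 7+ equivalent, no third-party dependency
                guavaStyle.add("a");
                preJava7.add("b");
                diamond.add("c");
                System.out.println(guavaStyle + " " + preJava7 + " " + diamond);
            }
        }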

    Read the article

  • Lazy coding is fun

    - by Anthony Trudeau
    Every once in a while I get the opportunity to write an application that is important enough to do, but not important enough to do the right way - meaning standards, best practices, good architecture, et al. I call it lazy coding. The industry calls it RAD (rapid application development). I started on the conversion tool at the end of last week. It will convert our legacy data to a completely new system which I'm working on piece by piece. It will be used in the future, but only for the new parts, because it will only be necessary to convert the individual pieces of the data once. It was the perfect opportunity to just whip something together, but it was still functional, unlike a prototype or proof of concept. Although I would never write an application like this for a customer (internal or external), this methodology (if you can call it that) works great for something like this. I wouldn't be surprised if I get flamed for equating RAD to lazy coding or to lacking standards, best practices, or good architecture. Unfortunately, it fits the current usage. Although it's possible to create a good, maintainable application using the RAD methodology, it's just too ripe for abuse and requires too much discipline for someone, let alone a team, to do right. Sometimes it's just fun to throw caution to the wind and start slamming code.

    Read the article

  • Coding in large chunks ... Code verification skills

    - by Andrew
    As a follow-up to my previous question, "What is the best approach for coding in a slow compilation environment": To recap, I am stuck with a large software system for which a TDD ideology of "test often" does not work, and to make it even worse, features like pre-compiled headers, multi-threaded compilation, incremental linking, etc. are not available to me. Hence I think the best way out would be to add extensive logging to the system and to start "coding in large chunks", which I understand as: code for two to three hours first (as opposed to 15-20 minutes in TDD), thoroughly eyeball the code for 15 minutes, and only after all that compile and run the tests. As I have been doing TDD for quite a while, my code-eyeballing / code-verification skills have gotten rusty (you don't really need them that much if you can quickly verify what you've done in 5 seconds by running a test or two), so I am after recommendations on how to learn these source-code verification / error-spotting skills again. I know I was able to do this easily some 5-10 years ago, when I didn't have much support from the compiler and unit-testing tools that I have had until recently, so there should be a way to get back to the basics.

    Read the article

  • Is micro-optimisation important when coding?

    - by BozKay
    I recently asked a question on stackoverflow.com to find out why isset() was faster than strlen() in PHP. This raised questions about the importance of readable code and whether performance improvements of microseconds are worth even considering. My father is a retired programmer; I showed him the responses and he was absolutely certain that if a coder does not consider performance in their code, even at the micro level, they are not a good programmer. I'm not so sure - perhaps the increase in computing power means we no longer have to consider these kinds of micro-performance improvements? Perhaps this kind of consideration is up to the people who write the actual language implementation (of PHP, in the above case)? The environmental factors could be important too - the internet consumes 10% of the world's energy, and I wonder how wasteful a few microseconds of code are when replicated trillions of times on millions of websites. I'd like answers preferably based on facts about programming. Is micro-optimisation important when coding? EDIT: My personal summary of 25 answers, thanks to all. Sometimes we need to really worry about micro-optimisations, but only in very rare circumstances. Reliability and readability are far more important in the majority of cases. However, considering micro-optimisation from time to time doesn't hurt. A basic understanding can help us avoid obviously bad choices when coding, such as if (expensiveFunction() && counter < X), which should be if (counter < X && expensiveFunction()) (example from @zidarsk8). This could be an inexpensive function, in which case changing the code would be micro-optimisation; but with a basic understanding, you would not have to, because you would write it correctly in the first place.
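
    A minimal Java sketch of the condition-ordering point from the summary above; expensiveCheck() is a hypothetical stand-in for any costly call. With &&, Java short-circuits left to right, so putting the cheap counter test first skips the expensive call on most iterations.

        public class ShortCircuit {

            // Pretend this does real work (I/O, a database hit, heavy math, ...).
            static boolean expensiveCheck(int i) {
                try {
                    Thread.sleep(1);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                return i % 2 == 0;
            }

            public static void main(String[] args) {
                final int limit = 10;
                int hits = 0;
                for (int counter = 0; counter < 10_000; counter++) {
                    // Cheap test first: expensiveCheck() only runs while counter < limit.
                    if (counter < limit && expensiveCheck(counter)) {
                        hits++;
                    }
                }
                System.out.println("hits = " + hits);
            }
        }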

    Read the article

  • Secure Coding Practices in .NET

    - by SoftwareSecurity
    Thanks to everyone who helped pack the room at the Fox Valley Day of .NET. This presentation was designed to help developers understand why secure coding is important, what areas to focus on, and what additional resources are available. You can find the slides here. Remember to understand what you are really trying to protect within your application. This needs to be a conversation between the application owner, developer and architect. Understand what data (or asset) needs to be protected. This could be passwords, credit cards, or Social Security Numbers; it may also be business-specific information such as business-confidential data. Performing a Risk and Privacy Assessment & Threat Model on your applications, even in a small way, can help you organize this process. These are the areas to pay attention to when coding:
      - Authentication & Authorization
      - Logging & Auditing
      - Event Handling
      - Session and State Management
      - Encryption
    Links requested:
      - Slides
      - Books: The Security Development Lifecycle (SDL: A Process for Developing Demonstrably More Secure Software), Threat Modeling, Writing Secure Code, The Web Application Hacker's Handbook, Secure Programming with Static Analysis
      - Other resources: OWASP, OWASP Top 10, OWASP WebScarab, OWASP WebGoat, Internet Storm Center, Web Application Security Consortium
      - Events: OWASP AppSec 2011 in Minneapolis

    Read the article

  • Why use spaces instead of tabs for indentation? [closed]

    - by erenon
    Possible Duplicate: Are spaces preferred over tabs for indentation? Why do most coding standards recommend the use of spaces instead of tabs? Tabs can be configured to be as many characters wide as needed, but spaces can't. Examples: the Zend CS and the PEAR CS. The PEAR manual says this "helps to avoid problems with diffs, patches, SVN history and annotations." How could tabs cause those problems?

    Read the article

  • how to tackle a new project

    - by stevo
    Hi, I have a question about best practice for tackling a new project - any project. When starting a new project, how do you go about tackling it? Do you split it into sections, start writing code, or draw up flow diagrams? I'm asking because I'm looking for advice on how to start new projects so I can get going on them more quickly, with the project planned and designed, and start coding with everything worked out. Any advice? Thanks, Stephen

    Read the article

  • What's the best book for coding conventions?

    - by Joschua
    What's the best book about coding conventions (and perhaps design patterns) that you highly recommend (ideally with code samples in Python, C++ or Java)? It would be good if the book (or another one) also covered project management and agile software development where appropriate (for example, how projects fail through spaghetti code). I will accept the answer whose book(s) look the most interesting (maximum two books per answer, please), because the reading might take a while :)

    Read the article

  • How important is knowing functionality before coding?

    - by minusSeven
    I work for a software development company to which the development work has been offshored. The onshore team handles support and talks directly to the clients; we never talk to the clients directly, we just talk to people from the onshore team, who do. When requirements come in, the onshore team talks to the clients, writes requirement documents, and informs us. We write design documents after studying the requirements (we follow the traditional waterfall model). But there is one problem in the whole process: nobody on either the offshore or the onshore team understands the functionality of the application completely. We just know it's a big, complex web app handling complex order processing, catalog management, campaign management and other activities. We struggle with the design documents because the requirements are not clear, and it turns into a series of questions and answers going back and forth between the onshore team, the offshore team and the clients. We are often told to understand the functionality from the code, but that's usually not feasible, as the code base is huge and even understanding a simple menu item takes days if not weeks. We tried asking the clients for a knowledge transfer about the application, but to no avail. Our manager often tells us to start coding even if the design document is not complete or the requirements are not clear. We start by coding the part of the requirement that seems clear and wait for the rest; this usually delays the deployment by a month. In extreme cases we have very few errors in development and production, but the clients say that's not what they asked for. That starts a blame game and a series of change requests, and we end up developing something very different. My question is: how would you do development work if you don't fully know the functionality of the app? UPDATE: The development methodology isn't really my choice, and I am not my team's lead; it is just the way it began. I tried to tell people about the advantages of agile, but to no avail. Besides, I don't think my team has the necessary mindset to work in an agile environment.

    Read the article
