Search Results

Search found 8664 results on 347 pages for 'lost with coding'.

Page 5/347 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Which coding style is more common?

    - by Babiker
    In no way, shape, or form am I advertising/promoting my programming style, but as far as 'multiple variable declarations' are concerned, which case is more acceptable professionally and commonly? Case 1: private $databaseURL = "localhost" ; private $databaseUName = "root" ; private $databasePWord = "" ; private $databaseName = "AirAlliance"; Case 2: private $databaseURL = "localhost"; private $databaseUName = "root"; private $databasePWord = ""; private $databaseName = "AirAlliance"; The reason I like case 1 is that I can skim through it and see that everything is correct much faster than with case 2. I can also visually get familiar with the variable names, which makes them faster to work with later on in the program.
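
    Laid out on separate lines, the two cases presumably look something like the PHP below (the class names are placeholders, and the exact padding of case 1 is assumed):

        <?php
        class DatabaseConfigCase1 {
            // Case 1: declarations padded so the values and semicolons line up.
            private $databaseURL   = "localhost"   ;
            private $databaseUName = "root"        ;
            private $databasePWord = ""            ;
            private $databaseName  = "AirAlliance" ;
        }

        class DatabaseConfigCase2 {
            // Case 2: no padding; each declaration ends right after its value.
            private $databaseURL = "localhost";
            private $databaseUName = "root";
            private $databasePWord = "";
            private $databaseName = "AirAlliance";
        }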

    Read the article

  • Oracle Coding Standards Feature Implementation

    - by Mike Hofer
    Okay, I have reached a sort of impasse. In my open source project, a .NET-based Oracle database browser, I've implemented a bunch of refactoring tools. So far, so good. The one feature I was really hoping to implement was a big "Global Reformat" that would make the code (scripts, functions, procedures, packages, views, etc.) standards compliant. (I've always been saddened by the lack of decent SQL refactoring tools, and wanted to do something about it.) Unfortunately, I am discovering, much to my chagrin, that there doesn't seem to be any one widely used or even "generally accepted" standard for PL/SQL. That kind of puts a crimp on my implementation plans. My search has been fairly exhaustive. I've found lots of conflicting documents, threads and articles, and the opinions are fairly diverse. (Comma placement, of all things, seems to generate quite a bit of debate.) So I'm faced with a couple of options: Add a feature that lets the user customize the standard and then reformat the code according to that standard. —OR— Add a feature that lets the user customize the standard and simply generate a violations list like StyleCop does, leaving the SQL untouched. In my mind, the first option saves the end-users a lot of work, but runs the risk of modifying SQL in potentially unwanted ways. The second option runs the risk of generating lots of warnings and doing no work whatsoever. (It'd just be generally annoying.) In either scenario, I still have no standard to go by. What I'd need to know from you guys is kind of poll-ish, but kind of not. If you were going to use a tool of this nature, what parts of your SQL code would you want it to warn you about or fix? Again, I'm just at a loss due to the lack of a cohesive standard. And given that there isn't anything out there that's officially published by Oracle, I think this is something the community could weigh in on. Also, given the way that voting works on SO, the votes would help to establish the popularity of a given "refactoring." P.S. The engine parses SQL into an expression tree so it can robustly analyze the SQL and reformat it. There should be quite a bit that we can do to correct the format of the SQL. But I am thinking that for the first release of the thing, layout is the primary concern. Though it is worth noting that the thing already has refactorings for converting keywords to upper case, and identifiers to lower case.
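
    For illustration, the kind of change the two existing refactorings make (the query itself is invented for this sketch):

        -- before
        select EmpNo, EName from Scott.EMP where DeptNo = 10;

        -- after: keywords converted to upper case, identifiers to lower case
        SELECT empno, ename FROM scott.emp WHERE deptno = 10;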

    Read the article

  • Objective PHP and key value coding

    - by Lukasz
    Hi guys. In some part of my code I need something like this: $product_type = $product->type; $price_field = 'field_'.$product_type.'_price'; $price = $product->$$price_field; In other words, I need a kind of KVC, meaning getting an object field by a field name produced at runtime. I simply need to extend an existing system and keep its field naming convention, so please do not advise me to change the field names instead. I know something like this works for arrays, where you can easily do $price = $product[$price_field_key], so I can produce the array key dynamically. But how do I do that for objects? Please help; Google gives me a river of results for arrays, etc. Thank you
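
    For reference, PHP does support this directly with variable property names; a minimal sketch (the class and field names are placeholders, not from the asker's system):

        <?php
        class Product {
            public $type = 'book';
            public $field_book_price = 9.99;
        }

        $product = new Product();
        $price_field = 'field_' . $product->type . '_price';

        // A single $ (optionally with braces) reads the property whose name
        // is held in $price_field at runtime.
        $price = $product->$price_field;     // 9.99
        $price = $product->{$price_field};   // same thing; the braces make it explicit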

    Read the article

  • Coding Conventions - Naming Enums

    - by Walter White
    Hi all, Is there a document describing how to name enumerations? My preference is that an enum is a type. So, for instance, you would have enum Fruit { Apple, Orange, Banana, Pear, ... } and NetworkConnectionType { LAN, Data_3g, Data_4g, ... }. I am opposed to naming them FruitEnum and NetworkConnectionTypeEnum. I understand it makes it easy to pick out which files are enums, but by the same logic you would also have NetworkConnectionClass and FruitClass. Also, is there a good document describing the same for constants, where to declare them, etc.? Walter
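
    As a point of comparison (assuming the question is about Java), the widely followed Java convention names the enum like any other type, with no Enum suffix, and upper-cases the constants:

        // Type named like a class; no "Enum" suffix.
        public enum NetworkConnectionType {
            LAN,        // constants in upper case, like other constants
            DATA_3G,
            DATA_4G
        }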

    Read the article

  • In Java it seems Public constructors are always a bad coding practice

    - by Adam Gent
    This may be a controversial question and may not be suited for this forum (so I will not be insulted if you choose to close it). It seems that, given the current capabilities of Java, there is no reason to make constructors public ... ever. Friendly (package-private), private, and protected are OK, but not public. It seems that it is almost always a better idea to provide a public static method for creating objects. Every Java Bean serialization technology (JAXB, Jackson, Spring, etc.) can call a protected or private no-arg constructor. My questions are: Has this practice ever been decreed or written down anywhere? I have never seen it; maybe Bloch mentions it, but I don't own his book. Is there a use case, other than perhaps not being super DRY, that I missed? EDIT: Here is why I think static methods are better. 1. You get better type inference; for example, see Guava's http://code.google.com/p/guava-libraries/wiki/CollectionUtilitiesExplained 2. As the designer of the class you can later change what is returned from a static method. 3. Dealing with constructor inheritance is painful, especially if you have to pre-calculate something.
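
    A minimal sketch of the pattern being argued for, essentially Bloch's "static factory method" idiom (the class is invented for illustration):

        public final class Temperature {
            private final double celsius;

            // The constructor stays private; callers go through the named factories.
            private Temperature(double celsius) {
                this.celsius = celsius;
            }

            public static Temperature ofCelsius(double value) {
                return new Temperature(value);
            }

            public static Temperature ofFahrenheit(double value) {
                // A factory can pre-calculate before delegating, which a constructor
                // chaining to this(...) cannot do as freely.
                return new Temperature((value - 32.0) * 5.0 / 9.0);
            }

            public double celsius() { return celsius; }
        }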

    Read the article

  • Huffman coding two characters as one

    - by Adomas
    Hi, I need Huffman coding code (preferably in Python or in Java) which encodes text not one character at a time (a = 10, b = 11) but two at a time (ab = 11, ag = 10). Is it possible, and if yes, where could I find it? Maybe it's somewhere on the internet and I just can't find it.
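
    It is possible: Huffman coding works over any symbol alphabet, so treating each two-character pair as one symbol is enough. A minimal Python sketch (an illustration, not a tested library) using only the standard library:

        import heapq
        from collections import Counter

        def huffman_codes(text):
            # Split the text into two-character symbols; an odd trailing
            # character simply becomes a one-character symbol.
            symbols = [text[i:i+2] for i in range(0, len(text), 2)]
            freq = Counter(symbols)
            # Heap entries: (frequency, tie-breaker, {symbol: code-so-far}).
            heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
            heapq.heapify(heap)
            if len(heap) == 1:                       # only one distinct symbol
                return {sym: "0" for sym in heap[0][2]}
            tie = len(heap)
            while len(heap) > 1:
                f1, _, left = heapq.heappop(heap)
                f2, _, right = heapq.heappop(heap)
                merged = {s: "0" + c for s, c in left.items()}
                merged.update({s: "1" + c for s, c in right.items()})
                heapq.heappush(heap, (f1 + f2, tie, merged))
                tie += 1
            return heap[0][2]

        text = "abagabacab"
        codes = huffman_codes(text)
        encoded = "".join(codes[text[i:i+2]] for i in range(0, len(text), 2))
        print(codes, encoded)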

    Read the article

  • on coding style

    - by user12607414
    I vastly prefer coding to discussing coding style, just as I would prefer to write poetry instead of talking about how it should be written. Sometimes the topic cannot be put off, either because some individual coder is messing up a shared code base and needs to be corrected, or (worse) because some officious soul has decided, "what we really need around here are some strongly enforced style rules!" Neither is the case at the moment, and yet I will venture a post on the subject. The following are not rules, but suggested etiquette. The idea is to allow a coherent style of coding to flourish safely and sanely, as a humane, inductive, social process. Maxim M1: Observe, respect, and imitate the largest-scale precedents available. (Preserve styles of whitespace, capitalization, punctuation, abbreviation, name choice, code block size, factorization, type of comments, class organization, file naming, etc., etc., etc.) Maxim M2: Don't add weight to small-scale variations. (Realize that Maxim M1 has been broken many times, but don't take that as license to create further irregularities.) Maxim M3: Listen to and rely on your reviewers to help you perceive your own coding quirks. (When you review, help the coder do this.) Maxim M4: When you touch some code, try to leave it more readable than you found it. (When you review such changes, thank the coder for the cleanup. When you plan changes, plan for cleanups.) On the Hotspot project, which is almost 1.5 decades old, we have often practiced and benefited from such etiquette. The process is, and should be, inductive, not prescriptive. An ounce of neighborliness is better than a pound of police-work. Reality check: If you actually look at (or live in) the Hotspot code base, you will find we have accumulated many annoying irregularities in our source base. I suppose this is the normal condition of a lived-in space. Unless you want to spend all your time polishing and tidying, you can't live without some smudge and clutter, can you? Final digression: Grammars and dictionaries and other prescriptive rule books are sometimes useful, but we humans learn and maintain our language by example not grammar. The same applies to style rules. Actually, I think the process of maintaining a clean and pleasant working code base is an instance of a community maintaining its common linguistic identity. BTW, I've been reading and listening to John McWhorter lately with great pleasure. (If you end with a digression, is it a tail-digression?)

    Read the article

  • I can't program because the code I am using uses old coding styles. Is this normal for programmers? [closed]

    - by Renato Dinhani Conceição
    I'm in my first real job as a programmer, but I can't solve any problems because of the coding style used. The code here: does not have comments; does not have functions (50, 100, 200, 300 or more lines executed in sequence); uses a lot of if statements with a lot of paths; has variables that make no sense (e.g. cf_cfop, CF_Natop, lnom, r_procod); and uses an old language (Visual FoxPro 8 from 2002), even though there are newer releases from 2007. I feel like I have gone back to 1970. Is it normal for a programmer familiar with OOP, clean code, design patterns, etc. to have trouble coding in this old-fashioned way? EDIT: All the answers are very good. To my dismay, it appears that there are a lot of code bases like this around the world. A point mentioned in all the answers is to refactor the code. Yeah, I would really like to do that; in my personal projects I always do, but... I can't refactor this code. Programmers are only allowed to change the files belonging to the task they are assigned to. Every change to old code must be kept commented out in the code (even with Subversion as version control), plus meta information (date, programmer, task) related to that change (this has become a mess; there is code with 3 live lines and 50 old lines commented out). I'm starting to think this is not only a code problem, but a software development management problem.

    Read the article

  • What is the best approach for coding in a slow compilation environment

    - by Andrew
    I used to code in C# in a TDD style - write or change a small chunk of code, re-compile the whole solution in 10 seconds, re-run the tests, and again. Easy... That development methodology worked very well for me for a few years, until last year when I had to go back to C++ coding, and it really feels like my productivity has dramatically decreased since. C++ as a language is not the problem - I have quite a lot of C++ dev experience... but in the past. My productivity is still OK for small projects, but it gets worse as the project size increases, and once compilation time hits 10+ minutes it gets really bad. And if I find an error I have to start the compilation again, etc. That is just purely frustrating. Thus I concluded that coding in small chunks (as before) is not acceptable - any recommendations on how I can get myself back into the long-gone habit of coding for an hour or so, reviewing the code manually (without relying on a fast C# compiler), and only recompiling/re-running unit tests once every couple of hours? With C# and TDD it was very easy to write code in an evolutionary way - after a dozen iterations whatever crap I started with ended up as good code, but that just does not work for me anymore (in a slow compilation environment). I would really appreciate your input and recommendations. p.s. not sure how to tag the question - anyone is welcome to re-tag it appropriately. Cheers.

    Read the article

  • Most effective work habit for coding? [on hold]

    - by Cris
    Working on a big solo project (~15,000 LOC), I am encountering the following phenomenon: I seem to work best when I program in short bursts of 10-15 minutes. Right now I am working on a section which is a complete first for me architecturally, and when architectural issues emerge during the implementation, I seem best able to deal with them by taking a total break, then later sketching out the ideas on paper, and, once I feel I have sufficient clarity, going back to the code. This iterates until the architectural issue for that section is resolved. It seems quite counterintuitive that I can progress more quickly by coding less and taking more breaks. I am nearing the end of the sections which are "firsts" for me and about to dive into stuff I am much more familiar with, and I am wondering whether this counterintuitive efficiency will continue. So my question is: even for regular coding of sections one is familiar with, which don't require constant re-clarification of the best architecture, is more progress to be attained by taking more breaks and coding in bursts?

    Read the article

  • Lost Windows 7 files

    - by Pader
    My intention was to have a dual-boot system with Ubuntu and Windows 7. Obviously I did something wrong, because although I had a boot menu (is it normal for it to appear DOS-like?) which gave me the option of booting into Windows 7, I was unable to do so. Also, when I booted into Ubuntu, my Windows 7 drive was not available. The Windows 7 drive was an internal 1TB drive partitioned into a 200GB (OS) partition and a second partition making up the remainder. I was still unable to access this Windows 7 drive even after deleting Ubuntu, as I kept getting a 'requires an NTFS drive' error, or something similar. I could not even re-install Windows 7, as the disk was not recognised. I did eventually get the drive back, but I cannot for the life of me remember how. I did try to recover my lost Windows 7 data using Ontrack EasyRecovery (which has always been successful in the past for post-format recovery), but it would not recognise the 1TB drive, although it was now formatted as NTFS. From other posts on this site, I gather that this is considered a 'Windows 7 site' problem by Linux users. However, I would dearly love to recover some of my lost Windows 7 files. I had resigned myself to a lot of lost personal data, but I happened to notice that a 2TB drive I had connected through a USB docking station had been repartitioned. It must have happened when I installed Ubuntu, as I can think of no other explanation. I certainly do not remember consciously asking Ubuntu to do this. The additional two partitions on the 2TB drive, the original Windows

    Read the article

  • Why do those Chinese programmers indent code so differently?

    - by winston
    Currently I am working with some programmers from Shanghai, and I notice they have a coding indentation style like this: if(1==1 && 2==2) { a = 3; } else { b = 4; } However, I am accustomed to: if (1==1 && 2==2) { a = 3; } else { b = 4; } What do you think? How could I get rid of different coding styles within a single program file?
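
    On separate lines, the two quoted snippets presumably differ in brace placement, along these lines (an assumption; the original layout is not shown above):

        // Illustrative C++ only; the variable names are taken from the quoted snippets.
        void example(int& a, int& b)
        {
            // One common style: opening braces on their own lines (Allman style).
            if (1 == 1 && 2 == 2)
            {
                a = 3;
            }
            else
            {
                b = 4;
            }

            // Another common style: opening brace on the same line (K&R style).
            if (1 == 1 && 2 == 2) {
                a = 3;
            } else {
                b = 4;
            }
        }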

    Read the article

  • The Lost Episode of Cosmos: The Meat Planet

    - by Jason Fitzpatrick
    In the 1980s Carl Sagan captivated TV viewers with his exploration of the universe; we present to you a lost episode: The Meat Planet. The creators of the parody video, Darren Cullen and Mark Tolson, engaged in some expert splicing and dicing of past Cosmos episodes to create their masterpiece: the lost episode focused on the fabled Meat Planet. Watch the episode above or hit up the link below for more information about the project. Meat Planet [via Boing Boing]

    Read the article

  • HTML coding style: attribute starts on a new line

    - by Matty
    sublvl's front end developer seems to have a strange coding style that I've never seen before. Every time they begin a new element, immediately after the element name they insert a line break. The first thing that appears on the next line is the first attribute of the element. For example: id="player-container"><div id="player-bar"><div id="player-controls-wrapper"><div id="player-controls"><div id ="player-controls-buttons"> <a The above code was found here. I've never seen this kind of coding style before. What's going on here? Is this just a quirky style or is there some reasoning behind it?
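
    Based on the description (a line break right after the element name, with the first attribute on the next line), the quoted markup presumably reads roughly like this:

        <div
        id="player-container"><div
        id="player-bar"><div
        id="player-controls-wrapper"><div
        id="player-controls"><div
        id="player-controls-buttons">
            <a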

    Read the article

  • How to organize a Coding Dojo?

    - by Stephan
    Over on Stack Overflow it was asked how to organize a coding dojo (http://stackoverflow.com/questions/4338567/how-to-organize-a-coding-dojo-event). I believe that may have been the wrong forum... I wonder the same thing: how is a Coding Dojo organized? What is the structure of a meeting? How would one pick katas? What do you plan ahead of time? I am interested in any ideas on this, as well as links to any resources that outline it.

    Read the article

  • Can coding style cause or influence memory fragmentation?

    - by Robert Dailey
    As the title states, I'd like to know if coding style can cause or influence memory fragmentation in a native application, specifically one written using C++. If it does, I'd like to know how. An example of what I mean by coding style is using std::string to represent strings (even static strings) and perform operations on them instead of using the C library (such as strcmp, strlen, and so on), which can work both on dynamic strings and static strings (the latter is beneficial since no additional allocation is required to use the string functions, which is not the case with std::string). A "forward-looking" attitude I have with C++ is to not use the CRT, since to do so would, in a way, be a step backwards. However, such a style results in more dynamic allocations, and especially for a long-lived application like a server, this causes some speculation that memory fragmentation might become a problem.
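
    To make the trade-off concrete, a minimal sketch (invented for illustration, not taken from the question): every std::string that outgrows the small-string buffer costs a heap allocation, and a long-lived server doing this constantly is what raises the fragmentation worry, whereas the C-library calls operate on the static data in place:

        #include <cstring>
        #include <string>

        static const char kHeader[] = "Content-Type: application/json";

        std::size_t header_len_c() {
            return std::strlen(kHeader);              // no allocation
        }

        bool header_match_c(const char* s) {
            return std::strcmp(s, kHeader) == 0;      // no allocation
        }

        std::size_t header_len_cpp() {
            std::string h(kHeader);                   // likely allocates: the literal is
            return h.size();                          // longer than typical SSO buffers
        }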

    Read the article

  • What's the most productive coding environment

    - by Ubiguchi
    I was speaking with an ex-colleague the other day about the most productive way to write code, and he said he found it best "to CIMP, or Code In My Pants". When I asked him exactly what he meant, he explained he found it best to work at home, coding at his own pace, dressed comfortably (in his pants), and communicating with his team through emails, IM, or the telephone. Digesting his approach (which he describes to clients as the Complete Integrated Method of Programming), I realised my coding is also more productive when I work in an isolated environment, which made me wonder whether the software industry has got it all wrong: should development really be done by dispersed teams of individuals, or are there advantages to geographical herding that make up for the added interruptions it brings? So has business got it wrong? Should development occur predominantly among geographically isolated individuals to increase productivity, or are there real reasons why herding developers together makes sense?

    Read the article

  • Using "Google Guava" in coding interviews

    - by kbgn27
    I attended an in-person interview recently and performed well, but surprisingly I was rejected. When I asked HR for the reason, they contacted the technical interviewer and told me that I was syntactically wrong while coding. I used Google Guava for coding, so my code looked like this: List<String> items = Lists.newArrayList(); instead of List<String> items = new ArrayList<String>(); I know that the code will compile and work as expected. Is it OK to use third-party libraries like Google Guava in interviews?
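
    For context, a minimal sketch showing that the two lines are interchangeable (assumes Guava is on the classpath):

        import com.google.common.collect.Lists;   // Guava
        import java.util.ArrayList;
        import java.util.List;

        class GuavaVsJdk {
            public static void main(String[] args) {
                // Both create an empty ArrayList<String>; the Guava factory just
                // infers the type argument (handy before Java 7's diamond operator).
                List<String> viaGuava = Lists.newArrayList();
                List<String> viaJdk = new ArrayList<String>();
                System.out.println(viaGuava.equals(viaJdk));   // true
            }
        }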

    Read the article

  • Lost in Code?

    - by Geertjan
    Sometimes you're coding and you find yourself forgetting your context. For example, look at this situation: the cursor is on line 52. Imagine you're coding there and have been puzzling over some problem for a while. Wouldn't it be handy to know, without scrolling up (and then back down again to where you were working), what the method signature looks like? And does the method begin two lines above the visible code or 10 lines? That information can now, in NetBeans IDE 7.3 (and already in the 7.3 Beta), very easily be ascertained by putting the cursor on the closing brace of the code block: as you can see, a new vertical line is shown parallel to the line numbers, connecting the end of the method with its start, and the complete method signature is shown at the top of the editor, together with the number of the line on which it's found. Very handy. The same support is available for other file types, such as JavaScript files.

    Read the article

  • How to re-focus a text field when focus is lost on an HTML form?

    - by Horace Ho
    There is only one text field on an HTML form. Users input some text, press Enter, submit the form, and the form is reloaded. The main use is barcode reading. I use the following code to set the focus to the text field: <script language="javascript"> <!-- document.getElementById("#{id}").focus() //--> </script> It works most of the time (if nobody touches the screen/mouse/keyboard). However, when the user clicks somewhere outside the field within the browser window (the white empty space), the cursor is gone. On a single-field HTML form, how can I prevent the cursor from getting lost? Or how can I re-focus the cursor inside the field after it is lost? Thanks!
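
    One common approach is to re-focus the field whenever it loses focus; a minimal sketch ("barcode" is a placeholder id, not from the original form):

        <input type="text" id="barcode" autofocus>
        <script>
          var field = document.getElementById("barcode");
          field.addEventListener("blur", function () {
            // Defer so the browser finishes moving focus before we pull it back.
            setTimeout(function () { field.focus(); }, 0);
          });
        </script>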

    Read the article

  • Lazy coding is fun

    - by Anthony Trudeau
    Every once in a while I get the opportunity to write an application that is important enough to do, but not important enough to do the right way -- meaning standards, best practices, good architecture, et al. I call it lazy coding. The industry calls it RAD (rapid application development). I started on the conversion tool at the end of last week. It will convert our legacy data to a completely new system which I'm working on piece by piece. It will be used in the future, but only for the new parts, because it will only be necessary to convert each individual piece of the data once. It was the perfect opportunity to just whip something together, but it was still functional, unlike a prototype or proof of concept. Although I would never write an application like this for a customer (internal or external), this methodology (if you can call it that) works great for something like this. I wouldn't be surprised if I get flamed for equating RAD to lazy coding or lacking standards, best practices, or good architecture. Unfortunately, it fits the current usage. Although it's possible to create a good, maintainable application using the RAD methodology, it's just too ripe for abuse and requires too much discipline for one person, let alone a team, to do right. Sometimes it's just fun to throw caution to the wind and start slamming code.

    Read the article

  • Coding in large chunks ... Code verification skills

    - by Andrew
    As a follow-up to my previous question: What is the best approach for coding in a slow compilation environment. To recap: I am stuck with a large software system for which the TDD ideology of "test often" does not work. And to make it even worse, features like pre-compiled headers, multi-threaded compilation, incremental linking, etc. are not available to me - hence I think that the best way out would be to add extensive logging to the system and to start "coding in large chunks", which I understand as coding for two or three hours first (as opposed to 15-20 minutes in TDD), thoroughly eyeballing the code for 15 minutes, and only after all that compiling and running the tests. As I have been doing TDD for quite a while, my code-eyeballing / code-verification skills have got rusty (you don't really need them that much if you can quickly verify what you've done in 5 seconds by running a test or two), so I am after recommendations on how to learn these source-code verification / error-spotting skills again. I know I was able to do this easily some 5-10 years ago, when I didn't have the support from the compiler/unit-testing tools that I have had until recently, so there should be a way to get back to basics.

    Read the article

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites: http://blog.stackoverflow.com http://www.codinghorror.com (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!) I am beginning the slow, painful process of recovering the websites from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using them: my IP address was quickly banned from Google for using it; I got lots of 500 and 503 errors and "waiting 5 minutes…" messages; and ultimately, I could recover the text content faster by hand. I've had much better luck by using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult. Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages? (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… Unless you have a time machine…)

    Read the article

  • Is micro-optimisation important when coding?

    - by BozKay
    I recently asked a question on stackoverflow.com to find out why isset() was faster than strlen() in PHP. This raised questions about the importance of readable code and whether performance improvements of microseconds in code were worth even considering. My father is a retired programmer; I showed him the responses, and he was absolutely certain that if a coder does not consider performance in their code even at the micro level, they are not a good programmer. I'm not so sure - perhaps the increase in computing power means we no longer have to consider these kinds of micro-performance improvements? Perhaps this kind of consideration is up to the people who write the actual language implementation (of PHP, in the above case)? The environmental factors could be important too - the internet consumes 10% of the world's energy, and I wonder how wasteful a few microseconds of code are when replicated trillions of times on millions of websites. I'd like to know answers preferably based on facts about programming. Is micro-optimisation important when coding? EDIT: My personal summary of 25 answers, thanks to all. Sometimes we need to really worry about micro-optimisations, but only in very rare circumstances. Reliability and readability are far more important in the majority of cases. However, considering micro-optimisation from time to time doesn't hurt. A basic understanding can help us not to make obviously bad choices when coding, such as if (expensiveFunction() && counter < X) which should be if (counter < X && expensiveFunction()) (example from @zidarsk8). This could be an inexpensive function, and then changing the code would be micro-optimisation; but with a basic understanding, you would not have to, because you would write it correctly in the first place.
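
    For context, a small PHP sketch of the two micro-optimisations mentioned (the shapes are assumed; the exact code from the linked question is not reproduced here):

        <?php
        // 1. isset() on a string offset vs. strlen(): both test "is $foo shorter
        //    than 6 characters?", but isset() is a language construct, not a call.
        $foo = "hello";
        var_dump(strlen($foo) < 6);     // bool(true)
        var_dump(!isset($foo[5]));      // bool(true)

        // 2. Short-circuit ordering: put the cheap test first so the expensive
        //    call is skipped whenever the counter already rules the branch out.
        function expensiveFunction() { usleep(1000); return true; }
        $counter = 100;
        $X = 10;
        if ($counter < $X && expensiveFunction()) { echo "do work\n"; }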

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >