Search Results

Search found 6690 results on 268 pages for 'worst practices'.

Page 37 of 268

  • Am I bored with programming? [closed]

    - by user1167074
    I started programming two years ago and learned web programming while working for big corporate companies. I was very passionate, and I even did a couple of side projects that were well appreciated by my friends and colleagues. But for the past two months I have not been doing anything really interesting with programming; even when I get good ideas I don't feel like coding, and subconsciously I find myself thinking "So what?" if I do this project. I would like to know from more experienced programmers: is this just a phase, or am I really missing something? Thanks

    Read the article

  • Pros and cons of creating a print-friendly page to remove the use of PDFs?

    - by Phil
    The company I work for has a one-page invoice that uses the TCPDF library. They wanted to make some design changes that I found incredibly difficult to set up in PDF format. Using HTML/CSS I could easily create the page and have it print very nicely, but I have a feeling that I am overlooking something. What are the pros and cons of setting up a page just for printing? What are the pros and cons of putting out a PDF? I could also use the CSS inline so that if they wanted to download it and open it, they could.

    Read the article

  • Make a flowchart to demonstrate closure behavior

    - by thomas
    I saw the test question below the other day, in which the author used a flowchart to represent the logic of loops, and I got to thinking it would be interesting to do this with some more complex logic. For example, the closure in this IIFE sort of boggles me:

      while (i <= qty_of_gets) {
        // needs an IIFE
        (function(i) {
          promise = promise.then(function() {
            return $.get("queries/html/" + product_id + i + ".php");
          });
        }(i++));
      }

    I wonder if seeing a flowchart representation of what happens in it would be more elucidating. Could such a thing be done? Would it be helpful? Or just messy? I haven't the foggiest clue where to start, but thought maybe someone would like to take a stab. Probably all the AJAX could go and it could just be a simple return within the IIFE.
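
    For what it's worth, here is a minimal sketch of the closure behavior such a flowchart would have to capture, with the AJAX call replaced by a logging stub (the stub and the starting values are assumptions, not part of the original code): each invocation of the IIFE captures its own copy of i, so the chained callbacks see 1, 2, 3 rather than the final value of the outer i.

      var promise = Promise.resolve();   // stand-in for the original promise
      var qty_of_gets = 3;               // assumed value for illustration
      var i = 1;

      while (i <= qty_of_gets) {
        (function(i) {                   // this parameter shadows the outer i
          promise = promise.then(function() {
            console.log("would fetch page", i);   // stub for $.get(...)
            return i;
          });
        }(i++));                         // pass the current value, then increment
      }
      // logs: would fetch page 1, 2, 3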

    Read the article

  • Is it bad practice to call a controller action from a view that was rendered by another controller?

    - by marco-fiset
    Let's say I have an OrderController which handles orders. The user adds products to it through the view, and then the final price gets calculated through an AJAX call to a controller action. The price calculation logic is implemented in a separate class and used in a controller action. What happens is that I have many views from different controllers that need to use that particular action. I'd like to have some kind of PriceController that I could call an action on, but then the view would have to know about that PriceController and call an action on it. Is it bad practice for a view to call an action on a different controller from the one that rendered it?
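
    For illustration, a minimal sketch of what the view-side call to a dedicated PriceController action might look like; the route, payload, and element ID are hypothetical, not taken from the question:

      // Hypothetical view-side call to a dedicated price endpoint.
      $.post("/price/calculate", { productIds: [1, 2, 3] })
        .done(function(response) {
          // Update the total shown in the view with the server-computed price.
          $("#total-price").text(response.total);
        });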

    Read the article

  • ViewController in programming

    - by Vishwas Gagrani
    ViewController is a term for classes that handle views in a framework, especially in MVC frameworks. I go through various projects, written by various programmers, who implement MVC in different ways, and I especially get confused about the relationship between the MainView (parent view) and some CustomView (widget, etc.) in the framework. I personally pass a reference to the MainView into the ViewController when it is instantiated. All the subviews of the ViewController are added to that MainView reference. Additionally, the ViewController itself is added as a child of the MainView. Like this: I want to know if this is the right way to relate them to each other.
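
    As a rough illustration only (the names and the plain-DOM setting are assumptions, since the question is framework-agnostic), the pattern described might look like this:

      // Hypothetical sketch: the controller receives the parent (main) view at
      // construction time and attaches its own subviews to it.
      function CustomViewController(mainView) {
        this.mainView = mainView;                  // reference passed in
        this.widget = document.createElement("div");
        this.mainView.appendChild(this.widget);    // subview added to the parent view
      }

      // Usage: the main view is created elsewhere and handed to the controller.
      var mainView = document.getElementById("main-view");
      var controller = new CustomViewController(mainView);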

    Read the article

  • Should a primary key be immutable?

    - by Vincent Malgrat
    A recent question on Stack Overflow provoked a discussion about the immutability of primary keys. I had thought of it as a kind of rule that primary keys should be immutable, and that if there is a chance that some day a primary key would be updated, you should use a surrogate key instead. However, it is not in the SQL standard, and the "cascade update" feature of some RDBMSs allows a primary key to change. So my question is: is it still a bad practice to have a primary key that may change? What are the cons, if any, of having a mutable primary key?

    Read the article

  • How to get started in coding for JBoss

    - by Mister IT Guru
    I have an idea for how to revamp our internal application, after having assessed the needs of the users, addressed their current issues, and the like. But I am not a coder. The last application I wrote was in college, in C (Java hadn't been invented yet, more or less!), and it was a booking system with the option to add on other modules, blah blah. I got an A, but I became a system administrator instead, more interested in designing and maintaining networks and infrastructure; with the advent of virtualisation and Linux management tools such as Puppet, I can now manage infrastructure in my sleep! Now I want to write code to put on my infrastructure, and I want to build... a booking system! This is just to get experience, but I am at a loss as to where to start. Setting up the environment will take me about a day. Writing the spec, even how I want it to work, I already know; but as for actually coding in a decent manner, I can only guess. If anyone can recommend a book, website, blog, or a Twitter person to follow, or just give advice on how to build a kick-butt basic JBoss app, then please: "I AM READY TO LEARN" :)

    Read the article

  • Test driven development - convince me!

    - by Casebash
    I know some people are massive proponents of test driven development. I have used unit tests in the past, but only to test operations that can be tested easily or which I believe will quite possibly be correct. Complete or near complete code coverage sounds like it would take a lot of time. What projects do you use test-driven development for? Do you only use it for projects above a certain size? Should I be using it or not? Convince me!
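
    For readers unfamiliar with the workflow, here is a minimal sketch of the test-first cycle its proponents advocate; the function, the file layout, and the use of Node's built-in assert module are illustrative assumptions, not part of the question:

      // Step 1: write the test first and watch it fail (add does not exist yet).
      var assert = require("assert");

      // Step 2: write just enough production code to make the test pass.
      function add(a, b) {
        return a + b;
      }

      // Step 3: run the test; once it passes, refactor with the test as a safety net.
      assert.strictEqual(add(2, 3), 5);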

    Read the article

  • Defining formulas through the user interface in a user form

    - by BriskLabs Pakistan
    I am a student developing a simple assignment, a Windows Forms application, in Visual Studio 2010. The application is supposed to construct formulas as per user requirements. The process: it has to pick data from columns of a Microsoft Access database, and the user should be able to pick the data by column name, like we do in a drop-down menu, and create reusable formulas from it (configure a formula once and be able to change it again). The following are example column titles from the database that can be picked:

      Col 1: Marks in Maths
      Col 2: Total Marks in Maths
      Col 3: Marks in Science
      Col 4: Total Marks in Science

    Finally, we should be able to construct any formula in the UI, like (Col 1 + Col 3) / (Col 2 + Col 4) = Formula 1. Once this formula is set, saved, and given a name by the user, he or she can use the formula, and the results shall appear in a window below. That is, the user would be able to calculate the desired figures by only manipulating the underlying data on the UI layer: choose the data for a period, apply the formula, and get the answer. Problem: it looks like I have to create an app where rules are set through the UI, which means no stored procedures are required in SQL. Please suggest the right approach.
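
    The question is about a Windows Forms application, but the core idea, storing a user-defined formula as data and evaluating it against a row of column values, is language-agnostic. Here is a minimal JavaScript sketch of that idea; the formula string, placeholder names, and sample values are assumptions for illustration:

      // A saved formula refers to columns by placeholder names (col1, col2, ...).
      var formulas = {
        "Maths vs Science ratio": "(col1 + col3) / (col2 + col4)"
      };

      // Substitute each placeholder with the row's value, then evaluate the expression.
      function evaluateFormula(formula, row) {
        var expression = formula.replace(/col(\d+)/g, function(match, n) {
          return row["col" + n];
        });
        return Function('"use strict"; return (' + expression + ');')();
      }

      // Example row: marks and totals for maths and science.
      var row = { col1: 80, col2: 100, col3: 60, col4: 100 };
      console.log(evaluateFormula(formulas["Maths vs Science ratio"], row)); // 0.7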

    Read the article

  • Catching typos or other errors in web-based scripting languages

    - by foreyez
    My background is mainly in strongly typed languages (Java, C++, C#). Having recently gotten back to a bit of JavaScript, I found it a bit annoying that if I misspell something by accident (for example, I type 'myvar' instead of 'myVar'), my entire script crashes. Most of the time the browser itself doesn't even tell me I have an error; my program will just be blank, etc. Then I have to hunt through my code line by line to find the error, which is very time-consuming. In the languages I am used to, the compiler lets me know if I made a typo. My question to you is: how do you overcome this issue in scripting (JavaScript)? Can you give me some tips? (This question is mainly aimed at people who have also come from a strongly typed language.) Note: I mainly use the terminal/Vim; this is mainly because I like the terminal, and I SSH a lot too.
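
    One common safeguard, shown here as a minimal sketch with illustrative variable names, is strict mode: it turns an accidental assignment to a misspelled, undeclared name into a runtime ReferenceError instead of silently creating a global. Static linters such as JSHint or ESLint catch the same class of typo before the code even runs.

      "use strict";

      var myVar = 1;

      function increment() {
        // Typo: "myvar" is undeclared, so strict mode throws a ReferenceError here
        // instead of silently creating a new global variable.
        myvar = myVar + 1;
      }

      increment();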

    Read the article

  • Is there such a thing as having too many private functions/methods?

    - by shovonr
    I understand the importance of well-documented code. But I also understand the importance of self-documenting code: the easier it is to visually read a particular function, the faster we can move on during software maintenance. With that said, I like to break big functions up into smaller ones, but I do so to the point where a class can have upwards of five of them just to serve one public method. Now multiply five private methods by five public ones, and you get around twenty-five hidden methods that are probably going to be called only once by those public ones. Sure, it's now easier to read those public methods, but I can't help but think that having too many functions is bad practice.

    Read the article

  • Can a candidate be judged by asking them to write a complex program on "paper"?

    - by iammilind
    Some time back, in an interview, I was asked to write the following program: on a mobile phone keypad there is a mapping between numbers and characters, e.g. 0 and 1 correspond to nothing; 2 corresponds to 'a', 'b', 'c'; 3 corresponds to 'd', 'e', 'f'; ...; 9 corresponds to 'w', 'x', 'y', 'z'. The user should input any number (e.g. 23, 389423, 927348923747293) and I should store all the combinations of these character mappings in some data structure. For example, if the user enters "23" then the possible character combinations are: ad, ae, af, bd, be, bf, cd, ce, cf; or if the user enters "4676972" it can be gmpmwpa, gmpmwpb, ..., hnroxrc, ..., iosozrc. The interviewer said that people have written code for this within 20-30 minutes! He also insisted that I write it on paper. If I am writing code, my tendency is to write it as if it were production code, even though that may not be expected of me, so I always try to think about all the aspects: optimization, readability, maintainability, extensibility, and so on. Considering all this, I felt that I should be writing on a PC and that it needed a good two hours. Finally, after 25 minutes, I was only able to come up with the concept and some scattered pieces of code (not to mention my rejection). My question is not about the answer to the above program. I want to know: is this the right way to judge a person's caliber? Am I wrong, or too slow, in my estimates? Am I too idealistic?
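
    For context only (this is a sketch of the exercise as described, not a claim about what the interviewer expected), the combination-building part can be expressed in a couple of dozen lines of JavaScript:

      // Phone-keypad mapping; 0 and 1 contribute nothing to the combinations.
      var keypad = {
        "0": [""], "1": [""],
        "2": ["a", "b", "c"], "3": ["d", "e", "f"],
        "4": ["g", "h", "i"], "5": ["j", "k", "l"],
        "6": ["m", "n", "o"], "7": ["p", "q", "r", "s"],
        "8": ["t", "u", "v"], "9": ["w", "x", "y", "z"]
      };

      function combinations(digits) {
        var results = [""];
        for (var i = 0; i < digits.length; i++) {
          var letters = keypad[digits[i]];
          var next = [];
          // Append each possible letter for this digit to every partial result.
          for (var j = 0; j < results.length; j++) {
            for (var k = 0; k < letters.length; k++) {
              next.push(results[j] + letters[k]);
            }
          }
          results = next;
        }
        return results;
      }

      console.log(combinations("23")); // ["ad","ae","af","bd","be","bf","cd","ce","cf"]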

    Read the article

  • Should all, none, or some overridden methods call Super?

    - by JoJo
    When designing a class, how do you decide when all overridden methods should call super, or when none of them should? Also, is it considered bad practice if your code logic requires a mixture of super-ed and non-super-ed methods, as in the JavaScript example below?

      ChildClass = new Class.create(ParentClass, {
        /** @Override */
        initialize: function($super) {
          $super();
          this.foo = 99;
        },
        /** @Override */
        methodOne: function($super) {
          $super();
          this.foo++;
        },
        /** @Override */
        methodTwo: function($super) {
          this.foo--;
        }
      });

    After delving into the iPhone and Android SDKs, I noticed that super must be called on every overridden method, or else the program will crash because something won't get initialized. When deriving from a template/delegate, none of the methods are super-ed (obviously). So what exactly are the "je ne sais quoi" qualities that determine whether all, none, or some overridden methods should call super?
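
    As a rough illustration (class, field, and method names are invented), here is a sketch in the same Class.create/$super style as the example above of why skipping the super call in an initializer tends to be fatal, while skipping it in an ordinary override is just a behavioral choice:

      var ParentClass = Class.create({
        initialize: function() {
          this.items = [];            // parent state that later methods rely on
        },
        count: function() {
          return this.items.length;   // breaks if initialize never ran
        }
      });

      var ChildClass = Class.create(ParentClass, {
        initialize: function($super) {
          $super();                   // omit this and this.items stays undefined
          this.foo = 99;
        },
        count: function($super) {
          // Delegating here is optional: the child could also replace the
          // behavior entirely instead of extending it.
          return $super() + 1;
        }
      });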

    Read the article

  • Do I have to write a lot of boilerplate code if I keep working in Java?

    - by edem
    I'm working for a company writing ERP applications. My problem is that I have to write tons of boilerplate code. I came up with ideas to automate or prevent the drudgery, but only some of them were accepted. I have been told by the lead developer that my ideas tend to go far afield and that I should write code everyone can understand. I had a discussion about this lately, and it seems to me that this kind of code volume is within Java's philosophy: I have to write lots of code to achieve simple things, not because it is necessary but because this is the way most of the people at the company think. Does this apply to most of the companies out there using Java, or is this just my company's view? Do I have to get used to the drudgery if I keep working for Java-based firms?

    Read the article

  • Can too much abstraction be bad?

    - by m3th0dman
    As programmers, I feel our goal is to provide good abstractions over the given domain model and business logic. But where should this abstraction stop? How do you make the trade-off between abstraction, with all its benefits (flexibility, ease of change, etc.), and ease of understanding the code, with all its benefits? I believe I tend to write overly abstracted code, and I don't know how good that is; I often tend to write it like it is some kind of micro-framework, which consists of two parts:

    1. Micro-modules which are hooked up in the micro-framework: these modules are easy to understand, develop, and maintain as single units. This code basically represents the code that actually does the functional stuff described in the requirements.

    2. Connecting code; and here, I believe, stands the problem. This code tends to be complicated because it is sometimes very abstracted and is hard to understand at first; this arises from the fact that it is pure abstraction, with the real work and business logic performed in the code described in 1. For this reason, this code is not expected to change once tested.

    Is this a good approach to programming? That is, having changing code very fragmented into many modules and very easy to understand, and non-changing code very complex from the abstraction point of view? Should all the code be uniformly complex (that is, code 1 more complex and interlinked, and code 2 simpler), so that anybody looking through it can understand it in a reasonable amount of time, but change is expensive; or is the solution presented above good, where the "changing code" is very easy to understand, debug, and change, and the "linking code" is kind of difficult? Note: this is not about code readability! The code in both 1 and 2 is readable, but the code in 2 comes with more complex abstractions, while the code in 1 comes with simple abstractions.

    Read the article

  • How do you keep track of the authors of code?

    - by dustyprogrammer
    This is something I was never taught. I have seen a lot of different authoring styles. I code primarily in Java and Python. I was wondering if there is a standard authoring style, or if everything is freestyle. Also, if you answer, would you mind attaching the style you use to author the files you create at home or at work? I usually just go:

      @author garbagecollector
      @company garbage inc.

    Read the article

  • Getting graduates up to speed?

    - by Simon
    This question got me thinking about how companies deal with newly hired graduates. Do experienced programmers expect CS graduates to write clean code (by clean I mean code easily understandable by others; maybe that is too much to expect)? Or do a significant portion of graduates at your place (if any) just end up testing and fixing small bugs in existing applications? And, even if they do bug fixes, do you end up spending double the amount of time just checking that they did not break anything and create new bugs? How do you deal with such scenarios when pair programming and code reviews are not available options (for reasons such as personal deadlines), and what techniques have you found to get fresh graduates up to speed? Some suggestions would be great.

    Read the article

  • Sucking Less Every Year?

    - by AdityaGameProgrammer
    Sucking Less Every Year is a train of thought that has been on my mind for a while. Quoting directly from the post: "I've often thought that sucking less every year is how humble programmers improve. You should be unhappy with code you wrote a year ago. If you aren't, that means either A) you haven't learned anything in a year, B) your code can't be improved, or C) you never revisit old code. All of these are the kiss of death for software developers." How often does this happen, or not happen, to you? How long is it before you see an actual improvement in your coding: a month, a year? Do you ever revisit your old code? How often does your old code plague you, or how often do you have to deal with your technical debt? It is definitely very painful to fix old bugs and dirty code that we may have written to quickly meet a deadline; in some cases those quick fixes mean we have to rewrite most of the application or code. No arguments about that. Some of the developers I have come across argued that they were already at the evolved stage where their coding doesn't need improvement, or can't be improved anymore. Does this happen? If so, how many years of coding in a particular language should one expect before it does?

    Read the article

  • Is using JavaScript/jQuery for layout and style bad practice?

    - by Renesis
    Many, but not all, HTML layout problems can be solved with CSS alone. For those that can't, jQuery (on document load) has become very popular.* As a result of its ease, many developers are quick to use jQuery or JavaScript for layout and style, even without understanding whether or not the problem can be solved with CSS alone. This is illustrated by responses to questions like this one. Is this bad practice? What are the arguments for and against? Should someone who sees this in practice attempt to persuade those developers otherwise? If so, what are the best responses to arguments in favor of jQuery saying it's "so easy"? * Example: layouts that want a vertical layout flow of some kind often run into dead ends with CSS alone; this would include layouts similar to Pinterest, though I'm not sure that's actually impossible with CSS.
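
    A minimal sketch of the pattern being debated (the selectors and the equal-height use case are invented for illustration): equalizing two column heights with jQuery on load, something a CSS-only layout (for example, flexbox on the parent container) handles declaratively without any script:

      // Script-driven layout: measure both columns after the DOM loads and
      // force them to the same height.
      $(function() {
        var left = $("#sidebar");
        var right = $("#content");
        var tallest = Math.max(left.height(), right.height());
        left.height(tallest);
        right.height(tallest);
      });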

    Read the article

  • Reflective practice in programming using keystroke playback

    - by Graham
    I'm thinking of applying Reflective Practice to improving my programming skills. To that end, I want to be able to watch myself writing code. In general, what is a good method for applying Reflective Practice to the craft of programming? In particular, if it's a good idea, is there an editor that records keystrokes then plays them back at a later time - possibly running the keys together without delays, or replaying at a 2x/4x/8x accelerated rate? Screencasting with RecordMyDesktop is an option, but has downsides of waiting for encoding and ending up with a big video file instead of a list of keystrokes.

    Read the article

  • Quick Tip - Speed a Slow Restore from the Transaction Log

    - by KKline
    Here's a quick tip for you: During some restore operations on Microsoft SQL Server, the transaction log redo step might be taking an unusually long time. Depending somewhat on the version and edition of SQL Server you've installed, you may be able to increase performance by tinkering with the readahead performance for the redo operations. To do this, you should use the MAXTRANSFERSIZE parameter of the RESTORE statement. For example, if you set MAXTRANSFERSIZE=1048576, it'll use 1MB buffers. If you...(read more)

    Read the article

  • Learning by doing (and programming by trial and error)

    - by AlexBottoni
    How do you learn a new platform/toolkit while producing working code and keeping your codebase clean? When I know what I can do with the underlying platform and toolkit, I usually do this:

    1. I create a new branch (with Git, in my case)
    2. I write a few unit tests (with JUnit, for example)
    3. I write my code until it passes my tests

    So far, so good. The problem is that very often I do not know what I can do with the toolkit because it is brand new to me. I work as a consultant, so I cannot have my preferred language/platform/toolkit; I have to cope with whatever the customer uses for the task at hand. Most often, I have to deal (often in a hurry) with a large toolkit that I know very little about, so I'm forced to "learn by doing" (actually, programming by trial and error), and this makes me anxious. Please note that, at some point in the learning process, I usually have already:

    - read one or more five-star books
    - followed one or more web tutorials (writing working code a line at a time)
    - created a couple of small experimental projects with my IDE (IntelliJ IDEA at the moment; I use Eclipse, NetBeans, and others as well)

    Despite all my efforts, at this point I usually have just a coarse understanding of the platform/toolkit I have to use. I cannot yet grasp each and every detail. This means that each and every new feature that involves some data preparation and a non-trivial algorithm is a pain to implement and requires a lot of trial and error. Unfortunately, working by trial and error is neither safe nor easy. Actually, this is the phase that makes me most anxious: experimenting with a new toolkit while producing working code and keeping my codebase clean. Usually, at this stage I cannot use the Eclipse scrapbook because the code I have to write is already too large and complex for this small tool. In the same way, I can no longer use an independent small project for my experiments because I need to try the new code in place. I can just write my code in place and rely on Git for a safe bail-out. This makes me anxious because this kind of intertwined, half-ripe code can rapidly become incredibly hard to manage. How do you face this phase of the development process? How do you learn by doing without making a mess of your codebase? Any tips and tricks, best practices, or anything like that?

    Read the article

  • What do you think about RefactoringManifesto.org?

    - by Gan
    Quite some time ago, on December 19, 2010, a site called RefactoringManifesto.org was launched. The site voices concerns about refactoring. It lists ten main points, shown below (head over to the website to see more details):

    1. Make your products live longer!
    2. Design should be simple so that it is easy to refactor.
    3. Refactoring is not rewriting.
    4. What doesn't kill it makes it stronger.
    5. Refactoring is a creative challenge.
    6. Refactoring survives fashion.
    7. To refactor is to discover.
    8. Refactoring is about independence.
    9. You can refactor anything, even total crap.
    10. Refactor – even in bad times!

    What do you think about this? Would you sign the manifesto? If not, why not?

    Read the article

  • Should Developers Perform All Tasks or Should They Specialize?

    - by Bob Horn
    Disclaimer: the intent of this question isn't to discern what is better for the individual developer, but for the system as a whole. I've worked in environments where small teams managed certain areas. For example, there would be a small team for each of these functions: UI, framework code, business/application logic, and database. I've also worked on teams where the developers were responsible for all of these areas and more (QA, analyst, etc.). My current environment promotes agile development (specifically Scrum) and everyone has their hands in every area mentioned above. While there are pros and cons to each approach, I'd be curious to know if there are more pros and cons than I list below, and also what the general feeling is about which approach is better.

    Devs Do It All
    Pros:
    1. Developers may be more well-rounded
    2. Developers know more of the system
    Cons:
    1. Everyone has their hands in all areas, increasing the probability of creating less-than-optimal results in any one of them
    2. It can take longer to do something with which you are unfamiliar (jack of all trades, master of none)

    Devs Specialize
    Pros:
    1. Developers can create policies and procedures for their area of expertise and more easily enforce them
    2. Developers have more of a chance to become deeply knowledgeable about their specific area and make it the best it can be
    3. Other developers don't cross boundaries and degrade another area
    Cons:
    1. As one colleague put it: "Why would you want to pigeon-hole yourself like that?" (Meaning some developers won't get a chance to work in certain areas.)

    It's easy to say how wonderful agile is and that we should all do everything, but I'm somewhat of a fan of having areas of expertise. Without that expertise, I've seen code degrade, database schemas become difficult to manage, UI code get hacked together, etc. Let's face it: some people make careers out of doing just UI work, or just database work. It's not that easy to just fill in and do as good a job as an expert in that area.

    Read the article

  • Harmful temptations in programming

    - by gaearon
    Just curious: what kinds of temptations in programming turned out to be really harmful in your projects? Like when you really feel the urge to do something and you believe it's going to benefit the project, or else you just trick yourself into believing it will, and after a week you realize you haven't solved any real problems but instead created new ones or, in the best case, pleased your inner beast with no visible impact. Personally, I find it very hard not to refactor bad code. I work with a lot of bad legacy code, and it takes some deep breaths not to touch it when I have no tests to prove my refactoring doesn't break anything. Another demon for me is user interfaces: I can literally spend hours changing UI layout just because I enjoy doing it. Sometimes I tell myself I'm working on usability, but the truth is I just love moving buttons around. What are your programming demons, and how do you avoid them?

    Read the article
