Search Results

Search found 4568 results on 183 pages for 'programmer efficiency'.

Page 8 of 183

  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspx

    SQL offers many different methods to produce the same results. There is a never-ending debate between SQL developers as to the "best way" or the "most efficient way" to render a result set. Sometimes these disputes even come to blows... well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient. For the queries below, I downloaded the test database from SQLSkills: http://www.sqlskills.com/sql-server-resources/sql-server-demos/. There isn't a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554. Our result set contains 6,706 records.

    The following queries produce an identical result set: aggregate payment information from the dbo.payment table for each member who has made more than one payment, plus the first and last name of the member from the dbo.member table.

    /*************/
    /* Sub Query */
    /*************/
    SELECT  a.[Member Number] ,
            m.lastname ,
            m.firstname ,
            a.[Number Of Payments] ,
            a.[Average Payment] ,
            a.[Total Paid]
    FROM    ( SELECT   member_no 'Member Number' ,
                       AVG(payment_amt) 'Average Payment' ,
                       SUM(payment_amt) 'Total Paid' ,
                       COUNT(Payment_No) 'Number Of Payments'
              FROM     dbo.payment
              GROUP BY member_no
              HAVING   COUNT(Payment_No) > 1
            ) a
            JOIN dbo.member m ON a.[Member Number] = m.member_no

    /***************/
    /* Cross Apply */
    /***************/
    SELECT  ca.[Member Number] ,
            m.lastname ,
            m.firstname ,
            ca.[Number Of Payments] ,
            ca.[Average Payment] ,
            ca.[Total Paid]
    FROM    dbo.member m
            CROSS APPLY ( SELECT   member_no 'Member Number' ,
                                   AVG(payment_amt) 'Average Payment' ,
                                   SUM(payment_amt) 'Total Paid' ,
                                   COUNT(Payment_No) 'Number Of Payments'
                          FROM     dbo.payment
                          WHERE    member_no = m.member_no
                          GROUP BY member_no
                          HAVING   COUNT(Payment_No) > 1
                        ) ca

    /********/
    /* CTEs */
    /********/
    ;
    WITH    Payments
              AS ( SELECT   member_no 'Member Number' ,
                            AVG(payment_amt) 'Average Payment' ,
                            SUM(payment_amt) 'Total Paid' ,
                            COUNT(Payment_No) 'Number Of Payments'
                   FROM     dbo.payment
                   GROUP BY member_no
                   HAVING   COUNT(Payment_No) > 1
                 ),
            MemberInfo
              AS ( SELECT   p.[Member Number] ,
                            m.lastname ,
                            m.firstname ,
                            p.[Number Of Payments] ,
                            p.[Average Payment] ,
                            p.[Total Paid]
                   FROM     dbo.member m
                            JOIN Payments p ON m.member_no = p.[Member Number]
                 )
        SELECT  *
        FROM    MemberInfo

    /************************/
    /* SELECT with Grouping */
    /************************/
    SELECT  p.member_no 'Member Number' ,
            m.lastname ,
            m.firstname ,
            COUNT(Payment_No) 'Number Of Payments' ,
            AVG(payment_amt) 'Average Payment' ,
            SUM(payment_amt) 'Total Paid'
    FROM    dbo.payment p
            JOIN dbo.member m ON m.member_no = p.member_no
    GROUP BY p.member_no ,
            m.lastname ,
            m.firstname
    HAVING  COUNT(Payment_No) > 1

    We can see what is going on in SQL's brain by looking at the execution plan. The execution plan shows which steps SQL executes and in what order, and what percentage of batch time each query takes. So, if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them, and how it renders the execution plan. We can settle this once and for all. Here is what SQL did with these queries: [execution plan screenshot]

    Not only did the queries take the same amount of time to execute, SQL generated the same execution plan for each of them. Everybody is right... I guess we can all finally go to lunch together! But wait a second, I may not be a fighter, but I AM an instigator. Let's see how a table variable stacks up. Here is the code I executed:

    /********************/
    /*  Table Variable  */
    /********************/
    DECLARE @AggregateTable TABLE
        (
          member_no INT ,
          AveragePayment MONEY ,
          TotalPaid MONEY ,
          NumberOfPayments INT  -- COUNT() returns an integer; the original listing declared this column as MONEY
        )
    INSERT  @AggregateTable
            SELECT  member_no 'Member Number' ,
                    AVG(payment_amt) 'Average Payment' ,
                    SUM(payment_amt) 'Total Paid' ,
                    COUNT(Payment_No) 'Number Of Payments'
            FROM    dbo.payment
            GROUP BY member_no
            HAVING  COUNT(Payment_No) > 1

    SELECT  at.member_no 'Member Number' ,
            m.lastname ,
            m.firstname ,
            at.NumberOfPayments 'Number Of Payments' ,
            at.AveragePayment 'Average Payment' ,
            at.TotalPaid 'Total Paid'
    FROM    @AggregateTable at
            JOIN dbo.member m ON m.member_no = at.member_no

    In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query. Here's what I got: [execution plan screenshot]

    Since we first insert into the table variable, then we read from it, the execution plan renders 2 steps. BUT, the combination of the 2 steps is only 22% of the batch. It is actually faster than the other methods, even though it is treated as 2 separate queries in the execution plan. The argument I often hear against table variables is that SQL only estimates 1 row for the table size in the execution plan. While this is true, the estimate does not come into play until you read from the table variable. In this case, the table variable had 6,706 rows, but it still outperformed the other queries. People argue that table variables should only be used for hash or lookup tables. The fact is, you have control of what you put IN to the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries. If anyone does volume testing on this theory, I would be interested in the results. My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is.

    Coding SQL is a matter of style. If you've been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate. If you have a company standard, I strongly recommend you follow it. If you do not have a standard, then, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule. Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment. Download the database and try it!

    Read the article

  • Is case after case in a switch efficient?

    - by RandomGuy
    Just a random question regarding switch statement efficiency when one case falls through to the next: is the following code (assume pseudocode):

    function bool isValid(String myString) {
        switch (myString) {
            case "stringA":
            case "stringB":
            case "stringC":
                return true;
            default:
                return false;
        }
    }

    more efficient than this:

    function bool isValid(String myString) {
        switch (myString) {
            case "stringA":
                return true;
            case "stringB":
                return true;
            case "stringC":
                return true;
            default:
                return false;
        }
    }

    Or is the performance equal? I'm not thinking of a specific language, but if one is needed, let's assume it's Java or C (in C's case, chars would have to be used instead of strings).

    Read the article

  • Looking for suggestions: becoming a hireable, young programmer [closed]

    - by Dan
    I am a 17-year-old Java programmer who has filled the last year with learning all of the ins and outs of Java. Using Eclipse, and with the help of a friend of the family (a Java programming architect for some company), I have learned everything from serializing objects, basic networking, generics, reflection, multi-threading, code optimization and efficiency, and some concurrency safety. I built my own proxy class, and nowadays I answer questions on Project Euler. I am seeking some suggestions, though, on where I go next, or where I go from here to get a job in programming. I dedicate at least an hour every day to coding, sometimes literally the entire day, and I really have come to love the process. I just started reading Effective Java (2nd edition) and learning Scala (which, as I often see claimed, is possibly the Java replacement). I will be going to college for computer science next year and am taking AP Computer Science this year (however, I took a practice exam and got an 87, and only need a 60 to 70 to pass, so there's no need to study for it too much). I was wondering if getting the SE 7 OCA and OCP would help me in trying to get a programming job. I looked around, and most people have said online that the OCA/OCP are practically useless, but at my age do they make me any more credible? More or less, what would you recommend to get a job in programming these days, or to distinguish yourself from the crowd? I have enough time and dedication to learn another language, or anything really. Thank you very much.

    Read the article

  • Should a Python programmer learn Ruby?

    - by C J
    Hi! I have been a Python programmer for around 1.5 years (one internship + side projects), so I am comfortable with the language. But everyone is talking about Ruby these days, and I mean seriously! No one seems to bother about Python (from what I've seen). See GitHub: all RoR. I apply for a job and they ask me about RoR. The screencasts on peepcode.com are in Ruby. gitimmersion.com has its entire tutorial in Ruby! I know this is pretty vague, but still... why Ruby? Everyone these days is obsessed with RoR! Why not Python? Anyway, my questions are: Should I learn Ruby? Will learning Ruby, when I already know Python, be, er, complicated for me? Or is it going to be just like learning any other language? Thanks!

    Read the article

  • Programmer friendly non-voxel art styles?

    - by Overv
    Like many other programmers I've always wanted to make a game, but I simply lack the skills to do any production-quality graphics. I am, however, sure that I want to do the models and textures myself, because I need a lot of different objects and I am sure I wouldn't be able to find good matching models on 3D sites. That means I'll have to pick an art style that is "simple", i.e. programmer friendly. An extreme example of this is of course Minecraft, but I don't want to go that basic; I'm absolutely against creating a voxel game. What kind of art styles are out there that are relatively simple, i.e. things made out of basic shapes and textures, but are still good enough to form a believable and detailed world? An example of what I mean is Wind Waker: the objects are formed from relatively simple shapes, but still provide enough detail to create a nice, living world. The environment my game is set in is a city. What I'm really asking for here are good examples of "simple" art styles applied in practice, so I can choose one that fits my skills.

    Read the article

  • How to explain pointers to a Java/VB programmer

    - by Skeith
    I am writing a game, and my friend has offered to help me, as it is an RPG and the "scripting" bit of the game will take a long time to do. The problem is, IMO, he's not that good a programmer :( (add flame war here). He has only programmed in Java and VB and keeps saying really stupid things to me like "Why don't you drag and drop an onClick event?" to design my UI, when I'm using DirectX. I tried explaining pointers to him, but his response was: if it's just a variable that holds a memory address, why don't you just use an int? I create an instance of an attack class and give each creature a pointer to it, so if several creatures use the same attack there is only one instance of it. He keeps asking why I don't just put if statements in the creature class for every attack class and set true for the ones that are there. He has programmed mainly in VB and a little in Java, just to learn OOP. How can I explain advanced C++ concepts like pointers and memory management to him? He just doesn't understand that there are no super functions like form.show in C++.

    Read the article

  • What tools should every programmer know?

    - by acidzombie24
    What are some tools every programmer should know about? Some examples I thought of were: source control (no explanation needed), and a profiler (many could go without one, but it's good to know how to use one when the occasion arises). What else? I was thinking of bug-tracking software, but I haven't used one, so I wasn't sure. Should programmers know how to use Trac? I remember a person telling me in the past that if I was making a shared library I should know (some name) which generates docs from the source (in that case C++). What was that called, and what else should I know about? Edit: what about team management software? Any software you could not live without in a specific project would be a good mention. I'll also mention that I use VMs frequently during the prototype or end phase to see if there are any issues on a clean XP or Linux distro, and whether I forgot anything in my redistribution. I can't imagine the end/release and testing phase of a project without a VM.

    Read the article

  • Does an inexperienced programmer need an IDE?

    - by Torben Gundtofte-Bruun
    Reading this other question makes me wonder if I (as an absolute beginner PHP programmer) should stick with WAMP and Notepad++ or to switch to some IDE like Eclipse. It's understandable that skilled developers will benefit from a big shiny IDE. But why should an absolute beginner use an IDE? Do the benefits outweigh the extra challenge of learning the IDE on top of learning to develop? Update for clarification: My goal is to get some basic programming experience. By choosing PHP and WAMP (and FogBugz and Kiln) I hope to avoid having to navigate the tricky / messy OS specifics and compiling etc. and just focus on basic functionality like an online user registration form. I've got lots of theoretical understanding from university a decade ago but no practical experience. I want to remedy that with a hobby project that would be similar to a real-world sellable web app. There are so many questions to ask. So many pitfalls I probably have to blunder into. This question is just one piece (my first!) of that puzzle.

    Read the article

  • Learning OO for a C Programmer

    - by Holysmoke
    I've been programming professionally in C, and only C, for around 10 years in a variety of roles. As would be expected, I understand the idioms of the language fairly well, and beyond that some of the design nuances: which APIs to make public, who calls what, who does what, what is supposed to be reentrant, and so on. I grew up reading 'Writing Solid Code' in its early C edition, not the one based on C++. However, I've never programmed in an OO language. Now I want to migrate to writing applications for iPhone (maybe Android), so I want to learn to use Objective-C with a degree of competence fitting a professional programmer. How do I wrap my head around the OO stuff? What would be your smallest reading-list suggestion for me? Is there a book that carries some sort of relatively real-world example of OO design in Objective-C? Besides the reading, what source code would you recommend I go through? In short: how do I learn the OO paradigm using Objective-C?

    Read the article

  • How to tackle an experienced C# Programmer?

    - by nandu.com
    I am a noob in C# and ASP.NET development. I have spent 6 months in design and another 6 in SQL and ASP.NET programming, so I just know the basics of ASP.NET and C#. I was programming as per the instructions of my tech leads, and all the good things changed in a day. :( All my tech leads (2+ years experienced) left the company complaining about salary, and in their place the company has recruited a 5+ years experienced programmer cum tech lead (who is very strict), and he expects me to code anything he says. My previous seniors would say 'use ajax for this, use a query for this instead of coding' and so on, and I would do exactly that. I am not experienced enough to work it out myself. Now I am in a dilemma: I want to stay in the company and learn some more, but this new tech lead expects me to learn everything myself (he is telling me to learn jQuery, JavaScript menus, sessions and charts in .NET, and so on, and to do things myself without asking him anything... I mean anything) :(((( Please suggest some good tips for handling him. I think all programmers worldwide have faced a similar problem at least once in their programming life. So please.. help .. 911

    Read the article

  • How does a single programmer make a game?

    - by Mike
    I have always been a software developer, but lately I've been wanting to get into games. The only thing stopping me is the fact that I'm a programmer, not an artist. I've made some simple stuff (Tetris, 2D chess, things like that), but I can't do much art, and that's really what holds me back. Now the problem is, I've yet to go to college, so most commercial projects wouldn't accept me even to work for free and learn a bit, especially with my lack of experience in games. And any indie projects I've looked into really have an issue with responding to people who are interested, or with actually completing (or starting, really; most don't get past the ideas on paper) the project they want to do. I've looked around locally for artists, anyone who can do modeling, textures or animating, or even anyone with the ability to make some more advanced 2D assets for something like a side-scrolling RPG, but I haven't been able to find anyone. So how do you guys do it? Do I really just have to wait until I can go to college to see if I like working with games, or is there some way I can get art (for free; anything I do is just going to be for fun, so I don't want to have to sink money into it) and just start messing around on my own? Or am I just having bad luck and not looking in the right places for other people interested in having me help? I'm not looking for anything in particular, just something to fill some time with and see if I like making games. If not, well, I'll go back to my software projects. I just have one more year of high school, and I'd like to try a few different areas before I go to college.

    Read the article

  • Shoring up deficiencies in a "home grown" programmer?

    - by JohnP
    I started out by teaching myself BASIC on a VIC-20, and in college (mid '80s) I had Fortran, Pascal, limited C, machine code and assembler (with a smattering of COBOL). I didn't touch programming from approximately 1989 to 1999. At that point, I was lucky enough to get hired as a Clipper programmer. It took me about 6 months to learn most of it, and by now (13 years) I'm pretty expert in it. I have also picked up ColdFusion, some C#, some ASP, SQL, etc. I know programming structures, but in most languages I'm missing the esoterics, and I know my code could be much tighter. The problem is that I've learned only what I needed to get the job done. This results in a lot of gaps in practical knowledge. I am also missing out on a TON of theory. Things like SRP, refactoring, etc. are alien terms (although I grok the intent after a short read). In addition, I am now in the position of teaching junior programmers the company and our software, and I don't want to pass on the knowledge gaps. I know this is somewhat of a subjective question and may be closed, but how do you go back and pick up what you've missed?

    Read the article

  • Need Directions to become a programmer [closed]

    - by Omin
    Before you guys go on about how there are many types of programmers, please read through the post. Long-term goal: develop my own software (company). Short-term goal: get a job that involves coding/programming. Current status: support analyst (at a software company, but the role does not involve any programming) with a 40k salary, 3rd-year computer engineering student. I had everything figured out: I'm going to develop a 2D scrolling game for iPhone or Android, publish the app, sell a bunch, and then apply at a studio as a software developer. And then something hit me. I think I need to get a job that involves programming, to learn as much as I can in the shortest time possible. So I got a phone interview at a fast-growing startup software company and passed that no problem, but then had to take an online technical assessment. That failed miserably. I thought that if I could just present myself, show that I am a hard-working, positive-attitude, eager-to-make-self-improvements type of guy, I could get the job. I was wrong. And now I am lost. I'm thinking of staying with my job until I find a new one as a programmer. I will be working, self-studying, and trying to make this happen without finishing university. I forgot to mention that the online technical assessment was based on data structures/algorithms, OO design, and runtime complexity. I was hoping that I could get some guidance: should I be focusing on app development or on studying computer science fundamentals? I have a list of books I can be going through: Learning C# (O'Reilly) (I got interested in C# because of Unity3D and Mono), C# 5.0 in a Nutshell, Head First Design Patterns, Code Complete, Introduction to Algorithms, Programming Interviews Exposed, Cracking the Coding Interview, The Google Resume.

    Read the article

  • How to move from Programmer to Project Lead

    - by DoctaStooge
    At my job I'm currently a programmer, but in the next few weeks I'll be taking control of my own project. I was wondering if anyone else here has been in the same situation, and if so, what advice you can offer to help me run my project better. Experience in dealing with contractors would be greatly appreciated. A little more info: the project will have 3 people including myself, with extra people coming in when testing is needed, and it has been programmed mainly by 2 people. I would like to contribute to the programming, as I like doing it and think I can add to the program, but I am afraid of how the contractors will react. I don't want to create bad feelings which may harm the project. EDIT: Forgot to mention that I'll have to be picking up communications with customers to make sure their needs are met. Any advice on talking to customers cold would be greatly appreciated. EDIT 2: This is not a new project; I'm picking it up around version 6. Sorry that I didn't make that clear before.

    Read the article

  • How employable am I as a programmer?

    - by dsimcha
    I'm currently a Ph.D. student in Biomedical Engineering with a concentration in computational biology, and I am starting to think about what I want to do after graduate school. I feel like I've accumulated a lot of programming skills while in grad school, but have taken a very non-traditional path to learning all this stuff. I'm wondering whether I would have an easy time getting hired as a programmer and could fall back on that if I can't find a good job directly in my field, and if so, whether I would qualify for a more prestigious position than "code monkey".

    Things I have going for me:

    - Approximately 4 years of experience programming as part of my research. I believe I have a solid enough grasp of the fundamentals that I could pick up new languages and technologies pretty fast, and could demonstrate this in an interview.
    - Good math and statistics skills.
    - An extensive portfolio of open source work (and the knowledge that working on these projects implies): I wrote a statistics library in D, mostly from scratch. I wrote a parallelism library (parallel map, reduce, foreach, task parallelism, pipelining, etc.) that is currently in review for adoption by the D standard library. I wrote a 2D plotting library for D against the GTK Cairo backend; I currently use it for most of the figures I make for my research. I've contributed several major performance optimizations to the D garbage collector (most of these were low-hanging fruit, but it still shows my knowledge of low-level issues like memory management, pointers and bit twiddling). I've contributed lots of miscellaneous bug fixes to the D standard library and could show the change logs to prove it (this demonstrates my ability to read other people's code).

    Things I have going against me:

    - Most of my programming experience is in D and Python. I have very little to virtually no experience in the more established, "enterprise-y" languages like Java, C# and C++, though I have learned a decent amount about these languages from small, one-off projects and discussions about language design in the D community.
    - In general I have absolutely no knowledge of "enterprise-y" technologies. I've never used a framework before, possibly because most reusable code for scientific work and for D tends to call itself a "library" instead.
    - I have virtually no formal computer science/software engineering training. Almost all of my knowledge comes from talking to programming geek friends, reading blogs, forums, StackOverflow, etc.
    - I have zero professional experience with the official title of "developer", "software engineer", or something similar.

    Read the article

  • I don't know C. And why should I learn it?

    - by Stephen
    My first programming language was PHP (gasp). After that I started working with JavaScript. I've recently done work in C#. I've never once looked at low- or mid-level languages like C. The general consensus in the programming-community-at-large is that "a programmer who hasn't learned something like C, frankly, just can't handle programming concepts like pointers, data types, passing values by reference, etc." I do not agree. I argue that: (1) because high-level languages are easily accessible, more "non-programmers" dive in and make a mess, and (2) in order to really get anything done in a high-level language, one needs to understand the same similar concepts that most proponents of "learn low-level first" evangelize about. Some people need to know C; those people have jobs that require them to write low- to mid-level code. I'm sure C is awesome. I'm sure there are a few bad programmers who know C. My question is, why the bias? As a good, honest, hungry programmer, if I had to learn C (for some unforeseen reason), I would learn C. Considering the multitude of languages out there, shouldn't good programmers focus on learning what advances us? Shouldn't we learn what interests us? Should we not utilize our finite time moving forward? Why do some programmers disagree with this? I believe that striving for excellence in what you do is the fundamental deterministic trait that separates good programmers from bad ones. Does anyone have any real-world examples of how something written in a high-level language, say Java, Pascal, PHP, or JavaScript, truly benefited from prior knowledge of C? Examples would be most appreciated.

    Read the article

  • Why do marketing employees, product managers, etc. get their own offices, while programmers are jammed into a room as tightly as possible?

    - by TheImirOfGroofunkistan
    I don't understand why many (many) companies treat software developers like they are assembly-line workers making widgets. Joel Spolsky has a great example of the problems this creates:

    "With programmers, it's especially hard. Productivity depends on being able to juggle a lot of little details in short-term memory all at once. Any kind of interruption can cause these details to come crashing down. When you resume work, you can't remember any of the details (like local variable names you were using, or where you were up to in implementing that search algorithm) and you have to keep looking these things up, which slows you down a lot until you get back up to speed. Here's the simple algebra. Let's say (as the evidence seems to suggest) that if we interrupt a programmer, even for a minute, we're really blowing away 15 minutes of productivity. For this example, let's put two programmers, Jeff and Mutt, in open cubicles next to each other in a standard Dilbert veal-fattening farm. Mutt can't remember the name of the Unicode version of the strcpy function. He could look it up, which takes 30 seconds, or he could ask Jeff, which takes 15 seconds. Since he's sitting right next to Jeff, he asks Jeff. Jeff gets distracted and loses 15 minutes of productivity (to save Mutt 15 seconds). Now let's move them into separate offices with walls and doors. Now when Mutt can't remember the name of that function, he could look it up, which still takes 30 seconds, or he could ask Jeff, which now takes 45 seconds and involves standing up (not an easy task given the average physical fitness of programmers!). So he looks it up. So now Mutt loses 30 seconds of productivity, but we save 15 minutes for Jeff. Ahhh!"

    (More Spolsky on offices.) Why don't managers and owners see this?

    Read the article

  • Is Python Interpreted or Compiled?

    - by crodjer
    This is just something I wondered about while reading about interpreted and compiled languages. Ruby is no doubt an interpreted language, since source code is processed by an interpreter at the point of execution. On the contrary, C is a compiled language, as one has to compile the source code first according to the machine and then execute it. This results in much faster execution. Now coming to Python: a Python file (somefile.py), when imported, creates a file (somefile.pyc) in the same directory. Let us say the import is done in a Python shell or Django module. After the import, I change the code a bit and execute the imported functions again, only to find that it is still running the old code. This suggests that *.pyc files are compiled Python files, similar to the executable created after compilation of a C file, though I can't execute a *.pyc file directly. When the Python file (somefile.py) is executed directly (./somefile.py or python somefile.py), no .pyc file is created and the code is executed as-is, indicating interpreted behavior. These observations suggest that Python code is compiled every time it is imported in a new process, creating a .pyc, while it is interpreted when directly executed. So which type of language should I consider it as, interpreted or compiled? And how does its efficiency compare to interpreted and compiled languages? According to wiki's Interpreted Languages page, it is listed as a language compiled to virtual machine code. What is meant by that?

    Update: Looking at the answers, it seems that there cannot be a perfect answer to my questions. Languages are not only interpreted or only compiled; there is a spectrum of possibilities between interpreting and compiling. From the answers by aufather, mipadi, Lenny222, ykombinator, the comments and wiki, I found out that in Python's major implementations, code is compiled to bytecode, which is a highly compressed and optimized representation and is machine code for a virtual machine, which is implemented not in hardware but in the bytecode interpreter. Also, languages themselves are not interpreted or compiled; rather, language implementations either interpret or compile code. I also found out about just-in-time compilation. As far as execution speed is concerned, the various benchmarks cannot be perfect and depend on context and the task being performed. Please tell me if I am wrong in my interpretations.
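    You can watch the compile-to-bytecode step yourself with the standard library. A minimal sketch (the double function is purely illustrative, and note that modern Python 3 writes the cached .pyc into a __pycache__ directory rather than next to the source):

    import dis
    import py_compile

    def double(x):
        return x * 2

    # Print the virtual-machine instructions CPython compiled `double` into.
    dis.dis(double)

    # Explicitly produce a .pyc, the same cached form created on import.
    # Uncomment if a somefile.py exists in the current directory:
    # py_compile.compile('somefile.py')

    Running this prints opcodes like LOAD_FAST and BINARY_OP: the "machine code" that the CPython virtual machine interprets.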

    Read the article

  • F# for the C# Programmer

    - by mbcrump
    Are you a C# programmer who can't make it through a day without seeing or hearing someone mention F#? Today, I'm going to walk you through your first F# application and give you a brief introduction to the language. Sit back; this will only take about 20 minutes.

    Introduction

    Microsoft's F# programming language is a functional language for the .NET framework that was originally developed at Microsoft Research Cambridge by Don Syme. In October 2007, the senior vice president of the developer division at Microsoft announced that F# was being officially productized to become a fully supported .NET language, and professional developers were hired to create a team of around ten people to build the product version. In September 2008, Microsoft released the first Community Technology Preview (CTP), an official beta release, of the F# distribution. In December 2008, Microsoft announced that the success of this CTP had encouraged them to escalate F#, and it will now be shipped as one of the core languages in Visual Studio 2010, alongside C++, C# 4.0 and VB. The F# programming language incorporates many state-of-the-art features from programming language research and ossifies them in an industrial-strength implementation that promises to revolutionize interactive, parallel and concurrent programming.

    Advantages of F#

    F# is the world's first language to combine all of the following features:

    - Type inference: types are inferred by the compiler and generic definitions are created automatically.
    - Algebraic data types: a succinct way to represent trees.
    - Pattern matching: a comprehensible and efficient way to dissect data structures.
    - Active patterns: pattern matching over foreign data structures.
    - Interactive sessions: as easy to use as Python and Mathematica.
    - High-performance JIT compilation to native code: as fast as C#.
    - Rich data structures: lists and arrays built into the language with syntactic support.
    - Functional programming: first-class functions and tail calls.
    - Expressive static type system: finds bugs during compilation and provides machine-verified documentation.
    - Sequence expressions: interrogate huge data sets efficiently.
    - Asynchronous workflows: syntactic support for monadic-style concurrent programming with cancellations.
    - Industrial-strength IDE support: multithreaded debugging, and graphical throwback of inferred types and documentation.
    - Commerce-friendly design and a viable commercial market.

    Let's try a short program in C#, then F#, to understand the differences.

    Using C#: create a variable and output the value to the console window. Sample program:

    using System;

    namespace ConsoleApplication9
    {
        class Program
        {
            static void Main(string[] args)
            {
                var a = 2;
                Console.WriteLine(a);
                Console.ReadLine();
            }
        }
    }

    A breeze, right? 14 lines of code. We could have condensed it a bit by removing the "using" statement and tossing the namespace, but this is the typical C# program.

    Using F#: create a variable and output the value to the console window.

    To start, open Visual Studio 2010 or Visual Studio 2008. Note: if using VS2008, please download the SDK first before getting started. If you are using VS2010, then you are already set up and ready to go. Click File -> New Project -> Other Languages -> Visual F# -> Windows -> F# Application. Enter a name and click OK. You will notice that the Solution Explorer contains a Program.fs. Double-click Program.fs, enter the following, then hit F5 and it should run successfully. Sample program:

    open System
    let a = 2
    Console.WriteLine a

    Hmm, what? F# did the same thing in 3 lines of code. Show me the interactive evaluation that I keep hearing about.

    The F# development environment for Visual Studio 2010 provides two different modes of execution for F# code:

    - Batch compilation to a .NET executable or DLL (accomplished above).
    - Interactive evaluation (demonstrated below).

    The interactive session provides a > prompt, requires a double semicolon ;; identifier at the end of a code snippet to force evaluation, and returns the names (if any) and types of the resulting definitions and values. To access the F# prompt in VS2010, go to View -> Other Windows -> F# Interactive. Once you have the interactive window, type in the following expression: 2+3;;

    I hope this guide helps you get started with the language. Please check out the following books for further information.

    F# books for further reading:

    - Foundations of F#, by Robert Pickering. An introduction to functional programming with F#. Including many samples, this book walks through the features of the F# language and libraries, and covers many of the .NET Framework features which can be leveraged with F#.
    - Functional Programming for the Real World: With Examples in F# and C#, by Tomas Petricek and Jon Skeet. An introduction to functional programming for existing C# developers. This book explains the core principles using both C# and F#, shows how to use functional ideas when designing .NET applications, and presents practical examples such as the design of a domain-specific language, development of multi-core applications, and programming of reactive applications.

    Read the article

  • Are you a good or bad programmer?

    - by Eli
    Hi all, I see a lot of questions asked on SO about 'good' programmers vs 'bad' programmers; for example, what is a good/bad programmer, how to tell a good/bad programmer, what to do about a bad programmer on a team, how to hire a good programmer. I know it's pretty easy to apply the words to other people, but I find myself wondering if anyone out there would actually define THEMSELVES in a Boolean fashion like this, rather than as "good in some areas, weak in others...". I'm not asking this as an either/or where you have to be one or the other, but as a 'both': are you a good or bad programmer? If so (either one), why? Please note this isn't meant to be argumentative, or to define good/bad practices, etc. I just want to know how many people out there think they are good, bad, or neither.

    Read the article

  • Alternatives for comparing data from different databases

    - by Alex
    I have two huge tables in separate databases. One of them holds the information on all the SMS that passed through the company's servers, while the other holds the actual billing information for those SMS. My job is to compare samples of both of these tables (for example, the records between 1 and 2 pm) to see if there are any differences: SMS that were sent but, for whatever reason, not charged to the user. The columns I will be using to compare are the sender's phone number and the exact date the SMS was sent. One issue here is that the dates are usually the same on both sides, but in many cases they differ by 1 or 2 seconds. I have, so far, two alternatives to do this:

    1. (PL/SQL) Create two tables where I'm going to temporarily store all the records of that 1-hour sample, one for each of the main tables. Then, for each distinct phone number, select the time of every SMS sent from that phone from both of my temporary tables and start comparing one by one using cursors. In this case, the procedure would run on the server where one of the sources lives, so the contents of the other one would be looked up over a dblink.

    2. (sqlplus + C++) Instead of storing the 1-hour samples in new tables, output the query to a text file. I will have two text files, one for each source. Then, open the first file and load all of its content into a hash_map (key-value) using C++, where the key is the phone number and the value is a list of times of SMS sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look up numberX's entry in the hash_map (which will be a list of times) and check whether timeX is on that list. If it isn't, save it somewhere to finally store it in an "uncharged" table (this would also be the final step in case 1).

    My main concern is efficiency. These samples have about 2 million records on each source, so just grabbing one record on one side and looking it up on the other would not be possible. That's the reason I wanted to use hash_maps. Which do you think is the better option?
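    For what it's worth, alternative 2 is only a few lines in any language with a built-in hash map. Here is a minimal sketch of the same idea in Python rather than C++, where the file names, the "numberX timeX" line format (with times as epoch seconds), and the +/-2-second tolerance for the clock-skew problem are all illustrative assumptions:

    from collections import defaultdict

    TOLERANCE = 2  # seconds of allowed clock skew between the two sources

    # Pass 1: build a phone -> list-of-billing-times map from the first file.
    billed = defaultdict(list)
    with open('billing_sample.txt') as f:
        for line in f:
            number, ts = line.split()
            billed[number].append(int(ts))

    # Pass 2: scan the sent-SMS file; anything without a billing record
    # within the tolerance window goes to the "uncharged" list.
    uncharged = []
    with open('sms_sample.txt') as f:
        for line in f:
            number, ts = line.split()
            ts = int(ts)
            if not any(abs(ts - t) <= TOLERANCE for t in billed.get(number, [])):
                uncharged.append((number, ts))

    print(len(uncharged), 'sent but apparently not billed')

    With ~2 million rows per side this stays linear in the data apart from the per-phone scan; if some numbers send thousands of SMS in a sample, sorting each list once and probing it with bisect keeps that lookup logarithmic.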

    Read the article

  • Improve efficiency when using parallel to read from compressed stream

    - by Yoga
    This is another question extending a previous one [1]. I have a compressed file and stream it into a Python program, e.g. bzcat data.bz2 | parallel --no-notice -j16 --pipe python parse.py > result.txt where parse.py can read from stdin continuously and print to stdout. My EC2 instance has 16 cores, but the top command shows a load average of only 3 to 4. From ps, I am seeing a lot of stuff like: sh -c 'dd bs=1 count=1 of=/tmp/7D_YxccfY7.chr 2>/dev/null'; I know I can improve performance using -a in.txt, but in my case I am streaming from bz2 (I cannot extract it first since I don't have enough disk space). How do I improve the efficiency in my case? [1] Gnu parallel not utilizing all the CPU
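    Two things that might help, offered as sketches rather than a tested fix: GNU parallel's --pipe splits the stream into blocks (1 MB by default), so a larger --block size amortizes its per-chunk bookkeeping; and if parallel itself stays the bottleneck, you can drop it and do the fan-out inside Python instead. A minimal sketch of the latter, where parse_record is a placeholder for whatever parse.py actually does per line, invoked as bzcat data.bz2 | python parse_parallel.py > result.txt:

    import sys
    from multiprocessing import Pool

    def parse_record(line):
        return line.strip().upper()  # stand-in for the real parsing logic

    if __name__ == '__main__':
        with Pool(16) as pool:  # one worker per core
            # chunksize batches many lines per dispatch to cut IPC overhead
            for result in pool.imap(parse_record, sys.stdin, chunksize=10000):
                print(result)

    The chunksize argument plays the same role as parallel's --block: bigger batches mean less inter-process chatter per record, at the cost of coarser load balancing.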

    Read the article
