Search Results

Search found 19966 results on 799 pages for 'wild thing'.

Page 6/799 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • Is there such a thing as a super programmer? [closed]

    - by Muhammad Alkarouri
    Have you come across a super programmer? What identifies him or her as such, compared to "normal" experienced/great programmers? Also, how do you deal with a person in your team who believes he is a super programmer? Both in the case that he actually is, and in the case that he isn't? Edit: Interesting inputs all round, thanks. A few things can be gleaned: A few definitions emerged. Disregarding overly localised definitions (those that identified the authors or their acquaintances as super programmers), I liked a couple of definitions: Thorbjørn's definition: a person who does the equivalent of a good team consistently for a long time. Free Electron, linked from Henry's answer: a very productive person, of exceptional abilities. The explanation is a good read. A Free Electron can do anything when it comes to code. They can write a complete application from scratch, learn a language in a weekend, and, most importantly, they can dive into a tremendous pile of spaghetti code, make sense of it, and actually get it working. You can build an entire business around a Free Electron. They're that good. Contrasting with the last definition is the point linked to by James about the myth of the genius programmer (video). The same idea is expressed as egoless programming in rwong's comment. They present opposing opinions as to whether to optimise for such a unique programmer or for a team. These definitions are definitely different, so I would appreciate your input as to which is better. Or add your own if you want, of course, though it would help to say how it differs from those.

    Read the article

  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At RedGate Software, I work on a .NET obfuscator called SmartAssembly. Various features of it use a database to store various things (exception reports, name-mappings, etc.). The user is given the option of using either a SQL Server database (which requires them to have Microsoft SQL Server) or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power-users soon switch to using a SQL Server database because it offers better performance and data-sharing. In the fashionable spirit of optimization and metrics, an obvious product-management question is 'Which is the most popular? SQL Server or MDB?' We've collected data about this, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology:

        Parameter    Number of users  % of total users  Number of sessions  Number of usages
        SQL Server   28               19.0              8115                8115
        MDB          114              77.6              1449                1449

    (As a disclaimer, please note that SmartAssembly has far more than 132 users. This data is just a selection from one build.) So, it would appear that SQL Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me:

    Only the original developers understand the data. What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version, or just from the UI version? Each question could skew the data 10-fold either way, and the answers are known only to the developer who instrumented the application in the first place. In other words, only the original developer can interpret the data - product-managers cannot interpret the data unaided.

    Most of the data is from uninterested users. About half of the people who download and run a free trial from the internet quit it almost immediately. Only a small fraction use it enough to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, and how many JUST HAPPENED to use the MDB default for their 20-second trial. This is a problem we see across all our metrics: are people using X because it's the default, or because they want to use X? We need to segment the data further - asking what percentage of each percentage meets our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun.

    You can't find out why they used this feature. Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature-set. If we listened uncritically to metrics at RedGate, we would eliminate the most important and more complex features - the ones people actually buy the software for - leaving just big buttons on the main page and the About box.

    "Ah, that's interesting!" rather than "Ah, that's actionable!" People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors'. But unless the statistic is both SURPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing, and then forgotten. Unless a piece of data could change things, it's not worth collecting.

    People get obsessed with significance levels. The first thing that lots of people do with this data is run a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!"). Believe me: other sources of error and misinterpretation in your data are FAR more significant than anything a t-test could ever capture.

    Confirmation bias prevents objectivity. If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit-hole of segmentation and filtering until we give up and move on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence? I don't.

    There are always multiple plausible ways to interpret or act on any data. Let's say we segment the above data, and get this:

    Post-trial users (i.e. those using a paid version after the 14-day free trial is over):

        Parameter    Number of users  % of total users  Number of sessions  Number of usages
        SQL Server   13               9.0               1115                1115
        MDB          5                4.2               449                 449

    Trial users:

        Parameter    Number of users  % of total users  Number of sessions  Number of usages
        SQL Server   15               10.0              7000                7000
        MDB          114              77.6              1000                1000

    How do you interpret this data? It's one of:

        - Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford, or unwilling to buy, our software. Therefore, ditch MDB support.
        - Our MDB support is so poor and buggy that our massive MDB user-base doesn't buy it. Therefore, spend loads of money improving it, and think about ditching SQL Server support.
        - People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are.
        - We're marketing the tool wrong. The large number of MDB users represents uninformed downloaders. Tell marketing to aggressively target SQL Server users.

    To choose an interpretation you need to segment again. And again. And again, and again.

    Opting out is correlated with feature usage. Metrics tend to be opt-in, which skews the data even further. Between 5% and 30% of people choose to opt in to metrics (often called a 'customer improvement program' or something like that). Casual trial users who are uninterested in your product or company are less likely to opt in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows?

    It's not all doom and gloom. There are some things metrics can answer well. Environment facts: how many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows? Minor optimizations: is the text-box big enough for average user input? Performance data: how long does our app take to start? How many databases does the average user have on their server? As you can see, questions about who-the-user-is rather than what-the-user-does are easier to answer and act on.

    Conclusion: Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.
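    As a rough illustration of the segmentation the article keeps returning to, here is a minimal sketch in Python with pandas. Everything in it is assumed: the CSV export, the column names, and the 10-session threshold for an 'established user'.

        # Sketch only: assumes a per-session export with user_id,
        # db_backend and post_trial columns; all names are made up.
        import pandas as pd

        usage = pd.read_csv("feature_usage.csv")

        # Drop the 20-second trial runs, then keep only users with at
        # least 10 post-trial sessions -- one possible definition of
        # an "established user".
        established = (
            usage[usage["post_trial"]]
            .groupby("user_id")
            .filter(lambda g: len(g) >= 10)
        )

        # Distinct established users per database backend.
        print(established.groupby("db_backend")["user_id"].nunique())

    Even this tiny example bakes in two arbitrary judgment calls (what counts as post-trial, what counts as established), which is exactly the article's point: the segmentation, not the significance test, is where the interpretation happens.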

    Read the article

  • Grid Style. Is It The Next Big Thing In Web Design?

    Inspired by typographic magazine layout grids, more and more web designers are starting to embrace grid-based design, citing cleaner and more easily digestible web pages as a benefit. The concept of ... [Author: Michiel Van Kets - Web Design and Development - June 17, 2010]

    Read the article

  • Is there such a thing as a "theory of system integration"?

    - by Jeff
    There is a plethora of different programs, servers, and, in general, technologies in use in organizations today. We programmers have lots of different tools at our disposal to help solve various data and communication challenges in an organization. Does anyone know if anyone has done any serious thinking about how systems are integrated? Let me give an example: hypothetically, let's say I own a company that makes specialized suits a la Iron Man. In the area of production, I have CAD tools, machining tools, payroll, project management, and asset management tools, to name a few. I also have a nice design space, where designers show off their designs on big displays - some touch, some traditional. Oh, and I also have one of these new-fangled LEED Platinum buildings, and it has a number of different computer-controlled systems, like smart window shutters that close when people are in the room, an HVAC system that adjusts depending on the number of people in the building, etc. What I want to know is whether anyone has done any scientific work on trying to figure out how to hook all these pieces together, so that, say, my access control system is hooked to my payroll system and my phone system, allowing me never to swipe a time card and to have my phone follow me throughout the building. This problem is also more than a technology challenge: every technology implementation enables certain human behaviours, so the human must also be considered a part of the system. Has anyone done any work on how to effectively weave these components together? FYI: I am not trying to build a system. I want to know if anyone has thoroughly studied the process of doing a large integration project - how they developed their requirements, how they studied the human behaviors, etc.

    Read the article

  • What happens after `ubuntu-bug` has done its thing?

    - by guntbert
    Until some time ago, you ran apport-bug or ubuntu-bug to start reporting a bug. The system would then open Launchpad with your account, upload the collected information, and let you add more info to the bug report. Now when I run gksudo ubuntu-bug (for instance with a crash file as argument), the common bug dialog appears and that's all. Where is the report being sent? Definitely not to Launchpad as a bug report (although in this concrete situation people have been able to file a report about that bug). So: where is this report sent, and (just maybe) how can I still file a bug report from my system (which makes uploading the pertaining files much easier)? Could it be that "the system" decided that this would be a duplicate of an already existing bug?

    Read the article

  • Why do the Escape and Enter keys not always do the right thing in dialog boxes?

    - by Michael Goldshteyn
    Why is it that when a dialog pops up, the Escape key doesn't always cancel it and the Enter key doesn't always press the default button? Shouldn't this be a standard across all dialog boxes in all applications? I have gotten into the habit of pressing Escape to cancel a dialog and Enter to confirm it, but applications (and especially KDE, GNOME and Unity in many, many cases) seem to ignore my wishes. What is the problem? Is consistency too much to ask?

    Read the article

  • Is there such a thing as having too many private functions/methods?

    - by shovonr
    I understand the importance of well-documented code. But I also understand the importance of self-documenting code. The easier it is to read a particular function, the faster we can move on during software maintenance. With that said, I like to break big functions up into smaller ones. But I do so to the point where a class can have upwards of five of them just to serve one public method. Now multiply five private methods by five public ones, and you get around twenty-five hidden methods that are probably going to be called only once by those public ones. Sure, it's now easier to read those public methods, but I can't help but think that having too many functions is bad practice.

    Read the article

  • Software solution from the 2000s: should I attempt to patch it or remake the whole thing?

    - by ShadowScripter
    I was sent out to discuss a system that a certain company is currently using and what should be done with it. The company manufactures various carton displays. The system was developed to keep track of clients, orders and prices. A lot has happened since the system was created, and the system is now, as the manager described it, "locked up" and "problematic", which I translate as "not dynamic" and "unstable". Some info about the system: It was developed around the year 2000. It is a fairly small system: 2-5 users, 6 forms, ~8 tables with average quantities of data. It was built on early Visual Basic, with forms created via drag-and-drop design; the interface is basically just a window with a menu and some forms. It uses an MSSQL database (SQL 2005 server) to store data and an ODBC driver to query it; data was migrated from Excel before this system, and before Excel it was handled, calculated and written by hand on paper. Users work in a Microsoft XP environment (and up). Their main problem is that they can't adjust and calculate prices, can't add new carton types, etc., anymore, because they can't (or rather, they don't know how to) touch the data on the server. I suggested 3 possible solutions: Attempt to patch the current system. Create a fresh new interface (preferably in a similar environment, VB.net or VB based). Bring it back to an Excel solution, considering it is such a small system. There might be more options, but these are the ones I could think of. My questions are: What should I recommend, and why? What are, or could be, the pros and cons of these alternatives? Are there other (possibly better) alternatives?

    Read the article

  • What's the closest thing to Apple's SpriteKit on Android devices? [on hold]

    - by Krumelur
    I've been playing around with the iOS 7 SpriteKit APIs and I totally love them. As I'm pretty much a n00b on Android, I'm wondering what the best alternative would be if I wanted to go cross-platform? I find Cocos2D's learning curve pretty steep, whereas with SpriteKit it's a matter of minutes to get something on the screen. Then there's MonoGame, and Cocos2D for MonoGame - I haven't tried either one, I must admit.

    Read the article

  • Is there such a thing these days as programming in the small?

    - by WeNeedAnswers
    With all the programming languages that are out there, what exactly does it mean to program "in the small", and is it still possible without the code being re-purposed for "the large"? The original article that mentions programming "in the small" dates to 1975 and referred to scripting languages (as glue languages). Maybe I am missing the point, but any language out of which you can build components of code I would regard as being able to handle "the large". Is there confusion about what objects are, and do they really figure as mandatory for handling "the large"? Many have argued that this is the true meaning of "in the large" and that the concepts of objects are the best fit for the job.

    Read the article

  • What is this thing called?!?! (bottom-of-screen popup)

    - by NRGdallas
    I am looking for the name of, and possibly some jQuery libraries for, the standard bottom-of-screen popup bar. It's a little bar that pops out at the bottom of the screen after X seconds (or whenever, really) - it generally slides up, is about 50px or so high, and is usually the length of the main container. It's used for coupon advertisements, various promotional text, etc. What is the proper term for this item, and are there any good references on best-use guidelines?

    Read the article

  • Evaluating this huge .... thing? [closed]

    - by Shark
    I'll just leave this here:

        superTiled = ((((gctUINT32) (hardware->chipMinorFeatures)) >> (0 ? 12:12) & ((gctUINT32) ((((1 ? 12:12) - (0 ? 12:12) + 1) == 32) ? ~0 : (~(~0 << ((1 ? 12:12) - (0 ? 12:12) + 1)))))) == (0x1 & ((gctUINT32) ((((1 ? 12:12) - (0 ? 12:12) + 1) == 32) ? ~0 : (~(~0 << ((1 ? 12:12) - (0 ? 12:12) + 1)))))));

    If you can break it down into 'steps' or a single numeric value - props to you :) I can see that this is generated/unwrapped macro code, but I can't really wrap my head around it at the moment. The first person to break this down or simplify it a bit gets to know which driver I found it in. Cheers :)
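    Whatever the driver is, the arithmetic itself does collapse to a single step. Here is a sketch in Python (chosen only so the reduction is mechanically checkable) of what the expression evaluates to:

        # Both ternaries (0 ? 12:12) and (1 ? 12:12) evaluate to 12, so the
        # bit-field is bits 12..12 -- exactly one bit wide -- and both
        # "== 32" branches are false.
        width = (12 - 12) + 1                  # 1
        mask = ~(~0 << width) & 0xFFFFFFFF     # ~(~0 << 1) == 0x1
        rhs = 0x1 & mask                       # 0x1

        def super_tiled(chip_minor_features):
            # The whole expression reduces to: "is bit 12 set?"
            return ((chip_minor_features >> 12) & mask) == rhs

        assert super_tiled(1 << 12) and not super_tiled(0)

    In other words, superTiled is true exactly when bit 12 of hardware->chipMinorFeatures is set.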

    Read the article

  • C++ pimpl idiom wastes an instruction vs. C style?

    - by Rob
    (Yes, I know that one machine instruction usually doesn't matter. I'm asking this question because I want to understand the pimpl idiom and use it in the best possible way, and because sometimes I do care about one machine instruction.) In the sample code below, there are two classes, Thing and OtherThing. Users would include "thing.hh". Thing uses the pimpl idiom to hide its implementation. OtherThing uses a C style - non-member functions that return and take pointers. This style produces slightly better machine code. I'm wondering: is there a way to use C++ style - i.e., make the functions into member functions - and yet still save the machine instruction? I like the C++ style because it doesn't pollute the namespace outside the class. Note: I'm only looking at calling member functions (in this case, calc). I'm not looking at object allocation. Below are the files, commands, and the machine code, on my Mac.

    thing.hh:

        class ThingImpl;

        class Thing {
            ThingImpl *impl;
        public:
            Thing();
            int calc();
        };

        class OtherThing;
        OtherThing *make_other();
        int calc(OtherThing *);

    thing.cc:

        #include "thing.hh"

        struct ThingImpl { int x; };

        Thing::Thing() {
            impl = new ThingImpl;
            impl->x = 5;
        }

        int Thing::calc() {
            return impl->x + 1;
        }

        struct OtherThing { int x; };

        OtherThing *make_other() {
            OtherThing *t = new OtherThing;
            t->x = 5;
            return t;
        }

        int calc(OtherThing *t) {
            return t->x + 1;
        }

    main.cc (just to test the code actually works...):

        #include "thing.hh"
        #include <cstdio>

        int main() {
            Thing *t = new Thing;
            printf("calc: %d\n", t->calc());
            OtherThing *t2 = make_other();
            printf("calc: %d\n", calc(t2));
        }

    Makefile:

        all: main

        thing.o: thing.cc thing.hh
                g++ -fomit-frame-pointer -O2 -c thing.cc

        main.o: main.cc thing.hh
                g++ -fomit-frame-pointer -O2 -c main.cc

        main: main.o thing.o
                g++ -O2 -o $@ $^

        clean:
                rm *.o
                rm main

    Run make and then look at the machine code. On the Mac I use otool -tv thing.o | c++filt. On Linux I think it's objdump -d thing.o. Here is the relevant output:

        Thing::calc():
        0000000000000000 movq (%rdi),%rax
        0000000000000003 movl (%rax),%eax
        0000000000000005 incl %eax
        0000000000000007 ret

        calc(OtherThing*):
        0000000000000010 movl (%rdi),%eax
        0000000000000012 incl %eax
        0000000000000014 ret

    Notice the extra instruction caused by the pointer indirection. The first function looks up two fields (impl, then x), while the second only needs to get x. What can be done?

    Read the article

  • Taking "do the simplest thing that could possible work" too far in TDD: testing for a file-name kno

    - by Support - multilanguage SO
    For TDD you have to: create a test that fails; do the simplest thing that could possibly work to pass the test; add more variants of the test and repeat; refactor when a pattern emerges. With this approach you're supposed to cover all the cases (that come to my mind, at least), but I wonder if I am being too strict here, and whether it is possible to "think ahead" to some scenarios instead of simply discovering them. For instance, I'm processing a file, and if it doesn't conform to a certain format I am to throw an InvalidFormatException. So my first test was:

        @Test
        void testFormat() {
            // empty doesn't do anything nor throw anything
            processor.validate("empty.txt");
            try {
                processor.validate("invalid.txt");
                assert false : "Should have thrown InvalidFormatException";
            } catch (InvalidFormatException ife) {
                assert "Invalid format".equals(ife.getMessage());
            }
        }

    I run it and it fails because it doesn't throw an exception. So the next thing that comes to my mind is "do the simplest thing that could possibly work", so I:

        public void validate(String fileName) throws InvalidFormatException {
            if (fileName.equals("invalid.txt")) {
                throw new InvalidFormatException("Invalid format");
            }
        }

    Doh!! (Although the real code is a bit more complicated, I found myself doing something like this several times.) I know that I will eventually add another file name, and another test, which would make this approach impractical and force me to refactor to something that makes sense (which, if I understood correctly, is the point of TDD: to discover the patterns the usage unveils), but: Q: am I taking the "do the simplest thing..." advice too literally?
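    As a sketch of the step the question anticipates (in Python for brevity rather than the Java above; the header rule is invented): once a second, differently named invalid file appears in the tests, the hard-coded filename check can no longer pass, and the simplest thing that could possibly work becomes a real format check.

        # Hypothetical next iteration: two different invalid files force
        # validate() to stop matching filenames and actually inspect content.
        class InvalidFormatException(Exception):
            pass

        def validate(file_name):
            with open(file_name) as f:
                first_line = f.readline()
            # Assumed format rule: a non-empty file must start with "HEADER".
            # An empty file still passes, matching the first test above.
            if first_line and not first_line.startswith("HEADER"):
                raise InvalidFormatException("Invalid format")

    The fake implementation survives exactly one test variant; the second variant is what makes "the simplest thing" converge on the real behaviour.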

    Read the article

  • What different terms mean the same thing (or don't, but people think they do)?

    - by Matthew Jones
    One of the pitfalls I run into on a daily basis is customers saying one thing while meaning another. Usually this is just due to a miscommunication somewhere, but occasionally they are, in fact, saying the same thing I am, just using a different term. For example, one of my customers the other day mentioned a feature he called "find as you type." Being a little confused, I asked him what he meant, and he described the feature in Google where, once you start typing a search query, Google suggests other, popular queries that match the letters you have typed. Click! He meant AutoComplete! He was not wrong; it is just that I had never heard that term before. In the spirit of reducing confusion, what terms can you think of that are different but mean essentially the same thing? Also, what terms do people think mean the same thing but don't? Please differentiate between the two. Please post only one set of terms per answer, so we can vote on the best ones.

    Read the article

  • How strict should I be with "do the simplest thing that could possibly work" while doing TDD?

    - by Support - multilanguage SO
    For TDD you have to: create a test that fails; do the simplest thing that could possibly work to pass the test; add more variants of the test and repeat; refactor when a pattern emerges. With this approach you're supposed to cover all the cases (that come to my mind, at least), but I wonder if I am being too strict here, and whether it is possible to "think ahead" to some scenarios instead of simply discovering them. For instance, I'm processing a file, and if it doesn't conform to a certain format I am to throw an InvalidFormatException. So my first test was:

        @Test
        void testFormat() {
            // empty doesn't do anything...
            processor.validate("empty.txt");
            try {
                processor.validate("invalid.txt");
                assert false : "Should have thrown InvalidFormatException";
            } catch (InvalidFormatException ife) {
                assert "Invalid format".equals(ife.getMessage());
            }
        }

    I run it and it fails because it doesn't throw an exception. So the next thing that comes to my mind is "do the simplest thing that could possibly work", so I:

        public void validate(String fileName) throws InvalidFormatException {
            if (fileName.equals("invalid.txt")) {
                throw new InvalidFormatException("Invalid format");
            }
        }

    Doh!! (Although the real code is a bit more complicated, I found myself doing something like this several times.) I know that I will eventually add another file name, and another test, which would make this approach impractical and force me to refactor to something that makes sense (which, if I understood correctly, is the point of TDD: to discover the patterns the usage unveils), but: Q: am I taking the "do the simplest thing..." advice too literally?

    Read the article

  • Is there such a thing as a desktop application event aggregator, similar to that used in Prism?

    - by brownj
    The event aggregator in Prism is great, and allows loosely coupled communication between modules within a composite application. Does such a thing exist to allow the same between standalone applications running on a user's desktop? I could imagine developing a solution that uses WCF with a TCP binding, running inside the Windows Process Activation Service. Client applications could subscribe or publish events to this service as required, and it would ensure all other listeners get notified of events as appropriate. Using TCP would enable event messages to be pushed out to clients without the need for polling, ensuring messages are delivered very quickly. I can't help but think, though, that such a thing must already exist... Is anyone aware of something like this, or does anyone have advice on how it might best be implemented?
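    For what it's worth, the push-based part is easy to prototype even without WCF. Here is a minimal sketch in Python (asyncio; the port and the newline-delimited wire format are arbitrary assumptions) of a desktop-wide broker that fans every published event out to all other connected clients:

        # One local broker; every app on the desktop connects over TCP and
        # sends newline-delimited event lines. Each line is pushed to every
        # other connected client immediately -- no polling.
        import asyncio

        clients = set()

        async def handle(reader, writer):
            clients.add(writer)
            try:
                while line := await reader.readline():
                    # Copy the set: it may change while we await drain().
                    for w in list(clients):
                        if w is not writer:
                            w.write(line)
                            await w.drain()
            finally:
                clients.discard(writer)
                writer.close()

        async def main():
            server = await asyncio.start_server(handle, "127.0.0.1", 9009)
            async with server:
                await server.serve_forever()

        if __name__ == "__main__":
            asyncio.run(main())

    A real implementation would need framing, authentication and reconnect handling, but the shape - one local broker, events pushed over TCP - is the same as the WCF idea above.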

    Read the article

  • Is IE9 a modern browser?

    - by anirudha
    Is IE9 a modern browser? Here is a post that compares IE with Chrome on exactly that question. Do you really think Microsoft has made something better than what Firefox and Chrome already do? My main complaint is that Microsoft ships IE as big version releases rather than as a stream of updates: Firefox goes from 3.6.13 to 3.6.14 quickly, and Chrome pushes updates as soon as possible, but a new IE never comes soon - they would rather plan version 10 than patch version 9. And what is actually new? The IE9 developer tools gained three new panels; is that enough, when Chrome and Firefox have countless extensions that make development easier, and IE's developer tools still have nothing with the power of Firebug in Firefox? They boast about performance but abandon Windows XP (Luna) users, and IE runs on no other platform, while Chrome and Firefox do. There is no customization in IE, whereas Chrome and Firefox have more plugins and addons than you can count, and features shown off as new in IE9 were implemented in Firefox long ago. Of course, there is no rule that one vendor is right every time, and no one can predict what will work out better in the future, so keep one thing in mind: don't waste time, and use whatever makes your task easier, open source or closed, inside or outside Microsoft. Every vendor promises more than it delivers, IE included; none of them will tell you that the thing missing from their software can be found in someone else's. But be more wary of IE, because it is made as a commercial product rather than truly for the public: shutting out XP users looks like a trick to sell more copies of Windows, with the next IE being saved up for Windows 8. Outside Microsoft, Mozilla and Google behave better towards users and their feedback: they respect users and their privacy, and bugs get killed quickly in Chrome and Firefox, while problems you find in IE take far longer to get solved. IE is not open source, so my conclusion is to boycott it; there is no customization there to make the user's tasks easier, even with Twitter and Facebook.

    Read the article

  • Distributed computing for a company? Is there such a 'free' thing?

    - by Jakub
    I am new to the whole distributed computing / cloud thing. But I had an idea at work for our multimedia work, like movie encoding and other CPU-intensive tasks (which sometimes take a few hours). Is there a 'free' (Linux?) way to take a job from a Windows machine and offload those CPU cycles to, say, 10 servers that are generally idle (CPU-wise)? I'm just curious if there is a way to do this, or am I just grasping at straws here? My thought is that a 'cloud' setup would achieve this, but as I stated initially, I am a total newbie when it comes to it. This is just an idea, and I'm looking for some thoughts. Has anyone achieved this?

    Read the article

  • Is there such a thing as a portable database that runs easily on OS X and Windows?

    - by Jean-Philippe Murray
    I need to make a small database for a project at school (not computer-related at all; I'm indexing and categorizing the paper documents of a research project). The thing is that in September my semester is over, and other students will have to take over the project (and so on, every semester!), so I need something free and OS-agnostic (or at least OS X/Windows) that can easily be handed to the next students on the project. I was thinking about a WAMP stack running from a USB key, with a MySQL / HTML interface, but that would become locked to the OS I chose first. LibreOffice and the like will be an option in the end if I don't find anything truly portable. Does anyone have a solution in mind?

    Read the article
