Search Results

Search found 15103 results on 605 pages for 'programmers notepad'.


  • Will HTML5/JS Eventually Replace All Client Side Languages? [closed]

    - by Shnitzel
    I'm just wondering about the future of it all. IMHO, there are four forces that define where technology goes: Microsoft, Apple, Google and Adobe. It looks like Apple's iPhone/iPad iAds can now be programmed in HTML5, so does that mean HTML5 will eventually replace Objective-C? Microsoft, too, has now shifted its focus from WPF/Silverlight to HTML5, and I assume Visual Studio 2011 will be all about tooling support for HTML5, because that's what Microsoft does: tools. In a few months IE9 will ship, and with it the last major browser will support HTML5. Similarly, Adobe is getting on the HTML5 bandwagon and now allows exporting Flash content to HTML5 in its latest tools. And we all know how deeply invested Google is in HTML5; heck, its latest operating system (Chrome OS) is nothing but a big fat web browser.

    Mobile apps (iPhone, Android, WM7) are very hard for a company to program, especially across many different devices, each with its own language, so I'm assuming this won't last too long; HTML5 will be the unifying language. Which is somewhat sad for app developers, because users will then be able to play the "cool" HTML5 apps for free on the web and it will be hard to charge for them. So are strongly-typed languages really doomed? In the future, say 5-10 years, will client-side programming only be in HTML5? Will all of us become JavaScript programmers? :) Because the signs are sure pointing that way...

    Read the article

  • Advice for Setting up an On-Call Team

    - by Ciaran Archer
    I'm leading a largish development team (~35 developers). We do primarily web development work on a number of sites. Historically the knowledge on the teams has been pretty siloed: if you worked on Site A you will know how to troubleshoot it, but you would not be a lot of help on Site B. We also have a few cross-cutting concerns, i.e., common components used between sites which require specialized knowledge to troubleshoot. With all this in mind, I'm trying to understand the best way to set up an on-call team. This would be a team of programmers who would be available to deal with out-of-hours emergency issues occasionally (say one call every two weeks). They may be required to deploy emergency fixes. Part of me is saying we can't have a big on-call team with shallow knowledge; instead we need a smaller team with deep knowledge who can expect to be on call more often and be remunerated as such. Does anyone have any suggestions, based on experience, on how to set up this team? Thanks in advance.

    Read the article

  • Guidance in naming awkward domain-specific objects?

    - by GlenH7
    I'm modeling a chemical system, and I'm having problems with naming my objects within an enum. I'm not sure if I should use the atomic formula, the chemical name, or an abbreviated chemical name. For example, sulfuric acid is H2SO4 and hydrochloric acid is HCl. With those two, I would probably just use the atomic formula, as they are reasonably common. However, I have others like sodium hexafluorosilicate, which is Na2SiF6. In that example, the atomic formula isn't as obvious (to me), but the chemical name is hideously long: myEnum.SodiumHexaFluoroSilicate. I'm not sure how I would be able to safely come up with an abbreviated chemical name that would have a consistent naming pattern. From a maintenance point of view, which of the options would you prefer to see and why?

    Some details from comments on this question: the audience for the code will be just programmers, not chemists. I'm using C#, but I think this question is more interesting when ignoring the implementation language. I'm starting with 10-20 compounds and would have at most 100 compounds. The enum is to facilitate common calculations: the equation is the same for all compounds, but you insert a property of the compound to complete the equation. For example, molar mass (in g/mol) is used when calculating the number of moles from a mass (in grams) of the compound. Another example of a common calculation is the Ideal Gas Law and its use of the specific gas constant.
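
    For illustration, a minimal sketch of the "enum plus per-compound property" idea; the member names and the rounded molar-mass values below are only assumptions for the example, not taken from the question:

        // Full chemical names as enum members, with per-compound data in a lookup
        // so the calculation code stays identical for every compound.
        using System.Collections.Generic;

        public enum Compound
        {
            SulfuricAcid,             // H2SO4
            HydrochloricAcid,         // HCl
            SodiumHexafluorosilicate  // Na2SiF6
        }

        public static class CompoundData
        {
            // Molar mass in g/mol (approximate values, for illustration only).
            private static readonly Dictionary<Compound, double> MolarMass =
                new Dictionary<Compound, double>
                {
                    { Compound.SulfuricAcid, 98.08 },
                    { Compound.HydrochloricAcid, 36.46 },
                    { Compound.SodiumHexafluorosilicate, 188.06 }
                };

            // moles = mass (g) / molar mass (g/mol): the same equation for each compound.
            public static double MolesFromMass(Compound compound, double massInGrams)
            {
                return massInGrams / MolarMass[compound];
            }
        }

    Whichever naming style is chosen, keeping the numeric data in a lookup rather than in the member names keeps the enum itself readable.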

    Read the article

  • Am I missing a pattern?

    - by Ryan Pedersen
    I have a class that is a singleton, and off of the singleton are properties that hold the instances of all the performance counters in my application.

        public interface IPerformanceCounters
        {
            IPerformanceCounter AccountServiceCallRate { get; }
            IPerformanceCounter AccountServiceCallDuration { get; }
            // ...
        }

    Above is an incomplete snippet of the interface for the class "PerformanceCounters" that is the singleton. I really don't like the plural part of the name and thought about changing it to "PerformanceCounterCollection", but stopped because it really isn't a collection. I also thought about "PerformanceCounterFactory", but it isn't really a factory either. After failing with these two names and a couple more that aren't worth mentioning, I thought that I might be missing a pattern. Is there a name that makes sense, or a change that I could make towards a standardized pattern, that would help me put some polish on this object and get rid of the plural name? I understand that I might be splitting hairs here, but that is why I thought that the "Programmers" exchange was the place for this kind of thing. If it is not... I am sorry and I will not make that mistake again. Thanks!
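
    For reference, a minimal, self-contained sketch of the arrangement being described; everything beyond IPerformanceCounters, IPerformanceCounter and the two property names is hypothetical filler so the example compiles:

        // Trimmed to the two members shown in the question so the sketch stands alone.
        public interface IPerformanceCounter
        {
            void Increment();
        }

        public interface IPerformanceCounters
        {
            IPerformanceCounter AccountServiceCallRate { get; }
            IPerformanceCounter AccountServiceCallDuration { get; }
        }

        // No-op stand-in so the sketch does not need a real counter backend.
        internal sealed class NullCounter : IPerformanceCounter
        {
            public void Increment() { }
        }

        public sealed class PerformanceCounters : IPerformanceCounters
        {
            // The singleton instance the rest of the application reads counters from.
            public static PerformanceCounters Instance { get; } = new PerformanceCounters();

            private PerformanceCounters() { }

            public IPerformanceCounter AccountServiceCallRate { get; } = new NullCounter();
            public IPerformanceCounter AccountServiceCallDuration { get; } = new NullCounter();
        }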

    Read the article

  • How to keep Word, HTML and PDF documentation aligned

    - by dendini
    Is there a way to write documentation in a WYSIWYG editor which can then export to HTML, Word and PDF and keep the copies synchronized? This documentation consists mostly of technical notes and some contextual help for a few software products, so it must contain images and some styling; it is not programmers' documentation (API or function lists), for which a program like Javadoc or Doxygen would probably be the best choice. For example, how do companies with hundreds of different software lines and thousands of programmers deal with this? I have several solutions in mind, but they all seem lacking in some aspect:

    - LaTeX/TeX: very good PDF and HTML export, but not very user friendly and no full-blown WYSIWYG editor available.
    - LibreOffice/OpenOffice: a full-blown WYSIWYG editor, but the HTML export is not so good (the exported HTML needs manual editing and then has to be maintained separately).
    - MediaWiki or any other wiki: documentation could be kept in wikitext format, so HTML is generated automatically, and PDF export is quite good with the many available plugins. However, the staff would need some training to use it, and a server would have to be set up for it.

    Notice I'm not asking for software A vs software B; I'm asking for general advice, for how big companies handle documentation, and yes, for some software product names if available.

    Read the article

  • Developing a Configurable Pricing Program

    - by Ben DeMott
    The organization I work at has some interesting requirements when it comes to pricing for online commerce. Currently the developers write different 'pricing rules', and those rules can be applied to our items based on attributes of the items. For example:

        INPUTS: [cost, sug_retail, discontinued, warehouse_qty, orderable_qty, brand, type, days_available, shipping_rate, weight, map_protected, map_discount]
        MATCH:  brand=x, warehouse_qty 1, discontinued=True, map_protected=False
        SET:    retail_price = (sug_retail * 0.95), offer_price1 = (cost * 1.25 + shipping_rate)

    I am looking to allow the merchandising team to have more control over the pricing and formulas; they are, after all, technical enough to write Excel formulas. I've been looking at writing a desktop application that uses something like numexpr (http://code.google.com/p/numexpr/) or SymPy (http://sympy.org/en/index.html) to allow non-programmers to integrate their own logic into our pricing backend. We have multiple price tiers to set, for multiple markets, so an elegant solution is needed. It's getting frustrating for the dev team to continually tweak and manage all of the pricing rules (we sell over 200 brands in 3 markets).

    My questions: does this seem like a decent approach? Can you think of a better way to parse a string mathematical grammar? Can you think of a different way for users to provide formulas to integrate into an automated pricing system? Does anyone know of any examples of existing applications that do this? Excel and Access are out of the question; the volume of data we manipulate has already proven the need to automate it, and now we just need some visibility into that automation.
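
    The post mentions numexpr and SymPy, which are Python libraries. As a rough .NET analogue (a sketch only, not the approach above), DataColumn.Expression can evaluate a formula supplied as a string against named item attributes:

        using System.Data;

        public static class PricingSketch
        {
            // Evaluates a user-supplied arithmetic formula, e.g. "cost * 1.25 + shipping_rate",
            // against one item's attributes. The column names double as the formula's variables.
            public static decimal Evaluate(string formula, decimal cost, decimal sugRetail, decimal shippingRate)
            {
                var table = new DataTable();
                table.Columns.Add("cost", typeof(decimal));
                table.Columns.Add("sug_retail", typeof(decimal));
                table.Columns.Add("shipping_rate", typeof(decimal));
                table.Columns.Add("result", typeof(decimal), formula);  // computed column

                var row = table.NewRow();
                row["cost"] = cost;
                row["sug_retail"] = sugRetail;
                row["shipping_rate"] = shippingRate;
                table.Rows.Add(row);

                return (decimal)row["result"];
            }
        }

    For instance, Evaluate("cost * 1.25 + shipping_rate", 10m, 20m, 2.5m) returns 15. A real rule engine would still need the MATCH step and some whitelisting of what the expressions may reference.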

    Read the article

  • Intelligence as a vector quantity

    - by Senthil Kumaran
    I am reading this wonderful book called "Coders at Work: Reflections on the Craft of Programming" by Peter Seibel, and I am at the part where the conversation is with Joshua Bloch; I found an answer there that makes an important point for a programmer. The paragraph goes something like this: "There's this problem, which is, programming is so much of an intellectual meritocracy and often these people are the smartest people in the organization; therefore they figure they should be allowed to make all the decisions. But merely the fact they are the smartest people in the organization does not mean that they should be making all the decisions, because intelligence is not a scalar quantity; it's a vector quantity." It is the last sentence whose insight I fail to get. Can someone explain a little further what he means by intelligence being a vector quantity, and perhaps restate the insight? Further down, I do get the point that he is not talking about having an organization where non-technical (sometimes clueless) people manage the technical people just because they can spend more time writing emails well, because the very next statement after the above paragraph was: "And if you lack empathy or emotional intelligence, then you shouldn't be designing APIs or GUIs or languages." I understand that he is saying that in software engineering, programmers should know how the users will see their product and design for them. I found the above paragraph very interesting.

    Read the article

  • Why does TDD work?

    - by CesarGon
    Test-driven development (TDD) is big these days. I often see it recommended as a solution for a wide range of problems here on Programmers SE and in other venues. I wonder why it works. From an engineering point of view, it puzzles me for two reasons:

    1. The "write test + refactor till pass" approach looks incredibly anti-engineering. If civil engineers used that approach for bridge construction, or car designers for their cars, they would be reshaping their bridges or cars at very high cost, and the result would be a patched-up mess with no well thought-out architecture. The "refactor till pass" guideline is often taken as a mandate to forget architectural design and do whatever is necessary to comply with the test; in other words, the test, rather than the user, sets the requirement. In this situation, how can we guarantee good "ilities" in the outcome, i.e., a final result that is not only correct but also extensible, robust, easy to use, reliable, safe, secure, etc.? This is what architecture usually provides.

    2. Testing cannot guarantee that a system works; it can only show that it doesn't. In other words, testing may show you that a system contains defects if it fails a test, but a system that passes all tests is not safer than a system that fails them. Test coverage, test quality and other factors are crucial here. The false sense of safety that an "all green" outcome produces in many people has been reported in the civil and aerospace industries as extremely dangerous, because it may be interpreted as "the system is fine" when it really means "the system is as good as our testing strategy". Often, the testing strategy itself is not checked. Or, who tests the tests?

    I would like to see answers containing reasons why TDD in software engineering is a good practice, and why the issues that I have explained above are not relevant (or not relevant enough) in the case of software. Thank you.
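
    As a concrete picture of the "write a failing test, then make it pass" step being questioned, an NUnit-style micro-example might look like the following (the VAT calculation is purely hypothetical):

        using NUnit.Framework;

        public static class Vat
        {
            // Written after the test below, with just enough code to make it pass.
            public static decimal AddVat(decimal net, decimal rate)
            {
                return net * (1m + rate);
            }
        }

        [TestFixture]
        public class VatTests
        {
            [Test]
            public void AddVat_AddsTheRateOnTopOfTheNetPrice()
            {
                // This test came first and stayed red until AddVat existed.
                Assert.AreEqual(120m, Vat.AddVat(100m, 0.20m));
            }
        }

    Whether such micro-steps add up to a sound architecture is exactly the question raised above.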

    Read the article

  • Math questions at a programmer interview?

    - by anon
    So I went to an interview at Samsung here in Dallas, Texas. The way the recruiter described the job, he didn't make it sound like it was too math-oriented. The job basically involved graphics programming and C++. Yes, math is implied in graphics programming, especially shaders, but I still wasn't expecting this... The whole interview lasted about an hour and a half and they asked me nothing but math-related questions. They didn't ask me a single programming question, which I found odd. About all they did was ask me how to write certain math routines as C++ functions. What about programming philosophy questions? Design patterns? Code correctness? Constness? Exception safety? Thread safety? There are a zillion topics that they could have covered. But they didn't. The main concern I have is that they didn't ask any programming questions. This basically implies to me that any programmer who is good at math can get a job here, but they might put out terrible code. Of course, I think I bombed the interview because I haven't used any sort of linear algebra in about a year and I forget math easily if I haven't used it in practice for a while. Are any of my fellow programmers out there this way? I'm a game programmer too, so this seems especially odd. The more I learn, the more old knowledge gets "popped" out of my "stack" (memory). My question is: does this interview seem suspicious? Is this a typical interview for a large corporation? During the interview they told me that Google's interview process is similar: multiple, consecutive interviews where the math problems get more advanced.

    Read the article

  • Naming a class that decides to retrieve things from cache or a service + architecture evaluation

    - by Thomas Stock
    Hi, I'm a junior developer and I'm working on a pet project that I want to learn as much as possible from. I have the following scenario: there's a WCF service that I use to retrieve and update data, let's say Cars. It's called CarWCFService and has GetCars(), SaveCar(), and so on. It implements the interface ICarService. This isn't the actual WCF service but more of a wrapper around it. Upon retrieving data from the service, I want to store it in local memory as a cache. I have made a class for this called CarCacheService, which also implements the interface ICarService (I will explain later why it implements ICarService). I don't want client code to be calling these implementations. Instead, I want to create a third implementation of ICarService that tries to read from the CarCacheService before calling the CarWCFService, stores retrieved data in the CarCacheService, etc. Three questions:

    1. How do I name this third class? I was thinking about something as simple as CarService, but that does not really say what the service does exactly.
    2. Is the naming for the other classes good? Would this naming and architecture be obvious to future programmers? This is my biggest concern.
    3. Does this architecture make sense? The reason that I implement ICarService on the CarCacheService is mainly that it allows me to fake the WCF service while debugging: I can store dummy data in a CarCacheService instance and pass it to the third class, together with an(other) empty CarCacheService. If I made CarCacheService and CarWCFService public, I could let client code decide whether to drop the caching and just work directly on the CarWCFService.
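
    A minimal sketch of that third class as a caching wrapper; the interface members, the Car type and the name CachingCarService are illustrative assumptions, not taken from the actual project:

        using System.Collections.Generic;

        public class Car { public int Id { get; set; } }

        public interface ICarService
        {
            IList<Car> GetCars();
            void SaveCar(Car car);
        }

        public class CachingCarService : ICarService
        {
            private readonly ICarService cache;    // e.g. a CarCacheService instance
            private readonly ICarService backend;  // e.g. the CarWCFService wrapper
            private bool cachePopulated;

            public CachingCarService(ICarService cache, ICarService backend)
            {
                this.cache = cache;
                this.backend = backend;
            }

            public IList<Car> GetCars()
            {
                if (!cachePopulated)
                {
                    // Cache miss: fetch from the WCF wrapper and remember the results.
                    foreach (var car in backend.GetCars())
                        cache.SaveCar(car);
                    cachePopulated = true;
                }
                return cache.GetCars();
            }

            public void SaveCar(Car car)
            {
                // Write through: update the real service, then keep the cache in sync.
                backend.SaveCar(car);
                cache.SaveCar(car);
            }
        }

    Passing in a pre-filled cache plus a dummy backend gives exactly the debugging setup described in the third question.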

    Read the article

  • Collocation in Code

    - by Dan McGrath
    Quite some time ago I remember reading an article from 'Joel on Software' that mentioned that collocation of information in code was important. By collocation, I mean that relevant information about the code is present where the code is. I'm currently writing an article that has a small section about collocation, so I went searching for sources and found this quote in the article 'Making Wrong Code Look Wrong': "In order to make code really, really robust, when you code-review it, you need to have coding conventions that allow collocation. In other words, the more information about what code is doing is located right in front of your eyes, the better a job you'll do at finding the mistakes. When you have code that says [...]" For me, collocation isn't just about the code itself, but also about the tool used to view the code. If it can help with the 'collocation factor' (a term coined by me?), I believe it can help with a programmer's productivity. Take, for example, modern IDEs that show you a variable's type when you hover over it. Are there any other articles written about collocation in code, and/or are there other terms that this is known by?

    Read the article

  • What is the correct UI framework to learn for creating Windows Phone 8 apps? [closed]

    - by Robert Oschler
    I am a veteran Delphi 6 programmer transitioning to C# development. My first project is an open source library that will have a minimal user interface, since it is meant to be used as a component, primarily on desktop PCs running Visual Studio. My next project is going to be a Windows Phone 8 app, and I intend for that platform, not the desktop, to be my primary focus for future C# development. My concern is to waste as little time as possible learning a presentation framework that will not benefit me, or will even distract me, when writing Windows Phone 8 apps. The plethora of framework names I have already encountered includes WinForms, WPF (Windows Presentation Foundation), Silverlight, Silverlight Mobile and Metro, and there may be others. Given my goal outlined in the first paragraph above, I have a few questions: 1) Which of the frameworks should I use for the small amount of UI work I will do on the desktop component project, such that it helps me the most, or hurts me the least, when I move to Windows Phone 8 app development? 2) Which is the correct framework to study for developing Windows Phone 8 apps? 3) Any awesome tutorials, resources or books you have run into targeted at veteran programmers from other platforms? I read about the Portable Library Tools in this Stack Overflow thread: http://stackoverflow.com/questions/5522355/windows-phone-7-wpf-sharing-a-codebase but the reply by Simon Guindon seemed to indicate that it's not the best solution for writing a competitive Windows Phone 8 app.

    Read the article

  • How to refactor while keeping accuracy and redundancy?

    - by jluzwick
    Before I ask this question I will preface it with our environment. My team consists of 3 programmers working on multiple different projects. Because of this, our use of testing is mostly limited to very general black-box testing. Also take the following as given: unit tests will eventually be written, but I'm under strict orders to refactor first; ignore common test-driven development techniques; given this environment, my time is limited. I understand that if this were done correctly, our team would actually save money in the long term by building unit tests beforehand.

    I'm about to refactor a fairly large portion of the code that is critical. While I believe my code will work correctly when done and after our black-box testing, I realize that there will be new data that the new code might not be able to handle. What I want to know is how to keep the old code, which functions 98% of the time, so that we can call those subroutines in case the new code doesn't work properly. Right now I'm thinking of separating the old code into a separate class file and adding a variable to our config that will tell the program which code to use. Is there a better way to handle this?

    NOTE: We do use revision control and we have archived builds, so the client could always revert to a previous build, but I would like to see if there is a decent way of doing this besides reverting, so that they can still use the other new functionality delivered in the new build.

    Edit: While I agree I will need to write unit tests for this, I don't believe I will capture everything with them. I'm looking for ways to easily revert to the old, functional code should anything happen. While I know this is a poor practice, I'm planning on removing this code after our team can guarantee that the new code works to the same standards as the old.
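
    A minimal sketch of the config-switch idea mentioned above; the interface, the class names and the setting name are hypothetical:

        using System;
        using System.Configuration;

        public interface IImportProcessor
        {
            void Process(string data);
        }

        public class LegacyImportProcessor : IImportProcessor
        {
            public void Process(string data) { /* the old, 98%-reliable code path */ }
        }

        public class RefactoredImportProcessor : IImportProcessor
        {
            public void Process(string data) { /* the newly refactored code path */ }
        }

        public static class ImportProcessorFactory
        {
            // app.config: <appSettings><add key="UseRefactoredImport" value="false"/></appSettings>
            public static IImportProcessor Create()
            {
                bool useNew = string.Equals(
                    ConfigurationManager.AppSettings["UseRefactoredImport"],
                    "true", StringComparison.OrdinalIgnoreCase);

                return useNew
                    ? (IImportProcessor)new RefactoredImportProcessor()
                    : new LegacyImportProcessor();
            }
        }

    Flipping the setting back to false is then the "revert" path, without redeploying a previous build.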

    Read the article

  • How to talk a client out of a Flash website?

    - by bunglestink
    I have recently been doing a bunch of web side projects through word-of-mouth recommendations only. Although I am much more of a programmer than a designer, my design skills are not terrible, and I do not hate dealing with UI the way many programmers do. As a result, I find myself lured into a bunch of side projects where, aside from a minimal back end for content administration, most of the programming is on front-end interfaces (read: javascript/css). By far the biggest frustration I have had is convincing clients that they do not want Flash. Aside from the fact that I really do not enjoy Flash "development", there are many practical reasons why Flash is not desirable (lack of compatibility across devices, decreased client accessibility, plug-in requirements, increased development time, etc.). Instead of just flat out telling the clients "I will not build you a Flash website", I would much rather use tactics to convince/explain to them that this is not what they actually want, i.e., that Flash will not meet their requirements any better than standard HTML/CSS/JS and will only distract users from their content. What kind of first-hand experience do others have with this? How do you explain to someone that javascript/css/AJAX is usually a better option for most websites? Why do people want to use Flash so badly to begin with? This question pertains to clients who do not have any technical reasons for wanting Flash, but just want it because they think it makes pretty websites.

    Read the article

  • Low hanging fruit where "a sufficiently smart compiler" is needed to get us back to Moore's Law?

    - by jamie
    Paul Graham argues that: "It would be great if a startup could give us something of the old Moore's Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. ... The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There's a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility." But is it really impossible? Can someone provide a concrete example where a parallelizing compiler would solve a pain point? Web apps don't appear to be a problem: just run a bunch of Node processes. Real-time raytracing isn't a problem: the programmers are writing multi-threaded, SIMD assembly language quite happily (indeed, some might complain if we made it easier!). The holy grail is to be able to accelerate any program, be it MySQL, GarageBand, or Quicken. I'm looking for a middle ground: is there a real-world problem that you have experienced where a "smart enough" compiler would have provided a real benefit, i.e., one that someone would pay for?
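
    For a sense of what "parallelize our code for us" would have to do, here is a hypothetical before/after pair; today the developer writes the second version by hand, and the sufficiently smart compiler would be expected to derive it from the first:

        using System;
        using System.Threading.Tasks;

        public static class SmartCompilerExample
        {
            // What the developer writes: a loop whose iterations are independent.
            public static double[] SquareRootsSequential(double[] input)
            {
                var output = new double[input.Length];
                for (int i = 0; i < input.Length; i++)
                    output[i] = Math.Sqrt(input[i]);
                return output;
            }

            // The transformation a parallelizing compiler would have to perform itself.
            public static double[] SquareRootsParallel(double[] input)
            {
                var output = new double[input.Length];
                Parallel.For(0, input.Length, i => output[i] = Math.Sqrt(input[i]));
                return output;
            }
        }

    The hard part, of course, is proving that the iterations really are independent for arbitrary real-world code such as MySQL or Quicken.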

    Read the article

  • 2 year degree plus experience vs 4 year degree

    - by CenterOrbit
    Alright, I have searched around a bit on this site and found two somewhat similar questions: "Computer Science Programming Certificate vs. Computer Science Degree?" and "Is it possible/likely to be paid fairly without a college degree?". But these do not provide an answer specifically to what I am seeking. I have my 2-year A.A.S. degree in computer programming, along with a networking certificate from a technical college. I have also been working at a small educational game development company for 3 years now in various positions, steadily moving up, and am now a lead programmer on a few projects. Some of the more senior programmers I work with claim that no matter how much experience I build up, it still will not mean as much as someone with a 4-year degree. Their argument is that most employers will look over my resume because of the common '4-year degree' minimum requirement. I have also heard people state (though not as many) that experience is everything and that an employer would rather have someone who has worked in the field than a rookie fresh out of college. I have heard both sides of this argument, but am looking for a general consensus, or more arguments from both sides, from the people who have been there or are there.

    Read the article

  • So are we ever getting the technological singularity?

    - by jsoldi
    I'm still waiting for an AI robot that will pass the Turing test. I keep going back to http://www.a-i.com/ and nothing. I don't know much about AI, but did anyone ever try to make a genetic algorithm whose evolution algorithm itself evolves? Or how about one whose algorithm that makes the genetic algorithm evolve, evolves? Or one whose genetic algorithm that makes the genetic algorithm that makes the genetic algorithm evolve, evolves? Or how about an algorithm that abstracts all this into a potentially infinitely deep tree of genetic evolution algorithms? Aren't we just failing as programmers? And I don't think we can blame processor speed. If you make an application that simulates consciousness you will get a Nobel prize no matter how many hours it takes to respond to your questions. But nobody did it. It almost reminds me of Randi's $1,000,000 paranormal challenge. As I keep going back to AI chat bots, they keep getting better at changing the subject in a way that seems natural. But if I tell them something like "if 'x' is 2 then what's two times 'x'?" then they don't have a clue what I'm talking about. And I don't think they need a whole human brain simulation to be able to answer something like that. They don't need feelings or perception. This is just language and logic. I don't think my perception of the color red gives me the ability to understand that if 'x' is 2 then two times 'x' is 4. I'm sure we are just missing some elemental principle we cannot grasp because it's probably stuck behind our eyes. What do you think?

    Read the article

  • Unit and Integration testing: How can it become a reflex

    - by LordOfThePigs
    All the programmers in my team are familiar with unit testing and integration testing. We have all worked with it. We have all written tests with it. Some of us have even felt an improved sense of trust in our own code. However, for some reason, writing unit/integration tests has not become a reflex for any of the members of the team. None of us actually feels bad when not writing unit tests at the same time as the actual code. As a result, our codebase is mostly uncovered by unit tests, and projects enter production untested. The problem with that, of course, is that once your projects are in production and are already working well, it is virtually impossible to obtain time and/or budget to add unit/integration testing. The members of my team and I are already familiar with the value of unit testing (1, 2), but it doesn't seem to help bring unit testing into our natural workflow. In my experience, making unit tests and/or a target coverage mandatory just results in poor-quality tests and slows team members down, simply because there is no self-generated motivation to produce these tests. Also, as soon as the pressure eases, unit tests are not written any more. My question is the following: are there any methods that you have experimented with that help build momentum inside the team, leading to people naturally wanting to create and maintain those tests?

    Read the article

  • Are there any actual case studies on rewrites of software success/failure rates?

    - by James Drinkard
    I've seen multiple posts about rewrites of applications being bad, people's experiences with them here on Programmers, and an article I've read by Joel Spolsky on the subject, but no hard evidence or case studies. Other than the two examples Joel gave and some other posts here, what do you do with a bad codebase, and how do you decide what to do with it based on real studies? For the case in point, there are two clients I know of that both have old legacy code. They keep limping along with it because, as one of them found out, a rewrite was a disaster: it was expensive and didn't really do much to improve the code. That customer has some very complicated business logic, as the rewriters quickly found out. In both cases, these are mission-critical applications that bring in a lot of revenue for the company. The one that attempted the rewrite felt that they would hit a brick wall at some point if the legacy software didn't get upgraded. To me, that kind of risk warrants research and analysis to ensure a successful path. My question is: have there been actual case studies that have investigated this? I wouldn't want to attempt a major rewrite without knowing some best practices, pitfalls, and successes based on actual studies. Aftermath: okay, I was wrong, I did find one article: Rewrite or Reuse. They did a study on a COBOL app that was converted to Java.

    Read the article

  • I've been hired on as an entry-level game developer at a company and have little/no experience in API programming, what should I expect?

    - by Mr. Geneth
    So, I've been hired on as an entry-level game developer with little/no experience working with any API other than Win32. This will be an overall learning experience for me as a person, and I have gone over this multiple times with the boss; he has no problem with my inexperience. He says that if I'm not worth it now, I will be later. This gives me confidence, but I still feel that I should know a lot more before tackling this position. I would be stupid to pass it up. This is one of my favorite places to come for advice and help, and I have tried to just accept this, but it keeps bothering me that I can't go in knowing how to at least do the basics. I want to give the company its money's worth, ya know? My questions are: What should I expect from the other programmers on this project (in terms of patience with me, working together, and being taught)? Is this normal? Any other advice on this sort of thing would be wonderful. I just want to feel comfortable with it.

    Read the article

  • New college grad, psychology major, wants to code professionally. Should I get Sun Java-certified?

    - by Anita
    I just graduated from a fairly well-known liberal arts college in May. Interestingly, I majored in psychology, with a concentration in social psychology. In college I took Intro to Computer Science and hated it (used to blame it on myself; now I blame it on the professor :) However, I've always wanted to be a programmer, and finally got my wish by getting hired by a company that was willing to let me learn coding from scratch in exchange for low pay. Well, what do you know, I just got laid off this morning, and need a new job by November to pay the bills. I loved the coding part of my job at the company, and managed to learn enough Java to feel competent in the job and curious to learn more. I think my goal now is to become a professional programmer. I still know very little (never used Swing, for example) but nothing that a good book can't fix. That's the background anyway; sorry for the rambling - I'm still in shock from the layoff :( It seems to me the quickest way to get noticed by companies, without a CS degree, is by getting certification. I'm halfway through studying for the SCJP and can probably sit for an exam in a week or two. Am I right in my assumption that certs will help in my case? And in general, do I have a bat's chance in hell of making it against formally trained programmers? My assets are really just raw intelligence and intense curiosity; well, maybe a love for problem-solving too. Thanks all - feel free to edit/tag the post!

    Read the article

  • Programmer + Drugs =? [closed]

    - by sytycs
    I just read this quote from Steve Jobs: "Doing LSD was one of the two or three most important things I have done in my life." Now I'm wondering: Has there ever been a study where programmers have been given drugs to see if they could produce "better" code? Is there a programming concept which originated from people who were drug users? Do you know of a piece of code which was written by someone under the influence? EDIT: So I did a little more research, and it turns out Dennis R. Wier actually documented how he took LSD to wrap his head around a coding project: "At one point in the project I could not get an overall viewpoint for the operation of the entire system. It really was too much for my brain to keep all the subtle aspects and processing nuances clear so I could get a processing and design overview. After struggling with this problem for a few weeks, I decided to use a little acid to see if it would enable a breakthrough, because otherwise, I would not be able to complete the project and be certain of a consistent overall design"[1]. There is also an interesting article on Wired about Kevin Herbert, who used LSD to solve tough technical problems, and chemist Kary Mullis even said "...that LSD had helped him develop the polymerase chain reaction that helps amplify specific DNA sequences." [2]

    Read the article

  • How to advertise (free) software?

    - by nebukadnezzar
    I'm not sure if this fits on SO, but other SE sites don't seem to fit either, so I understand if this question gets moved, although I'd like to avoid getting it closed for being off-topic, since I think it might fit, considering this part of the FAQ: "Stack Overflow is for professional and enthusiast programmers, ... covers … a specific programming problem ... matters that are unique to the programming profession". Sorry for the lengthy introduction, though.

    When software is advertised, it is usually software for one (or more) specific purpose, such as:

    - Mozilla Firefox - a web browser
    - Ubuntu - an operating system
    - Python - a programming language
    - Visual Studio - a development studio
    - ... and so on.

    But when writing libraries, that is, software that doesn't necessarily serve one specific purpose but instead multiple purposes, and which is usually meant to be used inside an application, such as:

    - Irrlicht - a 3D engine
    - Qt - an application framework
    - ...

    the process of advertisement gets a little more difficult. I'm a developer of the latter kind of software, and I naturally want to advertise it. It's not commercial software; it's not GPL either. It's completely free (licensed under the MIT License :-)). I naturally host my stuff at GitHub, which technically makes it very easy to access the software, and I thought that these might be possible options, although I have no experience with them:

    - Submit the software to Freshmeat, and hope for the best
    - Submit the software to SourceForge, and hope someone accidentally stumbles over it
    - Write spam mails, and get death threats via mail
    - ...

    But something tells me that these methods are probably not the best ones. So, my final question would be: how does the average Joe hobby programmer advertise his/her software library?

    Read the article

  • What is Java used for these days?

    - by Barry Brown
    Java is fifteen years old. It started life as an alternative to C++ with a comprehensive standard library. Riding on the coattails of the Internet boom, it was popular for writing web applets. Its supposed portability was touted as a way to write desktop apps that would run on any platform. Now it's 2010. Applets are long gone. Desktop apps are giving way to web and mobile apps. Scripting languages are very popular, as is Flash, especially among web-centric developers. People have been chanting "Java's death is near" for several years. Yet a quick job search shows that Java is still a desired skill among programmers. So what is Java used for these days? What kinds of apps are you writing in Java? This should give us an idea of the "state of Java" today. Has the Java tide shifted from Swing desktop apps to Android mobile apps? If you write programs in a JVM language (such as Scala or Groovy), mention it.

    Read the article
