Search Results

Search found 20445 results on 818 pages for 'history support'.


  • Why doesn't Python require exactly four spaces per indentation level?

    - by knorv
    Whitespace is significant in Python in that code blocks are defined by their indentation. Furthermore, Guido van Rossum recommends using four spaces per indentation level (see PEP 8: Style Guide for Python Code). What was the reasoning behind not requiring exactly four spaces per indentation level as well? Are there any technical reasons? It seems like all the arguments that can be made for making whitespace define code blocks can also be used to argue for setting an exact whitespace length for one indentation level (say four spaces).
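
    A minimal sketch (not from the original post) of what the interpreter actually enforces today: indentation only has to be consistent within each block, so the width is free to vary from block to block, which is exactly the freedom the question asks about.

        # Hypothetical snippet: each suite is internally consistent, but the
        # widths differ, and Python accepts it.
        def demo():
            if True:
                print("this block uses four spaces")   # PEP 8's recommendation
            if True:
              print("this block uses two spaces")      # legal, just unidiomatic

        # Only mixing widths *inside* one block is rejected; dedenting a line to a
        # level that matches no enclosing block raises
        # "IndentationError: unindent does not match any outer indentation level".
        demo()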

    Read the article

  • CSS/JavaScript/hacking: Detect :visited styling on a link *without* checking it directly OR do it faster

    - by Sai Emrys
    This is for research purposes on http://cssfingerprint.com. Consider the following code:

    <style> div.csshistory a { display: none; color: #00ff00;} div.csshistory a:visited { display: inline; color: #ff0000;} </style> <div id="batch" class="csshistory"> <a id="1" href="http://foo.com">anything you want here</a> <a id="2" href="http://bar.com">anything you want here</a> [etc * ~2000] </div>

    My goal is to detect whether foo has been rendered using the :visited styling.

    1. I want to detect whether foo.com is visited without directly looking at $('1').getComputedStyle (or in Internet Explorer, currentStyle), or any other direct method on that element. The purpose of this is to get around a potential browser restriction that would prevent direct inspection of the style of visited links. For instance, maybe you can put a sub-element in the <a> tag, or check the styling of the text directly; etc. Any method that does not directly or indirectly rely on $('1').anything is acceptable. Doing something clever with the child or parent is probably necessary. Note that for the purposes of this point only, the scenario is that the browser will lie to JavaScript about all properties of the <a> element (but not others), and that it will only render color: in :visited. Therefore, methods that rely on e.g. text size or background-image will not meet this requirement.

    2. I want to improve the speed of my current scraping methods. The majority of time (at least with the jQuery method in Firefox) is spent on document.body.appendChild(batch), so finding a way to improve that call would probably be most effective. See http://cssfingerprint.com/about and http://cssfingerprint.com/results for current speed test results. The methods I am currently using can be seen at http://github.com/saizai/cssfingerprint/blob/master/public/javascripts/history_scrape.js. To summarize for tl;dr, they are: set color or display on :visited per above, and check each one directly with getComputedStyle; or put the ID of the link (plus a space) inside the <a> tag and, using jQuery's :visible selector, extract only the visible text (= the visited link IDs).

    FWIW, I'm a white hat, and I'm doing this in consultation with the EFF and some other fairly well known security researchers. If you contribute a new method or speedup, you'll get thanked at http://cssfingerprint.com/about (if you want to be :-P), and potentially in a future published paper.

    ETA: The bounty will be awarded only for suggestions that can, on Firefox, avoid the hypothetical restriction described in point 1 above, or that perform at least 10% faster, on any browser for which I have sufficient current data, than my best performing methods listed in the graph at http://cssfingerprint.com/about. In case more than one suggestion fits either criterion, the one that does best wins.

    Read the article

  • What happened to Perl?

    - by llasa
    I will try to keep this as objective as possible. I've been dealing with PHP for 3 years now; I have always known of Perl but never really "dived" into it. So I took a look at some Perl code examples and I thought: wow, it's like PHP just failed at cloning it. My questions are: What is bad about Perl? What are the disadvantages that made it so extremely unpopular that it is actually dying right now? Why could PHP take over? What does PHP have (or what did it have in the times of PHP4) that made it rise in popularity compared to Perl? I'm rather young, and the questions above are a bit subjective; I think you can only really answer them when you have experienced the rise of PHP along with the fall of Perl. Unlike my question before, I hope that this one can be more or less completely answered. There have to be definite disadvantages Perl has compared to PHP that made it fall.

    Read the article

  • when was Kase born?

    - by Horace Ho
    The first time I saw a class named Kase, I was scratching my head. My guess is that it has something to do with a conflict with the keyword case. BTW, since when, and for which language(s), did this become a norm?

    Read the article

  • Google GWT cross-browser support: is it BS ?

    - by Tim
    I developed a browser-deployed full-text search app in FlashBuilder which communicates RESTfully with a remote web server. The software fits into a tiny niche--it is for use with ancient languages, not modern ones, and there's no way I'm going to make any money on it, but I did spend a lot of time on it. Now that Apple won't allow Flash on the iPad, I'm looking for a 100% JavaScript solution and was led to consider GWT. It looked promising, but one of the apps being "showcased" as a stellar example of what can be done with GWT has this disclaimer on their website (names {removed} to protect the potentially innocent): Your current web browser (Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5) is not officially supported by {company and product name were here}. If you experience any problems using this site please install either Microsoft Internet Explorer 6+ or Mozilla Firefox 3.5+ before contacting {product name was here} Support. What gives when GWT apps aren't "officially" supported on Chrome? What grade (A, B, C, D, F) would you give GWT for cross-browser support? For folks who don't get these kinds of letter grades, "A" is excellent, "F" is failure, and "C" is average. Thanks for your opinions.

    Read the article

  • What ever happened to APL?

    - by lkessler
    When I was at University 30 years ago, I used a programming language called APL. I believe the acronym stood for "A Programming Language". The language was interpretive and was especially useful for array and matrix operations, with powerful operators and library functions to help with that. Did you use APL? Is this language still in use anywhere? Is it still available, either commercially or open source? I remember the combinatorics assignment we had. It was complex. It took a week of work for people to program it in PL/1, and those programs ranged from 500 to 1000 lines long. I wrote it in APL in under an hour. I left it at 10 lines for readability, although I should have been a purist and worked another hour to get it into 1 line. The PL/1 programs took 1 or 2 minutes to run on the IBM mainframe and solve the problem. The computer charge was $20. My APL program took 2 hours to run and the charge was $1,500, which was paid for by our Computer Science Department's budget. That's when I realized that a week of my time is worth way more than saving some $'s in someone else's budget. I got an A+ in the course. P.S. Don't miss this presentation entitled: "APL one of the greatest programming languages ever"

    Read the article

  • Who are the most important people in open-source software? [closed]

    - by poseid
    I am reading a book by Malcolm Gladwell on the circumstances of successful careers. The book argues that Bill Gates, Steve Jobs, Bill Joy and other successful computer pioneers were born between 1950 and 1955, and had put in around 10,000 hours of practice before microcomputers became widely available in the 1970s, which is where their fairy-tale success stories begin. As we are in the age of web 2.0, with new forms of databases and pervasive access to information, who are, in your opinion, the most successful computer programmers or scientists of our times, when were they born, and which technologies did they have access to?

    Read the article

  • Are there old versions of Windows UX guidelines somewhere?

    - by Camilo Martin
    Since I've read the Windows User Experience Interaction Guidelines (there's a PDF download available), I've found it to be admirably self-deprecating, humbly pointing out their own horrible UI practices long scolded by Joel Spolsky. I'd like to know, however, what they had in mind while they made those mistakes. Is this (terrific) UX Guidelines document something new, or were there previous editions of it? If so, where can I find them? My prayers to Google yielded no leniency.

    Read the article

  • WordPerfect programmers refusing to use anything but assembler

    - by Totophil
    There is a version of events (popularised by Joel Spolsky) attributing the demise of WordPerfect to a refusal by its programmers to use anything but assembler, which led to the delay of the first WPwin release and, as a result, eventually to losing the all-important battle with Microsoft. There are a few references to programming work being done in assembler in the autobiographical book "Almost Perfect" by W. E. Pete Peterson, who used to have a major influence in running the corporation. But these references go back to the early '80s, when WordPerfect was trying to gain a significant market share by defeating WordStar, not the early nineties, when the battle with MS took place. I am looking for a second, independent source to confirm the assumption. Maybe someone who worked for WordPerfect Corporation at the time, who was close to the company, or who had a chance to see the source could clarify the issue. Your help is much appreciated, thanks! Please note that this question is not about any other theories or reasons behind WordPerfect's demise. I really just need to clarify whether they used assembler as the primary language for WPwin and (as a bonus really) whether there were discussions held within the corporation about assembler being the right choice. Concisely: Did WPCorp use assembler as the primary language for WPwin? Were discussions held at the time amongst WP Corp staff about assembler being the right choice (was it a management or programmers' decision)?

    Read the article

  • What are programming lost arts?

    - by pavpanchekha
    Have you ever programmed raw machine code (not for class)? Examined a hex dump with just a hex editor (or, heck, without)? Written your own software floating-point library? Division library? Written a non-school-assignment in Lisp or Forth? What sort of "lost arts" have been forgotten? And what reason (if any) would there be to resurrect them?

    Read the article

  • Any HTTP proxies with explicit, configurable support for request/response buffering and delayed connection establishment

    - by Carlos Carrasco
    When dealing with mobile clients it is very common to have multi-second delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5 ms. I am looking for an HTTP server, balancer or proxy server that supports the following:

    1. A request arrives at the proxy.
    2. The proxy starts buffering the request in RAM or on disk, including headers and POST/PUT bodies.
    3. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
    4. The proxy server stops buffering the request when either a size limit has been reached (say, 4 KB) or the request has been received completely, headers and body.
    5. Only now, with (part of) the request in memory, a connection is opened to the backend and the request is relayed.
    6. The backend sends back the response. Again the proxy server starts buffering it immediately (up to a more generous size, say 64 KB).
    7. Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests. The backend connection is immediately closed.
    8. The proxy sends back the response to the mobile client, as fast or as slow as it is capable of, without having a connection to the backend tying up resources.

    I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? And is there any of them that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model?

    (Side rant: I would be using nginx, but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh)
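
    Not part of the original question, but the workflow in points 1-8 is compact enough to sketch. The toy below (plain Python asyncio, made-up host/port constants, no error handling and only a naive end-of-headers check instead of real HTTP parsing) merely illustrates the ordering the question asks existing proxies to provide: buffer before connecting, and buffer-then-close on the backend side.

        import asyncio

        BACKEND = ("127.0.0.1", 8080)    # hypothetical backend app server
        REQ_LIMIT = 4 * 1024             # buffer up to 4 KB of the request
        RESP_LIMIT = 64 * 1024           # buffer up to 64 KB of the response

        async def handle_client(client_reader, client_writer):
            # Points 1-4: buffer the (possibly very slow) client request locally;
            # note that no backend connection exists yet.
            request = b""
            while len(request) < REQ_LIMIT:
                chunk = await client_reader.read(4096)
                if not chunk:
                    break
                request += chunk
                if b"\r\n\r\n" in request:   # naive "request complete" check
                    break

            # Point 5: only now open the backend connection and relay the request.
            backend_reader, backend_writer = await asyncio.open_connection(*BACKEND)
            backend_writer.write(request)
            await backend_writer.drain()

            # Points 6-7: slurp the response quickly, then free the backend at once.
            response = await backend_reader.read(RESP_LIMIT)
            backend_writer.close()
            await backend_writer.wait_closed()

            # Point 8: dribble the response out at whatever pace the client can take.
            client_writer.write(response)
            await client_writer.drain()
            client_writer.close()
            await client_writer.wait_closed()

        async def main():
            server = await asyncio.start_server(handle_client, "0.0.0.0", 8888)
            async with server:
                await server.serve_forever()

        if __name__ == "__main__":
            asyncio.run(main())

    A real proxy would also have to honour Content-Length and chunked bodies, keep-alive and pipelining, which is exactly why the question is about finding a server that already does this rather than writing one.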

    Read the article

  • Significant new inventions in computing since 1980

    - by Alan Kay
    This question arose from comments about different kinds of progress in computing over the last 50 years or so. I was asked by some of the other participants to raise it as a question to the whole forum. Basic idea here is not to bash the current state of things but to try to understand something about the progress of coming up with fundamental new ideas and principles. I claim that we need really new ideas in most areas of computing, and I would like to know of any important and powerful ones that have been done recently. If we can't really find them, then we should ask "Why?" and "What should we be doing?"

    Read the article

  • Who in the software world do you admire the most?

    - by David McGraw
    In an effort to spark some discussion and to find interesting people that I didn't know about, is there anybody around the software industry that you really admire? Perhaps admire is the wrong choice of word, but I'm sure there is somebody out there that has impacted you in a minor way. What did you learn from this individual that defines what you try to achieve today?

    Read the article

  • Login for webapp, needs to be available for support staff

    - by Christian W
    I know the title is a little off, but it's hard to explain the problem in a short sentence. I am the administrator of a legacy webapp that lets users create surveys and distribute them to a group of people. We have two kinds of "users":

    1. Authorized license holders, who do all setup themselves.
    2. Clients who just want to have a survey run, but still need a user (because the webapp has "User" as the top entity in a survey environment).

    Sometimes users in #1 want us to do the setup for them (which we offer to do). This means that we have to log in as them. This is also how we do support: we log in as them and then follow them along, guiding them. Which brings me to my dilemma. Currently our security is below par, but this makes it simple for us to do support. We do want to increase our security, and one thing I have been considering is just doing the normal hashing to the DB. However, we need to be able to log in as a customer, and if they change their password without telling us, and the password is hashed in the DB, we have no way of knowing it. So I was thinking of some kind of two-way encryption for the passwords. Either that or some kind of master password. Any suggestions? (The platform is classic ASP... I said it was legacy...)
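
    Not from the question, but the usual alternative to two-way encryption is worth sketching: keep only a salted one-way hash, and make "log in as customer" a separate impersonation step granted to support staff, so nobody ever needs to know the customer's password. The snippet is Python purely for illustration (the asker's platform is classic ASP), and every function and field name in it is made up.

        import hashlib
        import hmac
        import os

        def hash_password(password, salt=None):
            # One-way, salted hash: store salt + digest, never the password itself.
            salt = salt or os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return salt, digest

        def verify_password(password, salt, digest):
            candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return hmac.compare_digest(candidate, digest)

        def impersonate(admin_session, customer_user_id):
            # Support "login as customer" without knowing the password: an already
            # authenticated support user gets a session that acts as the customer,
            # with an audit trail of who is really behind it.
            assert admin_session.get("is_support_staff"), "only support staff may impersonate"
            return {
                "user_id": customer_user_id,
                "impersonated_by": admin_session["user_id"],
            }

    The point is that the support workflow becomes a property of the session rather than of the stored credential, so hashing the passwords no longer conflicts with being able to act on a customer's behalf.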

    Read the article

  • Why is the software world full of status codes?

    - by David V McKay
    Why did programmers ever start using status codes? I mean, I guess I could imagine this might be useful back in the days when a text string was an expensive resource. WAYYY back then. But even after we had megabytes of memory to work with, we continued to use them. What possible advantage could there be for obfuscating the meaning of an error message or status message behind a status code?

    Read the article

  • Why has anybody ever used COBOL?

    - by sarzl
    I know: you and I both hate COBOL. I took a look at a lot of code examples and it didn't take me long to see why everybody tries to avoid it. So I really have no idea: why was COBOL ever used? I mean: hey, there was Fortran before it, and Fortran looks like a jesus-language compared to COBOL. This isn't meant to be argumentative but historical, as I'm young and didn't even know about COBOL until 4 months ago.

    Read the article

  • Cobol: science and fiction

    - by user847
    There are a few threads about the relevance of the Cobol programming language on this forum, e.g. this thread links to a collection of them. What I am interested in here is a frequently repeated claim based on a study by Gartner from 1997: that there were around 200 billion lines of code in active use at that time! I would like to ask some questions to verify or falsify a couple of related points. My goal is to understand if this statement has any truth to it or if it is totally unrealistic. I apologize in advance for being a little verbose in presenting my line of thought and my own opinion on the things I am not sure about, but I think it might help to put things in context and thus highlight any wrong assumptions and conclusions I have made.

    Sometimes, the "200 billion lines" number is accompanied by the added claim that this corresponded to 80% of all programming code in any language in active use. Other times, the 80% merely refers to so-called "business code" (or some other vague phrase hinting that the reader is not to count mainstream software, embedded systems or anything else where Cobol is practically non-existent). In the following I assume that the code does not include double-counting of multiple installations of the same software (since that is cheating!). In particular in the time prior to the y2k problem, it was noted that a lot of Cobol code was already 20 to 30 years old. That would mean it was written in the late '60s and '70s. At that time, the market leader was IBM with the IBM/370 mainframe. IBM has put up a historical announcement on its website quoting prices and availability. According to the sheet, prices were about one million dollars for machines with up to half a megabyte of memory.

    Question 1: How many mainframes were actually sold? I have not found any numbers for those times; the latest numbers are for the year 2000, again by Gartner. :^( I would guess that the actual number is in the hundreds or the low thousands; if the market size was 50 billion in 2000 and the market has grown exponentially like any other technology, it might have been merely a few billion back in 1970. Since the IBM/370 was sold for twenty years, twenty times a few thousand will result in a couple of tens of thousands of machines (and that is pretty optimistic)!

    Question 2: How large were the programs in lines of code? I don't know how many bytes of machine code result from one line of source code on that architecture. But since the IBM/370 was a 32-bit machine, any address access must have used 4 bytes plus the instruction (2, maybe 3 bytes for that?). If you count in the operating system and the data for the program, how many lines of code would have fit into the main memory of half a megabyte?

    Question 3: Was there no standard software? Did every single machine sold run a unique hand-coded system without any standard software? Seriously, even if every machine was programmed from scratch without any reuse of legacy code (wait ... didn't that violate one of the claims we started from to begin with???) we might have O(50,000 l.o.c./machine) * O(20,000 machines) = O(1,000,000,000 l.o.c.). That is still far, far, far away from 200 billion! Am I missing something obvious here?

    Question 4: How many programmers did we need to write 200 billion lines of code? I am really not sure about this one, but if we take an average of 10 l.o.c. per day, we would need 55 million man-years to achieve this! In the time-frame of 20 to 30 years this would mean that there must have existed two to three million programmers constantly writing, testing, debugging and documenting code. That would be about as many programmers as we have in China today, wouldn't it?

    Question 5: What about the competition? So far, I have come up with two things here: 1) IBM had their own programming language, PL/I. Above I have assumed that the majority of code was written exclusively using Cobol. However, all other things being equal, I wonder if IBM marketing had really pushed their own development off the market in favor of Cobol on their machines. Was there really no relevant code base of PL/I? 2) Sometimes (also on this board in the thread quoted above) I come across the claim that the "200 billion lines of code" are simply invisible to anybody outside of "governments, banks ..." (and whatnot). Actually, the DoD had funded their own language in order to increase cost effectiveness and reduce the proliferation of programming languages. This led to their use of Ada. Would they really worry about having so many different programming languages if they had predominantly used Cobol? If there was any language running on "government and military" systems outside the perception of mainstream computing, wouldn't that language be Ada?

    I hope someone can point out any flaws in my assumptions and/or conclusions and shed some light on whether the above claim has any truth to it or not.
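
    The back-of-envelope arithmetic in Questions 3 and 4 is easy to reproduce; the small script below is not part of the post and uses the post's own guesses as inputs (including calendar days rather than working days, which is what the 55-million figure implies), simply confirming the numbers quoted above.

        # All inputs are the post's own rough estimates, not measured data.
        machines = 20_000              # "a couple of tens of thousands of machines"
        loc_per_machine = 50_000       # "O(50,000 l.o.c./machine)"
        print(machines * loc_per_machine)        # 1_000_000_000 -> the ~1 billion l.o.c. estimate

        total_loc = 200e9              # Gartner's claimed 200 billion lines
        loc_per_day = 10               # assumed average output per programmer
        man_years = total_loc / loc_per_day / 365
        print(round(man_years / 1e6, 1))         # ~54.8 -> the "55 million man-years"
        print(round(man_years / 25 / 1e6, 1))    # ~2.2 -> "two to three million programmers" over ~25 years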

    Read the article

  • is there a way to switch bash or zsh from emacs mode to vi mode with a keystroke

    - by Brandon
    I'd like to be able to switch temporarily from emacs mode to vi mode, since vi mode is sometimes better, but I'm usually half-way through typing something before I realize I want it. I don't want to switch permanently to vi mode, because I normally prefer emacs mode on the command line, mostly because it's what I'm used to, and over the years many of the keystrokes have become second nature. (As an editor I generally use emacs in viper mode, so that I can use both vi and emacs keystrokes, since I found myself accidentally using them in vi all the time and screwing things up, and because in some cases I find vi keystrokes more memorable and handy, and in other cases emacs.)

    Read the article

  • Why is RAISERROR misspelled? Or is it not?

    - by Jason
    Why isn't RAISERROR spelled RAISEERROR? Where is the second E? I could understand if it were some ancient keyword length constraint, but I wouldn't expect a nine-character limit. Is RAIS or RROR a technical word, such that "raise-error" is just a misreading? Are its (immediate) origins in a different language? I've searched Google but haven't found much on the subject.

    Read the article

  • (For what) Are Fortran, Cobol and Co. used today?

    - by lamas
    I'm a relatively young programmer and so I don't really know much about languages like Fortran or Cobol that have their origins in the beginning of modern informatics. I'm a bit confused because it seems like there are many people out there saying that these two languages are still very alive and being used all over the world whereas others say the opposite. In addition, it seems like there are only very few questions tagged Fortran or Cobol here on stackoverflow. Can someone "demystify" the situation for me? Who uses these senior languages these days and are they even used anymore? Do you have any experiences with one of the languages or do you know something about their latest developments?

    Read the article
