Search Results

Search found 21559 results on 863 pages for 'template meta programming'.

  • Would the world be a better place if there were only one programming language?

    - by Simon
    Well, perhaps not the world, but would it encourage more re-use and less replication of basic code (or at least an uplift in what is considered basic code), more time spent advancing the application science, a greater encouragement to share, and a more advanced base of understanding for new programmers, since the language could be taught ubiquitously and optimised patterns of teaching would have emerged? I think all of those things would make the programming world better and would probably have significant commercial benefit too. This is definitely not a religious debate about which language is best; it is predicated on the notion of some super-being having designed the perfect language to start with, which is improbable. But it strikes me that if, from the beginning, there had been only a single programming language, we might be further along in the evolution of the software industry and software science. And although it is now impossible, if you buy some or all of these assertions, is there an argument for standardising on a single language for the future so we can accelerate our collective progress, rather than all of us re-inventing some part of the same wheel and consigning our children to the same fate?

    Read the article

  • New or not so well-known paradigms, syntax features and behaviours of programming languages?

    - by George B
    I've designed some educational programming languages and interpreters for them, but my problem has always been that they end up "normal" and "boring", mostly similar to some existing language (ASM or BASIC). I find it really hard to come up with new ideas for syntax features, "neat things", and new or heavily modified programming paradigms. Coming up with genuinely good new ideas is hard, but in this case I'm only after fun or even useless ones. So I wondered if you could help me out with your creativity: what features, in terms of language syntax and built-in functions, and maybe even new paradigms, could I work into my language to keep it useless but more fun, enjoyable, interesting and/or different to program in?

    Read the article

  • What JavaScript templating engine do you recommend?

    - by flybywire
    Do you have experience with a JavaScript templating engine that is stable, easy to use and has good performance? I need to apply the same template many times to different data. I would prefer to download the template itself once (and have it cached) rather than processing the template on the server. Also, this way the template itself would be a static resource that is more easily cached on the server side too.

    Read the article

  • Why is C++ backward compatibility important / necessary?

    - by Giorgio
    As far as I understand, it is a well-established opinion within the C++ community that C is an obsolete language that was useful 20 years ago but cannot support many modern good programming practices, or even encourages bad practices; certain features that were typical of C++ (C with classes) during the nineties are also obsolete and considered bad practice in modern C++ (e.g., new and delete should be replaced by smart-pointer primitives). In view of this, I often wonder why backward compatibility with C and obsolete C++ features is still considered important: to my knowledge there is no 100% compatibility, but most of C and C++ is contained in C++11 as a subset. Of course, there is a lot of legacy code and libraries (possibly containing templates) that were written against a previous standard of the language and still need to be maintained or used in connection with new code. Nevertheless, maybe it would still be possible to drop obsolete C and C++ features (e.g. the mentioned new / delete) from a future C++ standard so that it would be impossible to use them in new code. In this way, old and dangerous programming practices would quickly be banned from new code, and modern, better programming practices would be enforced by the compiler. Legacy code could still be maintained using separate compilation (having C alongside C++ source files is already a common practice). Developers would have to choose between one compiler supporting the old-style C++ that was common during the nineties and a compiler supporting the modern C++? style (the question mark indicates a future, hypothetical revision); only mixing the two styles would be forbidden. Would this be a viable strategy for encouraging the adoption of modern C++ practices? Are there conceptual reasons or technical problems (e.g. compiling existing templates) that make such a change undesirable or even impossible? Has such a development been proposed in the C++ community? If there has been some extended discussion on the topic, is there any material online?

    Read the article

  • Do you think natively compiled languages have reached their EOL?

    - by Yuval A
    If we look at the major programming languages in use today, it is pretty noticeable that the vast majority of them are, in fact, interpreted. Looking at the largest piece of the pie, we have Java and C#, both enterprise-ready, heavy-duty, serious programming languages that are compiled to byte-code only to be interpreted by their respective VMs (the JVM and the CLR). If we look at scripting languages, we have Perl, Python, Ruby and Lua, which are all interpreted (either from source or from bytecode, and yes, it should be noted that those are absolutely not the same). Looking at compiled languages, we have C, which is nowadays used in embedded, low-level and real-time environments, and C++, which is still alive and kicking when you want to program as close to the hardware as you can but still have some nice abstractions to help with day-to-day tasks. Basically, there is no real runner-up compiled language on the horizon. Do you feel that languages which are natively compiled to executable binary code are a thing of the past, taken over by interpreted languages which are much more portable and compatible? Does C++ mark the end of an era? Why don't we see any new compiled languages anymore? I should clarify: I do not want this to turn into a "which language is better" discussion, because that is not the issue at hand; the languages I gave are only examples. Please focus on the question I raised, and if you disagree with my statement that compiled languages are less common these days, that is totally fine; I am more than happy to be proved mistaken.

    Read the article

  • Unit testing statically typed functional code

    - by back2dos
    I wanted to ask in which cases it makes sense to unit test statically typed functional code, as written in Haskell, Scala, OCaml, Nemerle, F# or haXe (the last is what I am really interested in, but I wanted to tap into the knowledge of the bigger communities). I ask this because, from my understanding: One aspect of unit tests is having the specs in runnable form. However, when employing a declarative style that directly maps the formalized specs to language semantics, is it even possible to express the specs in runnable form in a separate way that adds value? The more obvious aspect of unit tests is tracking down errors that cannot be revealed through static analysis, and type-safe functional code is a good tool for coding extremely close to what your static analyzer understands. Still, a simple mistake like using x instead of y (both being coordinates) cannot be covered; then again, such a mistake could also arise while writing the test code, so I am not sure whether it's worth the effort. Unit tests do introduce redundancy, which means that when requirements change, the code implementing them and the tests covering that code must both be changed. This overhead is roughly constant, so one could argue that it doesn't really matter; in languages like Ruby it really doesn't, compared to the benefits. But given how much of the ground unit tests are intended to cover is already handled by statically typed functional programming, it feels like a constant overhead one can simply remove without penalty. From this I'd deduce that unit tests are somewhat obsolete in this programming style. Of course such a claim can only lead to religious wars, so let me boil this down to a simple question: when you use such a programming style, to what extent do you use unit tests and why (what quality do you hope to gain for your code)? Or the other way round: do you have criteria by which you can qualify a unit of statically typed functional code as covered by the static analyzer and hence in no need of unit test coverage?
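    As an aside on the x/y example: in a language like Haskell, that particular slip can itself be pushed onto the type checker, which narrows what is left over for unit tests. A minimal sketch (all names are hypothetical) using newtype wrappers:

      -- Hypothetical sketch: distinct newtypes make an x/y mix-up a compile-time error.
      newtype X = X Double deriving (Show, Eq)
      newtype Y = Y Double deriving (Show, Eq)

      data Point = Point X Y deriving (Show, Eq)

      -- Shift a point horizontally; passing a Y where an X is expected would not type-check.
      shiftRight :: Double -> Point -> Point
      shiftRight dx (Point (X x) y) = Point (X (x + dx)) y

      main :: IO ()
      main = print (shiftRight 2.0 (Point (X 1.0) (Y 3.0)))
      -- prints: Point (X 3.0) (Y 3.0)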

    Read the article

  • I don't know C. And why should I learn it?

    - by Stephen
    My first programming language was PHP (gasp). After that I started working with JavaScript. I've recently done work in C#. I've never once looked at low- or mid-level languages like C. The general consensus in the programming-community-at-large is that "a programmer who hasn't learned something like C, frankly, just can't handle programming concepts like pointers, data types, passing values by reference, etc." I do not agree. I argue that: because high-level languages are easily accessible, more "non-programmers" dive in and make a mess; and in order to really get anything done in a high-level language, one needs to understand many of the same concepts that most proponents of "learn-low-level-first" evangelize about. Some people need to know C; those people have jobs that require them to write low- to mid-level code. I'm sure C is awesome, and I'm sure there are a few bad programmers who know C. My question is: why the bias? As a good, honest, hungry programmer, if I had to learn C (for some unforeseen reason), I would learn C. Considering the multitude of languages out there, shouldn't good programmers focus on learning what advances us? Shouldn't we learn what interests us? Should we not utilize our finite time moving forward? Why do some programmers disagree with this? I believe that striving for excellence in what you do is the fundamental trait separating good programmers from bad ones. Does anyone have any real-world examples of how something written in a high-level language (say Java, Pascal, PHP, or JavaScript) truly benefited from prior knowledge of C? Examples would be most appreciated. (Revised to better coincide with the six guidelines.)

    Read the article

  • Java: attribute order in .jsp getting inverted

    - by NoozNooz42
    Every single time I've read about meta tags, the attributes were in this order for the description: <meta name="description" content="..." /> First name, then content. It's also like that in the Google Webmaster documentation. Basically, it's like that everywhere. Now, in a .jsp (in XML notation) I've got the following: <meta name="description" content="${metadesc}"/> So it's name first, then content. Yet on the generated webpage, I get this: <meta content="...(200 chars or so here making it a very long line)..." name="description"/> Somehow the attributes have been inverted. Because the content follows the official W3C and Google recommendations, the content is a bit less than 200 characters long, which makes it a major pain to "visually verify" that the name attribute is correctly there (I have to scroll). Anyway... Why are these attributes not appearing in the order defined in the .jsp? Can I force them to appear in the same order I wrote them in my .jsp? I realize the resulting tag may be valid... But I can also imagine a lot of very creative ways to have valid tags that users would be very upset about. Does it make any sense to invert these attributes? EDIT: wow, just wow... If I invert the attributes in my .jsp (that is, write them in the "wrong" order), then they appear as I want them to in the generated web page. (Tomcat 6.0.26, btw)

    Read the article

  • How can I make sense of the word "Functor" from a semantic standpoint?

    - by guillaume31
    When facing new programming jargon, I first try to reason about it from a semantic and etymological standpoint when possible (that is, when it isn't an obscure acronym). For instance, you can get the beginning of a hint of what things like Polymorphism or even Monad are about with the help of a little Greek/Latin. At the very least, once you've learned the concept, the word itself appears to go along with it well. I guess that's part of why we name things: to make mental representations and associations more fluent. I found Functor to be a tougher nut to crack. Not so much the C++ meaning (an object that acts (-or) as a function (funct-)), but the various functional meanings (in ML, Haskell) definitely left me puzzled. From the (mathematics) Functor Wikipedia article, it seems the word was borrowed from linguistics. I think I get what a "function word" or "functor" means in that context: a word that "makes function" as opposed to a word that "makes sense". But I can't really relate that to the notion of Functor in category theory, let alone functional programming. I imagined a Functor to be something that creates functions, or behaves like a function, or is short for "functional constructor", but none of those seems to fit... How do experienced functional programmers reason about this? Do they just need any label to put in front of a concept and are fine with it? Generally speaking, isn't this partly why advanced functional programming is hard to grasp for mere mortals compared to, say, OO: it is very abstract in that you can't relate it to anything familiar? Note that I don't need a definition of Functor, only an explanation that would allow me to relate it to something more tangible, if there is any.
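    Not a definition, but perhaps a tangible anchor: in Haskell, a Functor is simply a type constructor (a "context" such as a list or Maybe) together with fmap, which lifts an ordinary function so it acts inside that context without changing the context's shape. A small sketch:

      -- The class as defined in the Prelude, shown here for reference:
      --   class Functor f where
      --     fmap :: (a -> b) -> f a -> f b

      main :: IO ()
      main = do
        print (fmap (+1) [1, 2, 3])             -- lists are Functors:  [2,3,4]
        print (fmap (+1) (Just 41))             -- Maybe is a Functor:  Just 42
        print (fmap (+1) Nothing :: Maybe Int)  -- the "shape" (Nothing) is preserved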

    Read the article

  • How to write constructors which might fail to properly instantiate an object

    - by whitman
    Sometimes you need to write a constructor that can fail. For instance, say I want to instantiate an object with a file path, something like obj = new Object("/home/user/foo_file") As long as the path points to an appropriate file, everything's fine. But if the string is not a valid path, things should break. But how? You could: 1. throw an exception; 2. return a null object (if your programming language allows constructors to return values); 3. return a valid object but with a flag indicating that its path wasn't set properly (ugh); 4. others? I assume that the "best practices" of various programming languages handle this differently. For instance, I think Objective-C prefers (2). But (2) would be impossible to implement in C++, where a constructor doesn't return a value at all. In your programming language of choice, can you show how you'd handle this problem and explain why?
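    One further option worth sketching, here in Haskell (names are hypothetical and the directory package is assumed): hide the raw constructor and expose a "smart constructor" whose return type makes failure explicit, so callers are forced to handle the invalid-path case.

      -- Hypothetical sketch: failure is expressed in the type, not via exceptions or flags.
      import System.Directory (doesFileExist)

      newtype FooFile = FooFile FilePath deriving (Show)

      -- The only public way to build a FooFile; the constructor itself stays hidden.
      mkFooFile :: FilePath -> IO (Either String FooFile)
      mkFooFile path = do
        ok <- doesFileExist path
        pure $ if ok
                 then Right (FooFile path)
                 else Left ("not a valid file: " ++ path)

      main :: IO ()
      main = do
        result <- mkFooFile "/home/user/foo_file"
        case result of
          Right obj -> print obj
          Left err  -> putStrLn err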

    Read the article

  • Misconceptions about purely functional languages?

    - by Giorgio
    I often encounter the following statements / arguments: (1) Pure functional programming languages do not allow side effects (and are therefore of little use in practice, because any useful program does have side effects, e.g. when it interacts with the external world). (2) Pure functional programming languages do not let you write a program that maintains state (which makes programming very awkward, because in many applications you do need state). I am not an expert in functional languages, but here is what I have understood about these topics so far. Regarding point 1, you can interact with the environment in purely functional languages, but you have to explicitly mark the code (functions) that introduces such effects (e.g. in Haskell by means of monadic types). Also, AFAIK, computing by side effects (destructively updating data) should be possible too (using monadic types?), but it is not the preferred way of working. Regarding point 2, AFAIK you can represent state by threading values through several computation steps (in Haskell, again, using monadic types), but I have no practical experience doing this and my understanding is rather vague. So, are the two statements above correct in any sense, or are they just misconceptions about purely functional languages? If they are misconceptions, how did they come about? Could you write a (possibly small) code snippet illustrating the Haskell idiomatic way to (1) implement side effects and (2) implement a computation with state?
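    As a rough illustration of both points (a minimal Haskell sketch, not a full treatment; Control.Monad.State is assumed to come from the mtl package): side effects live in IO and are visible in the type, and state can be threaded implicitly with the State monad without any mutation.

      import Control.Monad.State (State, get, put, evalState)

      -- (1) Side effects: interaction with the outside world lives in IO,
      --     and that fact is visible in the type signature.
      greet :: IO ()
      greet = do
        putStrLn "What is your name?"
        name <- getLine
        putStrLn ("Hello, " ++ name)

      -- (2) State: a counter threaded through a computation with the State monad;
      --     nothing is mutated, the state is passed along implicitly.
      labelAll :: [String] -> State Int [String]
      labelAll = mapM label
        where
          label s = do
            n <- get
            put (n + 1)
            pure (show n ++ ": " ++ s)

      main :: IO ()
      main = do
        greet
        print (evalState (labelAll ["foo", "bar", "baz"]) 1)
        -- prints: ["1: foo","2: bar","3: baz"]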

    Read the article

  • Can higher-order functions in FP be interpreted as some kind of dependency injection?

    - by Giorgio
    According to this article, in object-oriented programming / design, dependency injection involves a dependent consumer, a declaration of the component's dependencies defined as interface contracts, and an injector that creates instances of classes implementing a given dependency interface on request. Let us now consider a higher-order function in a functional programming language, e.g. the Haskell function filter :: (a -> Bool) -> [a] -> [a] from Data.List. This function transforms a list into another list and, in order to perform its job, it uses (consumes) an external predicate function that must be provided by its caller; e.g., the expression filter (\x -> (mod x 2) == 0) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] selects all even numbers from the input list. But isn't this construction very similar to the pattern illustrated above, where the filter function is the dependent consumer, the signature (a -> Bool) of the function argument is the interface contract, and the expression that uses the higher-order function is the injector that, in this particular case, injects the implementation (\x -> (mod x 2) == 0) of the contract? More generally, can one relate higher-order functions and their usage pattern in functional programming to the dependency injection pattern in object-oriented languages? Or, in the other direction, can dependency injection be compared to using some kind of higher-order function?
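    To make the analogy concrete, here is a small Haskell sketch (hypothetical names) in which the same "consumer" (filter) is handed two different "implementations" of the (a -> Bool) contract by its callers:

      -- filter is the dependent consumer, (a -> Bool) is the contract,
      -- and each call site "injects" a concrete predicate.
      evens :: [Int] -> [Int]
      evens = filter (\x -> x `mod` 2 == 0)   -- inject one predicate ...

      positives :: [Int] -> [Int]
      positives = filter (> 0)                -- ... or another, same consumer

      main :: IO ()
      main = do
        print (evens [1 .. 10])       -- [2,4,6,8,10]
        print (positives [-3 .. 3])   -- [1,2,3]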

    Read the article

  • What do you wish language designers paid attention to?

    - by Berin Loritsch
    The purpose of this question is not to assemble a laundry list of programming language features that you can't live without, or wish were in your main language of choice. The purpose of this question is to bring to light corners of language design most language designers might not think about. So, instead of thinking about language feature X, think a little more philosophically. One of my biases, and perhaps it might be controversial, is that the softer side of engineering (the whys and what-fors) is many times more important than the more concrete side. For example, Ruby was designed with a stated goal of improving developer happiness. While your opinions may be mixed on whether it delivered or not, the fact that this was a goal means that some of the choices in its design were influenced by that philosophy. Please do not post: Syntax flame wars (I couldn't care less whether you use whitespace [Python], keywords [Ruby], or curly braces [Java, C/C++, et al.] to denote program blocks; that's just an implementation detail). "Any language that doesn't have feature X doesn't deserve to exist" type comments; there is at least one reason for every programming language to exist, good or bad. Please do post: Philosophical ideas that language designers seem to miss. Technical concepts that seem to be poorly implemented more often than not (please provide an example of the pain it causes and, if you have any, ideas of how you would prefer it to work). Things you wish were in the platform's common library but seldom are. By the same token, things that usually are in a common library that you wish were not. Conceptual features, such as built-in test/assertion/contract/error-handling support, that you wish all programming languages would implement properly and define properly. My hope is that this will be a fun and stimulating topic.

    Read the article

  • Hobbyist transitioning to earn money on paid work?

    - by Chelonian
    I got into hobbyist Python programming some years ago on a whim, having never programmed before other than BASIC way back when, and little by little have cobbled together a (in my opinion) nice little desktop application that I might try to get out there in some fashion someday. It's roughly 15,000 logical lines of code and uses Python, wxPython, SQLite, and a number of other libraries; it works on Windows and Linux (maybe Mac, untested), and I've gotten some good feedback about the application's virtues from non-programmer friends. I've also done a small application for data collection in animal behavior experiments and an ad hoc tool to help generate a web page, and I've authored some tutorials. I consider my Python skills to be appreciably limited and my SQL skills to be very limited, but I'm not totally out to sea either (e.g. I did FizzBuzz in a few minutes, wrote a "Monty Hall Dilemma" simulator in some minutes, etc.). I also put a strong premium on quality user experience; that is, the look and feel matters much to me, and the software looks quite good, I feel. I know no other programming languages yet. I also know the basics of HTML/CSS (not considering them programming languages), have created an artist's web page (described by a friend as "incredibly slick"... it's really not, though), and have a scientific background. I'm curious: aside from directly selling my software, what's roughly possible, if anything, in terms of earning side money on gigs, or actually getting hired at some level in the software industry, for someone with this general skill set?

    Read the article

  • Do we ethically have the right to use the MAC Address for verification purposes?

    - by Matt Ridge
    I am writing a program, or rather starting at the very beginning of it, and I am thinking about purchase verification systems as a final step. I will be catering to Macs, PCs, and possibly Linux when all is said and done. I will also be programming this for smartphones, using C++ and Objective-C. (I am writing a blueprint before going head-first into it.) That being said, I am not asking for help on implementing it yet; what I'm looking for is a realistic sense of what could be considered a viable and ethical purchase verification system. Apple, through the Apple Store, and some other stores out there have their own "you bought it" check. I am looking to use a three-pronged verification system: (1) email/password; (2) a 16- to 32-character serial number using alphanumeric characters and symbols, with upper- and lowercase variants; (3) the MAC address. The first two seem fine to me, but I have to ask from an ethical standpoint: is using the MAC address to lock the software to specific hardware unethical, or is it smart? I understand that if the Ethernet card (when it is not part of the logic board) or the logic board is replaced, the MAC address changes, so the software would have to be re-verified. But I have to ask, with how everything is today: is it ethical to use the MAC address as a validation key or not? Should I be up-front about this kind of verification system, or should I keep it hidden as a secret? Yes, I know hackers and others will find ways of figuring out what I am doing, but in reality this is why I am asking. I know no verification is foolproof, but making it harder to break is something I've always been interested in, and learning how to program is bringing up these questions, because I don't want to assume one thing and find out it's not really accepted in the programming world as something "you shouldn't do"... Thanks in advance. I know this is my first programming question, but I am just learning how to program, and I just want to make sure I'm not breaking some ethical programmer credo.

    Read the article

  • How often do you review fundamentals?

    - by mlnyc
    So I've been out of school for a year and a half now. In school, of course, we covered all the fundamentals: OS, databases, programming languages (i.e. syntax, binding rules, exception handling, recursion, etc.), and fundamental algorithms; the rest were more in-depth topics on things like NLP, data mining, etc. Now, a year ago, if you had asked me to write a quicksort, reverse a singly-linked list, or analyze the time complexity of some 'naive' algorithm vs. its dynamic-programming counterpart, I would have been able to give you a decent and hopefully satisfying answer. But if you had asked me more real-world questions I might have been stumped (things like how you would handle logging for an application, the security differences between GET and POST, the differences between SQL Server and Oracle SQL, or anything I list on my resume as currently working with [jQuery questions, ColdFusion questions, ...], etc.). Now, I feel things are the opposite. I haven't written my own sort since graduating, and I don't really have to worry much about theoretical things that do not naturally fall into the problems I am trying to solve. For example, I might give you a great SQL solution using an analytical function that would previously have stumped me, or write a cool web application using Angular or something, but ask me to write an algorithm for insertAfter(Element* elem) and I might not be able to do it in a reasonable time frame. I guess my question to the experienced programmers is: how do you balance the need to learn and experiment with new technologies (fun!), work on personal projects (also fun!), work on and solve real-world problems in a timeboxed environment (where I might reach for a library that does what I want rather than re-invent the wheel, so I can focus on the problem I'm trying to solve; work, basically), and refresh old theoretical material which is still valid for interviews and such (which can be a drag)? Do you review older material (such as famous algorithms, dynamic programming, Big-O analysis, locking implementations) regularly, or just when you need it? How much time do you dedicate to each in your 'deliberate practice', and do you have a to-do list of topics that you want to work on?

    Read the article

  • What kinds of languages would be most useful for this kind of webapp?

    - by Caedar
    I've had some experience with programming in the past (2-3 years of C++ self-teaching), so I'm no stranger to the programming process, but there are so many languages out there that I'm lost when thinking about this project idea that's been floating around my head: I would like to create a webapp for helping somebody figure out what kinds of productivity tools would suit them. The first part of the app would basically be a survey with a variety of questions that would help weed out tools that wouldn't be useful for them (a slider bar between minimalist and maximizer, a slider bar between all-free apps and no cost limit, checkboxes for which platforms are required, etc.). While the person is filling out the survey, they would see a web of applications, webapps, and other tools forming on the screen, with links showing the relationships the programs have with each other (syncing supported, good combinations of apps, etc.), along with a list of applications below sorted by general use (note-taking, document organization, storage, etc.). I would imagine that each program entered into the database would have a certain set of characteristics, i.e. price, user friendliness, platforms supported, general uses, etc., and the survey would be designed to correlate with those elements and remove programs that don't match the criteria. The difficult part of this entire process would be getting the web of applications to arrange itself and render properly. Now that I've finished mind-dumping, on to my question: what kinds/combinations of programming languages would you imagine being useful for this kind of project, and why? I learn best by setting up a project like this one for myself and tinkering with the languages, so I don't mind if the end product is out of reach of my current skill level. I'd just like some guidance so I don't fumble in the dark for too long.

    Read the article

  • If you had to go back and re-learn your skill set, how would you do it?

    - by vorbb
    My younger brother is looking to start programming. He's 14 and technically inclined, but has no real programming experience. He's looking to me for guidance, and I don't feel as if my experience is enough, so I figured I'd ask here. He's more interested in web programming, but also has an interest in desktop/mobile/server applications. What would be a good learning path for him to take? I'm going to buy him a bunch of books for Christmas to get him started; the question is, what should he learn, and in which order? The way I see it, he needs to learn theory and code. I'd like to start him off with Python, Ruby or PHP. If he wants to get into web, he's also going to need to learn HTML, CSS, JavaScript, etc. Out of those three domains (languages, theory, markup/etc.), what do you think is the best order to learn them in? Also, am I missing anything? Thanks!

    Read the article

  • Updating Debian kernel

    - by Devator
    I'm trying to update my Debian machine to 2.6.32-46 (which is the new stable). However, after doing apt-get update, my apt-cache search linux-image shows me: linux-headers-2.6.32-5-486 - Header files for Linux 2.6.32-5-486 linux-headers-2.6.32-5-686-bigmem - Header files for Linux 2.6.32-5-686-bigmem linux-headers-2.6.32-5-686 - Header files for Linux 2.6.32-5-686 linux-headers-2.6.32-5-amd64 - Header files for Linux 2.6.32-5-amd64 linux-headers-2.6.32-5-openvz-686 - Header files for Linux 2.6.32-5-openvz-686 linux-headers-2.6.32-5-vserver-686-bigmem - Header files for Linux 2.6.32-5-vserver-686-bigmem linux-headers-2.6.32-5-vserver-686 - Header files for Linux 2.6.32-5-vserver-686 linux-headers-2.6.32-5-xen-686 - Header files for Linux 2.6.32-5-xen-686 linux-image-2.6.32-5-486 - Linux 2.6.32 for old PCs linux-image-2.6.32-5-686-bigmem-dbg - Debugging infos for Linux 2.6.32-5-686-bigmem linux-image-2.6.32-5-686-bigmem - Linux 2.6.32 for PCs with 4GB+ RAM linux-image-2.6.32-5-686 - Linux 2.6.32 for modern PCs linux-image-2.6.32-5-amd64 - Linux 2.6.32 for 64-bit PCs linux-image-2.6.32-5-openvz-686-dbg - Debugging infos for Linux 2.6.32-5-openvz-686 linux-image-2.6.32-5-openvz-686 - Linux 2.6.32 for modern PCs, OpenVZ support linux-image-2.6.32-5-vserver-686-bigmem-dbg - Debugging infos for Linux 2.6.32-5-vserver-686-bigmem linux-image-2.6.32-5-vserver-686-bigmem - Linux 2.6.32 for PCs with 4GB+ RAM, Linux-VServer support linux-image-2.6.32-5-vserver-686 - Linux 2.6.32 for modern PCs, Linux-VServer support linux-image-2.6.32-5-xen-686-dbg - Debugging infos for Linux 2.6.32-5-xen-686 linux-image-2.6.32-5-xen-686 - Linux 2.6.32 for modern PCs, Xen dom0 support linux-image-2.6-486 - Linux 2.6 for old PCs (meta-package) linux-image-2.6-686-bigmem - Linux 2.6 for PCs with 4GB+ RAM (meta-package) linux-image-2.6-686 - Linux 2.6 for modern PCs (meta-package) linux-image-2.6-amd64 - Linux 2.6 for 64-bit PCs (meta-package) linux-image-2.6-openvz-686 - Linux 2.6 for modern PCs (meta-package), OpenVZ support linux-image-2.6-vserver-686-bigmem - Linux 2.6 for PCs with 4GB+ RAM (meta-package), Linux-VServer support linux-image-2.6-vserver-686 - Linux 2.6 for modern PCs (meta-package), Linux-VServer support linux-image-2.6-xen-686 - Linux 2.6 for modern PCs (meta-package), Xen dom0 support linux-image-486 - Linux for old PCs (meta-package) linux-image-686-bigmem - Linux for PCs with 4GB+ RAM (meta-package) linux-image-686 - Linux for modern PCs (meta-package) linux-image-amd64 - Linux for 64-bit PCs (meta-package) linux-image-openvz-686 - Linux for modern PCs (meta-package), OpenVZ support linux-image-vserver-686-bigmem - Linux for PCs with 4GB+ RAM (meta-package), Linux-VServer support linux-image-vserver-686 - Linux for modern PCs (meta-package), Linux-VServer support linux-image-xen-686 - Linux for modern PCs (meta-package), Xen dom0 support So, 2.6.32-46 doesn't seem to be found. How can I update to this kernel? My sources.list: ###### Debian Main Repos deb http://ftp.nl.debian.org/debian/ squeeze main contrib deb-src http://ftp.nl.debian.org/debian/ squeeze main contrib ###### Debian Update Repos deb http://security.debian.org/ squeeze/updates main contrib deb http://ftp.nl.debian.org/debian/ squeeze-proposed-updates main contrib deb-src http://security.debian.org/ squeeze/updates main contrib deb-src http://ftp.nl.debian.org/debian/ squeeze-proposed-updates main contrib

    Read the article

  • Programming doesn't have to be Magic

    - by Wes McClure
    In the show LOST, the Swan Station had a button that "had to be pushed" every 100 minutes to avoid disaster. Several characters in the show took it upon themselves to have faith and religiously push the button, resetting the clock and averting the unknown "disaster". There are striking similarities between this story and the code we write every day. Here are some common ones that I encounter: "I don't know what it does, but the application doesn't work without it." "I added that code because I saw it in other similar places; I didn't understand it, but thought it was necessary" (for consistency, or to make things "work"). "An error message recommended it." "I copied that code" (and didn't look at what it was doing). "It was suggested in a forum online and it fixed my problem, so I left it." In all of these cases we haven't done our due diligence to understand what the code we are writing is actually doing. In the rush to get things done, it seems like we're willing to push any button (add any line of code) just to get our desired result and move on. All of the above explanations are common things we encounter, and are valid ways to work through a problem, but when we find a solution to a task we are working on (whether a bug or a feature), we should take a moment to reflect on what we don't understand. Remove what isn't necessary; comprehend and simplify what is. Why is it detrimental to commit code we don't understand? It perpetuates unnecessary code: if you copy code that isn't necessary, someone else is more likely to do so, especially peers. It perpetuates tech debt: adding unnecessary code leads to extra code that must be understood, maintained and eventually cleaned up in longer-lived projects, and tech debt begets tech debt as other developers copy or use this code as a guideline in similar situations. It increases maintenance: how do we know the code is simplified if we don't understand it? It perpetuates a lack of ownership: it makes it seem OK to commit anything so long as it "gets the job done". And it perpetuates the notion that programming is magic: if we don't take the time to understand every line of code we add, then we are contributing to the notion that it is simply enough to make the code work, regardless of how. TL;DR: Don't commit code that you don't understand; take the time to understand it, simplify it, and then commit it!

    Read the article
