Search Results

Search found 177198 results on 7088 pages for 'not programming'.

  • Are some data structures more suitable for functional programming than others?

    - by Rob Lachlan
    In Real World Haskell, there is a section titled "Life without arrays or hash tables" where the authors suggest that lists and trees are preferred in functional programming, whereas an array or a hash table might be used instead in an imperative program. This makes sense, since it's much easier to reuse part of an (immutable) list or tree when creating a new one than to do so with an array. So my questions are: Are there really significantly different usage patterns for data structures between functional and imperative programming? If so, is this a problem? What if you really do need a hash table for some application? Do you simply swallow the extra expense incurred for modifications?
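
    For illustration, a minimal sketch (not from the question) of the reuse the authors describe, written as a persistent singly linked list in C++: prepending allocates a single node and shares the entire existing tail, whereas the array equivalent would have to copy every element. Node and cons are made-up names.

        #include <memory>

        // A persistent singly linked list: nodes are immutable, so tails
        // can be shared between lists instead of copied.
        template <typename T>
        struct Node {
            T head;
            std::shared_ptr<const Node> tail;
        };

        template <typename T>
        std::shared_ptr<const Node<T>> cons(T value,
                                            std::shared_ptr<const Node<T>> tail)
        {
            return std::make_shared<const Node<T>>(
                Node<T>{std::move(value), std::move(tail)});
        }

        int main()
        {
            auto xs = cons(2, cons(3, {}));  // the list [2, 3]
            auto ys = cons(1, xs);           // [1, 2, 3] - shares all of xs
            // xs is still valid and unchanged: an O(1) "modification".
        }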

    Read the article

  • How do you find time for improving your programming skills?

    - by Snehal
    I'm a Java/J2EE programmer working in India. I'm very passionate about programming and I constantly strive to hone my programming skills by reading blogs, solving Project Euler problems, learning new technologies, developing small apps, etc. But I find it very difficult to manage my time. Working 12 hours a day at the office leaves me stressed out, and I spend my weekends with my family. So I hardly have 5-6 hours per week to actually work on something of my own interest that will help me improve. How do you manage time so that you find time to improve your current standing? EDIT: The 12 hours include 1 hour of travel and 1 hour of breaks (lunch/coffee). Effectively I work 10 hours per day at the office, which is mandated by my organization. -Snehal

    Read the article

  • What features are important in a programming language for young beginners?

    - by NoMoreZealots
    I was talking with some of the mentors in a local robotics competition for 7th- and 8th-grade kids. The robot was using PBASIC and the Parallax BASIC Stamp. One of the major issues was that this was a short-term project that required building the robot, teaching the kids to program in PBASIC, and having them program the robot - all in only 2 hours or so a week over a couple of months. PBASIC is kinda nice in that it has built-in features to do everything, but information overload is possible because of this. My thought is that simplicity is key. When you have kids struggling to grasp if X>10 then <DOSOMETHING>, there is not much point in throwing "proper" object-oriented programming at them. What are the essentials needed to foster an interest in programming?

    Read the article

  • What's the best way to do literate programming in Python on Windows?

    - by JasonFruit
    I've been playing with various ways of doing literate programming in Python. I like noweb, but I have two main problems with it: first, it is hard to build on Windows, where I spend about half my development time; and second, it requires me to indent each chunk of code as it will be in the final program - which I don't necessarily know when I write it. I don't want to use Leo, because I'm very attached to Emacs. Is there a good literate programming tool that: runs on Windows; allows me to set the indentation of the chunks when they're used, not when they're written; and still lets me work in Emacs? Thanks! Correction: noweb does allow me to indent later - I misread the paper I found on it: "By default, notangle preserves whitespace and maintains indentation when expanding chunks. It can therefore be used with languages like Miranda and Haskell, in which indentation is significant." That leaves me with only the "Runs on Windows" problem.
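
    For reference, a minimal noweb sketch (file and chunk names are made up) of the behaviour described in that correction: because the reference <<read and print one line>> appears indented inside main, notangle indents every line of the chunk's expansion to match.

        <<echo.cpp>>=
        #include <iostream>
        #include <string>
        int main()
        {
            <<read and print one line>>
            return 0;
        }
        @

        <<read and print one line>>=
        std::string line;
        std::getline(std::cin, line);
        std::cout << line << '\n';
        @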

    Read the article

  • What are the best uses for each programming language?

    - by VirtuosiMedia
    I come from a web developer background, so I'm fairly familiar with PHP and JavaScript, but I'd eventually like to branch out into other languages. At this point, I don't have a particular direction or platform that I'm leaning toward as far as learning a new language or what I would use it for, but I would like to learn a little bit more about programming languages in general and what each one is used for. I've often heard (and I agree) that you should use the right tool for the job, so what jobs is each programming language best suited for? Edit: If you've worked with some of the newer or more obscure languages, please share for those as well.

    Read the article

  • What is a good programming language for testers who are not great programmers?

    - by Brian T Hannan
    We would like to create some simple automated tests that will be created and maintained by testers. Right now we have a tester who can code in any language, but in the future we might want any tester with a limited knowledge of programming to be able to add or modify the tests. What is a good programming language for testers who are not great programmers, or not programmers at all? Someone suggested Lua, but I looked into Lua and it might be more complicated than another language would be. Preferably, the language will be interpreted and not compiled. Let me know what you think.

    Read the article

  • Would the world be a better place if there were only one programming language?

    - by Simon
    Well, perhaps not the world, but would it encourage more re-use, less replication of basic code (or at least an uplift in what is considered basic code), more time advancing the application science, a greater encouragement to share, and a more advanced base of understanding for new programmers, since the language could be taught ubiquitously and patterns of teaching optimised for students' learning would have emerged? I think all of those things would make the programming world better and would probably have significant commercial benefit too. This is definitely not a religious debate about which language is best, and it is predicated on the notion of some super-being having designed the perfect language to start with, which is improbable; but it strikes me that if, from the beginning, there had been only a single programming language, we might be further along in terms of the evolution of the software industry and software science. And although it is now impossible, if you buy some or all of these assertions, is there an argument for standardising on a single language for the future so we can accelerate our collective progress, rather than all of us re-inventing some part of the same wheel and consigning our children to the same fate?

    Read the article

  • recv receiving not whole data sometime

    - by milo
    Hi all, I have the following issue. Here is the chunk of code:

        #include <string>
        #include <strings.h>     // bzero
        #include <sys/socket.h>  // recv

        void get_all_buf(int sock, std::string &inStr)
        {
            int n = 1;
            char temp[1024 * 1024];
            bzero(temp, sizeof(temp));
            n = recv(sock, temp, sizeof(temp), 0);
            inStr = temp;
        }

    But sometimes recv returns only part of the data, not all of it (the received length is always less than sizeof(temp)). The write side always sends the whole data (I confirmed this with a sniffer). What's the matter? Thanks. P.S. I know good manners suggest checking n (if (n < 0) perror("error while receiving data")), but that doesn't matter now - it's not the reason for my problem. P.S.2 I forgot to mention - it's a blocking socket.
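
    On a stream (TCP) socket this behaviour is expected: recv may return fewer bytes than were sent, because TCP is a byte stream with no message boundaries, and a single send can arrive split across several recv calls. Below is a minimal sketch of the usual fix, a loop that keeps reading until the peer closes the connection (recv_all is a made-up name; this assumes the sender closes its end when done):

        #include <string>
        #include <sys/types.h>
        #include <sys/socket.h>

        // Keep calling recv until the peer performs an orderly shutdown
        // (recv returns 0). Returns false on a socket error.
        bool recv_all(int sock, std::string &out)
        {
            out.clear();
            char buf[4096];
            for (;;) {
                ssize_t n = recv(sock, buf, sizeof(buf), 0);
                if (n == 0)
                    return true;   // peer closed the connection: done
                if (n < 0)
                    return false;  // error (a robust version would retry on EINTR)
                out.append(buf, static_cast<size_t>(n));
            }
        }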

    Read the article

  • New or not so well-known paradigms, syntax features and behaviours of programming languages?

    - by George B
    I've designed some educational programming languages and interpreters for them, but my problem has always been that they ended up "normal" and "boring", mostly similar to some kind of existing language (ASM and BASIC). I find it really hard to come up with new ideas for syntax features, "neat things" and new or heavily modified programming paradigms. I always thought that the hard part was coming up with good new things, rather than fun/useless new things, in a case like this. I wondered if you could help me out with your creativity: what features, in terms of language syntax and built-in functions, as well as maybe even new paradigms, can I work into my language to keep it useless but more fun, enjoyable, interesting and/or different to program in?

    Read the article

  • Why are 20% of keywords still provided when Google is using HTTPS across the board?

    - by Rajesh Magar
    Most of the searches that appear in my analytics are "not provided" because Google has encrypted all their searches. However, if all search results are now encrypted with the HTTPS protocol, then how is Google Analytics still able to track some (20%) of the organic keyword details? There are still some keywords appearing in my organic keywords section. So how does Google Analytics do this tracking? Does it bypass the HTTPS restrictions for the referrer?

    Read the article

  • Why is C++ backward compatibility important / necessary?

    - by Giorgio
    As far as I understand, it is a well-established opinion within the C++ community that C is an obsolete language that was useful 20 years ago but cannot support many modern good programming practices, or even encourages bad practices; certain features that were typical of C++ (C with classes) during the nineties are also obsolete and considered bad practice in modern C++ (e.g., new and delete should be replaced by smart pointer primitives). In view of this, I often wonder why backward compatibility with C and obsolete C++ features is still considered important: to my knowledge there is no 100% compatibility, but most of C and C++ is contained in C++11 as a subset. Of course, there is a lot of legacy code and libraries (possibly containing templates) that are written using a previous standard of the language and which still need to be maintained or used in connection with new code. Nevertheless, maybe it would still be possible to drop obsolete C and C++ features (e.g. the mentioned new / delete) from a future C++ standard so that it is impossible to use them in new code. In this way, old and dangerous programming practices would be quickly banned from new code, and modern, better programming practices would be enforced by the compiler. Legacy code could still be maintained using separate compilation (having C alongside C++ source files is already a common practice). Developers would have to choose between one compiler supporting the old-style C++ that was common during the nineties and a compiler supporting the modern C++? style (the question mark indicates a future, hypothetical revision). Only mixing the two styles would be forbidden. Would this be a viable strategy for encouraging the adoption of modern C++ practices? Are there conceptual reasons or technical problems (e.g. compiling existing templates) that make such a change undesirable or even impossible? Has such a development been proposed in the C++ community? If there has been some extended discussion on the topic, is there any material online?
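
    To make the new/delete point concrete, here is a minimal sketch (not from the question; Widget is a made-up name) of the manual style versus the modern smart-pointer style, where ownership lives in the type and the object is released automatically:

        #include <memory>

        struct Widget {
            explicit Widget(int id) : id(id) {}
            int id;
        };

        // Nineties style: the caller must remember to delete, and an
        // exception thrown between new and delete leaks the object.
        Widget *make_widget_old(int id) { return new Widget(id); }

        // Modern style: std::unique_ptr destroys the Widget automatically
        // when it goes out of scope; no matching delete exists anywhere.
        std::unique_ptr<Widget> make_widget(int id)
        {
            return std::make_unique<Widget>(id);
        }

        int main()
        {
            auto w = make_widget(42);
        }   // w's Widget is destroyed here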

    Read the article

  • Do you think natively compiled languages have reached their EOL?

    - by Yuval A
    If we look at the major programming languages in use today it is pretty noticeable that the vast majority of them are, in fact, interpreted. Looking at the largest piece of the pie we have Java and C# which are both enterprise-ready, heavy-duty, serious programming languages which are basically compiled to byte-code only to be interpreted by their respective VMs (the JVM and the CLR). If we look at scripting languages, we have Perl, Python, Ruby and Lua which are all interpreted (either from code or from bytecode - and yes, it should be noted that they are absolutely not the same). Looking at compiled languages we have C which is nowadays used in embedded and low-level, real-time environments, and C++ which is still alive and kicking, when you want to get down to serious programming as close to the hardware as you can, but still have some nice abstractions to help you with day to day tasks. Basically, there is no real runner-up compiled language in the distance. Do you feel that languages which are natively compiled to executable, binary code are a thing of the past, taken over by interpreted languages which are much more portable and compatible? Does C++ mark an end of an era? Why don't we see any new compiled languages anymore? I think I should clarify: I do not want this to turn into a "which language is better" discussion, because that is not the issue at hand. The languages I gave as example are only examples. Please focus on the question I raised, and if you disagree with my statement that compiled languages are less frequent these days, that is totally fine, I am more than happy to be proved mistaken.

    Read the article

  • Unit testing statically typed functional code

    - by back2dos
    I wanted to ask you people in which cases it makes sense to unit test statically typed functional code, as written in Haskell, Scala, OCaml, Nemerle, F# or haXe (the last is what I am really interested in, but I wanted to tap into the knowledge of the bigger communities). I ask this because, from my understanding: One aspect of unit tests is to have the specs in runnable form. However, when employing a declarative style that directly maps the formalized specs to language semantics, is it even possible to express the specs in runnable form in a separate way that adds value? The more obvious aspect of unit tests is to track down errors that cannot be revealed through static analysis, and type-safe functional code is a good tool for coding extremely close to what your static analyzer understands. However, a simple mistake like using x instead of y (both being coordinates) in your code cannot be caught this way. Then again, such a mistake could also arise while writing the test code, so I am not sure whether it's worth the effort. Unit tests do introduce redundancy, which means that when requirements change, the code implementing them and the tests covering this code must both be changed. This overhead is of course roughly constant, so one could argue that it doesn't really matter. In fact, in languages like Ruby it really doesn't, compared to the benefits; but given how much of the ground unit tests are intended for is covered by statically typed functional programming, it feels like a constant overhead one can simply reduce without penalty. From this I'd deduce that unit tests are somewhat obsolete in this programming style. Of course such a claim can only lead to religious wars, so let me boil this down to a simple question: when you use such a programming style, to what extent do you use unit tests, and why (what quality do you hope to gain for your code)? Or the other way round: do you have criteria by which you can qualify a unit of statically typed functional code as covered by the static analyzer, and hence in no need of unit test coverage?
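
    The "x instead of y" point can be made concrete. A minimal sketch (made-up names, not from the question): both coordinates have the same type, so the buggy function below type-checks cleanly, and only a test (or stronger types for the two axes) exposes the mistake:

        #include <cassert>
        #include <cmath>

        struct Point { double x, y; };

        // Bug: the second term uses a.x where it should use a.y. Both
        // fields are double, so the static analyzer is perfectly happy.
        double distance(Point a, Point b)
        {
            return std::sqrt((a.x - b.x) * (a.x - b.x) +
                             (a.x - b.y) * (a.x - b.y));  // should be a.y - b.y
        }

        int main()
        {
            // dx = 3, dy = 4, so the distance should be 5. This assert
            // fires, catching the mistake the type system cannot see.
            assert(std::abs(distance({1, 2}, {4, 6}) - 5.0) < 1e-9);
        }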

    Read the article

  • I don't know C. And why should I learn it?

    - by Stephen
    My first programming language was PHP (gasp). After that I started working with JavaScript. I've recently done work in C#. I've never once looked at low- or mid-level languages like C. The general consensus in the programming-community-at-large is that "a programmer who hasn't learned something like C, frankly, just can't handle programming concepts like pointers, data types, passing values by reference, etc." I do not agree. I argue that: because high-level languages are easily accessible, more "non-programmers" dive in and make a mess; and in order to really get anything done in a high-level language, one needs to understand the same concepts that most proponents of "learn-low-level-first" evangelize about. Some people need to know C; those people have jobs that require them to write low- to mid-level code. I'm sure C is awesome. I'm sure there are a few bad programmers who know C. My question is, why the bias? As a good, honest, hungry programmer, if I had to learn C (for some unforeseen reason), I would learn C. Considering the multitude of languages out there, shouldn't good programmers focus on learning what advances us? Shouldn't we learn what interests us? Should we not utilize our finite time moving forward? Why do some programmers disagree with this? I believe that striving for excellence in what you do is the fundamental deterministic trait that separates good programmers from bad ones. Does anyone have any real-world examples of how something written in a high-level language - say Java, Pascal, PHP, or JavaScript - truly benefited from a prior knowledge of C? Examples would be most appreciated. (Revised to better coincide with the six guidelines.)

    Read the article

  • How can I make sense of the word "Functor" from a semantic standpoint?

    - by guillaume31
    When facing new programming jargon, I first try to reason about it from a semantic and etymological standpoint when possible (that is, when the words aren't obscure acronyms). For instance, you can get the beginning of a hint of what things like Polymorphism or even Monad are about with the help of a little Greek/Latin. At the very least, once you've learned the concept, the word itself appears to go along with it well. I guess that's part of why we give things names: to make mental representations and associations more fluent. I found Functor to be a tougher nut to crack. Not so much the C++ meaning - an object that acts (-or) as a function (funct-) - but the various functional meanings (in ML, Haskell) definitely left me puzzled. From the (mathematics) Functor Wikipedia article, it seems the word was borrowed from linguistics. I think I get what a "function word" or "functor" means in that context: a word that "makes function" as opposed to a word that "makes sense". But I can't really relate that to the notion of Functor in category theory, let alone functional programming. I imagined a Functor to be something that creates functions, or behaves like a function, or is short for "functional constructor", but none of those seems to fit... How do experienced functional programmers reason about this? Do they just need any label to put in front of a concept and are fine with it? Generally speaking, isn't this partly why advanced functional programming is hard to grasp for mere mortals compared to, say, OO - so abstract that you can't relate it to anything familiar? Note that I don't need a definition of Functor, only an explanation that would allow me to relate it to something more tangible, if there is any.
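
    One tangible reading, offered as a sketch rather than a definition: in functional programming, a functor is simply "a shape you can map over" - the structure (the length of a list, the presence or absence of a value) is preserved while the contents are transformed. The hypothetical C++ fmap overloads below (made-up names) show the same mapping idea applied to two different shapes:

        #include <optional>
        #include <vector>

        // Mapping over a vector: transform every element, keep the shape
        // (the length stays the same).
        template <typename T, typename F>
        auto fmap(const std::vector<T> &xs, F f) -> std::vector<decltype(f(xs[0]))>
        {
            std::vector<decltype(f(xs[0]))> out;
            out.reserve(xs.size());
            for (const auto &x : xs)
                out.push_back(f(x));
            return out;
        }

        // Mapping over an optional: transform the value if present, keep
        // the shape (present stays present, absent stays absent).
        template <typename T, typename F>
        auto fmap(const std::optional<T> &x, F f) -> std::optional<decltype(f(*x))>
        {
            if (x)
                return f(*x);
            return std::nullopt;
        }

        int main()
        {
            auto doubled = fmap(std::vector<int>{1, 2, 3}, [](int n) { return n * 2; });
            auto still42 = fmap(std::optional<int>{21},    [](int n) { return n * 2; });
        }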

    Read the article

  • How views are changing in future versions of SQL

    - by Rob Farley
    April is here, and this weekend, SQL v11.0 (previously known as Denali, now known as SQL Server 2012) reaches general availability. And so I thought I'd share some news about what's coming next. I didn't hear this at the MVP Summit earlier this year (where there was lots of NDA information given, but I didn't go), so I think I'm free to share it. I've written before about CTEs being query-scoped views. Well, the actual story goes a bit further, and will continue to develop in future versions. A CTE is like a "temporary temporary view", scoped to a single query. Due to globally-scoped temporary objects using a two-hash naming style, and session-scoped (or 'local') temporary objects a one-hash naming style, this query-scoped temporary object uses a cunning zero-hash naming style. We see this implied in Books Online in the CREATE TABLE page, but as we know, temporary views are not yet supported in SQL Server. However, in a breakaway from ANSI SQL, Microsoft is moving towards consistency with their naming. We know that a CTE is a "common table expression" - this is proving to be more strategic than you may have appreciated. Within the Microsoft product group, the term "Table Expression" is far more widely used than just for CTEs. Anything that can be used in a FROM clause is referred to as a Table Expression, so long as it doesn't actually store data (which would make it a Table, rather than a Table Expression). You can see this is not just restricted to the product group by doing an internet search for how the term is used without 'common'. In the past, Books Online has referred to a view as a "virtual table" (but notice that there is no SQL 2012 version of this page). However, it was generally decided that "virtual table" was a poor name because it wasn't completely accurate, and it's typically accepted that mixing virtualisation and SQL is frowned upon. That page I linked to says "or stored query", which is slightly better, but when the SQL 2012 version of that page is actually published, the line will be changed to read: "A view is a stored table expression (STE)". This change will be the first of many. During the SQL 2012 R2 release, the keyword VIEW will become deprecated (this will be SQL v11 SP1.5). Three versions later, in SQL 14.5, you will need to be in compatibility mode 140 to allow "CREATE VIEW" to work. Also consistent with Microsoft's deprecation policy, the execution of any query that refers to an object created as a view (rather than with the new "CREATE STE") will cause a Deprecation Event to fire. This will all be in preparation for the introduction of Single-Column Table Expressions (to be introduced in SQL 17.3 SP6), which will finally shut up those people waiting for a decent implementation of Inline Scalar Functions. And of course, CTEs are "Common" because the Table Expression definition needs to be repeated over and over throughout a stored procedure. ...or so I think I heard at some point. Oh, and congratulations to all the new MVPs on this April 1st. @rob_farley

    Read the article

  • How to write constructors which might fail to properly instantiate an object

    - by whitman
    Sometimes you need to write a constructor which can fail. For instance, say I want to instantiate an object with a file path, something like obj = new Object("/home/user/foo_file"). As long as the path points to an appropriate file, everything's fine. But if the string is not a valid path, things should break. But how? You could: 1. throw an exception; 2. return a null object (if your programming language allows constructors to return values); 3. return a valid object, but with a flag indicating that its path wasn't set properly (ugh); 4. others? I assume that the "best practices" of various programming languages would implement this differently. For instance, I think ObjC prefers (2). But (2) would be impossible to implement in C++, where constructors have no return type at all. In that case I take it that (1) is used. In your programming language of choice, can you show how you'd handle this problem and explain why?
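
    For C++, a minimal sketch (Config and open are made-up names) of option (1), a throwing constructor, plus the closest practical stand-in for option (2): a static factory that returns an empty std::optional instead of a null object:

        #include <fstream>
        #include <optional>
        #include <stdexcept>
        #include <string>

        class Config {
        public:
            // Option 1: the constructor throws, so no Config object can
            // ever exist in an invalid state.
            explicit Config(const std::string &path) : file(path) {
                if (!file)
                    throw std::runtime_error("cannot open " + path);
            }

            // Option 2, approximated: constructors cannot return null in
            // C++, but a factory can return an empty optional.
            static std::optional<Config> open(const std::string &path) {
                std::ifstream probe(path);
                if (!probe)
                    return std::nullopt;
                return Config(path);
            }

        private:
            std::ifstream file;
        };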

    Read the article
