Search Results

Search found 16688 results on 668 pages for 'expression language'.


  • Is it kEatSpeed or kSpeedEat?

    - by bobobobo
I have a bunch of related constants that are not identical. What's the better way to name them? Way #1:

        kWalkSpeed
        kRunSpeed
        kEatSpeed
        kDrinkSpeed

    Or, way #2:

        kSpeedWalk
        kSpeedRun
        kSpeedEat
        kSpeedDrink

    If we evaluate these on readability, understandability, and how prone they are to subtle wrong-name bugs, I think way #1 wins on readability, they tie on understandability, and way #2 wins on avoiding bugs. I'm not sure how often it happens to others, but when variable names like this get long, it's easy to write kSpeedEatingWhenInAHurry when you really meant kSpeedEatingWhenInHome, especially when using autocomplete. Any perspectives?

    Read the article

  • C++ missing feature: variables with mutating names

    - by user345671
Hello. Can someone please add a feature to C++? I would like to change variable names depending on the context. Example:

        Stuff stuff;
        if (stuff.hasSuperPowers()) {
            stuff becomes stuffWithSuperPowers;
        }

    That's because creating a new variable and using an assignment sucks, and it leaves stuff accessible in that scope anyway. If you do it, please keep the becomes syntax, unless it doesn't look good enough to you as a native English speaker. Thanks in advance.
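
    For comparison, a plain C++ reference already gives an existing object a second, more descriptive name without any copy or assignment (though, unlike the proposed becomes, the old name stays visible). A minimal sketch; Stuff and hasSuperPowers are made up to match the question:

        #include <cassert>

        struct Stuff {
            bool super = false;
            bool hasSuperPowers() const { return super; }
        };

        int main() {
            Stuff stuff;
            stuff.super = true;
            if (stuff.hasSuperPowers()) {
                // A reference is just another name for the same object:
                // no copy is made, and writes through either name are visible.
                Stuff& stuffWithSuperPowers = stuff;
                stuffWithSuperPowers.super = false;
                assert(!stuff.super);
            }
        }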

    Read the article

  • Implementing Transparent Persistence

    - by Jules
Transparent persistence allows you to use regular objects instead of a database. The objects are automatically read from and written to disk. Examples of such systems are Gemstone and Rucksack (for Common Lisp). A simplified version of what they do: if you access foo.bar and bar is not in memory, it gets loaded from disk. If you do foo.bar = baz, then the foo object gets updated on disk. Most systems also have some form of transactions, and they may have support for sharing objects across programs and even across a network. My question is: what are the different techniques for implementing these kinds of systems, and what are the trade-offs between them?
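
    One common implementation technique is property interception: every access goes through a proxy that faults the object in from disk on first read and writes it back on update. A minimal C++ sketch of the idea, with a deliberately toy one-value-per-file storage layer (the class and file format are hypothetical, purely for illustration):

        #include <cstdio>
        #include <fstream>
        #include <memory>
        #include <string>

        struct Bar { int value = 0; };

        // Toy smart reference: loads on first access, writes back on update.
        class PersistedBar {
        public:
            explicit PersistedBar(std::string path) : path_(std::move(path)) {}

            Bar& get() {                      // read: fault in from disk if needed
                if (!obj_) {
                    obj_ = std::make_unique<Bar>();
                    std::ifstream in(path_);
                    if (in) in >> obj_->value;
                }
                return *obj_;
            }

            void set(const Bar& b) {          // write: update memory and disk
                get() = b;
                std::ofstream(path_) << obj_->value;
            }

        private:
            std::string path_;
            std::unique_ptr<Bar> obj_;
        };

        int main() {
            PersistedBar bar("bar.txt");
            bar.set(Bar{42});                          // transparently persisted
            std::printf("%d\n", bar.get().value);      // 42, loaded on demand
        }

    Real systems layer an object table, dirty tracking, and transaction logs on top of the same basic interception idea.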

    Read the article

  • Creating huge images

    - by David Rutten
My program has a feature to export a hi-res image of the working canvas to disk. Users will frequently try to export images of about 20,000 x 10,000 pixels at 32 bpp, which comes to about 800 MB. Add that to the serious memory consumption already going on in your average 3D CAD program and you'll pretty much guarantee an out-of-memory crash on 32-bit platforms. So for now I export tiles of 1000 x 1000 pixels, which the user has to stitch together afterwards in a pixel editor. Is there a way I can solve this problem without the user doing any work? I figured I could write a small exe that is launched from the main program on the command line and performs the stitching automatically; as a separate process it would have 2 GB of RAM all to itself. Or is there a better way still? I'd like to support jpg, png and bmp, so simply streaming raw bytes to disk is not really an option.
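
    For what it's worth, some of these formats can be written incrementally: BMP has a fixed row layout, and PNG can be streamed scanline-by-scanline through libpng, so the full image never has to be resident. A rough sketch of the idea for a 24-bit BMP, writing one row at a time (the pixel-filling step is left as a stub):

        #include <cstdint>
        #include <fstream>
        #include <vector>

        static void put32(std::ofstream& f, std::uint32_t v) {
            const char b[4] = {char(v), char(v >> 8), char(v >> 16), char(v >> 24)};
            f.write(b, 4);
        }

        int main() {
            const std::uint32_t W = 20000, H = 10000;
            const std::uint32_t rowBytes = (W * 3 + 3) & ~3u;   // rows pad to 4 bytes

            std::ofstream f("out.bmp", std::ios::binary);
            // BITMAPFILEHEADER (14 bytes) + BITMAPINFOHEADER (40 bytes)
            f.write("BM", 2);
            put32(f, 54 + rowBytes * H);                        // total file size
            put32(f, 0);                                        // reserved
            put32(f, 54);                                       // pixel data offset
            put32(f, 40); put32(f, W); put32(f, H);             // header size, W, H
            const char planesBpp[4] = {1, 0, 24, 0};            // 1 plane, 24 bpp
            f.write(planesBpp, 4);
            put32(f, 0); put32(f, rowBytes * H);                // BI_RGB, image size
            put32(f, 2835); put32(f, 2835);                     // ~72 dpi in px/metre
            put32(f, 0); put32(f, 0);                           // palette fields

            // Peak memory: one scanline, no matter how large the image is.
            std::vector<std::uint8_t> row(rowBytes, 0);
            for (std::uint32_t y = 0; y < H; ++y) {             // BMP rows go bottom-up
                // ... fill `row` with B,G,R triplets for scanline H-1-y ...
                f.write(reinterpret_cast<const char*>(row.data()), rowBytes);
            }
        }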

    Read the article

  • Do similar passwords have similar hashes?

    - by SLC
Our computer system at work requires users to change their password every few weeks, and you cannot reuse a previous password; it remembers something like 20 of your last passwords. I discovered that most people simply increment a digit at the end of their password, so "thisismypassword1" becomes "thisismypassword2", then 3, 4, 5, etc. Since all of these passwords are stored somewhere, I wondered whether there is any weakness in the hashes themselves for standard password-storage hashing algorithms like MD5. Could a hacker increase their chances of brute-forcing the password if they have a list of hashes of similar passwords?
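
    For intuition about why similar inputs don't yield similar hashes, cryptographic hashes are designed for the avalanche effect: changing one input character flips roughly half the output bits. A quick sketch, assuming OpenSSL's legacy one-shot MD5() function is available (deprecated since OpenSSL 3.0; build with -lcrypto):

        #include <openssl/md5.h>
        #include <cstdio>
        #include <cstring>

        static void printMd5(const char* s) {
            unsigned char digest[MD5_DIGEST_LENGTH];
            MD5(reinterpret_cast<const unsigned char*>(s), std::strlen(s), digest);
            std::printf("%-20s ", s);
            for (unsigned char b : digest) std::printf("%02x", b);
            std::printf("\n");
        }

        int main() {
            // The two digests share no visible structure despite near-identical
            // inputs; the weakness, if any, is the guessable pattern itself.
            printMd5("thisismypassword1");
            printMd5("thisismypassword2");
        }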

    Read the article

  • Using unsigned types to prevent negative numbers

    - by Bruno Brant
Let's hope I can make this non-subjective. Here's the thing: sometimes, in statically-typed languages, I restrict input on methods and functions to positive numbers by using the unsigned types, like unsigned int. Most libraries, however, don't seem to think that way. Take C#'s string.Length: it's an int, even though it can never be negative. Same goes for C/C++: sqrt takes a double, not an unsigned one. I know there are reasons for this... for example, your argument might be read from a file, and (no idea why) you may prefer to send the value directly to the function and check for errors later (or use a try-catch block). So, I'm assuming that libraries are way better designed than my own code. What are the reasons against using unsigned numbers to represent positive numbers? Is it because of overflow when we cast them back to signed types?
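
    One concrete hazard worth noting: unsigned arithmetic wraps around rather than going negative, which turns ordinary-looking code into subtle bugs. A small illustration:

        #include <cstdio>

        int main() {
            unsigned int n = 0;

            // Subtraction that "should" be -1 wraps to the maximum value.
            unsigned int diff = n - 1;
            std::printf("%u\n", diff);    // 4294967295 with 32-bit unsigned int

            // A reverse loop that never terminates, since i >= 0 always holds:
            // for (unsigned int i = 5; i >= 0; --i) { ... }
        }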

    Read the article

  • Is XML-RPC a bad protocol choice for a public API implementation?

    - by Jack Duluoz
I need to implement a web API for a project I'm working on at the moment. I read there are several standard protocols for this: XML-RPC, SOAP, REST. From what I've seen, XML-RPC appears to be the easiest to implement and use, but I didn't find anything about using it for a public API; instead I found many tutorials about creating a REST API in PHP, for example. Is there any reason not to use XML-RPC for a public web API? Also, more generally: I could (sort of) define a custom protocol for my API to keep things simpler (i.e. accepting only GET requests containing the parameters I need). Would that be so bad? Is using a standard protocol a must?

    Read the article

  • Which term to use when referring to functional data structures: persistent or immutable?

    - by Bob
    In the context of functional programming which is the correct term to use: persistent or immutable? When I Google "immutable data structures" I get a Wikipedia link to an article on "Persistent data structure" which even goes on to say: such data structures are effectively immutable Which further confuses things for me. Do functional programs rely on persistent data structures or immutable data structures? Or are they always the same thing?
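
    A tiny example of where the terms meet: in a persistent singly-linked list, "updating" builds a new head while every existing node stays untouched, so old versions remain valid and the shared cells are effectively immutable. A minimal C++ sketch:

        #include <cstdio>
        #include <memory>

        struct Node {
            int head;
            std::shared_ptr<const Node> tail;
        };
        using List = std::shared_ptr<const Node>;

        List cons(int x, List tail) {
            return std::make_shared<const Node>(Node{x, std::move(tail)});
        }

        int main() {
            List v1 = cons(2, cons(3, nullptr));   // [2, 3]
            List v2 = cons(1, v1);                 // [1, 2, 3], sharing [2, 3]

            // v1 is still intact (persistence), and no node is ever mutated
            // (immutability) -- which is why the two terms travel together.
            std::printf("%d %d\n", v1->head, v2->head);   // prints: 2 1
        }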

    Read the article

  • How does C++ free the memory when a constructor throws an exception and a custom new is used

    - by Joshua
I see the following constructs: new X will free the memory if X's constructor throws. operator new can be overloaded; the canonical form of an overload is void* operator new(std::size_t size), with a corresponding operator delete. The most common overload is placement new, which is void* operator new(std::size_t, void* p) { return p; }. You almost always cannot call delete on the pointer given to placement new. This leads to a single question: how is memory cleaned up when X's constructor throws and an overloaded new is used?
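
    The short version of the mechanism: when a constructor throws inside a new-expression, the runtime automatically calls the deallocation function that matches the allocation function it used (for placement new, that is the do-nothing void operator delete(void*, void*)). A small sketch demonstrating the class-scope case:

        #include <cstdio>
        #include <cstdlib>
        #include <new>
        #include <stdexcept>

        struct X {
            static void* operator new(std::size_t size) {
                std::puts("custom operator new");
                return std::malloc(size);
            }
            // Called automatically if X's constructor throws after the
            // allocation above succeeded.
            static void operator delete(void* p) {
                std::puts("matching operator delete");
                std::free(p);
            }
            X() { throw std::runtime_error("constructor failed"); }
        };

        int main() {
            try {
                X* x = new X;   // prints both messages: new, then delete
                delete x;       // never reached
            } catch (const std::exception&) {
                // The memory was already freed before the exception escaped.
            }
        }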

    Read the article

  • Determining the order of a list of numbers (possibly without sorting)

    - by Victor Liu
I have an array of unique integers (e.g. val[i]), in arbitrary order, and I would like to populate another array (ord[i]) with the sorted indexes of the integers. In other words, val[ord[i]] is in sorted order for increasing i. Right now, I just fill ord with 0, ..., N-1, then sort it based on the value array, but I am wondering if we can be more efficient about it since ord is not populated to begin with. This is more a question out of curiosity; I don't really care about the extra overhead of having to prepopulate a list and then sort it (it's small, and I use insertion sort). This may be a silly question with an obvious answer, but I couldn't find anything online.
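
    For reference, the prepopulate-and-sort approach is the standard one, and the fill is O(N) against the sort's O(N log N), so there is little to gain. A minimal sketch:

        #include <algorithm>
        #include <cstdio>
        #include <numeric>
        #include <vector>

        int main() {
            std::vector<int> val = {40, 10, 30, 20};

            // ord[i] = index of the i-th smallest element of val.
            std::vector<std::size_t> ord(val.size());
            std::iota(ord.begin(), ord.end(), std::size_t{0});  // 0, 1, ..., N-1
            std::sort(ord.begin(), ord.end(),
                      [&](std::size_t a, std::size_t b) { return val[a] < val[b]; });

            for (std::size_t i : ord)
                std::printf("%d ", val[i]);                     // prints: 10 20 30 40
        }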

    Read the article

  • Why is optional chaining required in an if let statement?

    - by b-ryan ca
Why would the Swift compiler expect me to write

        if let addressNumber = paul.residence?.address?.buildingNumber?.toInt() { }

    instead of just writing:

        if let addressNumber = paul.residence.address.buildingNumber.toInt() { }

    The compiler clearly has the static type information to handle the first dereference of the optional value. Why would it not do the same for each following access?

    Read the article

  • Why is forwarding variadic parameters invalid?

    - by awesomeyi
Consider the variadic function parameter:

        func foo(bar: Int...) -> () { }

    Here foo can accept multiple arguments, e.g. foo(5, 4). I am curious about the type of Int... and its supported operations. For example, why is this invalid?

        func foo2(bar2: Int...) -> () { foo(bar2); }

    It gives the error: Could not find an overload for '_conversion' that accepts the supplied arguments. Why is forwarding variadic parameters invalid? What is the "conversion" the compiler is complaining about?

    Read the article

  • What do you enjoy about programming?

    - by Earlz
Some of us here (or is it just me?) enjoy programming. Even if we're not being paid for it, and in some cases even though the end result will not do anything for us. For example, many people do the Project Euler problems just for fun, and in the end nothing is really "accomplished" materially. What is it that makes us enjoy programming? How is programming different from other jobs? You don't see an accountant going home to do some accounting on their own time just for the pure joy of it. How are we different? (Also, if anyone has ideas on how to tag this, please do correct it for me.)

    Read the article

  • Rationale of C# iterator design (compared to C++)

    - by macias
I found a similar topic: http://stackoverflow.com/questions/56347/iterators-in-c-stl-vs-java-is-there-a-conceptual-difference It basically deals with the Java iterator (which is similar to C#'s) being unable to go backward. So here I would like to focus on limits: in C++ an iterator does not know its limit; you have to compare the given iterator against the limit yourself. In C# an iterator knows more: you can tell, without comparing it to any external reference, whether the iterator is valid or not. I prefer the C++ way, because once you have an iterator you can set any iterator as a limit. In other words, if you would like to get only a few elements instead of the entire collection, you don't have to alter the iterator (in C++). For me this is more "pure" (clear). But of course MS knew C++ while designing C#. So what are the advantages of the C# way? Which approach is more powerful (i.e. leads to more elegant functions based on iterators)? What do I miss? If you have thoughts on C# vs. C++ iterator design other than their limits (boundaries), please also answer. Note: (just in case) please keep the discussion strictly technical. No C++/C# flamewar.
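
    A small illustration of the C++ convention the question prefers: the limit is just another iterator, so taking a prefix of a collection means passing a different "end", not altering the iterator itself.

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        int main() {
            std::vector<int> v = {1, 2, 3, 4, 5, 6};

            // Any iterator can serve as the limit: process only the first
            // three elements by passing v.begin() + 3 as the end of the range.
            std::for_each(v.begin(), v.begin() + 3,
                          [](int x) { std::printf("%d ", x); });   // prints: 1 2 3
        }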

    Read the article

  • Flow Based Programming

    - by Software Monkey
I have been doing a little reading on Flow-Based Programming over the last few days. There is a wiki which provides further detail, and Wikipedia has a good overview of it too. My first thought was, "Great, another proponent of lego-land pretend programming", a concept harking back to the late '80s. But as I read more, I must admit I have become intrigued. Have you used FBP for a real project? What is your opinion of FBP? Does FBP have a future? In some senses, it seems like the holy grail of reuse that our industry has pursued since the advent of procedural languages.

    Read the article

  • Using regex to add leading zeroes

    - by hgpc
I would like to add leading zeroes (say, up to 3 of them) to all numbers in a string. For example:

        Input:  /2009/5/song 01 of 3
        Output: /2009/0005/song 0001 of 0003

    What's the best way to do this with regular expressions?
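
    The question doesn't name a language; here is one way it might look in C++, using a regex only to find the digit runs and doing the padding in ordinary code (padNumbers is a made-up helper):

        #include <cstdio>
        #include <regex>
        #include <string>

        // Pad every run of digits in s with leading zeroes up to `width` digits.
        std::string padNumbers(const std::string& s, std::size_t width) {
            static const std::regex digits("\\d+");
            std::string out;
            std::size_t last = 0;
            for (auto it = std::sregex_iterator(s.begin(), s.end(), digits);
                 it != std::sregex_iterator(); ++it) {
                out.append(s, last, it->position() - last);      // text before match
                const std::string num = it->str();
                if (num.size() < width)
                    out.append(width - num.size(), '0');         // the leading zeroes
                out += num;
                last = it->position() + it->length();
            }
            out.append(s, last, std::string::npos);              // trailing text
            return out;
        }

        int main() {
            std::puts(padNumbers("/2009/5/song 01 of 3", 4).c_str());
            // prints: /2009/0005/song 0001 of 0003
        }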

    Read the article

  • Are there any context-sensitive code search tools?

    - by Vicky
I have been getting very frustrated recently dealing with a massive bulk of legacy code which I am trying to get familiar with. When I search for a particular function call, I get loads of results that turn out to be completely irrelevant. Some of them are easy to spot, e.g. a comment saying

        // Fixed functionality in foo() so don't need to handle this here any more

    But others are much harder to spot manually, because they turn out to be calls from other functions in modules that are only compiled in certain cases, or are part of a much larger block of code that is #if 0'd out in its entirety. What I'd like is a search tool that would allow me to search for a term and give me the choice to include or exclude commented-out or #if 0'd-out code. Then the search results would be displayed alongside a list of #defines that are required for that snippet of code to be relevant. I'm working in C/C++, but other than the specific comment syntax I guess the techniques should be more generally applicable. Does such a tool exist?

    Read the article

  • Can knowing C actually hurt the code you write in higher level languages?

    - by Jurily
The question seems settled, beaten to death even. Smart people have said smart things on the subject. To be a really good programmer, you need to know C. Or do you? I was enlightened twice this week. The first one made me realize that my assumptions don't go further than my knowledge behind them, and given the complexity of software running on my machine, that's almost non-existent. But what really drove it home was this Slashdot comment:

        The end result is that I notice the many naive ways in which traditional C
        "bare metal" programmers assume that higher level languages are implemented.
        They make bad "optimization" decisions in projects they influence, because
        they have no idea how a compiler works or how different a good runtime
        system may be from the naive macro-assembler model they understand.

    Then it hit me: C is just one more abstraction, like all others. Even the CPU itself is only an abstraction! I've just never seen it break, because I don't have the tools to measure it. I'm confused. Has my mind been mutilated beyond recovery, like Dijkstra said about BASIC? Am I living in a constant state of premature optimization? Is there hope for me, now that I realized I know nothing about anything? Is there anything to know, even? And why is it so fascinating that everything I've written in the last five years might have been fundamentally wrong? To sum it up: is there any value in knowing more than the API docs tell me? EDIT: Made CW. Of course this also means you must now post examples of the interpreter/runtime optimizing better than we do :)

    Read the article

  • Why use buffers to read/write Streams

    - by James Hay
After reading various questions about reading and writing Streams, all the various answers define something like this as the correct way to do it:

        private void CopyStream(Stream input, Stream output)
        {
            byte[] buffer = new byte[16 * 1024];
            int read;
            // Read returns the number of bytes actually read, or 0 at end of stream.
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);
            }
        }

    Two questions: Why read and write in these smaller chunks? What is the significance of the buffer size used?

    Read the article
