Search Results

Search found 12688 results on 508 pages for 'swift language'.

  • Where does "foo" come from in coding examples? [closed]

    - by ThePower
    Possible Duplicates: Using "Foo" and "Bar" in examples; To foo bar, or not to foo bar: that is the question. Bit of a general question here, but it's something I would like to know! Whenever I'm looking for solutions to my C# problems online, I always come across "foo" being used in examples. Does it stand for anything, or is it just one of those unexplained catchy placeholder names that many people use in examples?

  • Find location using only distance and range?

    - by pinnacler
    Triangulation works by checking your angle to three KNOWN targets: "I know that that's the Lighthouse of Alexandria, it's located here (X,Y) on a map, and it's to my right at 90 degrees." Repeat two more times for different targets and angles. Trilateration works by checking your distance from three KNOWN targets: "I know that that's the Lighthouse of Alexandria, it's located here (X,Y) on a map, and I'm 100 meters away from it." Repeat two more times for different targets and ranges.

    But both of those methods rely on knowing WHAT you're looking at. Say you're in a forest and you can't differentiate between trees, but you know where key trees are. These trees have been hand-picked as "landmarks." You have a robot moving through that forest slowly. Do you know of any way to determine location based solely on angle and range, exploiting the geometry between landmarks? Note that you will see other trees as well, so you won't know which trees are key trees. Ignore the fact that a target may be occluded; our pre-algorithm takes care of that.

    1) If this exists, what's it called? I can't find anything.
    2) What do you think the odds are of getting two identical location 'hits'? I imagine it's fairly rare.
    3) If there are two identical location 'hits', how can I determine my exact location after I next move the robot? (I assume the chances of getting two identical sets of angles in a row, after repositioning, would be statistically near-impossible, barring a forest growing in rows like corn.) Would I just calculate the position again and hope for the best, or would I somehow incorporate my previous position estimate into my next guess?

    If this exists, I'd like to read about it; if not, I'd like to develop it as a side project. I just don't have time to reinvent the wheel right now, nor to implement this from scratch. If it doesn't exist I'll have to find another way to localize the robot, since that's not the aim of this research; if it does, let's hope it's semi-easy.
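
    A note on the trilateration building block: it doesn't address the data-association problem above (deciding which tree is which), but it is small enough to sketch. A minimal 2-D version in C#, assuming three already-identified landmarks and noise-free ranges; with noisy measurements you would least-square over more landmarks.

        using System;

        // Minimal 2-D trilateration sketch: subtract the first circle
        // equation from the other two, leaving a linear 2x2 system in (x, y).
        static class Trilateration
        {
            public static (double X, double Y) Locate(
                (double X, double Y) p1, double r1,
                (double X, double Y) p2, double r2,
                (double X, double Y) p3, double r3)
            {
                double a1 = 2 * (p2.X - p1.X), b1 = 2 * (p2.Y - p1.Y);
                double c1 = r1 * r1 - r2 * r2 + p2.X * p2.X - p1.X * p1.X
                                              + p2.Y * p2.Y - p1.Y * p1.Y;
                double a2 = 2 * (p3.X - p1.X), b2 = 2 * (p3.Y - p1.Y);
                double c2 = r1 * r1 - r3 * r3 + p3.X * p3.X - p1.X * p1.X
                                              + p3.Y * p3.Y - p1.Y * p1.Y;

                double det = a1 * b2 - a2 * b1;   // zero if landmarks are collinear
                if (Math.Abs(det) < 1e-9)
                    throw new ArgumentException("Collinear landmarks: ambiguous fix.");

                return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det);
            }
        }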

  • Theory of formal languages - Automaton

    - by dader51
    Hi everybody! I'm wondering about formal languages. I have a kind of parser: it reads an XML-like serialized tree structure and turns it into a multidimensional array. I figured out that I need at least three variables to do the job:

        $tree = array();        // a new array
        $pTree = array(&$tree); // a new array whose first element points to $tree
        $deep = 0;

    plus the one containing the sentence split into words. My question is about the similarities between the algorithm being used and the different kinds of automata (state machines, Turing machines, stack machines, ...). The $words variable is the "tape" of the automaton, the tests/conditions of the algorithm are the transitions, $deep is the state and $tree is the output. I can't figure out what $pTree is. So the question is: which automaton am I implicitly using here, and which family of formal languages does it fit? And what about recursion?
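
    For what it's worth, $pTree indexed by $deep behaves like an explicit stack of currently open nodes, and a finite-state machine with a stack is a pushdown automaton, the machine class matching context-free languages such as nested tag structures. A minimal C# sketch of the same stack-based parse, assuming a toy grammar where '(' opens a child node and ')' closes it:

        using System.Collections.Generic;

        class Node
        {
            public string Value = "";
            public List<Node> Children = new List<Node>();
        }

        static class TreeParser
        {
            public static Node Parse(string input)
            {
                var root = new Node();
                var stack = new Stack<Node>();    // the explicit stack = the "pushdown"
                stack.Push(root);
                foreach (char c in input)
                {
                    if (c == '(')                 // open: descend into a new child
                    {
                        var child = new Node();
                        stack.Peek().Children.Add(child);
                        stack.Push(child);
                    }
                    else if (c == ')')            // close: ascend to the parent
                        stack.Pop();
                    else
                        stack.Peek().Value += c;  // text accumulates on the open node
                }
                return root;
            }
        }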

  • Autotesting a network interface

    - by Machado
    Hi all, I'm developing a software component responsible for testing whether a network interface has connectivity with the internet. Think of it as the same test the XBOX360 does to inform the user whether it's connected to the Live network (just as an example). So far I figure the autotest would run like this:

    1) Test the physical network interface (is the cable connected, does it have up/downlink, etc...)
    2) Test the logical network (has an IP address, has DNS, etc...)
    3) Connect to the internet (can access google, for example)
    4) ???
    5) Profit! (just kidding...)

    My question relates to step 3: how can I correctly detect whether my software has a connection to the internet? Is there any fixed IP address to ping? The problem is that I don't want to rely solely on google.com (or any other well-known address), as those can change over time, and my component will be embedded on a mobile device, which is not easy to update. Any suggestions?
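
    A sketch of one common approach to step 3, assuming no single hard-coded address can be trusted forever: try a small, configurable list of endpoints and succeed if any accepts a TCP connection. The hostnames below are placeholders; on an embedded device the list should come from updatable configuration (ideally a service you operate and can keep stable) rather than being baked in.

        using System;
        using System.Net.Sockets;

        static class ConnectivityCheck
        {
            // Placeholder endpoints -- in a real deployment, load these from
            // updatable configuration rather than hard-coding them.
            private static readonly (string Host, int Port)[] Endpoints =
            {
                ("connectivity.example.com", 80),   // hypothetical vendor-run check host
                ("www.google.com", 80),
            };

            public static bool HasInternetConnectivity()
            {
                foreach (var (host, port) in Endpoints)
                {
                    try
                    {
                        using var client = new TcpClient();
                        // 3-second timeout per endpoint; any success is enough.
                        if (client.ConnectAsync(host, port).Wait(3000))
                            return true;
                    }
                    catch
                    {
                        // DNS failure or refused connection: try the next endpoint.
                    }
                }
                return false;
            }
        }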

  • Is there any way to set or code breakpoints conditionally?

    - by froadie
    I've been wondering this for a while: is there a way to code/program breakpoints... conditionally? For example, can I specify something like "when this variable reaches this value, break and open the debugger"? (This would be quite useful, especially in long loops where you only want to break on a late iteration.) I suppose this may be IDE-specific, since debugging is implemented differently in different IDEs... I'd be interested to know how to do this in any IDE, but specifically in Eclipse and Visual Studio.
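
    Both Eclipse and Visual Studio support conditional breakpoints from the breakpoint's properties dialog, but the condition can also live in the code itself. A minimal C# sketch using the System.Diagnostics API; the loop and trigger value are arbitrary examples:

        using System.Diagnostics;

        static void Work()
        {
            for (int i = 0; i < 1_000_000; i++)
            {
                // Halt in the attached debugger only on the iteration of interest.
                if (i == 999_998 && Debugger.IsAttached)
                    Debugger.Break();
                // ... loop body ...
            }
        }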

  • How to learn as a lone developer?

    - by fearofawhackplanet
    I've been lucky to work in a small team with a couple of experienced and knowledgeable developers for the first year of my career. I've learned a huge amount. But I'm now being transferred within my company and will be working on solo projects. I'll cope, but I know I'll make mistakes and won't always produce the best solutions without someone to guide me and review my output. I'm wondering if anyone has any tips for this situation. How can I keep learning? What's the best way to monitor and assess the quality of my work? How can I ensure that my career and skills don't stagnate?

  • When is performance gain significant enough to implement that optimization?

    - by Zwei steinen
    Hi, following the textbook, I measure performance whenever I try optimizing my code. Sometimes, however, the performance gain is rather small and I can't decide whether I should implement the optimization. For example, when a fix shortens an average response time of 100ms to 90ms under some conditions, should I implement it? What if it shortens 200ms to 190ms? How many conditions should I try before I can conclude that it will be beneficial overall? I guess it's not possible to give a straightforward answer to this, as it depends on too many things, but is there a good rule of thumb I should follow? Are there any guidelines/best practices?

  • Encode complex number as RGB pixel and back

    - by Vi
    What is the best way to encode a complex number as an RGB pixel and back? Probably the (logarithm of the) absolute value goes to brightness and the argument goes to hue. Desaturated pixels should receive a randomized argument in the reverse transformation. Something like:

        0    -> (0,0,0)
        1    -> (255,0,0)
        -1   -> (0,255,255)
        0.5  -> (128,0,0)
        i    -> (255,255,0)
        -i   -> (255,0,255)

        (0,0,0)       -> 0
        (255,255,255) -> e^(i * random)
        (128,128,128) -> 0.5 * e^(i * random)
        (0,128,128)   -> -0.5

    Are there ready-made formulas for that? Edit: Looks like I just need to convert RGB to HSB and back. Edit 2: Existing RGB -> HSV converter fragment:

        if (hsv.sat == 0) {
            hsv.hue = 0; // !
            return hsv;
        }

    I don't want 0; I want random. And not just when hsv.sat == 0, but whenever it is lower than it should be ("should be" meaning maximum saturation, i.e. the saturation produced by the transformation from a complex number).
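
    For reference, this mapping is what "domain coloring" plots of complex functions use: hue encodes arg(z), brightness encodes |z|. A C# sketch of the forward direction, assuming the convention from the table above that |z| maps linearly to brightness and clamps at 1 (a log scale is the usual choice when magnitudes are unbounded); the exact hue anchors are a free choice:

        using System;

        static class DomainColoring
        {
            // Forward map: hue <- arg(z), brightness <- |z| (clamped at 1).
            public static (byte R, byte G, byte B) ComplexToRgb(double re, double im)
            {
                double hue = (Math.Atan2(im, re) / (2 * Math.PI) + 1.0) % 1.0; // [0,1)
                double val = Math.Min(1.0, Math.Sqrt(re * re + im * im));      // clamp |z|
                return HsvToRgb(hue, 1.0, val);
            }

            // Standard HSV -> RGB sector conversion.
            private static (byte, byte, byte) HsvToRgb(double h, double s, double v)
            {
                int i = (int)(h * 6) % 6;
                double f = h * 6 - Math.Floor(h * 6);
                double p = v * (1 - s), q = v * (1 - f * s), t = v * (1 - (1 - f) * s);
                var (r, g, b) = i switch
                {
                    0 => (v, t, p),
                    1 => (q, v, p),
                    2 => (p, v, t),
                    3 => (p, q, v),
                    4 => (t, p, q),
                    _ => (v, p, q),
                };
                return ((byte)(r * 255), (byte)(g * 255), (byte)(b * 255));
            }
        }

    The reverse map would extract HSV and substitute a random angle for the argument whenever saturation (or brightness) is too low to pin it down, as described above.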

  • Is it immoral to write crappy code even if readability and correctness are not requirements?

    - by mafutrct
    There are cases when crappy (i.e. unreadable and buggy) code is not much of a problem. For instance, imagine you need to generate a big text file that mostly follows a simple pattern with a few very complex exceptions. What do you do? You quickly write a simple algorithm and insert the exceptional bits in the output manually to save 4 hours. The code is unreadable, and the output is flawed, but it's still the correct way since it is way faster. But let's get this straight: I hate bad code. I've had to read and work with code that caused my stomach to hurt. I care a lot about good code. And actually, I caught myself thinking that it is immoral to write bad code even though the dirty approach is sometimes superior. I was surprised by myself and found my idea to be very irrational. Did you ever experience this? Should I just get rid of this stupid idea and use the most efficient approach to coding?

  • Are there any well known algorithms to detect the presence of names?

    - by Rhubarb
    For example, given the string: "Bob went fishing with his friend Jim Smith." Bob and Jim Smith are both names, while bob and smith are both ordinary words. Were it not for the capitalization, there would be little indication of this beyond our knowledge of the sentence. Without doing grammar analysis, are there any well-known algorithms for detecting the presence of names, at least Western names?
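
    For what it's worth, this task is generally called named-entity recognition (NER); practical systems combine capitalization cues, dictionaries of common names ("gazetteers"), and statistical models. As a deliberately naive baseline, a C# sketch that just collects runs of capitalized words; it cannot distinguish a capitalized sentence-opening word from a name, which is exactly the gap a dictionary fills:

        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static class NameGuesser
        {
            // Runs of capitalized words, e.g. "Jim Smith". A crude heuristic only.
            private static readonly Regex CapRun =
                new Regex(@"\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*\b");

            public static IEnumerable<string> CandidateNames(string text)
            {
                foreach (Match m in CapRun.Matches(text))
                    yield return m.Value;   // "Bob", "Jim Smith" for the example
            }
        }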

  • Should I obscure primary key values?

    - by Scott
    I'm building a web application where the front end is a highly specialized search engine. Searching is handled at the main URL, and the user is passed off to a sub-directory when they click on a search result for a more detailed display. This hand-off is done as a GET request with the primary key passed in the query string. I seem to recall reading somewhere that exposing primary keys to the user was not a good idea, so I decided to implement reversible "encryption". I'm starting to wonder if I'm just being paranoid. The reversible encoding (base64, which is not really encryption at all) is trivially reversed by anybody who cares to try, makes the URLs very ugly, and also makes them longer than they otherwise would be. Should I just drop the encoding and send my primary keys in the clear?
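
    For what it's worth, the usual worry with exposed keys is enumeration (a user rewriting ?id=42 to ?id=43), which is an authorization problem rather than a secrecy one, so the detail page should check permissions regardless. One middle-ground sketch, assuming a server-side secret: send the key in the clear but append a keyed MAC so tampered IDs are rejected. The names below are illustrative:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class SignedId
        {
            // Hypothetical server-side secret; load from secure config in practice.
            private static readonly byte[] Key = Encoding.UTF8.GetBytes("server-side-secret");

            public static string Sign(long id)
            {
                using var mac = new HMACSHA256(Key);
                byte[] tag = mac.ComputeHash(Encoding.UTF8.GetBytes(id.ToString()));
                return $"{id}.{Convert.ToHexString(tag)}";   // e.g. "42.9F3A..."
            }

            public static bool TryParse(string token, out long id)
            {
                id = 0;
                int dot = token.IndexOf('.');
                return dot > 0
                    && long.TryParse(token.AsSpan(0, dot), out id)
                    && Sign(id) == token;   // sketch only; a constant-time compare is better
            }
        }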

  • Finding what makes strings unique in a list, can you improve on brute force?

    - by Ed Guiness
    Suppose I have a list of strings where each string is exactly 4 characters long and unique within the list. For each of these strings I want to identify the positions of the characters within the string that make the string unique. So for a list of three strings

        abcd
        abcc
        bbcb

    for the first string I want to identify the character in the 4th position, d, since d does not appear in the 4th position in any other string. For the second string I want to identify the character in the 4th position, c. For the third string I want to identify the character in the 1st position, b, AND the character in the 4th position, also b. This could be concisely represented as

        abcd -> ...d
        abcc -> ...c
        bbcb -> b..b

    If you consider the same problem but with a list of binary numbers

        0101
        0011
        1111

    then the result I want would be

        0101 -> ..0.
        0011 -> .0..
        1111 -> 1...

    Staying with the binary theme, I can use XOR to identify which bits are unique within two binary numbers, since

        0101 ^ 0011 = 0110

    which I can interpret as meaning that in this case the 2nd and 3rd bits (reading left to right) are unique between these two binary numbers. This technique might be a red herring unless it can somehow be extended to the larger list. A brute-force approach would be to look at each string in turn, and for each string to iterate through vertical slices of the remainder of the strings in the list. So for the list above I would start with abcd and iterate through vertical slices of

        abcc
        bbcb

    where these vertical slices would be

        a | b | c | c
        b | b | c | b

    or in list form, "ab", "bb", "cc", "cb". This would result in four comparisons

        a : ab -> . (a is not unique)
        b : bb -> . (b is not unique)
        c : cc -> . (c is not unique)
        d : cb -> d (d is unique)

    or concisely

        abcd -> ...d

    Maybe it's wishful thinking, but I have a feeling that there should be an elegant and general solution that would apply to an arbitrarily large list of strings (or binary numbers). But if there is, I haven't yet been able to see it. I hope to use this algorithm to derive minimal signatures from a collection of unique images (bitmaps) in order to efficiently identify those images at a future time. If future efficiency weren't a concern, I would use a simple hash of each image. Can you improve on brute force?
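
    One way to improve on the brute force, as a sketch: make one pass tallying, for every position, how often each character occurs across the whole list; a character is then unique at a position exactly when its tally is 1. That is two linear passes, O(n*k) for n strings of length k, instead of comparing every string against every other:

        using System.Collections.Generic;

        static class UniqueMarker
        {
            public static string[] UniquePositions(string[] strings)
            {
                int k = strings[0].Length;                   // all strings same length
                var counts = new Dictionary<char, int>[k];
                for (int p = 0; p < k; p++)
                    counts[p] = new Dictionary<char, int>();

                foreach (var s in strings)                   // first pass: tally
                    for (int p = 0; p < k; p++)
                        counts[p][s[p]] = counts[p].GetValueOrDefault(s[p]) + 1;

                var result = new string[strings.Length];     // second pass: mark
                for (int i = 0; i < strings.Length; i++)
                {
                    var marks = new char[k];
                    for (int p = 0; p < k; p++)
                        marks[p] = counts[p][strings[i][p]] == 1 ? strings[i][p] : '.';
                    result[i] = new string(marks);
                }
                return result;
            }
        }

    For the example list above this yields "...d", "...c", "b..b".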

  • Find all complete sub-graphs within a graph

    - by mvid
    Is there a known algorithm or method to find all complete sub-graphs within a graph? I have an undirected, unweighted graph and I need to find all subgraphs within it where each node in the subgraph is connected to each other node in the subgraph. Is there an existing algorithm for this?
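
    A complete subgraph is conventionally called a clique, and since every subset of a clique is itself complete, what is usually enumerated is the maximal cliques; the classic algorithm for that is Bron-Kerbosch. A minimal C# sketch of the basic form, without the usual pivoting optimization:

        using System.Collections.Generic;
        using System.Linq;

        static class CliqueFinder
        {
            // adj maps each vertex to its set of neighbors (undirected graph).
            public static List<HashSet<int>> MaximalCliques(Dictionary<int, HashSet<int>> adj)
            {
                var results = new List<HashSet<int>>();
                void Expand(HashSet<int> r, HashSet<int> p, HashSet<int> x)
                {
                    if (p.Count == 0 && x.Count == 0)        // R is a maximal clique
                    {
                        results.Add(new HashSet<int>(r));
                        return;
                    }
                    foreach (int v in p.ToArray())
                    {
                        Expand(new HashSet<int>(r) { v },
                               new HashSet<int>(p.Intersect(adj[v])),
                               new HashSet<int>(x.Intersect(adj[v])));
                        p.Remove(v);   // v handled: exclude from further branches
                        x.Add(v);
                    }
                }
                Expand(new HashSet<int>(), new HashSet<int>(adj.Keys), new HashSet<int>());
                return results;
            }
        }

    If literally all complete subgraphs are needed, every subset of each maximal clique can then be enumerated, but note that their number is exponential in general.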

  • Using different languages in one project

    - by Tarbal
    I recently heard about the use of several different languages in a (big) project. I also read about famous services such as Twitter using Rails as the frontend, mixed with some other languages, with Scala (I think it was) as the backend. Is this common practice? Who does this? I'm sure there are disadvantages to it. I imagine you will have problems with the different interpreters/compilers and with seamlessly connecting the different languages. Is this true? Why is this actually done? For performance?

  • As our favorite imperative languages gain functional constructs, should loops be considered a code smell?

    - by Michael Buen
    In allusion to Dare Obasanjo's impressions of Map, Reduce, and Filter (Functional Programming in C# 3.0: How Map/Reduce/Filter can Rock your World): "With these three building blocks, you could replace the majority of the procedural for loops in your application with a single line of code. C# 3.0 doesn't just stop there." Should we increasingly use them instead of loops? And should having loops (instead of those three building blocks of data manipulation) be one of the metrics for coding horrors in code reviews? And why? [NOTE] I'm not advocating fully functional programming for code that could simply be translated into loops (e.g. tail recursions). Asking for a politer term: considering that the phrase "code smell" is not so diplomatic, I posted another question (http://stackoverflow.com/questions/432492/whats-the-politer-word-for-code-smell) about the right word for "code smell", er... utterly bad code. Should that phrase have a place in our programming parlance?
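
    For concreteness, here is the same computation both ways in C#, where Where/Select/Aggregate are LINQ's names for filter/map/reduce; the sum-of-even-squares task is an arbitrary example:

        using System.Linq;

        static class LoopsVsLinq
        {
            // Procedural loop version.
            public static int SumOfEvenSquaresLoop(int[] xs)
            {
                int total = 0;
                foreach (int x in xs)
                    if (x % 2 == 0)
                        total += x * x;
                return total;
            }

            // The same computation as filter -> map -> reduce.
            public static int SumOfEvenSquaresLinq(int[] xs) =>
                xs.Where(x => x % 2 == 0)                 // filter
                  .Select(x => x * x)                     // map
                  .Aggregate(0, (acc, x) => acc + x);     // reduce (Sum() would also do)
        }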

  • Have you been in cases where TDD increased development time?

    - by BillyONeal
    Hello everyone :) I was reading http://stackoverflow.com/questions/2512504/tdd-how-to-start-really-thinking-tdd and I noticed many of the answers indicate that writing the tests plus the application should take less time than just writing the application. In my experience, this is not true. My problem is that some 90% of the code I write makes a TON of operating system calls. The time spent mocking these out takes much longer than just writing the code in the first place; sometimes 4 or 5 times as long to write the test as to write the actual code. I'm curious whether there are other developers in this kind of scenario.

  • Interface for classes that have nothing in common

    - by Tomek Tarczynski
    Let's say I want to make a few classes to determine the behaviour of agents. Good practice would be to make some common interface for them; such an interface (simplified) could look like this:

        interface IModel
        {
            void UpdateBehaviour();
        }

    All, or at least most, such models would have some parameters, but the parameters of one model might have nothing in common with the parameters of another. I would like to have some common way of loading parameters. Question: what is the best way to do that? Is it maybe just adding the method void LoadParameters(object parameters) to IModel? Or creating an empty interface IParameters and adding the method void LoadParameters(IParameters parameters)? Those are the two ideas I came up with, but I don't like either of them.
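
    A third option worth weighing, sketched here with made-up flocking names: push the parameter type into a generic type parameter, so each model declares exactly what it loads. The cost is that there is no longer a single non-generic IModel to keep heterogeneous models in one list, though a non-generic base interface holding just UpdateBehaviour() can be layered underneath:

        interface IModel<TParameters>
        {
            void LoadParameters(TParameters parameters);
            void UpdateBehaviour();
        }

        // Hypothetical example model and its parameter bag.
        class FlockingParameters
        {
            public double Cohesion, Separation, Alignment;
        }

        class FlockingModel : IModel<FlockingParameters>
        {
            private FlockingParameters _p = new FlockingParameters();

            public void LoadParameters(FlockingParameters parameters) => _p = parameters;

            public void UpdateBehaviour()
            {
                // ... use _p.Cohesion, _p.Separation, _p.Alignment ...
            }
        }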

  • Are licenses relevant for small code snippets?

    - by Martin
    When I'm about to write a short algorithm, I first check whether the algorithm is implemented in the base class library I'm using. If not, I often do a quick google search to see if someone has done it before (which is the case 19 times out of 20). Most of the time I find the exact code I need. Sometimes it's clear what license applies to the source code, sometimes not. It may be GPL, LGPL, BSD or whatever. Sometimes people have posted a code snippet on some random forum which solves my problem. It's clear to me that I can't reuse the code (copy/paste it into my code) without caring about the license if the code is in some way substantial. What is not clear to me is whether I can copy a code snippet containing 5 lines or so without committing a license violation. Can I copy/paste a 5-line code snippet without caring about the license? What about a one-liner? What about 10 lines? Where do I draw the line (no pun intended)? My second problem is that if I have found a 10-line code snippet which does exactly what I need, but feel that I cannot copy it because it's GPL-licensed and my software isn't, I have already memorized how to implement it, so when I get around to implementing the same functionality, my code is almost identical to the GPL-licensed code I saw a few minutes ago. (In other words, the code was copied into my brain, and my brain then copied it into my source code.)

  • Easiest way to find the correct kademlia bucket

    - by Martin
    In the Kademlia protocol, node IDs are 160-bit numbers. Nodes are stored in buckets: bucket 0 stores all the nodes which have the same ID as this node except for the very last bit, bucket 1 stores all the nodes which have the same ID except for the last 2 bits, and so on for all 160 buckets. What's the fastest way to find which bucket I should put a new node into? I have my buckets simply stored in an array, and need a method like so:

        Bucket[] buckets; // array with 160 items

        public Bucket GetBucket(Int160 myId, Int160 otherId)
        {
            // some stuff goes here
        }

    The obvious approach is to work down from the most significant bit, comparing bit by bit until I find a difference; I'm hoping there is a better approach based around clever bit twiddling. Practical note: my Int160 is stored in a byte array with 20 items, so solutions which work well with that kind of structure will be preferred.
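
    A sketch of the usual trick, assuming the 20-byte array is most-significant-byte first: XOR the IDs byte by byte from the top, and on the first nonzero byte locate its highest set bit (at most 8 tests, or a single 256-entry lookup table). Bucket numbering follows the convention above, bucket 0 meaning only the last bit differs:

        static class KademliaBuckets
        {
            // IDs are 20-byte arrays, most significant byte first (an assumption).
            public static int GetBucketIndex(byte[] myId, byte[] otherId)
            {
                for (int i = 0; i < myId.Length; i++)
                {
                    int diff = myId[i] ^ otherId[i];
                    if (diff == 0)
                        continue;          // bytes identical, keep scanning

                    // Highest set bit of this byte; replaceable by a lookup table.
                    int bit = 7;
                    while ((diff & (1 << bit)) == 0) bit--;

                    // bucket 0 = last bit differs, bucket 159 = first bit differs
                    return (myId.Length - 1 - i) * 8 + bit;
                }
                return -1;                 // identical IDs: the node is ourselves
            }
        }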

  • Function overloading by return type?

    - by dsimcha
    Why don't more mainstream statically typed languages support function/method overloading by return type? I can't think of any that do. It seems no less useful or reasonable than supporting overload by parameter type. How come it's so much less popular?
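
    One reason usually given (my addition, not the poster's): at many call sites, such as a discarded result or an implicitly typed variable, the compiler would have no type to resolve the overload against. In C# the conventional workaround is to move the "return type" into a generic parameter the caller supplies explicitly, as in this sketch:

        using System;

        static class ReturnTypeOverload
        {
            // The caller picks the "return type" via the type argument.
            public static T Parse<T>(string s) =>
                (T)Convert.ChangeType(s, typeof(T));
        }

        // usage:
        //   int i    = ReturnTypeOverload.Parse<int>("42");
        //   double d = ReturnTypeOverload.Parse<double>("42");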

  • What statistics can be maintained for a set of numerical data without iterating?

    - by Dan Tao
    Update: Just for future reference, I'm going to list all of the statistics that I'm aware of that can be maintained in a rolling collection, recalculated as an O(1) operation on every addition/removal (this is really how I should've worded the question from the beginning):

    Obvious: Count, Sum, Mean, Max*, Min*, Median**

    Less obvious: Variance, Standard Deviation, Skewness, Kurtosis, Mode***, Weighted Average, Weighted Moving Average****

    OK, so to put it more accurately: these are not "all" of the statistics I'm aware of. They're just the ones that I can remember off the top of my head right now.

    * Can be recalculated in O(1) for additions only, or for additions and removals if the collection is sorted (but in this case, insertion is not O(1)). Removals potentially incur an O(n) recalculation for non-sorted collections.
    ** Recalculated in O(1) for a sorted, indexed collection only.
    *** Requires a fairly complex data structure to recalculate in O(1).
    **** This can certainly be achieved in O(1) for additions and removals when the weights are assigned in a linearly descending fashion. In other scenarios, I'm not sure.

    Original question: Say I maintain a collection of numerical data -- let's say, just a bunch of numbers. For this data, there are loads of calculated values that might be of interest; one example would be the sum. To get the sum of all this data, I could...

    Option 1: Iterate through the collection, adding all the values:

        double sum = 0.0;
        for (int i = 0; i < values.Count; i++)
            sum += values[i];

    Option 2: Maintain the sum, eliminating the need to ever iterate over the collection just to find the sum:

        void Add(double value)
        {
            values.Add(value);
            sum += value;
        }

        void Remove(double value)
        {
            values.Remove(value);
            sum -= value;
        }

    EDIT: To put this question in more relatable terms, let's compare the two options above to a (sort of) real-world situation: suppose I start listing numbers out loud and ask you to keep them in your head. I start by saying, "11, 16, 13, 12." If you've just been remembering the numbers themselves and nothing more, and then I say, "What's the sum?", you'd have to think to yourself, "OK, what's 11 + 16 + 13 + 12?" before responding, "52." If, on the other hand, you had been keeping track of the sum yourself while I was listing the numbers (i.e., when I said "11" you thought "11", when I said "16" you thought "27," and so on), you could answer "52" right away. Then if I say, "OK, now forget the number 16," if you've been keeping track of the sum inside your head you can simply take 16 away from 52 and know that the new sum is 36, rather than taking 16 off the list and summing up 11 + 13 + 12. So my question is: what other calculations, besides the obvious ones like sum and average, are like this?

    SECOND EDIT: As an arbitrary example of a statistic that (I'm almost certain) does require iteration -- and therefore cannot be maintained as simply as a sum or average -- consider if I asked you, "how many numbers in this collection are divisible by the min?" Let's say the numbers are 5, 15, 19, 20, 21, 25, and 30. The min of this set is 5, which divides into 5, 15, 20, 25, and 30 (but not 19 or 21), so the answer is 5. Now if I remove 5 from the collection and ask the same question, the answer is now 2, since only 15 and 30 are divisible by the new min of 15; but, as far as I can tell, you cannot know this without going through the collection again.
So I think this gets to the heart of my question: if we can divide kinds of statistics into these categories, those that are maintainable (my own term, maybe there's a more official one somewhere) versus those that require iteration to compute any time a collection is changed, what are all the maintainable ones? What I am asking about is not strictly the same as an online algorithm (though I sincerely thank those of you who introduced me to that concept). An online algorithm can begin its work without having even seen all of the input data; the maintainable statistics I am seeking will certainly have seen all the data, they just don't need to reiterate through it over and over again whenever it changes.
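
    As a concrete instance of one of the "less obvious" entries above, a sketch of variance (and hence standard deviation) maintained in O(1) per addition and removal: Welford's online update run forward, and in reverse for removals. It assumes the removed value really is in the collection, and floating-point error accumulates over many removals:

        class RunningVariance
        {
            private long _n;
            private double _mean;
            private double _m2;   // sum of squared deviations from the mean

            public void Add(double x)
            {
                _n++;
                double delta = x - _mean;
                _mean += delta / _n;
                _m2 += delta * (x - _mean);               // Welford's update
            }

            public void Remove(double x)
            {
                if (_n == 1) { _n = 0; _mean = 0; _m2 = 0; return; }
                double meanWithout = (_mean * _n - x) / (_n - 1);
                _m2 -= (x - meanWithout) * (x - _mean);   // reverse of the update
                _mean = meanWithout;
                _n--;
            }

            public long Count => _n;
            public double Mean => _mean;
            public double Variance => _n > 0 ? _m2 / _n : 0.0;   // population variance
        }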
