Search Results

Search found 6716 results on 269 pages for 'distributed algorithm'.

Page 99/269 | < Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >

  • What are the advantages of a rebase over a merge in git?

    - by eSKay
    In this article, the author explains rebasing with a pair of diagrams (not reproduced here): "If you have not yet published your branch, or have clearly communicated that others should not base their work on it, you have an alternative. You can rebase your branch, where instead of merging, your commit is replaced by another commit with a different parent, and your branch is moved there." A normal merge, by contrast, would have kept both lines of history and joined them with a merge commit. So, if you rebase, you are just losing a history state (which would be garbage collected sometime in the future). So, why would someone want to do a rebase at all? What am I missing here?

    Read the article

  • In-place permutation of an array that follows this rule

    - by Mgccl
    Suppose there is an array, and we want to move everything at an odd index to the end and everything at an even index to the beginning, preserving the relative order of the odd-index items and of the even-index items. Suppose the values of the array are a[i] = i and n is even. Then we have 0, 1, 2, 3, 4, 5, ..., n-1 and, after the operation, 0, 2, 4, 6, ..., n-2, 1, 3, 5, 7, ..., n-1. Can this be done in place and in O(n) time?
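
    A minimal sketch of the target permutation (out of place, so O(n) time but also O(n) extra space, which is not what the question asks for): an element at even index i moves to i/2, and an element at odd index i moves to n/2 + i/2. A genuinely in-place O(n) solution would need something like a cycle-leader technique on top of this same index mapping.

        static int[] EvensThenOdds(int[] a)
        {
            int n = a.Length;                       // assumed even, as in the question
            var result = new int[n];
            for (int i = 0; i < n; i++)
                result[i % 2 == 0 ? i / 2 : n / 2 + i / 2] = a[i];
            return result;
        }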

    Read the article

  • How do LL(*) parsers work?

    - by freezer878
    I cannot find any complete description of LL(*) parsers, such as the one used by ANTLR, on the Internet. I'm wondering what the difference is between an LL(k) parser and an LL(*) one, and why, despite their flexibility, they can't support left-recursive grammars.

    Read the article

  • Tracking/Counting Word Frequency

    - by Joel Martinez
    I'd like to get some community consensus on a good design for storing and querying word frequency counts. I'm building an application in which I have to parse text inputs and store how many times each word has appeared (over time). So given the following inputs:

        "To Kill a Mocking Bird"
        "Mocking a piano player"

    it would store the following values:

        Word     Count
        --------------
        To       1
        Kill     1
        A        2
        Mocking  2
        Bird     1
        Piano    1
        Player   1

    I'd later like to be able to quickly query for the count value of a given arbitrary word. My current plan is to simply store the words and counts in a database and rely on caching word count values, but I suspect that I won't get enough cache hits to make this a viable solution long term. Can anyone suggest algorithms, data structures, or any other ideas that might make this a well-performing solution?
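
    For the in-memory counting step itself, a minimal sketch with a case-insensitive dictionary might look like this (the whitespace-splitting rule and all names here are illustrative assumptions, not part of the question):

        using System;
        using System.Collections.Generic;

        static Dictionary<string, int> CountWords(IEnumerable<string> inputs)
        {
            // Case-insensitive so "To" and "to" share one counter, as in the example table.
            var counts = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
            foreach (var input in inputs)
                foreach (var word in input.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
                    counts[word] = counts.TryGetValue(word, out var c) ? c + 1 : 1;
            return counts;
        }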

    Read the article

  • Finding patterns in puzzle games

    - by José Joel.
    I was wondering which algorithms are most commonly used for finding patterns in puzzle games built on grids of cells. I know it depends on many factors, like the kind of patterns you want to detect or the rules of the game, but I wanted to know which algorithms are most commonly used for that kind of problem, for example in games like Columns, Bejeweled, even Tetris. I also want to know whether detecting patterns by "brute force" (like scanning the whole grid trying to find three adjacent cells of the same color) is significantly worse than using specialized algorithms on very small grids, like 4 x 4 for example (and again, I know that depends on the kind of game and its rules). Which structures are commonly used in this kind of game?
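
    For reference, the brute-force check mentioned above can be sketched like this for the horizontal case (a minimal illustration, not from the question; colors are assumed to be encoded as ints, and the vertical scan is symmetric):

        static bool HasHorizontalTriple(int[,] grid)
        {
            // O(rows * cols): test every horizontal window of three cells.
            for (int r = 0; r < grid.GetLength(0); r++)
                for (int c = 0; c + 2 < grid.GetLength(1); c++)
                    if (grid[r, c] == grid[r, c + 1] && grid[r, c] == grid[r, c + 2])
                        return true;
            return false;
        }

    On a 4 x 4 grid that is only a few dozen comparisons per move, which suggests the brute-force scan is rarely the bottleneck at that size.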

    Read the article

  • Given a word, how do I get the list of all words that differ by one letter?

    - by user187809
    Let's say I have the word "CAT". These words differ from "CAT" by one letter (not the full list): CUT, CAP, PAT, FAT, COT, etc. Is there an elegant way to generate this? Obviously, one way to do it is through brute force. Pseudocode:

        for each position in the word (0 to length - 1)
            for each letter A to Z
                replace the letter at that position, and check if the resulting word is a valid word

    If I had a 10-letter word, the loop would run 26 * 10 = 260 times. Is there a better, more elegant way to do this?
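
    A minimal sketch of that brute-force approach, assuming uppercase words and a word set called dictionary (an illustrative name, not something given in the question):

        using System.Collections.Generic;

        static IEnumerable<string> OneLetterNeighbors(string word, ISet<string> dictionary)
        {
            var chars = word.ToCharArray();
            for (int i = 0; i < chars.Length; i++)
            {
                char original = chars[i];
                for (char c = 'A'; c <= 'Z'; c++)
                {
                    if (c == original) continue;     // skip the unchanged word
                    chars[i] = c;
                    string candidate = new string(chars);
                    if (dictionary.Contains(candidate))
                        yield return candidate;
                }
                chars[i] = original;                 // restore before moving on
            }
        }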

    Read the article

  • Big-O for Eight Year Olds?

    - by Jason Baker
    I'm asking more about what this means to my code. I understand the concepts mathematically; I just have a hard time wrapping my head around what they mean conceptually. For example, if one were to perform an O(1) operation on a data structure, I understand that the number of operations it has to perform won't grow because there are more items. And an O(n) operation would mean that you would perform a set of operations on each element. Could somebody fill in the blanks here? Like, what exactly would an O(n^2) operation do? And what the heck does it mean if an operation is O(n log(n))? And does somebody have to smoke crack to write an O(x!)?
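
    To make the O(n^2) case concrete, here is a minimal illustrative example (not from the question): an O(n^2) operation typically touches every pair of elements, as in this naive duplicate check.

        static bool HasDuplicate(int[] items)
        {
            // Compares every pair once: about n * (n - 1) / 2 comparisons, i.e. O(n^2).
            for (int i = 0; i < items.Length; i++)
                for (int j = i + 1; j < items.Length; j++)
                    if (items[i] == items[j])
                        return true;
            return false;
        }

    Sorting the array first and then scanning adjacent pairs would bring this down to O(n log(n)), the shape typical of algorithms that sort (or repeatedly halve) and then do linear work.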

    Read the article

  • O(log N) == O(1) - Why not?

    - by phoku
    Whenever I consider algorithms/data structures, I tend to replace the log(N) parts with constants. Oh, I know log(N) diverges, but does it matter in real-world applications? log(infinity) < 100 for all practical purposes. I am really curious about real-world examples where this doesn't hold. To clarify: I understand O(f(N)); I am curious about real-world examples where the asymptotic behaviour matters more than the constants of the actual performance. If log(N) can be replaced by a constant, it can still be replaced by a constant in O(N log N). This question is for the sake of (a) entertainment and (b) gathering arguments to use if I run (again) into a controversy about the performance of a design.

    Read the article

  • Find the centroid of a polygon with weighted vertices

    - by Calle Kabo
    Hi, I know how to find the centroid (center of mass) of a regular polygon. This assumes that every part of the polygon weighs the same. But how do I calculate the centroid of a weightless polygon (made from aerogel, perhaps :), where each vertex has a weight? Simplified illustration of what I mean, using a straight line:

        5kg-----------------5kg
                  ^ center of gravity

        10kg---------------5kg
               ^ center of gravity, offset due to the weights of the vertices

    Of course, I know how to calculate the center of gravity on a straight line with weighted vertices, but how do I do it on a polygon with weighted vertices? Thanks for your time!
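
    If the edges really are weightless, a minimal sketch is just the mass-weighted average of the vertex positions, exactly as for the line case (the tuple types and names here are illustrative):

        static (double X, double Y) WeightedCentroid((double X, double Y, double Mass)[] vertices)
        {
            double totalMass = 0, sx = 0, sy = 0;
            foreach (var v in vertices)
            {
                totalMass += v.Mass;
                sx += v.X * v.Mass;       // accumulate mass-weighted coordinates
                sy += v.Y * v.Mass;
            }
            return (sx / totalMass, sy / totalMass);
        }

    For the 10kg/5kg line above this gives a point one third of the way along, closer to the heavy end, matching the diagram.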

    Read the article

  • How to do "map chunks", like Terraria or Minecraft maps?

    - by O'poil
    Due to performance issues, I have to cut my maps into chunks. I manage the maps this way: listMap[x][y] = new Tile(x, y); I tried in vain to cut this list into several "chunks" to avoid loading the whole map, because the fps are not very high with a large map. And yet, when I Update or Draw, I do it with a small tile range. Here is how I proceed:

        foreach (List<Tile> list in listMap)
        {
            foreach (Tile leTile in list)
            {
                if ((leTile.Position.X < screenWidth + hero.Pos.X) &&
                    (leTile.Position.X > hero.Pos.X - tileSize) &&
                    (leTile.Position.Y < screenHeight + hero.Pos.Y) &&
                    (leTile.Position.Y > hero.Pos.Y - tileSize))
                {
                    leTile.Draw(spriteBatch, gameTime);
                }
            }
        }

    (and the same thing for the Update method). So I'm trying to learn from games like Minecraft or Terraria, both of which manage maps much larger than mine without the slightest drop in fps. And apparently, they load "chunks". What I would like to understand is how to cut my list into chunks, and how to display them depending on the position of my character. I've tried many things without success. Thank you in advance for putting me on the right track! Ps: Again, sorry for my English :'( Pps: I'm not an experienced developer ;)
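
    A minimal sketch of the chunk idea, reusing the question's Tile class, tileSize, spriteBatch and gameTime; ChunkSize, the chunks array and the camera rectangle are illustrative assumptions. The point is that only the chunks overlapping the screen are ever visited, so draw cost is bounded by screen size rather than map size:

        // Members of your map/game class; bounds clamping omitted for brevity.
        const int ChunkSize = 32;          // tiles per chunk side
        Tile[,][,] chunks;                 // chunks[cx, cy] is a ChunkSize x ChunkSize block of tiles

        void DrawVisible(SpriteBatch spriteBatch, GameTime gameTime, Rectangle camera)
        {
            int pixelsPerChunk = ChunkSize * tileSize;
            int cx0 = camera.Left / pixelsPerChunk, cx1 = camera.Right / pixelsPerChunk;
            int cy0 = camera.Top / pixelsPerChunk, cy1 = camera.Bottom / pixelsPerChunk;

            for (int cx = cx0; cx <= cx1; cx++)
                for (int cy = cy0; cy <= cy1; cy++)
                    foreach (Tile tile in chunks[cx, cy])
                        tile.Draw(spriteBatch, gameTime);
        }

    This also gives a natural unit for loading and unloading: chunks outside the camera can stay on disk entirely instead of merely being skipped during drawing.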

    Read the article

  • How to determine the item at an index in a pattern

    - by el.gringogrande
    I have the following elements in a list/array: a1, a2, a3. These elements are used to build another list in a predictable pattern, for example: a1, a1, a2, a2, a3, a3, a1, a1, a2, a2, a3, a3, ... The pattern may change, but I will always know how many times each element repeats, all elements repeat the same number of times, and the elements always show up in the same order. So another pattern might be a1, a1, a1, a2, a2, a2, a3, a3, a3, a1, a1, a1, a2, a2, a2, a3, a3, a3, ... or a1, a2, a3, a1, a2, a3. It will never be a2, a2, a1, a1, a3, a3, ... or a1, a2, a3, a2, a3, a1, etc. How do I determine which element is at any given index in the list?
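
    Under those constraints, a minimal sketch: with m distinct elements each repeated k times in cyclic order, the element at position i is elements[(i / k) % m].

        static T ItemAt<T>(T[] elements, int repeats, long index)
        {
            // Integer division collapses each run of 'repeats' copies to one step,
            // and the modulo wraps around the cyclic order of the elements.
            return elements[(int)((index / repeats) % elements.Length)];
        }

    For the first pattern above (k = 2, m = 3), index 4 gives (4 / 2) % 3 = 2, i.e. a3, as expected.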

    Read the article

  • Using C#, C/C++ or Java to improve a BBN with a GA

    - by madicemickael
    I have a little problem in my little project, and I wish someone here could help me! I am planning to use a Bayesian belief network as a decision factor in my game AI, and I want to improve the decision making every step of the way. Does anyone know how to do that? Any tutorials or existing implementations would be very welcome; I hope some of you can help me. I heard that a programmer in this community put together a good implementation of this for a poker game AI. I am planning to use it like he did, but for another poker variant (Texas) or maybe Rentz. Looking for C/C++, C# or Java code. Thanks, Mike

    Read the article

  • Space requirements of a merge-sort

    - by Arkaitz Jimenez
    I'm trying to understand the space requirements for a mergesort, O(n). I see that the time requirement is basically the number of levels (log n) times the merge cost per level (n), which gives O(n log n). Now, we are still allocating n per level, in two different arrays, left and right. I do understand that the key here is that when the recursive calls return, the space gets deallocated, but I'm not finding it that obvious. Besides, all the info I find just states that the space required is O(n) but doesn't explain it. Any hint?

        function merge_sort(m)
            if length(m) <= 1
                return m
            var list left, right, result
            var integer middle = length(m) / 2
            for each x in m up to middle
                add x to left
            for each x in m after middle
                add x to right
            left = merge_sort(left)
            right = merge_sort(right)
            result = merge(left, right)
            return result
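
    One way to see the O(n) bound (a sketch of the usual argument, not part of the question): only one path of the recursion is live at any moment, and each call on that path allocates left and right totaling the size of its own input, so the live temporaries sum to a geometric series:

        n + n/2 + n/4 + ... + 1 < 2n = O(n)

    Everything a recursive call allocated becomes garbage as soon as it returns its merged result, which is why the per-level allocations never accumulate across all log n levels at once.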

    Read the article

  • What are the recent data structures and algorithms that one should know?

    - by Shamik
    Recently I came across the skip list data structure. It really helped me solve an otherwise critical problem. I was struggling to solve the same problem with a balanced binary tree, but it became very complex, as the tree needed to stay balanced at all times, and I wanted to know not only whether a particular value existed but also which values fell in a certain range. A skip list helped me solve that problem effectively. I am wondering what other data structures I need to know. I know arrays, lists, stacks, queues, linked lists, hashtables, and trees and their different forms like B-trees, tries, etc. I would like to know if you find some other data structure/concept very interesting to know, yet effective enough to be used in a daily development cycle.

    Read the article

  • arbitrary vire connection / search and replace

    - by fatai
    input:  ["vire_connection", [1, 2, [3, [4, "connect"]]], ["connect", [3, 5]]]
    output: ["vire_connection", [1, 2, [3, [4, [3, 5]]]], [[3, 5]]]

    After the connection (simply copying [3, 5] to the other wanted position), the "connect" word is removed.

    input:  ["vire_connection", [[[["connect", [3, 4]]]]], [2, "connect"]]
    output: ["vire_connection", [[[[[3, 4]]]]], [2, [3, 4]]]

    After the connection (simply copying [3, 4] to the other wanted position), the "connect" word is removed. How can I do this?

    Read the article

  • A quick way to map an unordered list of longs to buffer locations?

    - by alhazen
    I have a large number of points (indexed by long) that are processed by multiple threads, and I'm using a buffer to hold the output results in order. As the number of points processed is huge, what would be an efficient way to map the indexes of the points to the corresponding ordered positions in the buffer? Example:

        long index   bufferIndex         bufferIndex
                     (if BufferSize = 2) (if BufferSize = 4)
        ----------------------------------------------------
        2938         0                   0
        2939         1                   1
        2941         1                   3
        2940         0                   2

    Thanks.
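
    A minimal sketch consistent with the table, assuming the indexes form a contiguous range starting at some firstIndex (2938 above); firstIndex is an illustrative assumption, not something stated in the question:

        static int BufferSlot(long index, long firstIndex, int bufferSize)
        {
            // The slot is the offset from the start of the range, modulo the buffer size:
            // e.g. (2941 - 2938) % 4 == 3, matching the table.
            return (int)((index - firstIndex) % bufferSize);
        }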

    Read the article

  • An O(1) Sort ~~~

    - by FlySwat
    Before you stone me for being a heretic: there is a sort that claims to be O(1), called "Bead Sort" (http://en.wikipedia.org/wiki/Bead_sort). However, that is pure theory; when actually applied, I found it was really O(N * M), which is pretty pathetic. That said, let's list out some of the fastest sorts, and their best-case and worst-case speeds in Big O notation. ~~ FlySwat ~~

    Read the article

  • What Software Engineering Areas should be stressed upon while Interviewing a Candidate for a Fulltime Software Engineering Position?

    - by Rachel
    Hi, this question is somewhat related to other posts I found on Stack Overflow, but not exactly, and so I am prompted to ask about it. I know we must ask about data structures and algorithms, but what specific data structures, algorithms, or other CS concepts should be asked about when interviewing for a senior software engineering full-time position, as compared with a regular software engineering position? Thanks.

    Read the article

  • Unit Testing - Algorithm or Sample based?

    - by ohadsc
    Say I'm trying to test a simple Set class:

        public class IntSet : IEnumerable<int>
        {
            public void Add(int i) { ... }
            // IEnumerable implementation...
        }

    And suppose I'm trying to test that no duplicate values can exist in the set. My first option is to insert some sample data into the set and test for duplicates using my knowledge of the data I used, for example:

        // OPTION 1
        void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
        {
            var set = new IntSet();

            // 3 will be added 3 times
            var values = new List<int> { 1, 2, 3, 3, 3, 4, 5 };
            foreach (int i in values)
                set.Add(i);

            // I know 3 is the only candidate to appear multiple times
            int counter = 0;
            foreach (int i in set)
                if (i == 3) counter++;

            Assert.AreEqual(1, counter);
        }

    My second option is to test for my condition generically:

        // OPTION 2
        void InsertDuplicateValues_OnlyOneInstancePerValueShouldBeInTheSet()
        {
            var set = new IntSet();

            // The following could even be a list of random numbers with a duplicate
            var values = new List<int> { 1, 2, 3, 3, 3, 4, 5 };
            foreach (int i in values)
                set.Add(i);

            // I am not using my prior knowledge of the sample data;
            // the following line would work for any data
            CollectionAssert.AreEquivalent(new HashSet<int>(values), set);
        }

    Of course, in this example I conveniently have a set implementation to check against, as well as code to compare collections (CollectionAssert). But what if I didn't have either? This is the situation when you are testing your real-life custom business logic. Granted, testing for expected conditions generically covers more cases, but it becomes very similar to implementing the logic again (which is both tedious and useless: you can't use the same code to check itself!). Basically, I'm asking whether my tests should look like "insert 1, 2, 3, then check something about 3" or "insert 1, 2, 3 and check for something in general".

    EDIT - To help me understand, please state in your answer whether you prefer OPTION 1 or OPTION 2 (or neither, or that it depends on the case, etc.)

    Read the article

  • Determining the order of a list of numbers (possibly without sorting)

    - by Victor Liu
    I have an array of unique integers (e.g. val[i]), in arbitrary order, and I would like to populate another array (ord[i]) with the sorted indexes of the integers. In other words, val[ord[i]] is in sorted order for increasing i. Right now, I just fill ord with 0, ..., N-1, then sort it based on the value array, but I am wondering if we can be more efficient about it, since ord is not populated to begin with. This is more a question out of curiosity; I don't really care about the extra overhead from having to prepopulate a list and then sort it (it's small; I use insertion sort). This may be a silly question with an obvious answer, but I couldn't find anything online.
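
    For reference, a minimal sketch of the fill-then-sort approach described above (the comparison-based version; the method name is illustrative):

        using System;

        static int[] SortedIndexes(int[] val)
        {
            var ord = new int[val.Length];
            for (int i = 0; i < ord.Length; i++)
                ord[i] = i;                                       // ord = 0, 1, ..., N-1
            Array.Sort(ord, (a, b) => val[a].CompareTo(val[b]));  // sort indexes by value
            return ord;                                           // val[ord[i]] is increasing
        }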

    Read the article

  • Garbage Collection in Java

    - by simion
    On the slides I am revising from, it says the following: Live objects can be identified either by maintaining a count of the number of references to each object, or by tracing chains of references from the roots. Reference counting is expensive: it needs action every time a reference changes, and it doesn't spot cyclical structures, but it can reclaim space incrementally. Tracing involves identifying live objects only when you need to reclaim space, moving the cost from general access to the time at which the GC runs, typically only when you are out of memory. I understand the principles of why reference counting is expensive, but I do not understand what "doesn't spot cyclical structures, but it can reclaim space incrementally" means. Could anyone help me out a little bit, please? Thanks

    Read the article
