Search Results

Search found 8262 results on 331 pages for 'optimization algorithm'.


  • Parallelizing a serial algorithm

    - by user643813
    Hey folks, I am working on porting a text-mining/natural-language application from single-core to a Map-Reduce style system. One of the steps involves a while loop similar to this:

        Queue<Element> queue = new LinkedList<>();
        while (!queue.isEmpty()) {
            Element e = queue.poll();
            Set<Element> result = calculateResultSet(e);
            if (!result.isEmpty()) {
                queue.addAll(result);
            }
        }

    Each iteration depends on the result of the one before (kind of), and there is no way of determining the number of iterations this loop will have to perform. Is there a way of parallelizing a serial algorithm such as this one? I am trying to think of a feedback mechanism that is able to provide its own input, but how would one go about parallelizing it? Thanks for any help/remarks
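
    One way this kind of loop is often restructured for Map-Reduce style execution (a sketch, assuming the per-element computations within one "generation" are independent; Element and calculateResultSet are the asker's): treat the queue as a frontier, process the whole frontier in parallel as the map step, union the produced sets as the reduce step, and use that union as the next frontier. The loop still runs an unknown number of rounds, but each round is fully parallel.

        import java.util.*;
        import java.util.concurrent.*;

        Set<Element> processAll(Set<Element> seed) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(8);
            Set<Element> seen = new HashSet<>(seed);
            Set<Element> frontier = new HashSet<>(seed);
            while (!frontier.isEmpty()) {
                List<Future<Set<Element>>> futures = new ArrayList<>();
                for (Element e : frontier) {
                    futures.add(pool.submit(() -> calculateResultSet(e)));  // map step
                }
                Set<Element> next = new HashSet<>();
                for (Future<Set<Element>> f : futures) {
                    next.addAll(f.get());            // reduce step: union of all result sets
                }
                next.removeAll(seen);                // only genuinely new elements continue
                seen.addAll(next);
                frontier = next;
            }
            pool.shutdown();
            return seen;
        }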

    Read the article

  • elastic / snaking line algorithm

    - by vhdirk
    Hi everyone, I am making a graphics application in which I can edit a polyline by dragging its control points. However, I'd like to make it a bit easier to use by making it elastic: when dragging a control point, instead of moving a single point, I'd like the points within a certain distance of that point to be moved as well, depending on how hard the control point is 'pulled'. Does anyone know a simple algorithm for this? It may be quite rudimentary, as the primary requirement is speed. Actually, knowing what to call such behaviour would also be nice, so I can look it up on Google. I tried 'snaking line', but that seems to refer to active contours, which isn't what I'm looking for. Thanks
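
    In 3D modeling tools this kind of distance-weighted dragging is usually called "soft selection" or "proportional editing", which may be better search terms. A minimal sketch of the idea: move every point by the drag vector scaled by a smooth falloff of its distance to the grabbed point. The Gaussian kernel and the radius here are arbitrary choices, not part of any standard.

        /** Drag points[grabbed] by (dx, dy), pulling nearby points along with it. */
        static void elasticDrag(double[][] points, int grabbed, double dx, double dy, double radius) {
            double gx = points[grabbed][0], gy = points[grabbed][1];
            for (double[] p : points) {
                double dist = Math.hypot(p[0] - gx, p[1] - gy);
                double w = Math.exp(-(dist / radius) * (dist / radius)); // 1 at the grabbed point, ~0 far away
                p[0] += dx * w;
                p[1] += dy * w;
            }
        }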

    Read the article

  • algorithm to combine data for linear fit?

    - by BoldlyBold
    I'm not sure if this is the best place to ask this, but you guys have been helpful with plenty of my CS homework in the past so I figure I'll give it a shot. I'm looking for an algorithm to blindly combine several dependent variables into an index that produces the best linear fit with an external variable. Basically, it would combine the dependent variables using different mathematical operators, include or not include each one, etc. until an index is developed that best correlates with my external variable. Has anyone seen/heard of something like this before? Even if you could point me in the right direction or to the right place to ask, I would appreciate it. Thanks.
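
    What this describes is essentially feature construction / symbolic regression, and for a small number of variables it can be brute-forced. A toy Java sketch of the idea (all names hypothetical): try each pair of variables under each operator and keep the combination with the highest absolute Pearson correlation against the external variable. Real use would need larger subsets and some guard against overfitting.

        static double corr(double[] a, double[] b) {
            int n = a.length;
            double ma = 0, mb = 0;
            for (int i = 0; i < n; i++) { ma += a[i] / n; mb += b[i] / n; }
            double sab = 0, saa = 0, sbb = 0;
            for (int i = 0; i < n; i++) {
                sab += (a[i] - ma) * (b[i] - mb);
                saa += (a[i] - ma) * (a[i] - ma);
                sbb += (b[i] - mb) * (b[i] - mb);
            }
            return sab / Math.sqrt(saa * sbb);
        }

        static String bestPair(double[][] vars, double[] target) {
            String best = null;
            double bestR = -1;
            for (int i = 0; i < vars.length; i++)
                for (int j = i + 1; j < vars.length; j++)
                    for (char op : new char[] {'+', '-', '*'}) {
                        double[] idx = new double[target.length];
                        for (int k = 0; k < idx.length; k++)
                            idx[k] = op == '+' ? vars[i][k] + vars[j][k]
                                   : op == '-' ? vars[i][k] - vars[j][k]
                                   : vars[i][k] * vars[j][k];
                        double r = Math.abs(corr(idx, target));
                        if (r > bestR) { bestR = r; best = "var" + i + " " + op + " var" + j; }
                    }
            return best + " (|r| = " + bestR + ")";
        }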

    Read the article

  • Algorithm of JavaScript "sort()" Function

    - by Knowledge Craving
    Recently when I was working with JavaScript's sort() function, I found in one of the tutorials that this function does not sort numbers properly. Instead, to sort numbers, a comparison function must be supplied, like in the following code:

        <script type="text/javascript">
        function sortNumber(a, b) {
            return a - b;
        }
        var n = ["10", "5", "40", "25", "100", "1"];
        document.write(n.sort(sortNumber));
        </script>

    The output then comes out as: 1,5,10,25,40,100. Now what I don't understand is why this is occurring. Can anybody please explain in detail what kind of algorithm is being used in this sort() function? I ask because in other languages I haven't seen this problem of the sort function not ordering numbers correctly. Any help is greatly appreciated.
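
    For what it's worth, the surprise here is not the sorting algorithm itself (which the ECMAScript standard leaves up to the implementation) but the default comparison: without a comparator, sort() converts the elements to strings and compares them lexicographically, so "100" sorts before "5". Java's Arrays.sort shows the same distinction between string and numeric ordering:

        import java.util.Arrays;

        public class SortDemo {
            public static void main(String[] args) {
                String[] s = {"10", "5", "40", "25", "100", "1"};
                Integer[] n = {10, 5, 40, 25, 100, 1};
                Arrays.sort(s);  // lexicographic: [1, 10, 100, 25, 40, 5]
                Arrays.sort(n);  // numeric:       [1, 5, 10, 25, 40, 100]
                System.out.println(Arrays.toString(s));
                System.out.println(Arrays.toString(n));
            }
        }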

    Read the article

  • Issues in Convergence of Sequential minimal optimization for SVM

    - by Amol Joshi
    I have been working on Support Vector Machines for about 2 months now. I have coded the SVM myself, and for the optimization problem of the SVM I have used Sequential Minimal Optimization (SMO) by John Platt. Right now I am in the phase where I am going to grid search to find the optimal C value for my dataset. (Please find details of my project application and dataset here: http://stackoverflow.com/questions/2284059/svm-classification-minimum-number-of-input-sets-for-each-class) I have successfully checked my custom-implemented SVM's accuracy for C values ranging from 2^0 to 2^6. But now I am having some issues regarding the convergence of the SMO for C = 128. I have tried to find the alpha values for C = 128, and it takes a long time before it actually converges and successfully gives the alpha values. The time taken for the SMO to converge is about 5 hours for C = 100. That seems huge, because SMO is supposed to be fast, though I am getting good accuracy. I am stuck right now because I cannot test the accuracy for higher values of C. I am displaying the number of alphas changed in every pass of SMO and getting 10, 13, 8... alphas changing continuously. The KKT conditions assure convergence, so what is so weird happening here? Please note that my implementation is working fine for C <= 100 with good accuracy, though the execution time is long. Please give me inputs on this issue. Thank you and cheers.

    Read the article

  • Am I understanding premature optimization correctly?

    - by Ed Mazur
    I've been struggling with an application I'm writing and I think I'm beginning to see that my problem is premature optimization. The perfectionist side of me wants to make everything optimal and perfect the first time through, but I'm finding this is complicating the design quite a bit. Instead of writing small, testable functions that do one simple thing well, I'm leaning towards cramming in as much functionality as possible in order to be more efficient. For example, I'm avoiding multiple trips to the database for the same piece of information at the cost of my code becoming more complex. One part of me wants to just not worry about redundant database calls. It would make it easier to write correct code and the amount of data being fetched is small anyway. The other part of me feels very dirty and unclean doing this. :-) I'm leaning towards just going to the database multiple times, which I think is the right move here. It's more important that I finish the project and I feel like I'm getting hung up because of optimizations like this. My question is: is this the right strategy to be using when avoiding premature optimization?

    Read the article

  • ASP.NET Web Optimization - confusion about loading order

    - by Ciel
    Using the ASP.NET Web Optimization framework, I am attempting to load some JavaScript files. It works fine, except that I am running into a peculiar situation with either the loading order, the loading speed, or the execution. I cannot figure out which. Basically, I am using the Ace code editor for JavaScript, and I also want to include its autocompletion package. This requires two files:

        /ace.js
        /ext-language_tools.js

    This isn't an issue if I load both of these files the normal way (with <script> tags); it works fine. But when I try to use the Web Optimization bundles, something seems to go wrong. Trying this out...

        bundles.Add(new ScriptBundle("~/bundles/js")
            .Include("~/js/ace.js")
            .Include("~/js/ext-language_tools.js"));

    and then in the view...

        @Scripts.Render("~/bundles/js")

    I get the error "ace is not defined". This means that the ace.js file hasn't run, or hasn't loaded, because if I break it apart into two bundles, it starts working:

        bundles.Add(new ScriptBundle("~/bundles/js")
            .Include("~/js/ace.js"));

        bundles.Add(new ScriptBundle("~/bundles/js/language_tools")
            .Include("~/js/ext-language_tools.js"));

    Can anyone explain why this would behave in this fashion?

    Read the article

  • Memory optimization while downloading

    - by lboregard
    Hello all, I have the following piece of code that I'm looking to optimize, since I'm consuming gobs of memory and this routine is heavily used. The first optimization would be to move the StringBuilder construction out of the Download routine, make it a field of the class, and clear it inside the routine. Can you please suggest any other optimizations, or point me toward some resources that could help me with this (web articles, books, etc.)? I'm thinking about replacing the StringBuilder with a fixed (much larger) size buffer, or perhaps creating a larger-sized StringBuilder. Thanks in advance.

        StreamWriter _writer;
        StreamReader _reader;

        public string Download(string msgId)
        {
            _writer.WriteLine("BODY <" + msgId + ">");
            string response = _reader.ReadLine();
            if (!response.StartsWith("222"))
                return null;

            bool done = false;
            StringBuilder body = new StringBuilder(256 * 1024);
            do
            {
                response = _reader.ReadLine();
                if (OnProgress != null)
                    OnProgress(response.Length);
                if (response == ".")
                {
                    done = true;
                }
                else
                {
                    if (response.StartsWith(".."))
                        response = response.Remove(0, 1);
                    body.Append(response);
                    body.Append("\r\n");
                }
            } while (!done);
            return body.ToString();
        }

    Read the article

  • How to optimize shopping carts for minimal prices?

    - by tangens
    I have a list of items I want to buy. The items are offered by different shops at different prices, and the shops have individual delivery costs. I'm looking for an optimal shopping strategy (and a Java library supporting it) to purchase all of the items at a minimal total price. Example:

        Item1 is offered at Shop1 for $100, at Shop2 for $111.
        Item2 is offered at Shop1 for $90, at Shop2 for $85.
        Delivery cost of Shop1: $10 if total order < $150; $0 otherwise.
        Delivery cost of Shop2: $5 if total order < $50; $0 otherwise.

        If I buy Item1 and Item2 at Shop1, the total cost is $100 + $90 + $0 = $190.
        If I buy Item1 and Item2 at Shop2, the total cost is $111 + $85 + $0 = $196.
        If I buy Item1 at Shop1 and Item2 at Shop2, the total cost is $100 + $10 + $85 + $0 = $195.

    So I get the minimal price if I order both items at Shop1: $190. Question: I need some hints as to which algorithms may help me solve optimization problems of this kind, for a number of items around 100 and a number of shops around 20. I have already looked at apache-math and its optimization package, but I have no idea what algorithm to look for.
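
    For the sizes mentioned (100 items, 20 shops), exhaustive search over the 20^100 assignments is out of the question, and the threshold-based delivery fees make the cost per shop non-linear, which is why problems like this are usually modeled as integer programs. Still, a brute-force Java sketch for tiny instances makes the cost model concrete (prices and thresholds taken from the example above):

        static int best = Integer.MAX_VALUE;

        /** Try every assignment of items to shops; assign[i] = shop index for item i. */
        static void search(int[][] price, int[][] delivery, int[] assign, int item) {
            int shops = price[0].length;
            if (item == assign.length) {
                int total = 0;
                for (int s = 0; s < shops; s++) {
                    int subtotal = 0;
                    for (int i = 0; i < assign.length; i++)
                        if (assign[i] == s) subtotal += price[i][s];
                    if (subtotal > 0 && subtotal < delivery[s][0])
                        subtotal += delivery[s][1];     // fee applies below the threshold
                    total += subtotal;
                }
                best = Math.min(best, total);
                return;
            }
            for (int s = 0; s < shops; s++) {
                assign[item] = s;
                search(price, delivery, assign, item + 1);
            }
        }

    With price = {{100, 111}, {90, 85}} and delivery = {{150, 10}, {50, 5}}, this finds best == 190, matching the example.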

    Read the article

  • optimization math computation (multiplication and summing)

    - by wiso
    Suppose you want to compute the sum of the squares of the differences of consecutive items, $\sum_{i=1}^{N-1} (x_i - x_{i+1})^2$. The simplest code (the input is std::vector<double> xs, the output is sum2) is:

        double sum2 = 0.;
        double prev = xs[0];
        for (std::vector<double>::const_iterator i = xs.begin() + 1; i != xs.end(); ++i) {
            sum2 += (prev - (*i)) * (prev - (*i)); // only 1 subtraction, with compiler optimization
            prev = (*i);
        }

    I hope that the compiler does the optimization in the comment above. If N is the length of xs, you have N-1 multiplications and 2N-3 additions/subtractions. Now suppose you already know this quantity: $\text{sum} = x_1^2 + x_N^2 + 2\sum_{i=2}^{N-1} x_i^2$. Expanding the binomial square gives $\sum_{i=1}^{N-1} (x_i - x_{i+1})^2 = \text{sum} - 2\sum_{i=1}^{N-1} x_i x_{i+1}$, so the code becomes:

        double sum2 = 0.;
        double prev = xs[0];
        for (std::vector<double>::const_iterator i = xs.begin() + 1; i != xs.end(); ++i) {
            sum2 += (*i) * prev;
            prev = (*i);
        }
        sum2 = -sum2 * 2. + sum;

    Here I have N multiplications and N-1 additions. In my case N is about 100. Well, compiling with g++ -O2 I got no speedup (I tried calling the inlined function 2M times); why?

    Read the article

  • Render rivers in a grid.

    - by Gabriel A. Zorrilla
    I have created a random height map and now I want to create rivers. I've made an algorithm based on A* to make rivers flow from peaks to sea, and now I'm on a quest to figure out an elegant algorithm to render them. It's a 2D square map grid. Each cell the river passes through holds a simple integer value of the form rivernumber && pointOrder, i.e. 10, 11, 12, 13, 14, 15, 16... 1+N for the first river, 20, 21, 22, 23... 2+N for the second, etc. This is created at map-grid generation time and is executed just once, when the world is generated. I wanted to treat each river as a vector, but there is a problem: if the same river has branches (because I add some noise to generate branches), I cannot just connect the points in order. The second alternative is a complex algorithm that analyzes each point, checks whether the next is a branch, if so triggers another algorithm that takes care of the branch, then returns to the main river, etc. Very complex and inelegant. Perhaps there is a solution in the world-generation algorithm or in the river-rendering algorithm that is commonly used in these cases and that I'm not aware of. Any tips? Thanks!!
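
    One way to avoid special-casing branches at render time: store each river as a tree while generating it, then render with a depth-first walk that emits one polyline per branch. This sketch assumes a tree representation built during generation, not the numbered-cell encoding described above:

        import java.util.*;

        /** A river cell: grid position plus downstream children. */
        class RiverNode {
            final int x, y;
            final List<RiverNode> children = new ArrayList<>();
            RiverNode(int x, int y) { this.x = x; this.y = y; }
        }

        /** Walk the river tree, collecting one polyline per root-to-tip branch. */
        static void collectPolylines(RiverNode node, List<RiverNode> current, List<List<RiverNode>> out) {
            current.add(node);
            if (node.children.isEmpty()) {
                out.add(new ArrayList<>(current));      // reached a tip: emit the polyline
            } else {
                for (int i = 0; i < node.children.size(); i++) {
                    // the first child continues this polyline; the rest start new branches at the fork
                    List<RiverNode> next = (i == 0) ? current : new ArrayList<>(List.of(node));
                    collectPolylines(node.children.get(i), next, out);
                }
            }
        }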

    Read the article

  • Resultant Vector Algorithm for 2D Collisions

    - by John
    I am making a Pong-based game where a puck hits a paddle and bounces off. Both the puck and the paddles are circles. I came up with an algorithm to calculate the resultant vector of the puck once it meets a paddle. The game seems to function correctly, but I'm not entirely sure my algorithm is correct. Here are my variables for the algorithm:

    Given:
        velocity = the magnitude of the initial velocity of the puck before the collision
        x, y = the coordinates of the puck
        moveX, moveY = the horizontal and vertical speed of the puck
        otherX, otherY = the coordinates of the paddle
        piece.horizontalMomentum, piece.verticalMomentum = the horizontal and vertical speed of the paddle before it hits the puck
        slope = the direction, in radians, of the puck's velocity
        distX, distY = the horizontal and vertical distance between the center of the puck and the center of the paddle

    The algorithm solves for:
        impactAngle = the angle of impact, in radians
        newSpeedX, newSpeedY = the speed of the resultant vector in the X and Y directions

    Here is the code for my algorithm:

        int otherX = piece.x;
        int otherY = piece.y;
        double velocity = Math.sqrt((moveX * moveX) + (moveY * moveY));
        double slope = Math.atan(moveX / moveY);
        int distX = x - otherX;
        int distY = y - otherY;
        double impactAngle = Math.atan(distX / distY);
        double newAngle = impactAngle + slope;
        int newSpeedX = (int)(velocity * Math.sin(newAngle)) + piece.horizontalMomentum;
        int newSpeedY = (int)(velocity * Math.cos(newAngle)) + piece.verticalMomentum;

    For those who are not program-savvy, here it is simplified:

        velocity = √(moveX² + moveY²)
        slope = arctan(moveX / moveY)
        distX = x - otherX
        distY = y - otherY
        impactAngle = arctan(distX / distY)
        newAngle = impactAngle + slope
        newSpeedX = velocity * sin(newAngle) + piece.horizontalMomentum
        newSpeedY = velocity * cos(newAngle) + piece.verticalMomentum

    My question: Is this algorithm correct? Is there an easier/simpler way to do what I'm trying to do?
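
    A possible simplification, sketched here rather than offered as a drop-in replacement: work with the collision normal instead of angles. Two details in the code above are worth noting: distX / distY is integer division (it truncates before atan ever sees it), and Math.atan2 avoids the quadrant ambiguity of plain arctan(x / y). Reflecting the velocity about the center-to-center normal sidesteps both:

        /** Reflect puck velocity about the puck-to-paddle collision normal. */
        static double[] bounce(double px, double py, double vx, double vy,
                               double paddleX, double paddleY,
                               double paddleVx, double paddleVy) {
            double nx = px - paddleX, ny = py - paddleY;   // collision normal, center to center
            double len = Math.hypot(nx, ny);
            nx /= len;
            ny /= len;
            double dot = vx * nx + vy * ny;
            if (dot < 0) {                                 // only bounce if moving into the paddle
                vx -= 2 * dot * nx;                        // v' = v - 2(v.n)n
                vy -= 2 * dot * ny;
            }
            return new double[] { vx + paddleVx, vy + paddleVy };
        }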

    Read the article

  • Python: Memory usage and optimization when modifying lists

    - by xApple
    The problem: I am storing a relatively large dataset in a classical Python list, and in order to process the data I must iterate over the list several times, perform some operations on the elements, and often pop an item out of the list. It seems that deleting one item out of a Python list costs O(N), since Python has to copy all the items above the element at hand down one place. Furthermore, since the number of items to delete is approximately proportional to the number of elements in the list, this results in an O(N^2) algorithm. I am hoping to find a solution that is cost-effective (time- and memory-wise). I have studied what I could find on the internet and have summarized my different options below. Which one is the best candidate?

    Keeping a local index:

        while processingdata:
            index = 0
            while index < len(somelist):
                item = somelist[index]
                dosomestuff(item)
                if somecondition(item):
                    del somelist[index]
                else:
                    index += 1

    This is the original solution I came up with. Not only is it not very elegant, but I am hoping there is a better way to do it that remains time and memory efficient.

    Walking the list backwards:

        while processingdata:
            for i in xrange(len(somelist) - 1, -1, -1):
                item = somelist[i]
                dosomestuff(item)
                if somecondition(somelist, i):
                    somelist.pop(i)

    This avoids incrementing an index variable but ultimately has the same cost as the original version. It also breaks the logic of dosomestuff(item), which wishes to process the items in the same order as they appear in the original list.

    Making a new list:

        while processingdata:
            for i, item in enumerate(somelist):
                dosomestuff(item)
            newlist = []
            for item in somelist:
                if somecondition(item):
                    newlist.append(item)
            somelist = newlist
            gc.collect()

    This is a very naive strategy for eliminating elements from a list, and it requires lots of memory since an almost full copy of the list must be made.

    Using list comprehensions:

        while processingdata:
            for i, item in enumerate(somelist):
                dosomestuff(item)
            somelist[:] = [x for x in somelist if somecondition(x)]

    This is very elegant, but under the cover it walks the whole list one more time and must copy most of the elements in it. My intuition is that this operation probably costs more than the original del statement, at least memory-wise. Keep in mind that somelist can be huge, and that any solution that iterates through it only once per run will probably always win.

    Using the filter function:

        while processingdata:
            for i, item in enumerate(somelist):
                dosomestuff(item)
            somelist = filter(lambda x: not subtle_condition(x), somelist)

    This also creates a new list occupying lots of RAM.

    Using itertools' filter function:

        from itertools import ifilterfalse
        while processingdata:
            for item in ifilterfalse(somecondition, somelist):
                dosomestuff(item)

    This version of the filter call does not create a new list, but it will not call dosomestuff on every item, breaking the logic of the algorithm. I am including this example only for the purpose of creating an exhaustive list.

    Moving items up the list while walking:

        while processingdata:
            index = 0
            for item in somelist:
                dosomestuff(item)
                if not somecondition(item):
                    somelist[index] = item
                    index += 1
            del somelist[index:]

    This is a subtle method that seems cost-effective. I think it will move each item (or the pointer to each item?) exactly once, resulting in an O(N) algorithm. Finally, I hope Python will be intelligent enough to resize the list at the end without allocating memory for a new copy of the list. Not sure though.

    Abandoning Python lists:

        class Doubly_Linked_List:
            def __init__(self):
                self.first = None
                self.last = None
                self.n = 0
            def __len__(self):
                return self.n
            def __iter__(self):
                return DLLIter(self)
            def iterator(self):
                return self.__iter__()
            def append(self, x):
                x = DLLElement(x)
                x.next = None
                if self.last is None:
                    x.prev = None
                    self.last = x
                    self.first = x
                    self.n = 1
                else:
                    x.prev = self.last
                    x.prev.next = x
                    self.last = x
                    self.n += 1

        class DLLElement:
            def __init__(self, x):
                self.next = None
                self.data = x
                self.prev = None

        class DLLIter:
            etc...

    This type of object resembles a Python list in a limited way. However, deletion of an element is guaranteed O(1). I would not like to go here, since this would require massive amounts of code refactoring almost everywhere.

    Read the article

  • Is there a perfect algorithm for chess?

    - by Overflown
    Dear Stack Overflow community, I was recently in a discussion with a non-coder on the possibilities of chess computers. I'm not well versed in theory, but I think I know enough. I argued that there could not exist a deterministic Turing machine that always won or stalemated at chess. I think that even if you search the entire space of all combinations of player1/player2 moves, the single move that the computer decides upon at each step is based on a heuristic. Being based on a heuristic, it does not necessarily beat ALL of the moves that the opponent could make. My friend thought, to the contrary, that a computer would always win or tie if it never made a "mistake" move (however you define that). However, being a programmer who has taken CS, I know that even your good choices, given a wise opponent, can force you to make "mistake" moves in the end. Even if you know everything, your next move is greedy in matching a heuristic. Most chess computers try to match a possible endgame to the game in progress, which is essentially a dynamic programming traceback. Again, the endgame in question is avoidable, though. -- thanks, Allan. Edit: Hmm... looks like I ruffled some feathers here. That's good. Thinking about it again, it seems there is no theoretical problem with solving a finite game like chess. I would argue that chess is a bit more complicated than checkers, in that a win is not necessarily by numerical exhaustion of pieces, but by a mate. My original assertion is probably wrong, but then again I think I've pointed out something that is not yet satisfactorily proven (formally). I guess my thought experiment was that whenever a branch in the tree is taken, the algorithm (or memorized paths) must find a path to a mate (without getting mated) for every possible branch of the opponent's moves. After the discussion, I will buy that, given more memory than we can possibly dream of, all these paths could be found.

    Read the article

  • Accurate least-squares fit algorithm needed

    - by ggkmath
    I've experimented with the two ways of implementing a least-squares fit (LSF) algorithm shown here. The first code is simply the textbook approach, as described by Wolfram's page on LSF. The second code rearranges the equation to minimize machine errors. Both codes produce similar results for my data. I compared these results with Matlab's p = polyfit(x,y,1) function, using correlation coefficients to measure the "goodness" of fit and compare each of the 3 routines. I observed that while all 3 methods produced good results, at least for my data, Matlab's routine had the best fit (the other 2 routines had similar results to each other). Matlab's p = polyfit(x,y,1) function uses a Vandermonde matrix V (an n x 2 matrix) and QR factorization to solve the least-squares problem. In Matlab code, it looks like:

        V = [x1,1; x2,1; x3,1; ... xn,1]  % this line is pseudo-code
        [Q,R] = qr(V,0);
        p = R\(Q'*y);  % performs same as p = V\y

    I'm not a mathematician, so I don't understand why it would be more accurate. Although the difference is slight, in my case I need to obtain the slope from the LSF and multiply it by a large number, so any improvement in accuracy shows up in my results. For reasons I can't get into, I cannot use Matlab's routine in my work. So, I'm wondering if anyone has a more accurate equation-based approach I could use that is an improvement over the above two approaches, in terms of rounding errors/machine accuracy/etc. Any comments appreciated! Thanks in advance.
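
    The usual explanation for the accuracy gap: the textbook formulas (and the normal equations generally) effectively square the condition number of the problem, while QR works on V directly, so it loses roughly half as many digits. An equation-based compromise that behaves much better than the raw textbook formula is to center the data first, since subtracting the means removes most of the cancellation. A Java sketch:

        /** Returns {slope, intercept} of the least-squares line through (x[i], y[i]). */
        static double[] fitLine(double[] x, double[] y) {
            int n = x.length;
            double mx = 0, my = 0;
            for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
            mx /= n;
            my /= n;
            double sxy = 0, sxx = 0;              // centered sums: far less cancellation
            for (int i = 0; i < n; i++) {
                sxy += (x[i] - mx) * (y[i] - my);
                sxx += (x[i] - mx) * (x[i] - mx);
            }
            double slope = sxy / sxx;
            return new double[] { slope, my - slope * mx };
        }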

    Read the article

  • Algorithm to get through a maze

    - by Sam
    Hello, we are currently programming a game (in a pretty obscure language: Modula-2), and the problem we encountered is the following: we have a maze (not a perfect maze) on a 17 x 12 grid. The computer has to generate a way from the starting point (9, 12) to the end point (9, 1). I found some algorithms, but they don't work when the robot has to go back: (two small ASCII maze sketches, garbled in this excerpt, showing dead-end pockets where the robot must backtrack). I found a solution for the first example type, but then the second type couldn't be solved, and the solution I made up for the second type would cause the robot to get stuck in the first type of situation. It's a lot of code, so I'll give the idea:

        WHILE (end destination not reached) DO {
            try to go right; if nothing blocks you, go right
            if you encounter a block, try up until you can go right
            if you can't go up anymore, try going down until you can go right
                (starting from the place you were first blocked)
            if you can't go down anymore, try one step left
                and fill the spaces you tested with blocks
        }

    This works for the first type of problem... not for the second one. Now it could be that I started wrong, so I am open to better algorithms, or to suggestions specifically on how I could improve my algorithm. Thanks a lot!!
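
    Wall-following heuristics like the one above keep hitting corner cases; the robust alternative is a graph search that marks visited cells. Breadth-first search also returns a shortest path and handles any dead-end layout. A Java sketch of the idea (the grid encoding, true = wall, is an assumption):

        import java.util.*;

        static List<int[]> solve(boolean[][] wall, int sx, int sy, int gx, int gy) {
            int w = wall.length, h = wall[0].length;
            int[][] prev = new int[w][h];                  // predecessor cell, encoded x*h+y
            for (int[] row : prev) Arrays.fill(row, -1);
            prev[sx][sy] = sx * h + sy;                    // mark the start as visited
            Deque<int[]> queue = new ArrayDeque<>();
            queue.add(new int[] { sx, sy });
            int[][] dirs = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
            while (!queue.isEmpty()) {
                int[] c = queue.poll();
                if (c[0] == gx && c[1] == gy) {            // goal reached: walk back to start
                    LinkedList<int[]> path = new LinkedList<>();
                    int x = gx, y = gy;
                    while (x != sx || y != sy) {
                        path.addFirst(new int[] { x, y });
                        int p = prev[x][y];
                        x = p / h;
                        y = p % h;
                    }
                    path.addFirst(new int[] { sx, sy });
                    return path;
                }
                for (int[] d : dirs) {
                    int nx = c[0] + d[0], ny = c[1] + d[1];
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h
                            && !wall[nx][ny] && prev[nx][ny] == -1) {
                        prev[nx][ny] = c[0] * h + c[1];
                        queue.add(new int[] { nx, ny });
                    }
                }
            }
            return null;                                   // no path exists
        }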

    Read the article

  • Unexpected result in C algebra for search algorithm.

    - by Rhys
    Hi, I've implemented this search algorithm for an ordered array of integers. It works fine for the first data set I feed it (500 integers), but fails on longer searches. However, all of the sets work perfectly with the other four search algorithms I've implemented for the assignment. This is the function that returns a seg fault on line 178 (due to an unexpected negative m value). Any help would be greatly appreciated. CODE:

        155 /* perform Algorithm 'InterpolationSearch' on the set
        156  * and if 'key' is found in the set return its index,
        157  * otherwise return -1 */
        158 int
        159 interpolation_search(int *set, int len, int key)
        160 {
        161     int l = 0;
        162     int r = len - 1;
        163     int m;
        164
        165     while (set[l] < key && set[r] >= key)
        166     {
        167
        168         printf("m = l + ((key - set[l]) * (r - l)) / (set[r] - set[l])\n");
        169
        170         printf("m = %d + ((%d - %d) * (%d - %d)) / (%d - %d);\n", l, key, set[l], r, l, set[r], set[l]);
        171         m = l + ((key - set[l]) * (r - l)) / (set[r] - set[l]);
        172         printf("m = %d\n", m);
        173
        174 #ifdef COUNT_COMPARES
        175         g_compares++;
        176 #endif
        177
        178         if (set[m] < key)
        179             l = m + 1;
        180         else if (set[m] > key)
        181             r = m - 1;
        182         else
        183             return m;
        184     }
        185
        186     if (set[l] == key)
        187         return l;
        188     else
        189         return -1;
        190 }

    OUTPUT:

        m = l + ((key - set[l]) * (r - l)) / (set[r] - set[l])
        m = 0 + ((68816 - 0) * (100000 - 0)) / (114836 - 0);
        m = -14876

    Thank you! Rhys
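
    The negative m on line 171 is a 32-bit integer overflow, not an algebra problem: in the trace above, (key - set[l]) * (r - l) is 68816 * 100000 = 6,881,600,000, far beyond INT_MAX (2,147,483,647). The product wraps to -1,708,334,592, the division then yields -14876 (exactly the value printed), and set[m] on line 178 reads far out of bounds. The standard fix is to do the intermediate product in a 64-bit type. Java's int wraps the same way as C's, so the arithmetic can be reproduced verbatim:

        public class OverflowDemo {
            public static void main(String[] args) {
                int bad = (68816 - 0) * (100000 - 0);      // wraps around 2^31
                System.out.println(bad / (114836 - 0));    // prints -14876, as in the output
                long good = (68816L - 0) * (100000 - 0);   // 64-bit intermediate product
                System.out.println(good / (114836 - 0));   // prints 59925, a sane index
            }
        }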

    Read the article

  • Resizing image algorithm in python

    - by hippocampus
    So, I'm teaching myself Python with this tutorial and I'm stuck on exercise number 13, which says: "Write a function to uniformly shrink or enlarge an image. Your function should take an image along with a scaling factor. To shrink the image the scale factor should be between 0 and 1; to enlarge the image the scaling factor should be greater than 1." This is not meant as a question about PIL, but about which algorithm to use, so I can code it myself. I've found some similar questions like this, but I don't know how to translate them into Python. Any help would be appreciated. I've come to this:

        import image

        win = image.ImageWin()
        img = image.Image("cy.png")
        factor = 2
        W = img.getWidth()
        H = img.getHeight()
        newW = int(W * factor)
        newH = int(H * factor)
        newImage = image.EmptyImage(newW, newH)
        for col in range(newW):
            for row in range(newH):
                p = img.getPixel(col, row)
                newImage.setPixel(col * factor, row * factor, p)
        newImage.draw(win)
        win.exitonclick()

    I should do this in a function, but that doesn't matter right now; the arguments for the function would be (image, factor). You can try it in the tutorial's ActiveCode. It makes a stretched image with empty columns.
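
    The usual algorithm here is nearest-neighbor resampling: loop over the destination pixels and map each one back to a source pixel, so every destination pixel is written exactly once and no gaps appear (the empty columns come from looping over the source and writing only every factor-th destination column). Sketched in Java over a plain pixel array, since the idea is language-independent:

        /** Nearest-neighbor resize of a pixels[row][col] array by the given factor. */
        static int[][] resize(int[][] src, double factor) {
            int newH = (int) (src.length * factor);
            int newW = (int) (src[0].length * factor);
            int[][] dst = new int[newH][newW];
            for (int row = 0; row < newH; row++) {
                for (int col = 0; col < newW; col++) {
                    // map destination coordinates back to the nearest source pixel
                    dst[row][col] = src[(int) (row / factor)][(int) (col / factor)];
                }
            }
            return dst;
        }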

    Read the article

  • Video-codec rater by image comparison algorithm?

    - by Andreas Hornig
    Hi, perhaps someone knows if this is possible. Comparing image quality is almost impossible to describe without subjective influences: when someone rates an image's quality as good, there is at least one person who doesn't think so; human preferences always differ. So, I would like to know if there is a way to "rate" image quality with an algorithm that compares the produced image to the original on the following points:

        - colour change (difference, pixel by pixel)
        - blur rate
        - artifacts and macroblocking

    The first one would be the easiest, because you could just check the difference in colours and give 3 values, +/- of each hex value. For the last two I don't know if this is possible, but the blocking could perhaps be detected by edge-finding. And the king's quest would be to do that for more than just one image, because video is done with several frames. Perhaps you expert programmers could tell me whether such an automated algorithm can be built, to bring some objective measuring device into rating image quality. This could perhaps calm down some of the "h.264 is better than x264 and better than vp8 and blaaah" people :) Andreas. First posted here: http://www.hdtvtotal.com/index.php?name=PNphpBB2&file=viewtopic&p=9705
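
    Objective metrics of exactly this kind exist and are standard in codec comparisons: PSNR (peak signal-to-noise ratio) scores per-pixel difference against the original, and SSIM (structural similarity) tracks perceived quality better; video comparisons typically average a per-frame metric across all frames. A minimal PSNR sketch in Java over 8-bit grayscale frames (extending it to RGB or whole videos is just more averaging):

        /** PSNR in dB between two 8-bit images of identical dimensions. */
        static double psnr(int[][] original, int[][] compressed) {
            double mse = 0;
            int pixels = original.length * original[0].length;
            for (int r = 0; r < original.length; r++)
                for (int c = 0; c < original[0].length; c++) {
                    double d = original[r][c] - compressed[r][c];
                    mse += d * d / pixels;                  // mean squared error
                }
            if (mse == 0) return Double.POSITIVE_INFINITY;  // identical images
            return 10 * Math.log10(255.0 * 255.0 / mse);
        }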

    Read the article

  • Bitap algorithm in Java [closed]

    - by davit-datuashvili
    The following is the bitap algorithm according to Wikipedia. Can someone translate this to Java?

        #include <string.h>
        #include <limits.h>

        const char *bitap_bitwise_search(const char *text, const char *pattern)
        {
            int m = strlen(pattern);
            unsigned long R;
            unsigned long pattern_mask[CHAR_MAX+1];
            int i;

            if (pattern[0] == '\0') return text;
            if (m > 31) return "The pattern is too long!";

            /* Initialize the bit array R */
            R = ~1;

            /* Initialize the pattern bitmasks */
            for (i=0; i <= CHAR_MAX; ++i)
                pattern_mask[i] = ~0;
            for (i=0; i < m; ++i)
                pattern_mask[pattern[i]] &= ~(1UL << i);

            for (i=0; text[i] != '\0'; ++i) {
                /* Update the bit array */
                R |= pattern_mask[text[i]];
                R <<= 1;

                if (0 == (R & (1UL << m)))
                    return (text + i - m) + 1;
            }

            return NULL;
        }
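
    A fairly direct Java translation, offered as an untested sketch: Java has no unsigned long, but the algorithm only uses bits 0..m, so a plain long works; the CHAR_MAX-sized table becomes a 128-entry ASCII table here (an assumption, so inputs must be ASCII), and the pointer return becomes an index (-1 for no match).

        /** Bitap exact-match search: index of pattern in text, or -1 if absent. */
        static int bitapSearch(String text, String pattern) {
            int m = pattern.length();
            if (m == 0) return 0;
            if (m > 31) throw new IllegalArgumentException("The pattern is too long!");

            long[] patternMask = new long[128];        // ASCII alphabet, by assumption
            java.util.Arrays.fill(patternMask, ~0L);
            for (int i = 0; i < m; i++)
                patternMask[pattern.charAt(i)] &= ~(1L << i);

            long R = ~1L;                              // bit array; 0 bits mark progress
            for (int i = 0; i < text.length(); i++) {
                R |= patternMask[text.charAt(i)];
                R <<= 1;
                if ((R & (1L << m)) == 0)
                    return i - m + 1;                  // match ends at position i
            }
            return -1;
        }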

    Read the article

  • Average performance of binary search algorithm?

    - by Passionate Learner
    http://en.wikipedia.org/wiki/Binary_search_algorithm#Average_performance

        int BinarySearch(int A[], int value, int low, int high) {
            int mid;
            if (high < low)
                return -1;
            mid = (low + high) / 2;
            if (A[mid] > value)
                return BinarySearch(A, value, low, mid - 1);
            else if (A[mid] < value)
                return BinarySearch(A, value, mid + 1, high);
            else
                return mid;
        }

    If the integer I'm trying to find is always in the array, can anyone help me write a program that can calculate the average performance of the binary search algorithm? I know I can do this by actually running the program and counting the number of calls, but what I'm trying to do here is to do it without calling the function. I'm not asking for the time complexity; I'm trying to calculate the average number of calls. For example, the average number of calls to find an integer in an array of 3 elements would be 1.67 (5/3).
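
    Since every search lands on some element, the average is just (the sum, over all elements, of the number of calls needed to reach that element) divided by N, and those call counts follow the implicit search tree: the middle element takes 1 call, the middles of the two halves take 2, and so on. That structure can be tallied without ever searching. A sketch mirroring the recursion above:

        /** Sums the number of calls needed to find each element of A[low..high]. */
        static long totalCalls(int low, int high, int depth) {
            if (high < low) return 0;
            int mid = (low + high) / 2;
            // finding A[mid] costs 'depth' calls; both halves sit one level deeper
            return depth + totalCalls(low, mid - 1, depth + 1)
                         + totalCalls(mid + 1, high, depth + 1);
        }

        static double averageCalls(int n) {
            return totalCalls(0, n - 1, 1) / (double) n;
        }

    averageCalls(3) returns 5/3 = 1.67, matching the example.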

    Read the article

  • priority queue with limited space: looking for a good algorithm

    - by SigTerm
    This is not homework. I'm using a small "priority queue" (implemented as an array at the moment) for storing the N items with the smallest values seen so far. This is a bit slow: O(N) item insertion time. The current implementation keeps track of the largest item in the array and discards any items that wouldn't fit, but I would still like to reduce the number of operations further. I'm looking for a priority queue algorithm that matches the following requirements:

        - The queue can be implemented as an array, which has a fixed size and _cannot_ grow. Dynamic memory allocation during any queue operation is strictly forbidden.
        - Anything that doesn't fit into the array is discarded, but the queue keeps all the smallest elements ever encountered.
        - O(log(N)) insertion time (i.e. adding an element into the queue should take up to O(log(N))).
        - (Optional) O(1) access to the *largest* item in the queue (the queue stores the *smallest* items, so the largest item will be discarded first, and I'll need it to reduce the number of operations).
        - Easy to implement/understand. Ideally something similar to binary search: once you understand it, you remember it forever.
        - The elements need not be sorted in any way. I just need to keep the N smallest values ever encountered. When I need them, I'll access all of them at once, so technically it doesn't have to be a queue; I just need the N smallest values to be stored.

    I initially thought about using binary heaps (they can be easily implemented via arrays), but apparently they don't behave well when the array can't grow anymore. Linked lists and arrays will require extra time for moving things around. An STL priority queue grows and uses dynamic allocation (I may be wrong about that, though). So, any other ideas?
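
    A fixed-size binary max-heap appears to fit every requirement on this list, because the "can't grow" issue disappears if overflow is handled before insertion: while the heap is full, compare the new element against the root (the largest kept value, giving the O(1) access asked for) and either discard the new element or replace the root and sift down. Insertion stays O(log N) and nothing ever allocates after construction. A Java sketch:

        /** Keeps the k smallest values offered, in a fixed array (max-heap layout). */
        class SmallestK {
            private final int[] heap;
            private int size = 0;
            SmallestK(int k) { heap = new int[k]; }

            int largestKept() { return heap[0]; }   // O(1), valid once size > 0

            void offer(int v) {
                if (size < heap.length) {           // room left: ordinary sift-up
                    int i = size++;
                    heap[i] = v;
                    for (; i > 0 && heap[(i - 1) / 2] < heap[i]; i = (i - 1) / 2) {
                        int t = heap[i]; heap[i] = heap[(i - 1) / 2]; heap[(i - 1) / 2] = t;
                    }
                } else if (v < heap[0]) {           // full: replace the largest, sift down
                    heap[0] = v;
                    for (int i = 0; ; ) {
                        int l = 2 * i + 1, r = l + 1, big = i;
                        if (l < size && heap[l] > heap[big]) big = l;
                        if (r < size && heap[r] > heap[big]) big = r;
                        if (big == i) break;
                        int t = heap[i]; heap[i] = heap[big]; heap[big] = t;
                        i = big;
                    }
                }                                    // else: v is too large, discard it
            }
        }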

    Read the article

  • Algorithm to determine which points should be visible on a map based on zoom

    - by lgratian
    Hi! I'm making a Google Maps-like application for a course at my uni (nothing complex; it should load the map of a city, for example, not the whole world). The map can have many layers, including markers (restaurants, hospitals, etc.). The problem is that when you have many points and you zoom out, the map doesn't look right. At that zoom level only some points need to be visible (and at the maximum map size, all points). The question is: how can you determine which points should be visible at a specified zoom level? Because I have implemented a PR quadtree to speed up rendering, I thought that I could define some "high-priority" markers (always visible, defined in the map editor) and put them in a queue. At each step a marker is removed from the queue, and all its neighbors that are at least D units away (D depends on the zoom level) are chosen and inserted in the queue, and so on. Is there any better way than the algorithm I thought of? Thanks in advance!
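
    The scheme described is essentially greedy distance-based decluttering, and a simple variant is easy to state: walk the markers in priority order and keep one only if every already-kept marker is at least D(zoom) away. A naive O(n*k) Java sketch of that variant (in practice the PR quadtree would replace the inner loop with a range query):

        import java.util.*;

        /** markers must be pre-sorted by descending priority; each is {x, y}. */
        static List<double[]> visibleMarkers(List<double[]> markers, double minDist) {
            List<double[]> kept = new ArrayList<>();
            for (double[] m : markers) {
                boolean farEnough = true;
                for (double[] k : kept) {
                    if (Math.hypot(m[0] - k[0], m[1] - k[1]) < minDist) {
                        farEnough = false;   // too close to a higher-priority marker
                        break;
                    }
                }
                if (farEnough) kept.add(m);
            }
            return kept;
        }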

    Read the article

  • Number Algorithm

    - by James
    I've been struggling to wrap my head around this for some reason. I have 15 bits that represent a number. The bits must match a pattern. The pattern is defined by the way the bits start out: they are in the most flush-right representation of that pattern. So say the pattern is 1 4 1. The bits will be:

        000000010111101

    The general rule is: take each number in the pattern, create that many bits (1, 4, or 1 in this case), and have at least one space separating them. So if it's 1 2 6 1 (it will be random):

        001011011111101

    Starting with the flush-right version, I want to generate every single possible number that meets that pattern. The number of bits is stored in a variable. For a simple case, assume it's 5 bits and the initial bit pattern is 00101. I want to generate:

        00101
        01001
        01010
        10001
        10010
        10100

    I'm trying to do this in Objective-C, but anything resembling C would be fine. I just can't seem to come up with a good recursive algorithm for this. It makes sense in the above example, but when I start getting into 12431 and having to keep track of everything, it breaks down.
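
    A recursive formulation that handles the bookkeeping: place the runs left to right; run i may start anywhere from the current position up to the point where the remaining runs (each needing its length plus a one-bit gap) still fit, then recurse from just past the placed run and its gap. A Java sketch using ints as bit strings, with bit width-1 as the leftmost position:

        /** Collects every width-bit value whose 1-runs match 'runs', in order. */
        static void place(int[] runs, int idx, int start, int width, int acc, List<Integer> out) {
            if (idx == runs.length) {
                out.add(acc);
                return;
            }
            int needed = 0;                       // minimum space for the remaining runs
            for (int j = idx; j < runs.length; j++)
                needed += runs[j] + (j > idx ? 1 : 0);
            for (int s = start; s + needed <= width; s++) {
                // a run of runs[idx] ones starting at position s (counted from the left)
                int block = ((1 << runs[idx]) - 1) << (width - s - runs[idx]);
                place(runs, idx + 1, s + runs[idx] + 1, width, acc | block, out);
            }
        }

    Calling place(new int[]{1, 1}, 0, 0, 5, 0, out) yields exactly the six 5-bit values listed above.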

    Read the article

  • Algorithm to see if keywords exist inside a string

    - by rksprst
    Let's say I have a set of keywords in an array: {"olympics", "sports tennis best", "tennis", "tennis rules"}. I then have a large list (up to 50 at a time) of strings (actually tweets), so each is at most 140 characters. I want to look at each string and see which keywords are present in it. In the case where a keyword is composed of multiple words, like "sports tennis best", the words don't have to be together in the string, but all of them have to show up. I'm having trouble figuring out an algorithm that does this efficiently. Do you have suggestions on a way to do this? Thanks! Edit: To explain a bit better, each keyword has an id associated with it, so {1: "olympics", 2: "sports tennis best", 3: "tennis", 4: "tennis rules"}. I want to go through the list of strings/tweets and see which keywords match. The output should be: this tweet belongs with keyword #4 (multiple matches may be made, so anything that matches keyword 2 would also match 3, since they both contain "tennis"). When there are multiple words in the keyword, e.g. "sports tennis best", they don't have to appear together, but they all have to appear. E.g. this will correctly match: "i just played tennis, i love sports, its the best"... since this string contains all the words of "sports tennis best", it will match and be associated with that keyword id (which is 2 for this example).
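
    A straightforward approach that is efficient at this scale: tokenize each tweet once into a set of words, then a keyword matches when all of its words are in that set, since each containment check is a single hash lookup. A Java sketch (the tokenization regex is a simplifying assumption; real tweets may need smarter splitting):

        import java.util.*;

        static List<Integer> matchingKeywords(Map<Integer, String> keywords, String tweet) {
            Set<String> words = new HashSet<>(Arrays.asList(tweet.toLowerCase().split("\\W+")));
            List<Integer> matches = new ArrayList<>();
            for (Map.Entry<Integer, String> kw : keywords.entrySet()) {
                // every word of the keyword must occur somewhere in the tweet
                boolean all = true;
                for (String w : kw.getValue().toLowerCase().split("\\s+"))
                    if (!words.contains(w)) { all = false; break; }
                if (all) matches.add(kw.getKey());
            }
            return matches;
        }

    For the example tweet above, this returns the ids for "sports tennis best" and "tennis", but not "tennis rules".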

    Read the article
