Search Results

Search found 7490 results on 300 pages for 'algorithm analysis'.


  • Given an even number of vertices, how to find an optimum set of pairs based on proximity?

    - by Alex Z
    The problem: We have a set of n vertices in 3D euclidean space, and there is an even number of these vertices. We want to pair them up based on their proximity. In other words, we'd like to be able to find a set of vertex pairs, where the vertices in each pair are as close as possible together. In doing so, we want to sacrifice as little as possible of the proximity within the other pairs. I am not looking for the most optimal solution (if it even strictly exists/can be done), just a reasonable one that can be computed relatively quickly. A fairly awful brute-force approach is to choose a vertex, loop through the rest to find its nearest neighbour, and repeat until none are left. Of course, as we near the end of the list, the closest remaining vertex could be very far away, but it is the only choice, so this approach can fail badly on that last goal of keeping every pair close.
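
    For a quick baseline, here is a minimal Python sketch (all names illustrative) of a globally greedy variant: sort every candidate pair by distance and repeatedly take the closest pair whose vertices are both still free. It costs O(n^2 log n) and is still not optimal (that would be minimum-weight perfect matching, e.g. Edmonds' blossom algorithm), but it avoids the worst of the pick-a-vertex-then-scan failure mode described above.

        import itertools
        import math

        def greedy_pairs(points):
            # points: list of (x, y, z) tuples; len(points) is even
            dist = lambda i, j: math.dist(points[i], points[j])
            # every candidate pair, closest first: O(n^2 log n)
            candidates = sorted(itertools.combinations(range(len(points)), 2),
                                key=lambda p: dist(*p))
            remaining = set(range(len(points)))
            pairs = []
            for i, j in candidates:
                if i in remaining and j in remaining:
                    pairs.append((i, j))
                    remaining -= {i, j}
            return pairs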

    Read the article

  • assignment vs std::swap and merging and keeping duplicates in separate object

    - by rubenvb
    Say I have two std::set<std::string>s. The first one, old_options, needs to be merged with additional options contained in new_options. I can't just use std::merge (well, I do, but not only that) because I also check for duplicates and warn the user accordingly. To this effect, I have:

        void merge_options( set<string> &old_options, const set<string> &new_options )
        {
            // find duplicates and create merged_options, a set<string> containing the merged options
            // handle duplicates the way I want to
            // ...
            old_options = merged_options;
        }

    Is it better to use std::swap( merged_options, old_options ); or the assignment I have? Is there a better way to filter duplicates and return the merged set than consecutive calls to std::set_intersection and std::set_union to detect dupes and merge the sets? I know it's slower than a single traversal doing both at once, but these sets are small (performance is not critical) and I trust the Standard more than I trust myself.

    Read the article

  • Classical task-scheduling assignment

    - by Bruno
    I am working on a flight scheduling app (disclaimer: it's for a college project, so no code answers, please). Please read this question with a quantum of attention before answering, as it has a lot of peculiarities :( First, some terminology:
    - You have planes and flights, and you have to pair them up. For simplicity's sake, we'll assume that a plane is free as soon as the flight using it prior lands.
    - Flights are seen as tasks: they have a duration, they have dependencies, and they have an expected date/time for beginning.
    - Planes can be seen as resources to be used by tasks (or flights, in our terminology).
    - Each flight needs a specific type of plane, e.g. flight 200 needs a plane of type B. Planes obviously are of one and only one specific type, e.g. plane Airforce One is of type C.
    - A "project" is the set of all the flights by an airline in a given time period.

    The functionality required is:
    - Finding the shortest possible duration for a given project
    - The earliest and latest possible start for a task (flight)
    - The critical tasks, based on provided data, complete with identifiers of preceding tasks
    - Automatically pairing up flights and planes, so that all flights are paired with a plane (note: the duration of flights is fixed)
    - A Gantt diagram of the project's schedule, in which all flights begin as early as possible, showing all previously referred data graphically (dependencies, time info, etc.)

    So the question is: how in the world do I achieve this? Particularly:
    - We are required to use a graph. What do the graph's edges and nodes respectively symbolise?
    - Are we required to discard tasks to arrive at the critical task set?
    If you could also recommend some algorithms for us to look up, that'd be great.

    Read the article

  • Given a word, how do I get the list of all words, that differ by one letter?

    - by user187809
    Let's say I have the word "CAT". These words differ from "CAT" by one letter (not the full list): CUT, CAP, PAT, FAT, COT, etc. Is there an elegant way to generate this? Obviously, one way to do it is through brute force. Pseudocode:

        for each position (0 to length of word):
            for each letter (A to Z):
                replace the letter at this position and check if the resulting word is valid

    If I had a 10-letter word, the loop would run 26 * 10 = 260 times. Is there a better, more elegant way to do this?
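
    Brute force here is cheaper than it looks: 26 * L candidates with O(1) set lookups. A Python sketch (the dictionary is assumed to be an in-memory set of uppercase words); for many repeated queries, a wildcard index such as mapping "C_T" to all matching words avoids the 26-letter loop entirely:

        import string

        def one_letter_neighbours(word, dictionary):
            # dictionary: a set of valid uppercase words
            neighbours = []
            for i in range(len(word)):
                for letter in string.ascii_uppercase:
                    if letter != word[i]:
                        candidate = word[:i] + letter + word[i + 1:]
                        if candidate in dictionary:
                            neighbours.append(candidate)
            return neighbours

        # one_letter_neighbours("CAT", {"CUT", "CAP", "PAT", "DOG"})
        # -> ['PAT', 'CUT', 'CAP']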

    Read the article

  • An O(1) Sort

    - by FlySwat
    Before you stone me for being a heretic: there is a sort that proclaims to be O(1), called "Bead Sort" (http://en.wikipedia.org/wiki/Bead_sort). However, that is pure theory; when actually applied, I found that it was actually O(N * M), which is pretty pathetic. That said, let's list out some of the fastest sorts, and their best-case and worst-case speeds in Big O notation.
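
    As a sanity check on that claim, here is a minimal Python bead sort (a sketch, assuming non-negative integers): each number is a row of beads that fall down their columns, and the runtime is O(N * M) where M is the largest value, matching the observation above.

        def bead_sort(nums):
            # "gravity" simulation: drop n beads into the first n columns,
            # then read the settled rows back out, largest first
            if not nums:
                return []
            columns = [0] * max(nums)
            for n in nums:
                for i in range(n):
                    columns[i] += 1
            # row j of the settled beads has one bead per column taller than j
            return [sum(1 for c in columns if c > j) for j in range(len(nums))]

        print(bead_sort([5, 3, 1, 7, 4, 1, 1, 20]))  # [20, 7, 5, 4, 3, 1, 1, 1]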

    Read the article

  • Python - Compress ASCII String

    - by n0idea
    I'm looking for a way to compress an ASCII-based string; any help? I also need to decompress it. I tried zlib, but with no help. What can I do to compress the string into a lesser length? Code:

        def compress(request):
            if request.POST:
                data = request.POST.get('input')
                if is_ascii(data):
                    result = zlib.compress(data)
                    return render_to_response('index.html', {'result': result, 'input': data},
                                              context_instance=RequestContext(request))
                else:
                    result = "Error, the string is not ascii-based"
                    return render_to_response('index.html', {'result': result},
                                              context_instance=RequestContext(request))
            else:
                return render_to_response('index.html', {}, context_instance=RequestContext(request))
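
    Worth noting when testing this: zlib's output carries fixed framing (a 2-byte header and a 4-byte checksum) plus DEFLATE block overhead, so very short or low-redundancy strings usually come out longer than the input; compression only pays off once there is enough repetition. A quick illustration (Python 3, where zlib wants bytes):

        import zlib

        short = b"hello world"
        long_ = b"hello world " * 100
        print(len(short), len(zlib.compress(short)))  # 11 vs ~19: framing overhead wins
        print(len(long_), len(zlib.compress(long_)))  # 1200 vs a few dozen: repetition wins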

    Read the article

  • Auto scale and rotate images

    - by Dave Jarvis
    Given:
    - two images of the same subject matter;
    - the images have the same resolution, colour depth, and file format;
    - the images differ in size and rotation; and
    - two lists of (x, y) co-ordinates that correlate the images.

    I would like to know:
    - How do you transform the larger image so that it visually aligns to the second image?
    - (Optional.) What is the minimum number of points needed to get an accurate transformation?
    - (Optional.) How far apart do the points need to be to get an accurate transformation?

    The transformation would need to rotate, scale, and possibly shear the larger image. Essentially, I want to create (or find) a program that does the following:
    1. Input two images (e.g., TIFFs).
    2. Click several anchor points on the small image.
    3. Click the corresponding anchor points on the large image.
    4. Transform the large image such that it maps to the small image by aligning the anchor points.

    This would help align pictures of the same stellar object. (For example, a hand-drawn picture from 1855 mapped to a photograph taken by Hubble in 2000.) Many thanks in advance for any algorithms (preferably Java or similar pseudo-code), ideas or links to related open-source software packages.
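
    A common starting point is a least-squares affine transform (rotation, scale, shear, translation) estimated from the correlated points: three non-collinear pairs are the minimum, and extra pairs average out click error. A minimal NumPy sketch (function names are illustrative); the resulting matrix can then be fed to an image-warping routine such as scipy.ndimage.affine_transform, or java.awt.geom.AffineTransform on the Java side.

        import numpy as np

        def estimate_affine(src, dst):
            # src, dst: (N, 2) arrays of corresponding points, N >= 3, not collinear
            A = np.hstack([src, np.ones((len(src), 1))])   # (N, 3)
            M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) affine matrix
            return M

        def warp_points(points, M):
            # apply the estimated transform to any (N, 2) point set
            return np.hstack([points, np.ones((len(points), 1))]) @ M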

    Read the article

  • How can I solve the Log Pile wooden puzzle with a computer program?

    - by craig1410
    Can anyone suggest how to solve the Log Pile wooden puzzle using a computer program? See here to visualise the puzzle: http://www.puzzlethis.co.uk/products/madcow/the_log_pile.htm The picture only shows some of the pieces. The full set of 10 pieces is configured as follows, with 1 representing a peg, -1 representing a hole, and 0 representing neither a peg nor a hole:

        -1,1,0,-1,0
        1,0,1,0,0
        1,-1,1,0,0
        -1,-1,0,0,-1
        -1,1,0,1,0
        0,1,0,0,1
        1,0,-1,0,-1
        0,-1,0,1,0
        0,0,-1,1,-1
        1,0,-1,0,0

    The pieces can be interlocked in two layers of 5 pieces each, with the top layer at 90 degrees to the bottom layer, as shown in the above link. I have already created a solution to this problem myself using Java, but I feel that it was a clumsy solution and I am interested to see some more sophisticated solutions. Feel free either to suggest a general approach or to provide a working program in the language of your choice. My approach was to use the numeric notation above to create an array of "Logs". I then used a combination/permutation generator to try all possible arrangements of the logs until a solution was found where all the intersections summed to zero (i.e. peg to hole, hole to peg, or blank to blank). I used some speed-ups to detect the first failed intersection for a given permutation and move on to the next permutation. I hope you find this as interesting as I have. Thanks, Craig.
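
    For comparison, a brute-force Python sketch along the same combination/permutation lines (a sketch only: it ignores flipping or reversing individual logs, which the physical puzzle may allow). The model: a top log in slot i crosses the bottom log in slot j at bottom position i and top position j, and every crossing must sum to zero. That is C(10,5) * 5! * 5!, roughly 3.6 million layer arrangements, before any early-exit pruning.

        from itertools import combinations, permutations

        PIECES = [
            [-1, 1, 0, -1, 0], [1, 0, 1, 0, 0], [1, -1, 1, 0, 0],
            [-1, -1, 0, 0, -1], [-1, 1, 0, 1, 0], [0, 1, 0, 0, 1],
            [1, 0, -1, 0, -1], [0, -1, 0, 1, 0], [0, 0, -1, 1, -1],
            [1, 0, -1, 0, 0],
        ]

        def solve(pieces=PIECES):
            ids = range(len(pieces))
            for bottom_ids in combinations(ids, 5):
                top_ids = [i for i in ids if i not in bottom_ids]
                for bottom in permutations(bottom_ids):
                    for top in permutations(top_ids):
                        # peg (1) must meet hole (-1), blank (0) must meet blank
                        if all(pieces[bottom[j]][i] + pieces[top[i]][j] == 0
                               for i in range(5) for j in range(5)):
                            return bottom, top
            return None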

    Read the article

  • Another Nim's Game Variant

    - by Terry Smith
    Given N binary sequences. Example: given one sequence 101001, player 0 can only choose a position holding a 0 and cut the sequence at that position {1, 101, 1010}, while player 1 can only choose a position holding a 1 and cut the sequence at that position {null, 10, 101000}. Now player 0 and player 1 take turns cutting; on each turn a player can cut any one non-null sequence. If a player k can't make a move because there is no k element left in any sequence, he loses. Assuming both players play optimally, who will win? I tried to solve this problem with Grundy numbers, but I'm unable to reduce a sequence to a Grundy number because the two players don't have the same moves available. Can anyone give me a hint to solve this problem?

    Read the article

  • Just for fun (C# and C++)...time yourself [closed]

    - by Ted
    Possible Duplicate: What is your solution to the FizzBuzz problem? OK guys, this is just for fun, no flaming allowed! I was reading the following http://www.codinghorror.com/blog/2007/02/why-cant-programmers-program.html and couldn't believe the following sentence... "I've also seen self-proclaimed senior programmers take more than 10-15 minutes to write a solution." For those that can't be bothered to read the article, the background is this: ....I set out to develop questions that can identify this kind of developer and came up with a class of questions I call "FizzBuzz Questions" named after a game children often play (or are made to play) in schools in the UK. An example of a Fizz-Buzz question is the following: Write a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz". So I decided to test myself: it took me 5 minutes in C++ and 3 minutes in C#! So just for fun, try it and post your timings + language used! P.S. NO UNIT TESTS REQUIRED, NO OUTSOURCING ALLOWED, SWITCH OFF RESHARPER! :-) P.P.S. If you'd like to post your source, then feel free.
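
    For reference, a straightforward (untimed) Python take on it:

        for n in range(1, 101):
            if n % 15 == 0:        # multiple of both three and five
                print("FizzBuzz")
            elif n % 3 == 0:
                print("Fizz")
            elif n % 5 == 0:
                print("Buzz")
            else:
                print(n)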

    Read the article

  • Finding patterns in puzzle games

    - by José Joel.
    I was wondering which algorithms are most commonly used to find patterns in puzzle games made up of grids of cells. I know that depends on many factors, like the kind of patterns you want to detect or the rules of the game, but I wanted to know which algorithms are most commonly used for that kind of problem. For example, games like Columns, Bejeweled, even Tetris. I also want to know if detecting patterns by "brute force" (like scanning the whole grid trying to find three adjacent cells of the same color) is significantly worse than using particular algorithms on very small grids, like 4 x 4 for example (and again, I know that depends on the kind of game and its rules...). Which structures are commonly used in this kind of game?
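
    On grids that small, brute force is genuinely cheap: a full scan for three-in-a-row is O(rows * cols) with tiny constants. A Python sketch of the horizontal and vertical checks:

        def find_matches(grid):
            # grid: list of rows (strings or lists); scan every cell for a
            # run of three identical values going right or going down
            rows, cols = len(grid), len(grid[0])
            matches = []
            for r in range(rows):
                for c in range(cols):
                    if c + 2 < cols and grid[r][c] == grid[r][c + 1] == grid[r][c + 2]:
                        matches.append(((r, c), (r, c + 1), (r, c + 2)))
                    if r + 2 < rows and grid[r][c] == grid[r + 1][c] == grid[r + 2][c]:
                        matches.append(((r, c), (r + 1, c), (r + 2, c)))
            return matches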

    Read the article

  • Is there a website to lookup already common code functions?

    - by pinnacler
    I'm sitting here writing a function that I'm positive has been written before, somewhere on earth. It's just too common to have not been attempted, and I'm wondering why I can't just go to a website and search for a function that I can then copy and paste into my project in 2 seconds, instead of wasting my day reinventing the wheel. Sure, there are certain libraries you can use, but where do you find these libraries? And when they are absent, is there a site like the one I'm describing?

    Read the article

  • .NET Performance: Deep Recursion vs Queue

    - by JeffN825
    I'm writing a component that needs to walk large object graphs, sometimes 20-30 levels deep. What is the most performant way of walking the graph? A. Enqueueing "steps" so as to avoid deep recursion, or B. a DFS (depth-first search), which may step many levels deep and have a "deep" stack trace at times. I guess the question I'm asking is: is there a performance hit in .NET for doing a DFS that causes a "deep" stack trace? If so, what is the hit? And would I be better off with some BFS by means of queueing up steps that would have been handled recursively in a DFS? Sorry if I'm being unclear. Thanks.
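
    Whatever the .NET-specific costs, the standard way to keep DFS ordering without a deep call stack is an explicit stack. A language-neutral illustration in Python (the same shape works with Stack<T> in C#):

        def walk_depth_first(root, children):
            # DFS without recursion: an explicit stack replaces the call stack,
            # so 20-30+ levels of depth never grow the thread's stack
            stack = [root]
            visited = set()
            while stack:
                node = stack.pop()
                if id(node) in visited:   # guard against cycles in the graph
                    continue
                visited.add(id(node))
                yield node
                stack.extend(children(node))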

    Read the article

  • How to detect a palindrome in a stream of characters in sub-linear space/time?

    - by wrick
    I don't even know if a solution exists or not. Here is the problem in detail. You are a program that is accepting an infinitely long stream of characters (for simplicity you can assume characters are either 1 or 0). At any point, I can stop the stream (let's say after N characters have passed through) and ask you if the string received so far is a palindrome or not. How can you do this using sub-linear space and/or time?
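
    One standard (probabilistic) answer: keep a rolling hash of the stream and a rolling hash of its reverse; updating both is O(1) per character, the extra state is a handful of counters, and each palindrome query is a single comparison. Hash collisions can give false positives, so this trades certainty for space. A Python sketch:

        class PalindromeStream:
            MOD = (1 << 61) - 1
            BASE = 257

            def __init__(self):
                self.fwd = 0    # hash of s[0..n-1]
                self.rev = 0    # hash of reversed s
                self.pow = 1    # BASE ** n, for appending to the reverse hash
                self.n = 0

            def push(self, ch):
                c = ord(ch)
                self.fwd = (self.fwd * self.BASE + c) % self.MOD
                self.rev = (self.rev + c * self.pow) % self.MOD
                self.pow = (self.pow * self.BASE) % self.MOD
                self.n += 1

            def is_palindrome(self):
                # s equals its reverse iff the two hashes agree (modulo collisions)
                return self.fwd == self.rev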

    Read the article

  • How to determine the item at an index in a pattern

    - by el.gringogrande
    I have the following elements in a list/array: a1, a2, a3. These elements are used to build another list in a predictable pattern, for example: a1,a1,a2,a2,a3,a3,a1,a1,a2,a2,a3,a3... The pattern may change, but I will always know how many times each element repeats, and all elements repeat the same number of times. The elements always show up in the same order. So another pattern might be a1,a1,a1,a2,a2,a2,a3,a3,a3,a1,a1,a1,a2,a2,a2,a3,a3,a3... or a1,a2,a3,a1,a2,a3. It will never be a2,a2,a1,a1,a3,a3... or a1,a2,a3,a2,a3,a1, etc. How do I determine which element is at a given index in the list?
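
    With k distinct elements each repeated r times before the cycle restarts, the element at any index follows directly from integer division and modulo:

        def element_at(index, elements, repeats):
            # each element repeats `repeats` times, then the whole cycle restarts
            return elements[(index // repeats) % len(elements)]

        # elements = ["a1", "a2", "a3"], repeats = 2 gives
        # a1,a1,a2,a2,a3,a3,a1,a1,...  so element_at(4, elements, 2) == "a3"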

    Read the article

  • Big-O for Eight Year Olds?

    - by Jason Baker
    I'm asking more about what this means to my code. I understand the concepts mathematically, I just have a hard time wrapping my head around what they mean conceptually. For example, if one were to perform an O(1) operation on a data structure, I understand that the amount of operations it has to perform won't grow because there are more items. And an O(n) operation would mean that you would perform a set of operations on each element. Could somebody fill in the blanks here? Like what exactly would an O(n^2) operation do? And what the heck does it mean if an operation is O(n log(n))? And does somebody have to smoke crack to write an O(x!)?
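
    One concrete way to feel the difference is to count loop iterations. A tiny Python illustration:

        def o_n(items):          # O(n): touch each element once
            return sum(1 for _ in items)

        def o_n_squared(items):  # O(n^2): compare every element with every other
            n = len(items)
            return sum(1 for i in range(n) for j in range(n))

        # doubling n doubles o_n's work but quadruples o_n_squared's;
        # O(n log n) sorts like mergesort do O(n) work on each of log n levels;
        # O(x!) shows up when you try every permutation of the input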

    Read the article

  • What Software Engineering Areas should be stressed upon while Interviewing a Candidate for a Fulltime Software Engineering Position?

    - by Rachel
    Hi, this question is somewhat related to other posts I found on Stackoverflow, but not exactly, so I am prompted to ask it. I know we must ask about data structures and algorithms, but which specific data structures, algorithms, or other CS concepts should be asked about when interviewing for a senior software engineering full-time position, as compared with a regular software engineering position? Thanks.

    Read the article

  • Find the set of largest contiguous rectangles to cover multiple areas

    - by joelpt
    I'm working on a tool called Quickfort for the game Dwarf Fortress. Quickfort turns spreadsheets in csv/xls format into a series of commands for Dwarf Fortress to carry out in order to plot a "blueprint" within the game. I am currently trying to optimally solve an area-plotting problem for the 2.0 release of this tool. Consider the following "blueprint", which defines plotting commands for a 2-dimensional grid. Each cell in the grid should either be dug out ("d"), channeled ("c"), or left unplotted ("."). Any number of distinct plotting commands might be present in actual usage.

        . d . d c c
        d d d d c c
        . d d d . c
        d d d d d c
        . d . d d c

    To minimize the number of instructions that need to be sent to Dwarf Fortress, I would like to find the set of largest contiguous rectangles that can be formed to completely cover, or "plot", all of the plottable cells. To be valid, all of a given rectangle's cells must contain the same command. This is a faster approach than Quickfort 1.0 took: plotting every cell individually as a 1x1 rectangle. This video shows the performance difference between the two versions. For the above blueprint, the solution looks like this:

        . 9 . 0 3 2
        8 1 1 1 3 2
        . 1 1 1 . 2
        7 1 1 1 4 2
        . 6 . 5 4 2

    Each same-numbered rectangle above denotes a contiguous rectangle. The largest rectangles take precedence over smaller rectangles that could also be formed in their areas. The order of the numbering/rectangles is unimportant. My current approach is iterative. In each iteration, I build a list of the largest rectangles that could be formed from each of the grid's plottable cells by extending in all 4 directions from the cell. After sorting the list largest first, I begin with the largest rectangle found, mark its underlying cells as "plotted", and record the rectangle in a list. Before plotting each rectangle, its underlying cells are checked to ensure they are not yet plotted (overlapping a previous plot). We then start again, finding the largest remaining rectangles that can be formed and plotting them until all cells have been plotted as part of some rectangle. I consider this approach slightly more optimized than a dumb brute-force search, but I am wasting a lot of cycles (re)calculating cells' largest rectangles and checking underlying cells' states. Currently, this rectangle-discovery routine takes the lion's share of the total runtime of the tool, especially for large blueprints. I have sacrificed some accuracy for the sake of speed by only considering rectangles from cells which appear to form a rectangle's corner (determined using some neighboring-cell heuristics which aren't always correct). As a result of this 'optimization', my current code doesn't actually generate the above solution correctly, but it's close enough. More broadly, I consider the goal of largest-rectangles-first to be a "good enough" approach for this application. However, I observe that if the goal is instead to find the minimum set (fewest number) of rectangles to completely cover multiple areas, the solution would look like this instead:

        . 3 . 5 6 8
        1 3 4 5 6 8
        . 3 4 5 . 8
        2 3 4 5 7 8
        . 3 . 5 7 8

    This second goal actually represents a more optimal solution to the problem, as fewer rectangles usually means fewer commands sent to Dwarf Fortress. However, this approach strikes me as closer to NP-hard, based on my limited math knowledge. Watch the video if you'd like to better understand the overall strategy; I have not addressed other aspects of Quickfort's process, such as finding the shortest cursor-path that plots all rectangles. Possibly there is a solution to this problem that coherently combines these multiple strategies. Help of any form would be appreciated.
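
    For the largest-rectangles-first strategy, each iteration's search can be a single sweep rather than growing a rectangle out of every cell. A hedged Python sketch (simplified, O(rows * cols^2), illustrative names) that finds the single largest same-command rectangle among cells not yet plotted; run it repeatedly, adding each winner's cells to plotted, until nothing plottable remains:

        def largest_uniform_rectangle(grid, plotted):
            # grid: list of rows of command chars, '.' = unplottable
            # plotted: set of (r, c) cells already covered by earlier rectangles
            rows, cols = len(grid), len(grid[0])
            heights = [0] * cols      # same-command column runs ending at row r
            best = (0, None)          # (area, (top, left, height, width))
            for r in range(rows):
                for c in range(cols):
                    if grid[r][c] == '.' or (r, c) in plotted:
                        heights[c] = 0
                    elif r > 0 and heights[c] > 0 and grid[r][c] == grid[r - 1][c]:
                        heights[c] += 1
                    else:
                        heights[c] = 1
                for c in range(cols):
                    if heights[c] == 0:
                        continue
                    h = heights[c]
                    # widen rightwards while the command matches, tracking min height
                    for c2 in range(c, cols):
                        if heights[c2] == 0 or grid[r][c2] != grid[r][c]:
                            break
                        h = min(h, heights[c2])
                        area = h * (c2 - c + 1)
                        if area > best[0]:
                            best = (area, (r - h + 1, c, h, c2 - c + 1))
            return best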

    Read the article

  • Is this linear search implementation actually useful?

    - by Helper Method
    In Matters Computational I found this interesting linear search implementation (it's actually my Java implementation ;-)):

        public static int linearSearch(int[] a, int key) {
            int high = a.length - 1;
            int tmp = a[high];
            // put a sentinel at the end of the array
            a[high] = key;
            int i = 0;
            while (a[i] != key) {
                i++;
            }
            // restore original value
            a[high] = tmp;
            if (i == high && key != tmp) {
                return NOT_CONTAINED;
            }
            return i;
        }

    It basically uses a sentinel, which is the searched-for value, so that you always find the value and don't have to check for array boundaries. The last element is stored in a temp variable, and then the sentinel is placed at the last position. When the value is found (remember, it is always found due to the sentinel), the original element is restored and the index is checked to see whether it is the last index and unequal to the searched-for value. If that's the case, -1 (NOT_CONTAINED) is returned; otherwise, the index is. While I found this implementation really clever, I wonder if it is actually useful. For small arrays it seems to be always slower, and for large arrays it only seems to be faster when the value is not found. Any ideas?

    Read the article

  • List of Big-O for PHP functions?

    - by Kendall Hopkins
    After using PHP for a while now, I've noticed that not all of PHP's built-in functions are as fast as expected. Consider these two possible implementations of a function that finds if a number is prime using a cached array of primes.

        // very slow for large $prime_array
        $prime_array = array( 2, 3, 5, 7, 11, 13, ..., 104729, ... );
        $result_array = array();
        foreach ( $array_of_numbers as $number ) {
            $result_array[$number] = in_array( $number, $prime_array );
        }

        // still decent performance for large $prime_array
        $prime_array = array( 2 => NULL, 3 => NULL, 5 => NULL, 7 => NULL,
                              11 => NULL, 13 => NULL, ..., 104729 => NULL, ... );
        foreach ( $array_of_numbers as $number ) {
            $result_array[$number] = array_key_exists( $number, $prime_array );
        }

    This is because in_array is implemented with a linear search, O(n), which slows down linearly as $prime_array grows, whereas array_key_exists is implemented with a hash lookup, O(1), which does not slow down unless the hash table gets extremely populated (in which case it's only O(log n)). So far I've had to discover the big-Os via trial and error, and occasionally by looking at the source code. Now for the question... I was wondering if there is a list of the theoretical (or practical) big-O times for all* the PHP built-in functions. *or at least the interesting ones. For example, I find it very hard to predict the big-O of the functions listed below, because the possible implementation depends on unknown core data structures of PHP: array_merge, array_merge_recursive, array_reverse, array_intersect, array_combine, str_replace (with array inputs), etc.

    Read the article

  • Generating different random values, valid for a day, on different independent devices?

    - by Pentium10
    Let me describe the system. There are several mobile devices, each independent from the others, and they are generating content for the same record id. I want to avoid generating the same content for the same record on different devices; for this I thought I would use a random value per device and use it to partition the content pool. Suppose you have choices from 1 to 100.

    Day 1:
    - Device#1 will choose for record#33 between 1-10
    - Device#2 will choose for record#33 between 40-50
    - Device#3 will choose for record#33 between 50-60
    - Device#1 will choose for record#55 between 40-50
    - Device#2 will choose for record#55 between 1-10
    - Device#3 will choose for record#55 between 10-20
    - Device#1 will choose for record#11 between 1-10
    - Device#2 will choose for record#22 between 1-10
    - Device#3 will choose for record#99 between 1-10

    Day 2:
    - Device#1 will choose for record#33 between 90-100
    - Device#2 will choose for record#33 between 1-10
    - Device#3 will choose for record#33 between 50-60

    They don't have access to a central server. Data available to each of them: IMEI (unique per mobile), today's date (same on all devices), record id (same on all devices). What do you think, how is it possible? ps. tags can be edited
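
    One serverless option, sketched below in Python (names are illustrative): every device already shares the date and the record id, and each has a unique IMEI, so hash those together into a deterministic sub-range of the pool. Caveat: two devices can still hash into the same bucket; guaranteed-disjoint ranges would need a coordinated device index rather than a hash.

        import hashlib

        def daily_range(imei, date, record_id, pool_size=100, bucket_size=10):
            # deterministic per-device, per-day, per-record bucket built only
            # from data every device already has locally
            key = f"{imei}:{date}:{record_id}".encode()
            digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
            buckets = pool_size // bucket_size
            start = (digest % buckets) * bucket_size + 1
            return start, start + bucket_size - 1

        # e.g. daily_range("356938035643809", "2010-06-12", 33) -> (41, 50)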

    Read the article

  • Picking apples off a tree

    - by John Retallack
    I have the following problem: I am given a tree with N apples; for each apple I am given its weight and height. I can pick apples up to a given height H, and each time I pick an apple the height of every apple is increased by U. I have to find out the maximum weight of apples I can pick.

        1 <= N <= 100000
        0 < H, U, apples' weights and heights, maximum weight < 2^31

    Example: N=4, H=100, U=10

        height  weight
        82      30
        91      10
        93      5
        94      15

    The answer is 45: first pick the apple with the weight of 15, then the one with the weight of 30. Could someone help me approach this problem?
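
    This maps onto classic job sequencing with deadlines: making the k-th pick requires height + (k-1)*U <= H, so each apple can occupy at most floor((H - height) / U) + 1 pick slots; to maximize weight, keep the heaviest feasible set with a min-heap. A Python sketch (assuming U > 0) that reproduces the example's answer of 45:

        import heapq

        def max_weight(apples, H, U):
            # apples: list of (height, weight); deadline = latest pick slot
            jobs = []
            for height, weight in apples:
                if height <= H:
                    jobs.append(((H - height) // U + 1, weight))
            jobs.sort()                       # earliest deadline first
            chosen = []                       # min-heap of chosen weights
            for deadline, weight in jobs:
                if len(chosen) < deadline:    # a free slot still exists
                    heapq.heappush(chosen, weight)
                elif chosen and weight > chosen[0]:
                    heapq.heapreplace(chosen, weight)  # swap out the lightest
            return sum(chosen)

        print(max_weight([(82, 30), (91, 10), (93, 5), (94, 15)], H=100, U=10))  # 45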

    Read the article
