Search Results

Search found 5638 results on 226 pages for 'scheduling algorithm'.


  • Finding k elements of a length-n list that sum to less than t in O(n log k) time

    - by tresbot
    This is from Programming Pearls ed. 2, Column 2, Problem 8: Given a set of n real numbers, a real number t, and an integer k, how quickly can you determine whether there exists a k-element subset of the set that sums to at most t?

    One easy solution is to sort and sum the first k elements, which is our best hope of finding such a sum. However, in the solutions section Bentley alludes to a solution that takes n log(k) time, though he gives no hints for how to find it. I've been struggling with this; one thought I had was to go through the list and add all the elements less than t/k (in O(n) time); say there are m1 < k such elements, and they sum to s1 < t. Then we are left needing k - m1 elements, so we can scan through the list again in O(n) time looking for all elements less than (t - s1)/(k - m1). Add those in to get s2 and m2, then if m2 < k, look again for all elements less than (t - s2)/(k - m2). So:

        def kSubsetSumUnderT(inList, k, t):
            outList = []
            s = 0
            m = 0
            while len(outList) < k:
                toJoin = [i for i in inList if i < (t - s) / (k - m)]
                if len(toJoin):
                    if len(toJoin) >= k - m:
                        toJoin.sort()
                        if s + sum(toJoin[0:(k - m)]) < t:
                            return True
                        return False
                    outList = outList + toJoin
                    s += sum(toJoin)
                    m += len(toJoin)
                else:
                    return False

    My intuition is that this might be the O(n log(k)) algorithm, but I am having a hard time proving it to myself. Thoughts?
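
    A standard way to hit the O(n log k) bound (and, I assume, the kind of solution Bentley has in mind) is to keep the k smallest elements seen so far in a bounded max-heap: their sum is the smallest sum any k-element subset can achieve. A minimal sketch in Python, using heapq with negated values since heapq is a min-heap:

        import heapq

        def k_smallest_sum_at_most_t(xs, k, t):
            heap = []                             # max-heap via negation
            for x in xs:
                if len(heap) < k:
                    heapq.heappush(heap, -x)      # O(log k) per operation
                elif x < -heap[0]:
                    heapq.heapreplace(heap, -x)   # evict the current largest
            # -sum(heap) is the sum of the k smallest elements seen.
            return len(heap) == k and -sum(heap) <= t

    Each of the n elements costs at most O(log k) heap work, giving O(n log k) overall.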

    Read the article

  • Change value of adjacent vertices and remove self loop

    - by StereoMatching
    Trying to write Karger's algorithm with a boost::graph example (the first column is the vertex, the others are its adjacent vertices):

        1 2 3
        2 1 3 4
        3 1 2 4
        4 2 3

    Assume I merge 2 into 1; I get this result:

        1 2 3 2 1 1 3 4
        2 1 3 4
        3 1 2 4
        4 2 3

    First question: how could I change the adjacent vertices ("2" to "1") of vertex 1? My naive solution:

        template<typename Vertex, typename Graph>
        void change_adjacent_vertices_value(Vertex input, Vertex value, Graph &g)
        {
            for (auto it = boost::adjacent_vertices(input, g); it.first != it.second; ++it.first){
                if (*it.first == value){
                    *(it.first) = input; // error C2106: '=' : left operand must be l-value
                }
            }
        }

    Apparently, I can't set the value of the adjacent vertices to "1" this way. The result I want after "change_adjacent_vertices_value":

        1 1 3 1 1 1 3 4
        2 1 3 4
        3 1 2 4
        4 2 3

    Second question: how could I pop out the adjacent vertices? Assume I want to pop out the consecutive 1s from vertex 1. The result I expect:

        1 1 3 1 3 4
        2 1 3 4
        3 1 2 4
        4 2 3

    Is there any function like "pop_adjacent_vertex" I could use?
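
    For what it's worth, in Karger's algorithm the contraction step is usually done by rebuilding an adjacency structure you own, rather than writing through BGL's adjacency iterators, which are read-only. A sketch of both operations (relabel, then drop self-loops) on a plain adjacency-list multigraph, in Python for brevity:

        def contract(adj, u, v):
            """Merge vertex v into u and remove the resulting self-loops."""
            adj[u].extend(adj.pop(v))                          # absorb v's list
            for w in adj:
                adj[w] = [u if x == v else x for x in adj[w]]  # relabel v -> u
            adj[u] = [x for x in adj[u] if x != u]             # drop self-loops

        adj = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 4], 4: [2, 3]}
        contract(adj, 1, 2)   # adj becomes {1: [3, 3, 4], 3: [1, 1, 4], 4: [1, 3]}

    The same rebuild-and-relabel idea can be expressed in C++ with a mutable container such as std::vector<std::vector<int>>; the point is that the container is writable, unlike the BGL iterator range.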

    Read the article

  • [Java] RSA BadPaddingException : data must start with zero

    - by Robin Monjo
    Hello everyone. I'm trying to implement an RSA algorithm in a Java program, and I am facing the "BadPaddingException: data must start with zero". Here are the methods used to encrypt and decrypt my data:

        public byte[] encrypt(byte[] input) throws Exception {
            Cipher cipher = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            cipher.init(Cipher.ENCRYPT_MODE, this.publicKey);
            return cipher.doFinal(input);
        }

        public byte[] decrypt(byte[] input) throws Exception {
            Cipher cipher = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            cipher.init(Cipher.DECRYPT_MODE, this.privateKey);
            return cipher.doFinal(input);
        }

    The privateKey and publicKey attributes are read from files this way:

        public PrivateKey readPrivKeyFromFile(String keyFileName) throws IOException {
            PrivateKey key = null;
            try {
                FileInputStream fin = new FileInputStream(keyFileName);
                ObjectInputStream ois = new ObjectInputStream(fin);
                BigInteger m = (BigInteger) ois.readObject();
                BigInteger e = (BigInteger) ois.readObject();
                RSAPrivateKeySpec keySpec = new RSAPrivateKeySpec(m, e);
                KeyFactory fact = KeyFactory.getInstance("RSA");
                key = fact.generatePrivate(keySpec);
                ois.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
            return key;
        }

    The private key and public key are created this way:

        public void Initialize() throws Exception {
            KeyPairGenerator keygen = KeyPairGenerator.getInstance("RSA");
            keygen.initialize(2048);
            keyPair = keygen.generateKeyPair();
            KeyFactory fact = KeyFactory.getInstance("RSA");
            RSAPublicKeySpec pub = fact.getKeySpec(keyPair.getPublic(), RSAPublicKeySpec.class);
            RSAPrivateKeySpec priv = fact.getKeySpec(keyPair.getPrivate(), RSAPrivateKeySpec.class);
            saveToFile("public.key", pub.getModulus(), pub.getPublicExponent());
            saveToFile("private.key", priv.getModulus(), priv.getPrivateExponent());
        }

    and then saved in files:

        public void saveToFile(String fileName, BigInteger mod, BigInteger exp) throws IOException {
            FileOutputStream f = new FileOutputStream(fileName);
            ObjectOutputStream oos = new ObjectOutputStream(f);
            oos.writeObject(mod);
            oos.writeObject(exp);
            oos.close();
        }

    I can't figure out where the problem comes from. Any help would be appreciated! Thanks in advance.
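
    One diagnostic worth trying (an assumption about the cause, but BadPaddingException on decrypt is the usual symptom when the private exponent does not pair with the public key that produced the ciphertext, e.g. key files written by two different runs of Initialize()): check the round trip with the exact BigIntegers read back from the files. The invariant being tested is just modular arithmetic, shown here as a toy Python sketch (textbook RSA, no padding, key sizes far too small for real use):

        # decrypt(encrypt(m)) == m only when (n, d) matches the (n, e)
        # that produced the ciphertext.
        p, q = 61, 53
        n = p * q
        e = 17
        d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)
        m = 42
        c = pow(m, e, n)                    # "encrypt" with the public pair
        assert pow(c, d, n) == m            # "decrypt" with the matching private pair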

    Read the article

  • The Collatz Sequence problem

    - by Gandalf StormCrow
    I'm trying to solve this problem. It's not a homework question, just code I'm submitting to uva.onlinejudge.org so I can learn Java better through examples. Here is the problem's sample input:

        3 100
        34 100
        75 250
        27 2147483647
        101 304
        101 303
        -1 -1

    Here is the sample output:

        Case 1: A = 3, limit = 100, number of terms = 8
        Case 2: A = 34, limit = 100, number of terms = 14
        Case 3: A = 75, limit = 250, number of terms = 3
        Case 4: A = 27, limit = 2147483647, number of terms = 112
        Case 5: A = 101, limit = 304, number of terms = 26
        Case 6: A = 101, limit = 303, number of terms = 1

    The thing is, this has to execute within a 3-second time limit, otherwise your submission won't be accepted as a solution. Here is what I've come up with so far; it works as it should, but the execution time is not within 3 seconds:

        import java.util.Scanner;

        class Main {
            public static void main(String[] args) {
                Scanner stdin = new Scanner(System.in);
                int start;
                int limit;
                int terms;
                int a = 0;
                while (stdin.hasNext()) {
                    start = stdin.nextInt();
                    limit = stdin.nextInt();
                    if (start > 0) {
                        terms = getLength(start, limit);
                        a++;
                    } else {
                        break;
                    }
                    System.out.println("Case " + a + ": A = " + start + ", limit = " + limit + ", number of terms = " + terms);
                }
            }

            public static int getLength(int x, int y) {
                int length = 1;
                while (x != 1) {
                    if (x <= y) {
                        if (x % 2 == 0) {
                            x = x / 2;
                            length++;
                        } else {
                            x = x * 3 + 1;
                            length++;
                        }
                    } else {
                        length--;
                        break;
                    }
                }
                return length;
            }
        }

    And yes, here is how it's meant to be solved. An algorithm given by Lothar Collatz produces sequences of integers, and is described as follows:

        Step 1: Choose an arbitrary positive integer A as the first item in the sequence.
        Step 2: If A = 1 then stop.
        Step 3: If A is even, then replace A by A / 2 and go to step 2.
        Step 4: If A is odd, then replace A by 3 * A + 1 and go to step 2.

    And yes, my question is: how can I make it run inside the 3-second time limit?
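
    Two things typically bite on this kind of judge problem (stated here as likely causes, not certainties): intermediate values of x * 3 + 1 can exceed Integer.MAX_VALUE for starting values near 2^31, so the working variable should be a long, and recomputing lengths from scratch for every case repeats a huge amount of work that a cache would share. A memoization sketch of the second fix (Python; the dictionary becomes a HashMap<Long, Integer> in Java, and the limit check from getLength still has to be layered on top):

        cache = {1: 1}

        def collatz_len(n):
            path = []
            while n not in cache:          # walk until we hit a known value
                path.append(n)
                n = n // 2 if n % 2 == 0 else 3 * n + 1
            length = cache[n]
            for m in reversed(path):       # fill the cache on the way back
                length += 1
                cache[m] = length
            return length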

    Read the article

  • Are there any working implementations of the rolling hash function used in the Rabin-Karp string search algorithm?

    - by c14ppy
    I'm looking to use a rolling hash function so I can take hashes of n-grams of a very large string. For example, "stackoverflow" broken up into 5-grams would be:

        "stack", "tacko", "ackov", "ckove", "kover", "overf", "verfl", "erflo", "rflow"

    This is ideal for a rolling hash function because after I calculate the first n-gram hash, the following ones are relatively cheap to calculate: I simply have to drop the first letter of the previous window and add its new last letter. I know that in general this hash function is generated as

        H = c1*a^(k-1) + c2*a^(k-2) + c3*a^(k-3) + ... + ck*a^0

    where a is a constant and c1,...,ck are the input characters. If you follow this link on the Rabin-Karp string search algorithm, it states that "a" is usually some large prime. I want my hashes to be stored in 32-bit integers, so how large a prime should "a" be such that I don't overflow my integer? Does there exist an existing implementation of this hash function somewhere that I could already use? Here is an implementation I created:

        public class hash2 {
            public int prime = 101;

            public int hash(String text) {
                int hash = 0;
                for (int i = 0; i < text.length(); i++) {
                    char c = text.charAt(i);
                    hash += c * (int) (Math.pow(prime, text.length() - 1 - i));
                }
                return hash;
            }

            public int rollHash(int previousHash, String previousText, String currentText) {
                char firstChar = previousText.charAt(0);
                char lastChar = currentText.charAt(currentText.length() - 1);
                int firstCharHash = firstChar * (int) (Math.pow(prime, previousText.length() - 1));
                int hash = (previousHash - firstCharHash) * prime + lastChar;
                return hash;
            }

            public static void main(String[] args) {
                hash2 hashify = new hash2();
                int firstHash = hashify.hash("mydog");
                System.out.println(firstHash);
                System.out.println(hashify.hash("ydogr"));
                System.out.println(hashify.rollHash(firstHash, "mydog", "ydogr"));
            }
        }

    I'm using 101 as my prime. Does it matter if my hashes will overflow? I think this is desirable, but I'm not sure. Does this seem like the right way to go about this?
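
    For reference, the common Rabin-Karp formulation sidesteps the overflow question by keeping every value reduced modulo a large prime M and using a small base a; letting a 32-bit int silently wrap also "works" (it is arithmetic mod 2^32) but weakens the hash, and Math.pow goes through doubles, which loses precision for large exponents. A sketch with an explicit modulus (Python; the constants are illustrative choices):

        BASE = 256            # one step per character
        MOD = 1_000_003       # large prime; values stay well inside 32 bits

        def hash_initial(s):
            h = 0
            for ch in s:
                h = (h * BASE + ord(ch)) % MOD
            return h

        def hash_roll(h, old_ch, new_ch, n):
            lead = pow(BASE, n - 1, MOD)             # weight of outgoing char
            h = (h - ord(old_ch) * lead) % MOD       # remove old first char
            return (h * BASE + ord(new_ch)) % MOD    # shift in the new char

        h = hash_initial("stack")
        assert hash_roll(h, "s", "o", 5) == hash_initial("tacko")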

    Read the article

  • J: Self-reference in bubble sort tacit implementation

    - by Yasir Arsanukaev
    Hello people! Since I'm a beginner in J, I've decided to solve a simple task using this language, in particular implementing the bubble sort algorithm. I know it's not idiomatic to solve this kind of problem in functional languages, because it's naturally solved using array element transposition in imperative languages like C, rather than by constructing a modified list in declarative languages. However, this is the code I've written:

        (((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#)) ^: #

    Let's apply it to an array:

        (((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#)) ^: # 5 3 8 7 2
        2 3 5 7 8

    The thing that confuses me is $: referring to the statement within the outermost parentheses. Help says that:

        $: denotes the longest verb that contains it.

    The other book (~300 KiB) says:

        3+4
        7
        5*20
        100

        Symbols like + and * for plus and times in the above phrases are called verbs and represent functions. You may have more than one verb in a J phrase, in which case it is constructed like a sentence in simple English by reading from left to right, that is 4+6%2 means 4 added to whatever follows, namely 6 divided by 2.

    Let's rewrite my code snippet, omitting the outermost ()s:

        ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) ^: # 5 3 8 7 2
        2 3 5 7 8

    The results are the same. I couldn't explain to myself why this works: why only ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) is treated as the longest verb for $:, but not the whole expression ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) ^: #, and not just (<./@(2&{.)), $:@((>./@(2&{.)),2&}.). Because if ((<./@(2&{.)), $:@((>./@(2&{.)),2&}.)) ^: (1<#) is a verb, it should also form another verb after conjunction with #, i.e. one might treat the whole sentence (the first snippet) as a verb. Probably there's some limit on the verb length, bounded by one conjunction. Look at the following code (from here):

        factorial =: (* factorial@<:) ^: (1&<)
        factorial 4
        24

    factorial within the expression refers to the whole function, i.e. (* factorial@<:) ^: (1&<). Following this example, I've used a function name instead of $::

        bubblesort =: (((<./@(2&{.)), bubblesort@((>./@(2&{.)),2&}.)) ^: (1<#)) ^: #
        bubblesort 5 3 8 7 2
        2 3 5 7 8

    I expected bubblesort to refer to the whole function, but that doesn't seem to be true, since the result is correct. I'd also like to see other implementations if you have them, even slightly refactored ones. Thanks.

    Read the article

  • Miller-Rabin exception number?

    - by nightcracker
    Hey everyone. This question is about the number 169716931325235658326303. According to http://www.alpertron.com.ar/ECM.HTM it is prime. According to my Miller-Rabin implementation in Python with 7 repetitions, it is composite. With 50 repetitions it is still composite. With 5000 repetitions it is STILL composite. I thought this might be a problem with my implementation, so I tried the GNU MP bignum library, which has a Miller-Rabin primality test built in. I tested with 1000000 repetitions. Still composite. This is my implementation of the Miller-Rabin primality test:

        import random

        def isprime(n, precision=7):
            if n == 1 or n % 2 == 0:
                return False
            elif n < 1:
                raise ValueError("Out of bounds, first argument must be > 0")

            d = n - 1
            s = 0
            while d % 2 == 0:
                d //= 2
                s += 1

            for repeat in range(precision):
                a = random.randrange(2, n - 2)
                x = pow(a, d, n)
                if x == 1 or x == n - 1:
                    continue
                for r in range(s - 1):
                    x = pow(x, 2, n)
                    if x == 1:
                        return False
                    if x == n - 1:
                        break
                else:
                    return False
            return True

    And the code for the GMP test:

        #include <gmp.h>
        #include <stdio.h>

        int main(int argc, char* argv[])
        {
            mpz_t test;
            mpz_init_set_str(test, "169716931325235658326303", 10);
            printf("%d\n", mpz_probab_prime_p(test, 1000000));
            mpz_clear(test);
            return 0;
        }

    As far as I know there are no "exceptions" (numbers that return false positives for any number of repetitions) to the Miller-Rabin primality test. Have I stumbled upon one? Is my computer broken? Is the Elliptic Curve Method wrong? What is happening here?

    EDIT: I found the issue, which is http://www.alpertron.com.ar/ECM.HTM. I trusted this applet; I'll contact the author, since his applet's implementation of the ECM is faulty for this number. Thanks.

    EDIT2: Hah, the shame! In the end it was something that went wrong with copy/pasting on my side. Neither the applet, nor the Miller-Rabin algorithm, nor my implementation, nor GMP's implementation of it is wrong; the only thing that's wrong is me. I'm sorry.

    Read the article

  • Drawing Bresenham's Line Algorithm in all quadrants

    - by Yoyo2965259
    I am a newbie with OpenGL. I am practicing the exercises from my textbook, but I could not get the output that should come from Bresenham's line algorithm in all quadrants. Here's the code:

        #include <Windows.h>
        #include <GL/glut.h>

        void init(void)
        {
            glClearColor(0.0, 0.0, 0.0, 0.0);
            glShadeModel(GL_FLAT);
        }

        void BresnCir(void)
        {
            int delta, deltadash;
            glClear(GL_COLOR_BUFFER_BIT);
            glPointSize(3.0);
            int r = 150;
            int x = 0;
            int y = r;
            int D = 2 * (1 - r);
            glBegin(GL_POINTS);
            do {
                glVertex2i(x, y);
                if (D < 0) {
                    delta = 2 * D + 2 * y - 1;
                    if (delta <= 0) {
                        x++;
                        Right(x);
                    } else {
                        x++; y--;
                        Diagonal(x, y);
                    }
                    glVertex2i(x, y);
                } else {
                    deltadash = 2 * D - 2 * x - 1;
                    if (deltadash <= 0) {
                        x++; y--;
                        Diagonal(x, y);
                    } else {
                        y--;
                        Down(y);
                    }
                    glVertex2i(x, y);
                }
                if (D == 0) {
                    x++; y--;
                    Diagonal(x, y);
                    glVertex2i(x, y);
                }
            } while (y > 0);
            glEnd();
            glFlush();
        }

        int main(int argc, char** argv)
        {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(400, 150);
            glutInitWindowPosition(100, 100);
            glutCreateWindow(argv[0]);
            init();
            glutDisplayFunc(BresnCir);
            glutMainLoop();
            return 0;
        }

    But it keeps failing with error C3861.
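
    (C3861 is "identifier not found": Right, Down and Diagonal are called here but never declared anywhere in the file.) For comparison, a midpoint-circle sketch that needs no helper calls, with plain point collection standing in for glVertex2i (Python, illustrative only):

        def midpoint_circle(r):
            """Integer points of a circle of radius r centred at the origin."""
            pts = []
            x, y = 0, r
            d = 1 - r
            while x <= y:
                for px, py in ((x, y), (y, x)):      # mirror octant 1 into all 8
                    pts += [(px, py), (-px, py), (px, -py), (-px, -py)]
                if d < 0:
                    d += 2 * x + 3
                else:
                    d += 2 * (x - y) + 5
                    y -= 1
                x += 1
            return pts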

    Read the article

  • Finding what makes strings unique in a list, can you improve on brute force?

    - by Ed Guiness
    Suppose I have a list of strings where each string is exactly 4 characters long and unique within the list. For each of these strings I want to identify the positions of the characters within the string that make the string unique. So for a list of three strings:

        abcd
        abcc
        bbcb

    For the first string I want to identify the character in the 4th position, d, since d does not appear in the 4th position in any other string. For the second string I want to identify the character in the 4th position, c. For the third string I want to identify the character in the 1st position, b, AND the character in the 4th position, also b. This could be concisely represented as:

        abcd -> ...d
        abcc -> ...c
        bbcb -> b..b

    If you consider the same problem but with a list of binary numbers:

        0101
        0011
        1111

    then the result I want would be:

        0101 -> ..0.
        0011 -> .0..
        1111 -> 1...

    Staying with the binary theme, I can use XOR to identify which bits are unique within two binary numbers, since 0101 ^ 0011 = 0110, which I can interpret as meaning that in this case the 2nd and 3rd bits (reading left to right) are unique between these two binary numbers. This technique might be a red herring unless somehow it can be extended to the larger list. A brute-force approach would be to look at each string in turn, and for each string to iterate through vertical slices of the remainder of the strings in the list. So for the list:

        abcd
        abcc
        bbcb

    I would start with abcd and iterate through vertical slices of

        abcc
        bbcb

    where these vertical slices would be

        a | b | c | c
        b | b | c | b

    or in list form, "ab", "bb", "cc", "cb". This would result in four comparisons:

        a : ab -> . (a is not unique)
        b : bb -> . (b is not unique)
        c : cc -> . (c is not unique)
        d : cb -> d (d is unique)

    or concisely:

        abcd -> ...d

    Maybe it's wishful thinking, but I have a feeling that there should be an elegant and general solution that would apply to an arbitrarily large list of strings (or binary numbers). But if there is, I haven't yet been able to see it. I hope to use this algorithm to derive minimal signatures from a collection of unique images (bitmaps) in order to efficiently identify those images at a future time. If future efficiency weren't a concern, I would use a simple hash of each image. Can you improve on brute force?
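
    One observation that beats the pairwise brute force: a character is unique at position j exactly when no other string has that character in column j, so a single frequency count per column answers every string at once, in time linear in the total input size. A sketch (Python; the names are mine):

        from collections import Counter

        def unique_positions(strings):
            cols = [Counter(col) for col in zip(*strings)]   # per-column counts
            return ["".join(ch if cols[j][ch] == 1 else "."
                            for j, ch in enumerate(s))
                    for s in strings]

        print(unique_positions(["abcd", "abcc", "bbcb"]))  # ['...d', '...c', 'b..b']
        print(unique_positions(["0101", "0011", "1111"]))  # ['..0.', '.0..', '1...']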

    Read the article

  • Grouping geographical shapes

    - by grenade
    I am using Dundas Maps and attempting to draw a map of the world where countries are grouped into regions that are specific to a business implementation. I have shape data (points and segments) for each country in the world. I can combine countries into regions by adding all points and segments for the countries within a region to a new region shape:

        foreach(var region in GetAllRegions()){
            var regionShape = new Shape { Name = region.Name };
            foreach(var country in GetCountriesInRegion(region.Id)){
                var countryShape = GetCountryShape(country.Id);
                regionShape.AddSegments(countryShape.ShapeData.Points, countryShape.ShapeData.Segments);
            }
            map.Shapes.Add(regionShape);
        }

    The problem is that the country border lines still show up within a region, and I want to remove them so that only regional borders show up. Dundas polygons must start and end at the same point. This is the case for all the country shapes. Now I need an algorithm that can:

        1. Determine where country borders intersect at a regional border, so that I can join the regional border segments.
        2. Determine which country borders are not regional borders, so that I can discard them.
        3. Sort the resulting regional points so that they sequentially describe the shape boundaries.

    Below is where I have gotten to so far with the map. You can see that the country borders still need to be removed. For example, the border between Mongolia and China should be discarded, whereas the border between Mongolia and Russia should be retained. The reason I need to retain a regional border is that the region colors will be significant in conveying information, but adjacent regions may be the same color. The regions can change to include or exclude countries, and this is why the regional shaping must be dynamic.

    EDIT: I now know that what I am looking for is a UNION of polygons. David Lean explains how to do it using the spatial functions in SQL Server 2008, which might be an option, but my efforts have come to a halt because the resulting polygon union is so complex that SQL truncates it at 43,680 characters. I'm now trying to either find a workaround for that or find a way of doing the union in code.
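
    If doing the union in code is an option, a dissolve is essentially a one-liner in most geometry libraries. For example, with the Shapely package (Python shown for brevity; on .NET, NetTopologySuite offers an equivalent unary-union operation, though whether its output plugs straight into Dundas shapes is an assumption worth verifying):

        from shapely.geometry import Polygon
        from shapely.ops import unary_union

        # Two toy "countries" sharing a border segment.
        country_a = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])
        country_b = Polygon([(2, 0), (4, 0), (4, 2), (2, 2)])

        region = unary_union([country_a, country_b])  # shared border dissolves
        print(list(region.exterior.coords))           # outline of the merged region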

    Read the article

  • TicTacToe AI Making Incorrect Decisions

    - by Chris Douglass
    A little background: as a way to learn about multinode trees in C++, I decided to generate all possible TicTacToe boards and store them in a tree, such that the branch beginning at a node contains all boards that can follow from that node, and the children of a node are the boards that follow in one move. After that, I thought it would be fun to write an AI to play TicTacToe using that tree as a decision tree. TTT is a solvable problem where a perfect player will never lose, so it seemed an easy AI to code for my first attempt at an AI.

    Now, when I first implemented the AI, I went back and added two fields to each node upon generation: the number of times X will win and the number of times O will win in all children below that node. I figured the best solution was to simply have my AI, on each move, choose and go down the subtree where it wins the most times. Then I discovered that while it plays perfectly most of the time, I found ways where I could beat it. It wasn't a problem with my code, simply a problem with the way I had the AI choose its path. Then I decided to have it choose the subtree with either the maximum wins for the computer or the maximum losses for the human, whichever was greater. This made it perform BETTER, but still not perfectly. I could still beat it.

    So I have two ideas, and I'm hoping for input on which is better:

    1) Instead of maximizing the wins or losses, I could assign values of 1 for a win, 0 for a draw, and -1 for a loss. Then choosing the subtree with the highest value will be the best move, because that next node can't be a move that results in a loss. It's an easy change in the board generation, and it retains the same search space and memory usage. Or...

    2) During board generation, if there is a board such that either X or O will win on their next move, only the child that prevents that win will be generated. No other child nodes will be considered, and generation will proceed as normal after that. It shrinks the size of the tree, but then I have to implement an algorithm to determine if there is a one-move win, and I think that can only be done in linear time (making board generation a lot slower, I think?).

    Which is better, or is there an even better solution?
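
    Idea 1 is close to the standard fix, with one refinement: rather than aggregating values over all leaves of a subtree, back them up minimax-style, taking the maximum over children on the AI's turn and the minimum on the opponent's, since a perfect opponent picks your worst case. A sketch against a hypothetical node interface (Python; the method names are mine):

        def minimax(node, ai_to_move):
            if node.is_terminal():
                return node.score()       # +1 AI win, 0 draw, -1 AI loss
            values = [minimax(c, not ai_to_move) for c in node.children()]
            # Perfect play: the opponent minimises our value.
            return max(values) if ai_to_move else min(values)

        def best_move(node):
            # After our move it is the human's turn, hence ai_to_move=False.
            return max(node.children(), key=lambda c: minimax(c, False))

    With this backup rule the AI never walks into a forced loss, so idea 2 becomes an optional size optimisation rather than a correctness fix.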

    Read the article

  • Improving performance when pasting 2000 rows with validations

    - by Lohit
    I have N rows (which could be nothing less than 1000) on an Excel spreadsheet, and in this sheet our project has 150 columns. Now, our application needs data to be copied (using the normal Ctrl+C) and pasted (using Ctrl+V) from the Excel sheet onto our GUI sheet. Copy-pasting 1000 records takes around 5-6 seconds, which is okay for our requirement, but the problem is that we need to make sure the data entered is valid. So we have to validate the data in each row, generate appropriate error messages, and format the data as per the requirements. So we need to parse and evaluate the data in each row at runtime. All the formatting of data and validations come from the back-end database, and we have them in a data table (dtValidateAndFormatConditions). There are around 50 conditions. So you can see how slow this whole process becomes, since N x 150 x 50 operations are required to complete it. Initially it took approximately 2-3 minutes, but I have now reduced it to 20-30 seconds. However, I increased the speed by writing an expression parser of my own, not by any algorithm. Is there any other way I can improve performance, by using divide and conquer or some other mechanism? Currently I am not really sure how to go about this. Here is what part of my code looks like:

        public virtual void ValidateAndFormatOnCopyPaste(DataTable DtCopied, int CurRow)
        {
            foreach (DataRow dRow in dtValidateAndFormatConditions.Rows)
            {
                string Condition = dRow["Condition"];
                string FormatValue = Value = dRow["Value"];
                GetValidatedFormattedData(DtCopied, ref Condition, ref FormatValue, iRowIndex);
                Condition = Parse(Condition);
                dRow["Condition"] = Condition;
                FormatValue = Parse(FormatValue);
                dRow["Value"] = FormatValue;
            }
        }

    The above code gets called row-wise like this:

        public override void ValidateAndFormat(DataTable dtChangedRecords, CellRange cr)
        {
            int iRowStart = cr.Row, iRowEnd = cr.Row + cr.RowCount;
            for (int iRow = iRowStart; iRow < iRowEnd; iRow++)
            {
                ValidateAndFormatOnCopyPaste(dtChangedRecords, iRow);
            }
        }

    Please note my question needs a more algorithmic solution than code optimization; however, any answers containing code-related optimizations will be appreciated as well. (Tagged LINQ because, although not shown, I have been using LINQ in some parts of my code.)
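
    One structural saving that is independent of the parser (a sketch of the idea, not a drop-in): the ~50 conditions are fixed for the whole paste, so parse each of them into a compiled predicate once and reuse it for all N rows, turning N x 150 x 50 parse-and-evaluate steps into 50 compilations plus cheap per-row calls. Illustrated in Python:

        def compile_condition(expr):
            # Parse once to a code object; evaluation per row is then cheap.
            return compile(expr, "<condition>", "eval")

        conditions = ["row['qty'] > 0", "len(row['code']) == 4"]   # hypothetical rules
        compiled = [compile_condition(c) for c in conditions]      # done once

        def validate(rows):
            errors = []
            for i, row in enumerate(rows):
                for text, code in zip(conditions, compiled):
                    if not eval(code, {}, {"row": row}):           # no re-parsing here
                        errors.append((i, text))
            return errors

    In .NET the analogue would be compiling each condition to a delegate (e.g. via expression trees) once, outside the row loop.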

    Read the article

  • Finding the most frequent subtrees in a collection of (parse) trees

    - by peter.murray.rust
    I have a collection of trees whose nodes are labelled (but not uniquely). Specifically, the trees are from a collection of parsed sentences (see http://en.wikipedia.org/wiki/Treebank). I wish to extract the most common subtrees from the collection - performance is not (yet) an issue. I'd be grateful for algorithms (ideally Java) or pointers to tools which do this for treebanks. Note that the order of child nodes is important.

    EDIT @mjv. We are working in a limited domain (chemistry) which has a stylised language, so the variety of the trees is not huge - probably similar to children's readers. A simple tree for "the cat sat on the mat":

        <sentence>
          <nounPhrase>
            <article/>
            <noun/>
          </nounPhrase>
          <verbPhrase>
            <verb/>
            <prepositionPhrase>
              <preposition/>
              <nounPhrase>
                <article/>
                <noun/>
              </nounPhrase>
            </prepositionPhrase>
          </verbPhrase>
        </sentence>

    Here the sentence contains two identical part-of-speech subtrees (the actual tokens "cat", "mat" are not important in matching). So the algorithm would need to detect this. Note that not all nounPhrases are identical - "the big black cat" could be:

        <nounPhrase>
          <article/>
          <adjective/>
          <adjective/>
          <noun/>
        </nounPhrase>

    The sentences will be longer - between 15 and 30 nodes. I would expect to get useful results from 1000 trees. If this does not take more than a day or so, that's acceptable. Obviously the shorter the tree the more frequent it is, so nounPhrase will be very common.

    EDIT: If this is to be solved by flattening the tree, then I think it would be related to Longest Common Substring, not Longest Common Subsequence. But note that I don't necessarily just want the longest - I want a list of all those long enough to be "interesting" (criterion yet to be decided).
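
    A common approach for labelled ordered trees is to give every subtree a canonical string encoding (its label plus the encodings of its children, in order) and count the encodings in a hash map; identical subtrees then collide no matter where they occur. A sketch with a hypothetical node shape (Python):

        from collections import Counter

        def encode(node):
            # Canonical form: label(child1,child2,...), children kept in order.
            return node.label + "(" + ",".join(encode(c) for c in node.children) + ")"

        def count_subtrees(trees):
            counts = Counter()
            def walk(node):
                counts[encode(node)] += 1
                for c in node.children:
                    walk(c)
            for t in trees:
                walk(t)
            return counts        # counts.most_common() lists the frequent subtrees

    Encoding is O(subtree size) per node, so O(n^2) per sentence in the worst case, which is harmless at 15-30 nodes per tree and 1000 trees.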

    Read the article

  • Throwing cats out of windows

    - by AndrewF
    Imagine you're in a tall building with a cat. The cat can survive a fall from a low-story window, but will die if thrown from a high floor. How can you figure out the longest drop that the cat can survive, using the least number of attempts?

    Obviously, if you only have one cat, then you can only search linearly. First throw the cat from the first floor. If it survives, throw it from the second. Eventually, after being thrown from floor f, the cat will die. You then know that floor f-1 was the maximal safe floor.

    But what if you have more than one cat? You can now try some sort of logarithmic search. Let's say that the building has 100 floors and you have two identical cats. If you throw the first cat out of the 50th floor and it dies, then you only have to search 50 floors linearly. You can do even better if you choose a lower floor for your first attempt. Let's say that you choose to tackle the problem 20 floors at a time and that the first fatal floor is #50. In that case, your first cat will survive flights from floors 20 and 40 before dying from floor 60. You then just have to check floors 41 through 49 individually. That's a total of 12 attempts, which is much better than the 50 you would need had you attempted to use binary elimination.

    In general, what's the best strategy and its worst-case complexity for an n-storied building with 2 cats? What about for n floors and m cats? Assume that all cats are equivalent: they will all survive or die from a fall from a given window. Also, every attempt is independent: if a cat survives a fall, it is completely unharmed.

    This isn't homework, although I may have solved it for a school assignment once. It's just a whimsical problem that popped into my head today, and I don't remember the solution. Bonus points if anyone knows the name of this problem or of the solution algorithm.
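
    (This is usually known as the egg drop problem.) The standard dynamic program inverts the question: with m cats and a attempts, the most floors you can fully distinguish satisfies f(a, m) = f(a-1, m-1) + f(a-1, m) + 1, i.e. the floors resolvable below the first throw if the cat dies, above it if it survives, plus the throw floor itself; the answer for n floors is then the smallest a with f(a, m) >= n. A sketch (Python):

        def min_attempts(n, m):
            """Worst-case throws needed for n floors with m cats."""
            f = [0] * (m + 1)     # f[j]: floors distinguishable with j cats
            a = 0
            while f[m] < n:
                a += 1
                for j in range(m, 0, -1):        # descending, so f[j-1] is
                    f[j] = f[j - 1] + f[j] + 1   # still the (a-1)-attempt value
            return a

        print(min_attempts(100, 2))   # 14, the classic two-cat/egg answer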

    Read the article

  • Calculate minimum moves to solve a puzzle

    - by Luke
    I'm in the process of creating a game where the user will be presented with 2 sets of colored tiles. In order to ensure that the puzzle is solvable, I start with one set, copy it to a second set, then swap tiles from one set to the other. Currently (and this is where my issue lies) the number of swaps is determined by the level the user is playing: 1 swap for level 1, 2 swaps for level 2, etc. This same number of swaps is used as a goal in the game. The user must complete the puzzle by swapping a tile from one set with a tile from the other to make the 2 sets match (by color). The order of the tiles in the (user-)solved puzzle doesn't matter, as long as the 2 sets match.

    The problem I have is that as the number of swaps I used to generate the puzzle approaches the number of tiles in each set, the puzzle becomes easier to solve. Basically, you can just drag from one set in whatever order you need for the second set and solve the puzzle with plenty of moves left. What I am looking to do is, after I finish building the puzzle, calculate the minimum number of moves required to solve it. Again, this is almost always less than the number of swaps used to create the puzzle, especially as the number of swaps approaches the number of tiles in each set. My goal is to calculate the best-case scenario and then give the user a "fudge factor" (i.e. 1.2 times the minimum number of moves). Solving the puzzle in under this number of moves will result in passing the level.

    A little background as to how I currently have the game configured:

        Levels 1 to 10: 9 tiles in each set. 5 different color tiles.
        Levels 11 to 20: 12 tiles in each set. 7 different color tiles.
        Levels 21 to 25: 15 tiles in each set. 10 different color tiles.

    Swapping within a set is not allowed. For each level, there will be at least 2 tiles of a given color (one for each set in the solved puzzle). Is there any type of algorithm anyone could recommend to calculate the minimum number of moves needed to solve a given puzzle?
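
    Since only the colour multisets have to match and each move exchanges one tile between the sets, the minimum can be computed directly: every colour must end up split evenly between the two sets, and each swap removes exactly one surplus tile from a set, so the minimum number of moves equals the total surplus. A sketch (Python, assuming the generator guarantees an even total per colour, which the copy-then-swap construction above does):

        from collections import Counter

        def min_moves(set_a, set_b):
            a, b = Counter(set_a), Counter(set_b)
            total = a + b
            # Each colour must split evenly across the two sets; count how
            # many tiles set A holds beyond its fair share.
            return sum(max(0, a[c] - total[c] // 2) for c in total)

        # One generator swap away from solved: exactly one move to fix.
        print(min_moves(list("GGBY"), list("RRBY")))   # 1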

    Read the article

  • Calendar Day View in PHP

    - by JamesArmes
    I'm working on adding a day-view option to an existing calendar solution. Like many people implementing their own calendars, I am trying to model Google Calendar. They have an excellent calendar solution, and their day view provides a lot of flexibility. For the most part, the implementation is going well; however, I'm having issues when it comes to conflicting events. Essentially, I want the events to share the same space, side by side. Events that start at the same time should have the longest event first. In the example data set I'm working with, I have four events:

        A: 10:30 - 11:30
        B: 13:30 - 14:30
        C: 10:30 - 11:00
        D: 10:45 - 14:00

    I can handle A, C, and D just fine; the problem comes with B. A should be left of C, which should be left of D, each taking one third of the width (it's fixed width, so I can do simple math to figure that out). The problem is that B should be under A and C, to the left of D. Ideally, B would take up the same amount of space as A and C (two-thirds width), but I would even settle for it taking up only one-third width. My array of events looks similar to the following:

        $events = array(
            '1030' => array(
                'uniqueID1' => array(
                    'start_time' => '1030',
                    'end_time'   => '1130',
                ),
                'uniqueID2' => array(
                    'start_time' => '1030',
                    'end_time'   => '1100',
                ),
            ),
            '1045' => array(
                'uniqueID3' => array(
                    'start_time' => '1045',
                    'end_time'   => '1400',
                ),
            ),
            '1330' => array(
                'uniqueID4' => array(
                    'start_time' => '1330',
                    'end_time'   => '1430',
                ),
            ),
        );

    My plan is to add some indexes to each event that include how many events it conflicts with (so I can calculate the width) and which position in that series it should take (so I can calculate the left value). However, that doesn't help with B. I'm thinking I might need an algorithm that uses some basic geometry and matrices, but I'm not sure where to begin. Any help is greatly appreciated.
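
    The usual day-view approach (and, as far as I can tell, roughly what Google Calendar does) is a sweep: sort events by start time with longer events first, drop each into the leftmost column whose last event has already ended, and open a new column when none is free; the width then comes from the number of columns in the event's overlap cluster. A sketch of the column assignment (Python, times as minutes since midnight):

        def layout(events):
            events = sorted(events, key=lambda e: (e[0], -(e[1] - e[0])))
            columns = []   # columns[i] = end time of that column's last event
            placed = []    # (start, end, column index)
            for start, end in events:
                for i, col_end in enumerate(columns):
                    if col_end <= start:          # column is free again
                        columns[i] = end
                        placed.append((start, end, i))
                        break
                else:
                    columns.append(end)           # open a new column
                    placed.append((start, end, len(columns) - 1))
            return placed

        # A 10:30-11:30, B 13:30-14:30, C 10:30-11:00, D 10:45-14:00
        print(layout([(630, 690), (810, 870), (630, 660), (645, 840)]))
        # A -> column 0, C -> column 1, D -> column 2, B -> column 0 (under A)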

    Read the article

  • Symfony, in remote host: Error 500. Unknown record property / related component "algorithm" on "sfGuardUser"

    - by user248959
    Hi, after deploying I get the error below after logging in. Sf 1.3, sfDoctrineGuardPlugin. And I have this schema.yml in config/doctrine:

        Usuario:
          inheritance:
            extends: sfGuardUser
            type: simple
          columns:
            username:
              type: string(128)
              notnull: false
              unique: true
            nombre_apellidos: string(60)
            sexo: string(5)
            fecha_nac: date
            provincia: string(60)
            localidad: string(255)
            email_address: string(255)
            fotografia: string(255)
            avatar: string(255)
            avatar_mensajes: string(255)
          relations:
            Usuario:
              local: user1_id
              foreign: user2_id
              refClass: AmigoUsuario
              equal: true

    The error:

        500 | Internal Server Error | Doctrine_Record_UnknownPropertyException
        Unknown record property / related component "algorithm" on "sfGuardUser"

    Stack trace:

        at () in SF_ROOT_DIR/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/vendor/doctrine/Doctrine/Record/Filter/Standard.php line 55
            53. public function filterGet(Doctrine_Record $record, $name)
            54. {
            55.     throw new Doctrine_Record_UnknownPropertyException(sprintf('Unknown record property / related component "%s" on "%s"', $name, get_class($record)));
            56. }
            57. }
        at Doctrine_Record_Filter_Standard->filterGet(object('sfGuardUser'), 'algorithm') in SF_ROOT_DIR/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/vendor/doctrine/Doctrine/Record.php line 1382
            1379. $success = false;
            1380. foreach ($this->_table->getFilters() as $filter) {
            1381.     try {
            1382.         $value = $filter->filterGet($this, $fieldName);
            1383.         $success = true;
            1384.     } catch (Doctrine_Exception $e) {}
            1385. }
        at Doctrine_Record->_get('algorithm', 1) in SF_ROOT_DIR/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/vendor/doctrine/Doctrine/Record.php line 1337
            1334.         return $this->$accessor($load);
            1335.     }
            1336. }
            1337. return $this->_get($fieldName, $load);
            1338. }
            1339.
            1340. protected function _get($fieldName, $load = true)
        at Doctrine_Record->get('algorithm') in SF_ROOT_DIR/lib/vendor/symfony/lib/plugins/sfDoctrinePlugin/lib/record/sfDoctrineRecord.class.php line 212
            209. return call_user_func_array(
            210.     array($this, $verb),
            211.     array_merge(array($entityName), $arguments)
            212. );
            213. } else {
            214.     $failed = true;
            215. }
        at sfDoctrineRecord->__call(array(object('sfGuardUser'), 'get'), array('algorithm')) in n/a line n/a
        at sfGuardUser->getAlgorithm('getAlgorithm', array()) in SF_ROOT_DIR/plugins/sfDoctrineGuardPlugin/lib/model/doctrine/PluginsfGuardUser.class.php line 96
            94. public function checkPasswordByGuard($password)
            95. {
            96.     $algorithm = $this->getAlgorithm();
            97.     if (false !== $pos = strpos($algorithm, '::'))
            98.     {
            99.         $algorithm = array(substr($algorithm, 0, $pos), substr($algorithm, $pos + 2));
        at PluginsfGuardUser->checkPasswordByGuard() in SF_ROOT_DIR/plugins/sfDoctrineGuardPlugin/lib/model/doctrine/PluginsfGuardUser.class.php line 83
            80. }
            81. else
            82. {
            83.     return $this->checkPasswordByGuard($password);
            84. }
            85. }
        at PluginsfGuardUser->checkPassword('m') in SF_ROOT_DIR/lib/sfGuardValidatorUserByEmail.class.php line 28
            26. // password is ok?
            28. if ($user->checkPassword($password))
        at sfGuardValidatorUserByEmail->doClean('m')

    Any idea? Javi

    Read the article

  • Is it possible to shuffle a 2D matrix while preserving row AND column frequencies?

    - by j_random_hacker
    Suppose I have a 2D array like the following:

        GACTG
        AGATA
        TCCGA

    Each array element is taken from a small finite set (in my case, DNA nucleotides: {A, C, G, T}). I would like to randomly shuffle this array somehow while preserving both row and column nucleotide frequencies. Is this possible? Can it be done efficiently?

    [EDIT]: By this I mean I want to produce a new matrix where each row has the same number of As, Cs, Gs and Ts as the corresponding row of the original matrix, and where each column has the same number of As, Cs, Gs and Ts as the corresponding column of the original matrix. Permuting the rows or columns of the original matrix will not achieve this in general. (E.g. for the example above, the top row has 2 Gs and 1 each of A, C and T; if this row were swapped with row 2, the top row in the resulting matrix would have 3 As, 1 G and 1 T.)

    It's simple enough to preserve just column frequencies by shuffling a column at a time, and likewise for rows. But doing this will in general alter the frequencies of the other kind.

    My thoughts so far: if it's possible to pick 2 rows and 2 columns so that the 4 elements at the corners of this rectangle have the pattern

        XY
        YX

    for some pair of distinct elements X and Y, then replacing these 4 elements with

        YX
        XY

    will maintain both row and column frequencies. In the example at the top, this can be done for (at least) rows 1 and 2 and columns 2 and 5 (whose corners give the 2x2 matrix AG;GA), and for rows 1 and 3 and columns 1 and 4 (whose corners give GT;TG). Clearly this could be repeated a number of times to produce some level of randomisation.

    Generalising a bit, any "subrectangle" induced by a subset of rows and a subset of columns, in which the frequencies of all rows are the same and the frequencies of all columns are the same, can have both its rows and columns permuted to produce a valid complete rectangle. (Of these, only those subrectangles in which at least 1 element is changed are actually interesting.) Big questions:

        1. Are all valid complete matrices reachable by a series of such "subrectangle rearrangements"? I suspect the answer is yes.
        2. Are all valid subrectangle rearrangements decomposable into a series of 2x2 swaps? I suspect the answer is no, but I hope it's yes, since that would seem to make it easier to come up with an efficient algorithm.
        3. Can some or all of the valid rearrangements be computed efficiently?

    This question addresses a special case in which the set of possible elements is {0, 1}. The solutions people have come up with there are similar to what I have come up with myself, and are probably usable, but not ideal as they require an arbitrary amount of backtracking to work correctly. Also, I'm concerned that only 2x2 swaps are considered. Finally, I would ideally like a solution that can be proven to select a matrix uniformly at random from the set of all matrices having identical row frequencies and column frequencies to the original. I know, I'm asking for a lot :)
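
    For what it's worth, the 2x2 rearrangement described above is exactly the move used in Markov-chain shuffles of contingency tables (the {0, 1} case is the classic checkerboard-swap chain), so repeatedly applying random valid swaps is a practical randomiser, though how many iterations are needed for near-uniformity is the hard theoretical part. A sketch of the swap chain (Python):

        import random

        def swap_shuffle(M, iters=100000):
            M = [row[:] for row in M]            # work on a copy
            rows, cols = len(M), len(M[0])
            for _ in range(iters):
                r1, r2 = random.sample(range(rows), 2)
                c1, c2 = random.sample(range(cols), 2)
                a, b = M[r1][c1], M[r1][c2]
                # An XY/YX corner pattern can flip to YX/XY without changing
                # any row or column symbol counts.
                if a != b and M[r2][c1] == b and M[r2][c2] == a:
                    M[r1][c1], M[r1][c2] = b, a
                    M[r2][c1], M[r2][c2] = a, b
            return M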

    Read the article

  • Calculate the number of ways to roll a certain number

    - by helloworld
    I'm a high school Computer Science student, and today I was given this problem:

        Program Description: There is a belief among dice players that in throwing three dice a ten is easier to get than a nine. Can you write a program that proves or disproves this belief? Have the computer compute all the possible ways three dice can be thrown: 1 + 1 + 1, 1 + 1 + 2, 1 + 1 + 3, etc. Add up each of these possibilities and see how many give nine as the result and how many give ten. If more give ten, then the belief is proven.

    I quickly worked out a brute-force solution:

        int sum, tens, nines;
        tens = nines = 0;
        for (int i = 1; i <= 6; i++) {
            for (int j = 1; j <= 6; j++) {
                for (int k = 1; k <= 6; k++) {
                    sum = i + j + k;
                    // Ternary operators are fun!
                    tens += ((sum == 10) ? 1 : 0);
                    nines += ((sum == 9) ? 1 : 0);
                }
            }
        }
        System.out.println("There are " + tens + " ways to roll a 10");
        System.out.println("There are " + nines + " ways to roll a 9");

    which works just fine, and a brute-force solution is what the teacher wanted us to do. However, it doesn't scale, and I am trying to find an algorithm that can calculate the number of ways to roll n dice to get a specific number. Therefore, I started generating the number of ways to get each sum with n dice. With 1 die, there is obviously 1 way to get each sum. I then calculated, through brute force, the combinations with 2 and 3 dice. These are for two:

        There are 1 ways to roll a 2
        There are 2 ways to roll a 3
        There are 3 ways to roll a 4
        There are 4 ways to roll a 5
        There are 5 ways to roll a 6
        There are 6 ways to roll a 7
        There are 5 ways to roll a 8
        There are 4 ways to roll a 9
        There are 3 ways to roll a 10
        There are 2 ways to roll a 11
        There are 1 ways to roll a 12

    which looks straightforward enough; it can be calculated with a simple linear absolute-value function. But then things start getting trickier. With 3:

        There are 1 ways to roll a 3
        There are 3 ways to roll a 4
        There are 6 ways to roll a 5
        There are 10 ways to roll a 6
        There are 15 ways to roll a 7
        There are 21 ways to roll a 8
        There are 25 ways to roll a 9
        There are 27 ways to roll a 10
        There are 27 ways to roll a 11
        There are 25 ways to roll a 12
        There are 21 ways to roll a 13
        There are 15 ways to roll a 14
        There are 10 ways to roll a 15
        There are 6 ways to roll a 16
        There are 3 ways to roll a 17
        There are 1 ways to roll a 18

    So I look at that and think: cool, triangular numbers! However, then I notice those pesky 25s and 27s. So it's obviously not triangular numbers, but still some polynomial expansion, since it's symmetric. So I take to Google, and I come across this page that goes into some detail about how to do this with math. It is fairly easy (albeit long) to find this using repeated derivatives or expansion, but it would be much harder for me to program that. I didn't quite understand the second and third answers, since I have never encountered that notation or those concepts in my math studies before. Could someone please explain how I could write a program to do this, or explain the solutions given on that page, for my own understanding of combinatorics?

    EDIT: I'm looking for a mathematical way to solve this, one that gives an exact theoretical number, not one obtained by simulating dice.
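
    The generating-function method on that page gives the closed form, but the same exact counts fall out of a simple dynamic program with no calculus: the number of ways to reach sum s with n dice is the sum, over the last die's face f, of the ways to reach s - f with n - 1 dice. A sketch (Python):

        def dice_ways(n, sides=6):
            ways = {0: 1}                    # zero dice: one way to make 0
            for _ in range(n):
                nxt = {}
                for s, count in ways.items():
                    for f in range(1, sides + 1):
                        nxt[s + f] = nxt.get(s + f, 0) + count
                ways = nxt
            return ways

        w = dice_ways(3)
        print(w[9], w[10])   # 25 27: ten really is easier to roll than nine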

    Read the article

  • Testing for Adjacent Cells In a Multi-level Grid

    - by Steve
    I'm designing an algorithm to test whether cells on a grid are adjacent or not. The catch is that the cells are not on a flat grid. They are on a multi-level grid such as the one drawn below. Level 1 (Top Level) | - - - - - | | A | B | C | | - - - - - | | D | E | F | | - - - - - | | G | H | I | | - - - - - | Level 2 | -Block A- | -Block B- | | 1 | 2 | 3 | 1 | 2 | 3 | | - - - - - | - - - - - | | 4 | 5 | 6 | 4 | 5 | 6 | ... | - - - - - | - - - - - | | 7 | 8 | 9 | 7 | 8 | 9 | | - - - - - | - - - - - | | -Block D- | -Block E- | | 1 | 2 | 3 | 1 | 2 | 3 | | - - - - - | - - - - - | | 4 | 5 | 6 | 4 | 5 | 6 | ... | - - - - - | - - - - - | | 7 | 8 | 9 | 7 | 8 | 9 | | - - - - - | - - - - - | . . . . . . This diagram is simplified from my actual need but the concept is the same. There is a top level block with many cells within it (level 1). Each block is further subdivided into many more cells (level 2). Those cells are further subdivided into level 3, 4 and 5 for my project but let's just stick to two levels for this question. I'm receiving inputs for my function in the form of "A8, A9, B7, D3". That's a list of cell Ids where each cell Id has the format (level 1 id)(level 2 id). Let's start by comparing just 2 cells, A8 and A9. That's easy because they are in the same block. private static RelativePosition getRelativePositionInTheSameBlock(String v1, String v2) { RelativePosition relativePosition; if( v1-v2 == -1 ) { relativePosition = RelativePosition.LEFT_OF; } else if (v1-v2 == 1) { relativePosition = RelativePosition.RIGHT_OF; } else if (v1-v2 == -BLOCK_WIDTH) { relativePosition = RelativePosition.TOP_OF; } else if (v1-v2 == BLOCK_WIDTH) { relativePosition = RelativePosition.BOTTOM_OF; } else { relativePosition = RelativePosition.NOT_ADJACENT; } return relativePosition; } An A9 - B7 comparison could be done by checking if A is a multiple of BLOCK_WIDTH and whether B is (A-BLOCK_WIDTH+1). Either that or just check naively if the A/B pair is 3-1, 6-4 or 9-7 for better readability. For B7 - D3, they are not adjacent but D3 is adjacent to A9 so I can do a similar adjacency test as above. So getting away from the little details and focusing on the big picture. Is this really the best way to do it? Keeping in mind the following points: I actually have 5 levels not 2, so I could potentially get a list like "A8A1A, A8A1B, B1A2A, B1A2B". Adding a new cell to compare still requires me to compare all the other cells before it (seems like the best I could do for this step is O(n)) The cells aren't all 3x3 blocks, they're just that way for my example. They could be MxN blocks with different M and N for different levels. In my current implementation above, I have separate functions to check adjacency if the cells are in the same blocks, if they are in separate horizontally adjacent blocks or if they are in separate vertically adjacent blocks. That means I have to know the position of the two blocks at the current level before I call one of those functions for the layer below. Judging by the complexity of having to deal with mulitple functions for different edge cases at different levels and having 5 levels of nested if statements. I'm wondering if another design is more suitable. Perhaps a more recursive solution, use of other data structures, or perhaps map the entire multi-level grid to a single-level grid (my quick calculations gives me about 700,000+ atomic cell ids). Even if I go that route, mapping from multi-level to single level is a non-trivial task in itself.

    Read the article

  • Prim's MST algorithm implementation with Java

    - by user1290164
    I'm trying to write a program that will find the MST of a given undirected weighted graph with Kruskal's and Prim's algorithms. I've successfully implemented Kruskal's algorithm in the program, but I'm having trouble with Prim's. To be more precise, I can't figure out how to actually build the Prim function so that it iterates through all the vertices in the graph. I'm getting some IndexOutOfBoundsException errors during program execution. I'm not sure how much information is needed for others to get the idea of what I have done so far, but hopefully there won't be too much useless information.

    This is what I have so far: I have a Graph, an Edge and a Vertex class. The Vertex class is mostly just information storage containing the name (number) of the vertex. The Edge class can create a new Edge that takes the parameters (Vertex start, Vertex end, int edgeWeight); the class has methods to return the usual info like the start vertex, the end vertex and the weight. The Graph class reads data from a text file and adds new Edges to an ArrayList. The text file also tells us how many vertices the graph has, and that gets stored too. In the Graph class, I have a Prim() method that's supposed to calculate the MST:

        public ArrayList<Edge> Prim(Graph G) {
            ArrayList<Edge> edges = G.graph; // Copies the ArrayList with all edges in it.
            ArrayList<Edge> MST = new ArrayList<Edge>();
            Random rnd = new Random();
            Vertex startingVertex = edges.get(rnd.nextInt(G.returnVertexCount())).returnStartingVertex(); // This is just to randomize the starting vertex.
            // This is supposed to be the main loop to find the MST, but this is probably horribly wrong..
            while (MST.size() < returnVertexCount()) {
                Edge e = findClosestNeighbour(startingVertex);
                MST.add(e);
                visited.add(e.returnStartingVertex());
                visited.add(e.returnEndingVertex());
                edges.remove(e);
            }
            return MST;
        }

    The method findClosestNeighbour() looks like this:

        public Edge findClosestNeighbour(Vertex v) {
            ArrayList<Edge> neighbours = new ArrayList<Edge>();
            ArrayList<Edge> edges = graph;
            for (int i = 0; i < edges.size() - 1; ++i) {
                if (edges.get(i).endPoint() == s.returnVertexID() && !visited(edges.get(i).returnEndingVertex())) {
                    neighbours.add(edges.get(i));
                }
            }
            return neighbours.get(0); // This is the minimum weight edge in the list.
        }

    ArrayList<Vertex> visited and ArrayList<Edges> graph get constructed when creating a new graph. The visited() method is simply a boolean check to see whether the ArrayList visited contains the Vertex we're thinking about moving to. I tested findClosestNeighbour() independently and it seemed to be working, but if someone finds something wrong with it then that feedback is welcome also. Mainly, though, as I mentioned, my problem is with actually building the main loop in the Prim() method, and if any additional info is needed I'm happy to provide it. Thank you.

    EDIT: To clarify my train of thought with the Prim() method: what I want to do is first randomize the starting point in the graph. After that, I will find the closest neighbour to that starting point. Then we'll add the edge connecting those two points to the MST, and also add the vertices to the visited list for checking later, so that we won't form any loops in the graph.

    Here's the error that gets thrown:

        Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
            at java.util.ArrayList.rangeCheck(Unknown Source)
            at java.util.ArrayList.get(Unknown Source)
            at Graph.findClosestNeighbour(graph.java:203)
            at Graph.Prim(graph.java:179)
            at MST.main(MST.java:49)

    Line 203: return neighbours.get(0); in findClosestNeighbour()
    Line 179: Edge e = findClosestNeighbour(startingVertex); in Prim()
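
    For comparison, here is a compact lazy-deletion Prim over an adjacency list, using a priority queue instead of rescanning for the closest neighbour (Python sketch; in Java a PriorityQueue<Edge> plays the role of the heap). Note it terminates via the while condition rather than indexing into a possibly empty candidate list, which is the immediate cause of the IndexOutOfBoundsException above:

        import heapq

        def prim(adj, start):
            """adj: {u: [(weight, v), ...]} for an undirected graph."""
            visited = {start}
            heap = [(w, start, v) for w, v in adj[start]]
            heapq.heapify(heap)
            mst = []
            while heap and len(visited) < len(adj):
                w, u, v = heapq.heappop(heap)
                if v in visited:
                    continue                   # stale entry: lazy deletion
                visited.add(v)
                mst.append((u, v, w))
                for w2, v2 in adj[v]:
                    if v2 not in visited:
                        heapq.heappush(heap, (w2, v, v2))
            return mst                         # O(E log E) overall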

    Read the article

  • Gradient algorithm produces little white dots

    - by user146780
    I'm working on an algorithm to generate point-to-point linear gradients. I have a rough proof-of-concept implementation done:

        GLuint OGLENGINEFUNCTIONS::CreateGradient( std::vector<ARGBCOLORF> &input, POINTFLOAT start, POINTFLOAT end, int width, int height, bool radial )
        {
            std::vector<POINT> pol;
            std::vector<GLubyte> pdata(width * height * 4);
            std::vector<POINTFLOAT> linearpts;
            std::vector<float> lookup;

            float distance = GetDistance(start, end);
            RoundNumber(distance);
            POINTFLOAT temp;
            float incr = 1 / (distance + 1);

            for (int l = 0; l < 100; l++)
            {
                POINTFLOAT outA;
                POINTFLOAT OutB;
                float dirlen;
                float perplen;
                POINTFLOAT dir;
                POINTFLOAT ndir;
                POINTFLOAT perp;
                POINTFLOAT nperp;
                POINTFLOAT perpoffset;
                POINTFLOAT diroffset;

                dir.x = end.x - start.x;
                dir.y = end.y - start.y;
                dirlen = sqrt((dir.x * dir.x) + (dir.y * dir.y));
                ndir.x = static_cast<float>(dir.x * 1.0 / dirlen);
                ndir.y = static_cast<float>(dir.y * 1.0 / dirlen);
                perp.x = dir.y;
                perp.y = -dir.x;
                perplen = sqrt((perp.x * perp.x) + (perp.y * perp.y));
                nperp.x = static_cast<float>(perp.x * 1.0 / perplen);
                nperp.y = static_cast<float>(perp.y * 1.0 / perplen);

                perpoffset.x = static_cast<float>(nperp.x * l * 0.5);
                perpoffset.y = static_cast<float>(nperp.y * l * 0.5);
                diroffset.x = static_cast<float>(ndir.x * 0 * 0.5);
                diroffset.y = static_cast<float>(ndir.y * 0 * 0.5);

                outA.x = end.x + perpoffset.x + diroffset.x;
                outA.y = end.y + perpoffset.y + diroffset.y;
                OutB.x = start.x + perpoffset.x - diroffset.x;
                OutB.y = start.y + perpoffset.y - diroffset.y;

                for (float i = 0; i < 1; i += incr)
                {
                    temp = GetLinearBezier(i, outA, OutB);
                    RoundNumber(temp.x);
                    RoundNumber(temp.y);
                    linearpts.push_back(temp);
                    lookup.push_back(i);
                }

                for (unsigned int j = 0; j < linearpts.size(); j++)
                {
                    if (linearpts[j].x < width && linearpts[j].x >= 0 &&
                        linearpts[j].y < height && linearpts[j].y >= 0)
                    {
                        pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 0] = (GLubyte) j;
                        pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 1] = (GLubyte) j;
                        pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 2] = (GLubyte) j;
                        pdata[linearpts[j].x * 4 * width + linearpts[j].y * 4 + 3] = (GLubyte) 255;
                    }
                }

                lookup.clear();
                linearpts.clear();
            }

            return CreateTexture(pdata, width, height);
        }

    It works as I would expect most of the time, but at certain angles it produces little white dots. I can't figure out what causes this. This is what it looks like at most angles (good): http://img9.imageshack.us/img9/5922/goodgradient.png But once in a while it looks like this (bad): http://img155.imageshack.us/img155/760/badgradient.png What could be causing the white dots? Is there maybe also a better way to generate my gradients if no solution is possible for this? Thanks
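
    (Isolated holes like this are typical of forward rasterisation: stepping a parameter and rounding to pixel coordinates can skip a pixel at some angles, leaving the background showing through.) The robust alternative is to iterate over destination pixels and project each onto the gradient axis, so every pixel is written exactly once. A sketch of the per-pixel formulation (Python, single channel for brevity):

        def linear_gradient(width, height, start, end):
            sx, sy = start
            dx, dy = end[0] - sx, end[1] - sy
            len2 = dx * dx + dy * dy             # squared gradient length
            img = [[0] * width for _ in range(height)]
            for y in range(height):
                for x in range(width):
                    # Normalised projection of (x, y) onto the start->end axis.
                    t = ((x - sx) * dx + (y - sy) * dy) / len2
                    t = min(1.0, max(0.0, t))
                    img[y][x] = int(t * 255)     # interpolate between the stops
            return img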

    Read the article

  • What is wrong with my logic for the divide and conquer algorithm for Closest pair problem?

    - by Programming Noob
    I have been following Coursera's course on algorithms and came up with a thought about the divide-and-conquer algorithm for the closest-pair problem that I want clarified. As per Prof. Roughgarden's algorithm (which you can see here if you're interested): for a given set of points P, of which we have two copies sorted in the X and Y directions, Px and Py, the algorithm can be given as closestPair(Px, Py):

        1. Divide the points into a left half Q and a right half R, and form sorted copies of both halves along the x and y directions: Qx, Qy, Rx, Ry.
        2. Let closestPair(Qx, Qy) be points p1 and q1.
        3. Let closestPair(Rx, Ry) be p2, q2.
        4. Let delta be the minimum of dist(p1, q1) and dist(p2, q2).
        5. This is the unfortunate case: let p3, q3 be closestSplitPair(Px, Py, delta).
        6. Return the best result.

    Now, the clarification that I want relates to step 5. Prof. Roughgarden says that since the points are already sorted in the X and Y directions, to find the best pair in step 5 we need to iterate over the points in the strip of width 2*delta, from bottom to top, and in the inner loop we need only 7 comparisons. Can this be bettered to just one? How I think it is possible seemed a little difficult to explain in plain text, so I drew a diagram, wrote it on paper, and uploaded it here: Since no one else came up with it, I'm pretty sure there's some error in my line of thought. But I have literally been thinking about this for HOURS now, and I just HAD to post this. It's all that is in my head. Can someone point out where I'm going wrong?
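
    For reference, the step-5 strip scan the lectures describe, and the source of the 7-comparison bound, looks like the sketch below: strip points are taken in y order, and each needs to be compared only against the next seven, because at most eight points at pairwise distance >= delta can fit in any 2*delta by delta box (Python):

        from math import hypot

        def closest_split_pair(py, delta, mid_x):
            # py: all points sorted by y; keep those within delta of the split line.
            strip = [p for p in py if abs(p[0] - mid_x) < delta]
            best, pair = delta, None
            for i, p in enumerate(strip):
                for q in strip[i + 1:i + 8]:     # the packing argument: 7 checks
                    d = hypot(p[0] - q[0], p[1] - q[1])
                    if d < best:
                        best, pair = d, (p, q)
            return pair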

    Read the article

  • If your algorithm is correct, does it matter how long it took you to write it?

    - by John Isaacks
    I recently found out that Facebook has a programming challenge which, if completed correctly, automatically gets you a phone interview. There is a sample challenge that asks you to write an algorithm that can solve a Tower of Hanoi type problem: given a number of pegs and discs, and an initial and final configuration, your algorithm must determine the fewest steps possible to get to the final configuration, and output the steps.

    This sample challenge gives you a 45-minute time limit, but allows you to still test your code to see if it passes once your time limit expires. I did not know of any cute math solution that could solve it, and I didn't want to look for one, since I think that would be cheating. So I tried to solve the challenge the best I could on my own. I was able to make an algorithm that worked and passed. However, it took me over 4 hours to write, much longer than the 45-minute requirement. Since it took me so much longer than the allotted time, I have not attempted the actual challenge.

    This got me wondering, though: in reality, does it really matter that it took me that long? I mean, is this a sign that I will not be able to get a job at a place like this (not just Facebook, but Google, Fog Creek, etc.) and need to lower my aspirations, or should the fact that I actually passed on my first attempt, even though it took too long, be taken as a good sign?

    Read the article

  • questions regarding the use of A* with the 15-square puzzle

    - by Cheeso
    I'm trying to build an A* solver for a 15-square puzzle. The goal is to re-arrange the tiles so that they appear in their natural positions. You can only slide one tile at a time. Each possible state of the puzzle is a node in the search graph.

    For the h(x) function, I am using an aggregate sum, across all tiles, of each tile's dislocation from its goal position. In the above image, the 5 is at location 0,0, and it belongs at location 1,0, so it contributes 1 to the h(x) function. The next tile is the 11, located at 0,1, which belongs at 2,2, so it contributes 3 to h(x). And so on. EDIT: I now understand this is what they call "Manhattan distance", or "taxicab distance".

    I have been using a step count for g(x). In my implementation, for any node in the state graph, g is just +1 from the prior node's g. To find successive nodes, I just examine where I can possibly move the "hole" in the puzzle. There are 3 neighbors for the puzzle state (aka node) that is displayed: the hole can move north, west, or east.

    My A* search sometimes converges to a solution in 20s, sometimes 180s, and sometimes doesn't converge at all (I waited 10 minutes or more). I think h is reasonable. I'm wondering if I've modeled g properly. In other words, is it possible that my A* function is reaching a node in the graph via a path that is not the shortest path? Maybe I have not waited long enough? Maybe 10 minutes is not long enough?

    For a fully random arrangement (assuming no parity problems), what is the average number of permutations an A* solution will examine? (please show the math)

    I'm going to look for logic errors in my code, but in the meantime, any tips? (ps: it's done in Javascript). Also, no, this isn't CompSci homework. It's just a personal exploration thing. I'm just trying to learn Javascript.

    EDIT: I've found that the run time is highly dependent upon the heuristic. I saw the 10x factor applied to the heuristic in the article someone mentioned, and it made me wonder: why 10x? Why linear? Because this is done in Javascript, I could modify the code to dynamically update an HTML table with the node currently being considered. This allowed me to peek at the algorithm as it was progressing. With a regular taxicab-distance heuristic, I watched as it failed to converge. There were 5s and 12s in the top row, and they kept hanging around. I'd see 1, 2, 3, 4 creep into the top row, but then they'd drop out, and other numbers would move up there. What I was hoping to see was 1, 2, 3, 4 sort of creeping up to the top, and then staying there.

    I thought to myself: this is not the way I solve this personally. Doing it manually, I solve the top row, then the 2nd row, then the 3rd and 4th rows sort of concurrently. So I tweaked the h(x) function to more heavily weight the higher rows and the "lefter" columns. The result was that the A* converged much more quickly. It now runs in 3 minutes instead of "indefinitely". With the "peek" I talked about, I can see the smaller numbers creep up to the higher rows and stay there. Not only does this seem like the right thing, it runs much faster.

    I'm in the process of trying a bunch of variations. It seems pretty clear that A* run time is very sensitive to the heuristic. Currently the best heuristic I've found uses the summation of dislocation * ((4-i) + (4-j)), where i and j are the row and column and dislocation is the taxicab distance.

    One interesting part of the result I got: with a particular heuristic I find a path very quickly, but it is obviously not the shortest path. I think this is because I am weighting the heuristic. In one case I got a path of 178 steps in 10s. My own manual effort produced a solution in 87 moves (in much more than 10s). More investigation is warranted. So the result is that I am seeing it converge much faster, but the path is definitely not the shortest. I have to think about this more.

    Code:

        var stop = false;
        function Astar(start, goal, callback) {
            // start and goal are nodes in the graph, represented by
            // an array of 16 ints. The goal is: [1,2,3,...14,15,0]
            // Zero represents the hole.
            // callback is a method to call when finished. This runs a long time,
            // therefore we need to use setTimeout() to break it up, to avoid
            // the browser warning like "Stop running this script?"

            // g is the actual distance traveled from initial node to current node.
            // h is the heuristic estimate of distance from current to goal.
            stop = false;
            start.g = start.dontgo = 0;

            // calcHeuristic inserts an .h member into the array
            calcHeuristicDistance(start);

            // start the stack with one element
            var closed = [];      // set of nodes already evaluated.
            var open = [ start ]; // set of nodes to evaluate (start with initial node)

            var iteration = function() {
                if (open.length == 0) {
                    // no more nodes. Fail.
                    callback(null);
                    return;
                }
                var current = open.shift(); // get highest priority node

                // update the browser with a table representation of the
                // node being evaluated
                $("#solution").html(stateToString(current));

                // check solution returns true if current == goal
                if (checkSolution(current, goal)) {
                    // reconstructPath just records the position of the hole
                    // through each node
                    var path = reconstructPath(start, current);
                    callback(path);
                    return;
                }

                closed.push(current);

                // get the set of neighbors. This is 3 or fewer nodes.
                // (nextStates is optimized to NOT turn directly back on itself)
                var neighbors = nextStates(current, goal);
                for (var i = 0; i < neighbors.length; i++) {
                    var n = neighbors[i];

                    // skip this one if we've already visited it
                    if (closed.containsNode(n)) continue;

                    // .g, .h, and .previous get assigned implicitly when
                    // calculating neighbors. n.g is nothing more than
                    // current.g+1 ;

                    // add to the open list
                    if (!open.containsNode(n)) {
                        // slot into the list, in priority order (minimum f first)
                        open.priorityPush(n);
                        n.previous = current;
                    }
                }

                if (stop) {
                    callback(null);
                    return;
                }
                setTimeout(iteration, 1);
            };

            // kick off the first iteration
            iteration();
            return null;
        }
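
    (The non-shortest paths at the end are expected: once the heuristic is weighted it can overestimate the true remaining distance, so it is no longer admissible, and A* then loses its shortest-path guarantee; trading optimality for speed this way is essentially weighted A*.) For reference, the plain admissible Manhattan heuristic for the [1,2,...,15,0] goal, as a Python sketch:

        def manhattan(state, size=4):
            """Sum of tile dislocations; admissible, so A* stays optimal."""
            dist = 0
            for pos, tile in enumerate(state):
                if tile == 0:
                    continue                 # the hole does not count
                goal = tile - 1              # tile t belongs at index t - 1
                dist += (abs(pos // size - goal // size)
                         + abs(pos % size - goal % size))
            return dist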

    Read the article
