Search Results

Search found 7490 results on 300 pages for 'algorithm analysis'.

  • How does one unit test an algorithm

    - by Asa Baylus
    I was recently working on a JS slideshow which rotates images using a weighted average algorithm. Thankfully, timgilbert has written a weighted list script which implements the exact algorithm I needed. However, in his documentation he notes under todos: "unit tests!". What I'd like to know is how one goes about unit testing an algorithm. In the case of a weighted average, how would you create a proof that the averages are accurate when there is an element of randomness? Code samples of something similar would be very helpful to my understanding.
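
    One widely used approach (a sketch, not timgilbert's actual tests -- the function below is a hypothetical stand-in for the weighted-list logic): make the randomness injectable so a seeded generator gives repeatable results, then assert exact behaviour for a fixed seed and approximate frequencies over many draws.

        import random
        from collections import Counter

        def weighted_choice(items, weights, rng):
            """Pick one item with probability proportional to its weight."""
            total = sum(weights)
            r = rng.uniform(0, total)
            upto = 0.0
            for item, w in zip(items, weights):
                upto += w
                if r <= upto:
                    return item
            return items[-1]  # guard against float rounding at the top end

        # Deterministic test: a seeded RNG makes the "random" sequence repeatable.
        rng = random.Random(42)
        assert weighted_choice(["a", "b"], [1, 1], rng) in ("a", "b")

        # Statistical test: over many draws, observed frequencies should be
        # close to the weights (within a tolerance, since it is still random).
        rng = random.Random(42)
        counts = Counter(weighted_choice(["a", "b", "c"], [5, 3, 2], rng)
                         for _ in range(100_000))
        assert abs(counts["a"] / 100_000 - 0.5) < 0.01
        assert abs(counts["b"] / 100_000 - 0.3) < 0.01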

    Read the article

  • "Bad apple" algorithm, or process crashes shared sandbox

    - by Roger Lipscombe
    I'm looking for an algorithm to handle the following problem, which I'm (for now) calling the "bad apple" algorithm. The problem: I've got N processes running in M sandboxes, where N > M. It's impractical to give each process its own sandbox. At least one of those processes is badly-behaved and is bringing down the entire sandbox, thus killing all of the other processes. If it were a single badly-behaved process, then I could use a simple bisection to put half of the processes in one sandbox and half in another, until I found the miscreant. This could probably be extended by partitioning the set into more than two pieces until the badly-behaved process was found. For example, partitioning into 8 sets allows me to eliminate 7/8 of the search space at each step, and so on. The question: if more than one process is badly-behaved -- including the possibility that they're all badly-behaved -- does this naive algorithm "work"? Is it guaranteed to work within some sensible bounds?
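
    Recursive group testing does still work with any number of bad apples: a group that survives is entirely cleared, a crashing singleton is guilty, and a crashing group is split and both halves retested. With b bad processes out of n this needs about O(b log(n/b) + b) tests. A sketch, where crashes() is a hypothetical oracle meaning "run this group together in one sandbox and see whether it dies":

        def find_bad_apples(group, crashes):
            """Recursive group testing. Correct for any number of bad
            processes, including the case where all of them are bad."""
            if not crashes(group):
                return []                    # everyone in this group is innocent
            if len(group) == 1:
                return list(group)           # a singleton that crashes is a bad apple
            mid = len(group) // 2
            return (find_bad_apples(group[:mid], crashes) +
                    find_bad_apples(group[mid:], crashes))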

    Read the article

  • Nearest color algorithm using Hex Triplet

    - by Lijo
    The following page lists colors with names: http://en.wikipedia.org/wiki/List_of_colors. For example, the hex triplet #5D8AA8 is "Air Force Blue". This information will be stored in a database table (tbl_Color (HexTriplet, ColorName)) in my system. Suppose I created a color with hex triplet #5D8AA7. I need to get the nearest color available in the tbl_Color table. The expected answer is "#5D8AA8 - Air Force Blue", because #5D8AA8 is the nearest color to #5D8AA7. Is there an algorithm for finding the nearest color? How would you write it in C# / Java? REFERENCE: http://stackoverflow.com/questions/5440051/algorithm-for-parsing-hex-into-color-family http://stackoverflow.com/questions/6130621/algorithm-for-finding-the-color-between-two-others-in-the-colorspace-of-painte Suggested formula (by @user281377): choose the color where the sum of the squared channel differences is minimal: Square(Red(source)-Red(target)) + Square(Green(source)-Green(target)) + Square(Blue(source)-Blue(target))
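
    A sketch of the suggested nearest-neighbour lookup (the thread asks for C# or Java; the same squared-distance logic is shown here in Python, with a two-row stand-in for tbl_Color):

        # stand-in for tbl_Color: hex triplet -> name
        COLORS = {"#5D8AA8": "Air Force Blue", "#00308F": "Air Force Blue (USAF)"}

        def hex_to_rgb(h):
            h = h.lstrip("#")
            return tuple(int(h[i:i+2], 16) for i in (0, 2, 4))

        def nearest_color(target_hex):
            """Return (hex, name) of the stored color with the smallest
            sum of squared RGB differences to the target."""
            tr, tg, tb = hex_to_rgb(target_hex)
            def dist2(h):
                r, g, b = hex_to_rgb(h)
                return (r - tr)**2 + (g - tg)**2 + (b - tb)**2
            best = min(COLORS, key=dist2)
            return best, COLORS[best]

        print(nearest_color("#5D8AA7"))  # -> ('#5D8AA8', 'Air Force Blue')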

    Read the article

  • How to find 2D grid cells swept by a moving circle?

    - by Nevermind
    I'm making a game based on a 2D grid, with some cells passable and some not. Dynamic objects can move continuously, independent of the grid, but need to collide with impassable cells. I wrote an algorithm to trace a ray against the grid, which gives me all the cells that ray intersects. However, actual objects are not point-sized; I'm currently representing them as circles, and I can't figure out an effective algorithm to trace a moving circle. Here's a picture of what I need: the numbers show in what order the circle collides with grid cells. Does anybody know an algorithm to find these collisions? Preferably in C#. Update: the circle can be bigger than a single grid cell.
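
    The swept shape is a capsule: the segment between the start and end centres, inflated by the radius. A cell is hit exactly when its distance to that segment is at most r, which also handles circles larger than a cell. A brute-force sketch over the inflated bounding box (names and the ternary-search trick are illustrative, not from the thread); sorting the hits by the minimizing parameter t approximates the collision order in the picture:

        import math

        def dist_point_aabb(px, py, xmin, ymin, xmax, ymax):
            dx = max(xmin - px, 0.0, px - xmax)
            dy = max(ymin - py, 0.0, py - ymax)
            return math.hypot(dx, dy)

        def swept_circle_cells(ax, ay, bx, by, r, cell):
            """Cells (ix, iy) of size `cell` overlapped by a circle of
            radius r moving from (ax, ay) to (bx, by)."""
            hits = []
            ix0 = int(math.floor((min(ax, bx) - r) / cell))
            ix1 = int(math.floor((max(ax, bx) + r) / cell))
            iy0 = int(math.floor((min(ay, by) - r) / cell))
            iy1 = int(math.floor((max(ay, by) + r) / cell))
            for ix in range(ix0, ix1 + 1):
                for iy in range(iy0, iy1 + 1):
                    box = (ix * cell, iy * cell, (ix + 1) * cell, (iy + 1) * cell)
                    # distance from the segment point p(t) to the box is convex
                    # in t, so a ternary search finds the true minimum
                    lo, hi = 0.0, 1.0
                    for _ in range(40):
                        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
                        d1 = dist_point_aabb(ax + m1*(bx-ax), ay + m1*(by-ay), *box)
                        d2 = dist_point_aabb(ax + m2*(bx-ax), ay + m2*(by-ay), *box)
                        if d1 < d2: hi = m2
                        else: lo = m1
                    t = (lo + hi) / 2
                    if dist_point_aabb(ax + t*(bx-ax), ay + t*(by-ay), *box) <= r:
                        hits.append((ix, iy))
            return hits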

    Read the article

  • About the master theorem

    - by matin1234
    Hi. At this link, http://www.cs.mcgill.ca/~cs251/OldCourses/1997/topic5/, it is written that for T(n) <= 2n + T(n/3) + T(n/3), T(n) is not O(n). But with the master theorem we can use case 3 and say that T(n) is Theta(n). How can we prove that T(n) is not O(n)? Please help me! Thanks.
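
    Taking the recurrence exactly as written, case 3 does apply and gives Theta(n); the "not O(n)" claim on the linked page would make sense if the intended recurrence were the classic unbalanced split T(n) <= 2n + T(n/3) + T(2n/3) (an assumption -- the page may differ). A sketch of both:

        % As written: a = 2, b = 3, f(n) = 2n.
        T(n) = 2\,T(n/3) + 2n, \qquad n^{\log_b a} = n^{\log_3 2} \approx n^{0.63}.
        % f(n) = 2n is polynomially larger than n^{0.63} and satisfies the
        % regularity condition a\,f(n/b) = \tfrac{4n}{3} = \tfrac{2}{3}\,f(n),
        % so master theorem case 3 gives
        T(n) = \Theta(n).
        % Variant with subproblem sizes summing to n: every level of the
        % recursion tree costs about 2n and there are \Theta(\log n) levels, so
        T(n) \le 2n + T(n/3) + T(2n/3) \;\Rightarrow\; T(n) = \Theta(n \log n) \neq O(n).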

    Read the article

  • What is the optimum way to select the most dissimilar individuals from a population?

    - by Aaron D
    I have tried to use k-means clustering to select the most diverse markers in my population. For example, if we want to select 100 lines, I cluster the whole population into 100 clusters and then select the marker closest to the centroid of each cluster. The problem with my solution is that it takes too much time (probably my function needs optimization), especially when the number of markers exceeds 100000. So, I would appreciate it very much if anyone can show me a new way to select markers that maximize diversity in my population and/or help me optimize my function to make it work faster. Thank you.

        # example:
        library(BLR)
        data(wheat)
        dim(X)
        mdf <- mostdiff(t(X), 100, 1, nstart=1000)

    Here is the mostdiff function that I used:

        mostdiff <- function(markers, nClust, nMrkPerClust, nstart=1000) {
            transposedMarkers <- as.array(markers)
            mrkClust <- kmeans(transposedMarkers, nClust, nstart=nstart)
            save(mrkClust, file="markerCluster.Rdata")

            # within clusters, pick the markers that are closest to the cluster centroid
            # turn the vector of which markers belong to which clusters into a list nClust long
            # each element of the list is a vector of the markers in that cluster
            clustersToList <- function(nClust, clusters) {
                vecOfCluster <- function(whichClust, clusters) {
                    return(which(whichClust == clusters))
                }
                return(apply(as.array(1:nClust), 1, vecOfCluster, clusters))
            }

            pickCloseToCenter <- function(vecOfCluster, whichClust, transposedMarkers, centers, pickHowMany) {
                clustSize <- length(vecOfCluster)
                # if there are fewer than three markers, the center is equally distant from all so don't bother
                if (clustSize < 3) return(vecOfCluster[1:min(pickHowMany, clustSize)])

                # figure out the distance (squared) between each marker in the cluster and the cluster center
                distToCenter <- function(marker, center) {
                    diff <- center - marker
                    return(sum(diff*diff))
                }
                dists <- apply(transposedMarkers[vecOfCluster,], 1, distToCenter, center=centers[whichClust,])
                return(vecOfCluster[order(dists)[1:min(pickHowMany, clustSize)]])
            }
            # NB: as posted, the two helpers above are defined but never invoked or returned
        }
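
    One much cheaper alternative to k-means (a sketch in Python/NumPy rather than R, since the technique is what matters): greedy max-min selection, also known as farthest-point sampling. It takes O(n * k) distance evaluations instead of repeated full clustering, and directly maximizes the spread of the chosen markers.

        import numpy as np

        def max_min_select(X, k, seed=0):
            """Greedy 'farthest point' selection: start from one row, then
            repeatedly add the row whose distance to the chosen set is
            largest. Scales to ~100k rows."""
            rng = np.random.default_rng(seed)
            chosen = [int(rng.integers(X.shape[0]))]
            d = np.linalg.norm(X - X[chosen[0]], axis=1)  # dist to nearest chosen
            for _ in range(k - 1):
                nxt = int(np.argmax(d))
                chosen.append(nxt)
                d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
            return chosen

        # markers = np.loadtxt(...)        # rows = markers, as in t(X)
        # picks = max_min_select(markers, 100)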

    Read the article

  • Time complexity of a powerset generating function

    - by Lirik
    I'm trying to figure out the time complexity of a function that I wrote (it generates a power set for a given string):

        public static HashSet<string> GeneratePowerSet(string input)
        {
            HashSet<string> powerSet = new HashSet<string>();
            if (string.IsNullOrEmpty(input))
                return powerSet;

            int powSetSize = (int)Math.Pow(2.0, (double)input.Length);

            // Start at 1 to skip the empty string case
            for (int i = 1; i < powSetSize; i++)
            {
                string str = Convert.ToString(i, 2);
                string pset = str;
                for (int k = str.Length; k < input.Length; k++)
                {
                    pset = "0" + pset;
                }

                string set = string.Empty;
                for (int j = 0; j < pset.Length; j++)
                {
                    if (pset[j] == '1')
                    {
                        set = string.Concat(set, input[j].ToString());
                    }
                }
                powerSet.Add(set);
            }
            return powerSet;
        }

    So my attempt is this: let the size of the input string be n. The outer for loop must iterate 2^n times (because the set size is 2^n), and the inner for loops must iterate 2*n times (at worst). 1. So Big-O would be O((2^n)*n) (since we drop the constant 2)... is that correct? And n*(2^n) is worse than n^2: if n = 4 then (4*(2^4)) = 64 vs (4^2) = 16; if n = 10 then (10*(2^10)) = 10240 vs (10^2) = 100. 2. Is there a faster way to generate a power set, or is this about optimal?
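
    The analysis is right: 2^n - 1 masks times O(n) work per mask is O(n * 2^n), and since the output itself contains Theta(n * 2^n) characters in total, no generator can do asymptotically better. A sketch of the same idea in Python, reading each subset straight off the bits of the counter instead of building a padded binary string:

        def power_set(s):
            """All subsets of the characters of s, excluding the empty one
            (mirroring the loop starting at 1). O(n * 2^n) overall."""
            n = len(s)
            out = set()
            for mask in range(1, 1 << n):
                out.add("".join(s[j] for j in range(n) if mask & (1 << j)))
            return out

        print(sorted(power_set("abc")))
        # ['a', 'ab', 'abc', 'ac', 'b', 'bc', 'c']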

    Read the article

  • How to prove worst-case number of inversions in a heap is O(nlogn)?

    - by Jacques
    I am busy preparing for exams, just doing some old exam papers. The question below is the only one I can't seem to do (I don't really know where to start). Any help would be appreciated greatly. Use the O(n log n) comparison sort bound, the Theta(n) bound for bottom-up heap construction, and the order complexity of insertion sort to show that the worst-case number of inversions in a heap is O(n log n).
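
    The listed ingredients fit a contradiction argument, which naturally proves a lower bound -- presumably Omega(n log n) is what the exam intends, since a heap can in fact hold far more inversions (put small keys in all internal nodes and the n/2 leaves in descending order: a valid heap with Theta(n^2) inversions). A sketch:

        % Suppose, for contradiction, that every n-element heap contained
        % o(n \log n) inversions. Then the comparison sort
        %   (1) build a heap bottom-up          -- \Theta(n)
        %   (2) insertion-sort the heap array   -- O(n + I), I = \#inversions
        % would run in
        \Theta(n) + O(n + I) = o(n \log n),
        % contradicting the \Omega(n \log n) lower bound for comparison
        % sorting. Hence some heap has \Omega(n \log n) inversions.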

    Read the article

  • C++ IDEs and tools with clang integration

    - by lurscher
    Recently I read this blog post about Google integrating the clang parser into their code analysis tools. This is something in which C++ is at least a decade behind other languages like Java, but now that llvm-clang is almost C++ ISO-ready, I think it's possible for C++ code analysis tools to begin using the C++ parser effectively, since it has been designed from the ground up precisely for this. So I'm wondering: are there existing open source or known commercial projects taking this path, integrating with clang to provide higher-level analysis tools?

    Read the article

  • The updated Survey pattern for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    - by Marco Russo (SQLBI)
    One of the first models I created for the many-to-many revolution white paper was the Survey one. At the time, it was in Analysis Services Multidimensional, and then we implemented it in Analysis Services Tabular and in Power Pivot, using the DAX language. I recently reviewed the data model and published it in the Survey article on the DAX Patterns site. The Survey pattern is the foundation for others, such as Basket Analysis, and it is widely used in many different business scenarios. I was particularly happy to learn it has been used to perform data analysis for cancer research! In this article I did some maintenance on the DAX formulas, checking that proper error handling is part of the formulas, and highlighting some differences in slicer behavior between Excel 2010 and Excel 2013, which could be particularly important for the Survey scenario. As usual, we provide sample workbooks for both Excel 2010 and Excel 2013, and we use DAX Formatter to make the DAX code easier to read. Any feedback will be appreciated!

    Read the article

  • Bracketing algorithm when root finding. Single root in "quadratic" function

    - by Ander Biguri
    I am trying to implement a root finding algorithm. I am using the hybrid Newton-Raphson algorithm found in Numerical Recipes, which works pretty nicely. But I have a problem bracketing the root. While implementing the root finding algorithm I realised that in several cases my functions have 1 real root and all the others imaginary (several of them, usually 6 or 9). The only root I am interested in is the real one, so the problem is not there. The thing is that the function approaches the root like a cubic function, just touching the y=0 axis... The Newton-Raphson method needs brackets of different sign, and all the bracketing methods I found don't work for this specific case. What can I do? It is pretty important to find that root in my program...

    EDIT: More problems: sometimes, due to really small numerical errors, say a variation of 1e-6 in some value, the "cubic" function does NOT have that real root; it is just imaginary with a negligible imaginary part... (checked with MATLAB)

    EDIT 2: Much more information about the problem. OK, I need a root finding algorithm. Info I have: The root I need to find is between [0-1]; if there are more roots outside that range I am not interested in them. The root is real; there may be imaginary roots, but I don't want them. Probably all the rest of the roots will be imaginary. The root may be a double root at that point, but I think that actually doesn't matter in numerical analysis problems. I need to use the root finding algorithm several times during the overall calculations, but the function will always be a polynomial. In one of the particular cases, my polynomial will be similar to a quadratic function that just touches y=0. Example of a real case: the coefficients may not be 100% precise, and that really slight imprecision may make the function not touch the y=0 axis. I cannot solve only for this specific case, because in other cases the polynomial may be pretty normal and not do any "strange" thing. The method I am actually using is the hybrid Newton-Raphson, where if the derivative is really small it does a bisection instead of Newton-Raphson (found in Numerical Recipes). MATLAB's answer to the function in the image:

        roots:
        0.853553390593276 + 0.353553390593278i
        0.853553390593276 - 0.353553390593278i
        0.146446609406726 + 0.353553390593273i
        0.146446609406726 - 0.353553390593273i
        0.499999999999996 + 0.000000040142134i
        0.499999999999996 - 0.000000040142134i

    The function is a real example I prepared where I know that the answer I want is 0.5. Note: I still haven't completely checked some of the answers you people have given me (thank you!); I am just trying to give all the information I already have to complete the question.
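
    Since the function is always a polynomial, one option is to skip bracketing entirely: take all roots at once from the companion matrix and keep the ones whose imaginary part is below a tolerance, which is exactly what the near-double root at 0.5 needs. A sketch (numpy.roots does the companion-matrix eigenvalue step):

        import numpy as np

        def real_roots_in_unit_interval(coeffs, imag_tol=1e-6):
            """coeffs: polynomial coefficients, highest power first (numpy
            convention). Returns roots in [0, 1] whose imaginary part is
            negligible -- tolerating the tiny imaginary residue that a
            near-double root produces."""
            roots = np.roots(coeffs)  # eigenvalues of the companion matrix
            real = roots[np.abs(roots.imag) < imag_tol].real
            slack = 1e-9              # allow roots sitting exactly on 0 or 1
            return sorted(float(r) for r in real if -slack <= r <= 1 + slack)

        # The asker's example: roots cluster near 0.5 with ~4e-8i imaginary
        # parts, so a 1e-6 tolerance recovers the intended root 0.5.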

    Read the article

  • Python: (sampling with replacement): efficient algorithm to extract the set of DISSIMILAR N-tuples from a set

    - by Homunculus Reticulli
    I have a set of items from which I want to select DISSIMILAR tuples (more on the definition of dissimilar tuples later). The set could potentially contain several thousand items, although typically it would contain only a few hundred. I am trying to write a generic algorithm that will allow me to select N items from the original set to form an N-tuple. The new set of selected N-tuples should be DISSIMILAR. An N-tuple A is said to be DISSIMILAR to another N-tuple B if and only if: every pair (2-tuple) that occurs in A does NOT appear in B. Note: for this algorithm, a 2-tuple (pair) is considered SIMILAR/IDENTICAL if it contains the same elements, i.e. (x,y) is considered the same as (y,x). This is a (possible variation on the) classic Urn Problem. A trivial (pseudocode) implementation of this algorithm would be something along the lines of:

        def fetch_unique_tuples(original_set, tuple_size):
            while True:
                # randomly select [tuple_size] items from the set to create first set
                # create a key or hash from the N elements and store in a set
                # store selected N-tuple in a container
                if end_condition_met:
                    break

    I don't think this is the most efficient way of doing this -- and though I am no algorithm theorist, I suspect that the time for this algorithm to run is NOT O(n); in fact, it's probably more likely to be O(n!). I am wondering if there is a more efficient way of implementing such an algorithm, preferably reducing the time to O(n). Actually, as Mark Byers pointed out, there is a second variable m, which is the size of the number of elements being selected. This (i.e. m) will typically be between 2 and 5. Regarding examples, here is a typical (albeit shortened) one:

        original_list = ['CAGG', 'CTTC', 'ACCT', 'TGCA', 'CCTG', 'CAAA', 'TGCC', 'ACTT', 'TAAT', 'CTTG',
                         'CGGC', 'GGCC', 'TCCT', 'ATCC', 'ACAG', 'TGAA', 'TTTG', 'ACAA', 'TGTC', 'TGGA',
                         'CTGC', 'GCTC', 'AGGA', 'TGCT', 'GCGC', 'GCGG', 'AAAG', 'GCTG', 'GCCG', 'ACCA',
                         'CTCC', 'CACG', 'CATA', 'GGGA', 'CGAG', 'CCCC', 'GGTG', 'AAGT', 'CCAC', 'AACA',
                         'AATA', 'CGAC', 'GGAA', 'TACC', 'AGTT', 'GTGG', 'CGCA', 'GGGG', 'GAGA', 'AGCC',
                         'ACCG', 'CCAT', 'AGAC', 'GGGT', 'CAGC', 'GATG', 'TTCG']

        # Selecting 3-tuples from the original list should produce a list (or set) similar to:
        [('CAGG', 'CTTC', 'ACCT'),
         ('CAGG', 'TGCA', 'CCTG'),
         ('CAGG', 'CAAA', 'TGCC'),
         ('CAGG', 'ACTT', 'ACCT'),
         ('CAGG', 'CTTG', 'CGGC'),
         ....
         ('CTTC', 'TGCA', 'CAAA')]

    [[Edit]] Actually, in constructing the example output, I realized that the earlier definition I gave for UNIQUENESS was incorrect. I have updated my definition and have introduced a new metric of DISSIMILARITY instead, as a result of this finding.
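
    A sketch of a pair-bookkeeping approach (illustrative names, not a known library routine): keep a set of already-used unordered pairs, and accept a random k-tuple only if none of its pairs has been seen. Each candidate costs O(k^2) set lookups, so with k between 2 and 5 the accepted work is about O(wanted * k^2) rather than anything factorial:

        from itertools import combinations
        import random

        def fetch_dissimilar_tuples(items, k, n_wanted, seed=0, max_tries=100_000):
            """Greedy sampling: keep a k-tuple only if none of its unordered
            pairs appeared in a previously accepted tuple. frozenset makes
            (x, y) and (y, x) identical, matching the SIMILAR pair rule."""
            rng = random.Random(seed)
            used_pairs = set()
            accepted = []
            for _ in range(max_tries):
                if len(accepted) >= n_wanted:
                    break
                cand = rng.sample(items, k)
                pairs = [frozenset(p) for p in combinations(cand, 2)]
                if any(p in used_pairs for p in pairs):
                    continue            # shares a pair with an earlier tuple
                used_pairs.update(pairs)
                accepted.append(tuple(cand))
            return accepted

        picks = fetch_dissimilar_tuples(['CAGG', 'CTTC', 'ACCT', 'TGCA',
                                         'CCTG', 'CAAA', 'TGCC', 'ACTT'], 3, 5)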

    Read the article

  • SQL Server 2008 R2 upgrade fails on upgrade rule check

    - by Tim
    I'm trying to upgrade an evaluation instance of SQL Server 2008 to a fully licensed instance of SQL Server 2008 R2. I made it most of the way through the installer, but I'm getting stopped at the Upgrade Rules page -- the SQL Server Analysis Services Upgrade Service Functional Check is failing. The specific error I get:

        Rule "SQL Server Analysis Services Upgrade Service Functional Check" failed.
        The current instance of the SQL Server Analysis Services service cannot be
        upgraded because the Analysis Services service is disabled or not online.
        Please start the service and then run the upgrade rules check again.

    Simple enough -- just need to start the service. Here's where it gets troublesome. When I open Services and go to start the SQL Server Analysis Services (MSSQLSERVER) service, it gives me the following message:

        The SQL Server Analysis Services (MSSQLSERVER) service on Local Computer
        started and then stopped. Some services stop automatically if they are not
        in use by other services or programs.

    Trying from the command line as Administrator yields:

        PS C:\Windows\System32> net start MSSQLServerOLAPService
        The SQL Server Analysis Services (MSSQLSERVER) service is starting...
        The SQL Server Analysis Services (MSSQLSERVER) service could not be started.
        The service did not report an error.
        More help is available by typing NET HELPMSG 3534.

    I've tried changing the logon setting of this service to Administrator, a user with admin privileges, and both the Local System and Network Service accounts -- nothing works. In addition, when I look at the service through the SQL Server Configuration Manager (also run as Administrator), attempting to change the logon setting for the service results in the message:

        The server threw an exception. [0x80010105]

    I have no need for Analysis Services itself -- all I need is for this one service to be running long enough to do the R2 upgrade; then it can shut down again. Any thoughts on how to get the Analysis Services service running? Update: checking the event log, I found an error logged to the Application log from MSSQLServerOLAPService. It has event ID 0, task category (289), and says:

        The service cannot be started: XML parsing failed at line 1, column 4:
        Unrecognized input signature.

    Read the article

  • Algorithm for Project Euler problem no. 18

    - by Valentino Ru
    Problem 18 from Project Euler's site is as follows: By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.

           3
          7 4
         2 4 6
        8 5 9 3

    That is, 3 + 7 + 4 + 9 = 23. Find the maximum total from top to bottom of the triangle below:

        75
        95 64
        17 47 82
        18 35 87 10
        20 04 82 47 65
        19 01 23 75 03 34
        88 02 77 73 07 63 67
        99 65 04 28 06 16 70 92
        41 41 26 56 83 40 80 70 33
        41 48 72 33 47 32 37 16 94 29
        53 71 44 65 25 43 91 52 97 51 14
        70 11 33 28 77 73 17 78 39 68 17 57
        91 71 52 38 17 14 91 43 58 50 27 29 48
        63 66 04 68 89 53 67 30 73 16 69 87 40 31
        04 62 98 27 23 09 70 98 73 93 38 53 60 04 23

    NOTE: As there are only 16384 routes, it is possible to solve this problem by trying every route. However, Problem 67 is the same challenge with a triangle containing one hundred rows; it cannot be solved by brute force, and requires a clever method! ;o) The formulation of this problem does not make clear whether the "traversor" is greedy, meaning it always chooses the child with the higher value, or whether the maximum over every single walkthrough is asked for. The NOTE says that it is possible to solve this problem by trying every route. This means to me that it is also possible without! Which leads to my actual question: assuming the greedy walk is not the maximum, is there any algorithm that finds the maximum walkthrough value without trying every route and that doesn't act like the greedy algorithm? I implemented an algorithm in Java, putting the values first into a node structure and then applying the greedy algorithm. The result, however, is considered wrong by Project Euler.

        sum = 0;

        void findWay(Node node) {
            sum += node.value;
            if (node.nodeLeft != null && node.nodeRight != null) {
                if (node.nodeLeft.value > node.nodeRight.value) {
                    findWay(node.nodeLeft);
                } else {
                    findWay(node.nodeRight);
                }
            }
        }
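
    The standard answer is dynamic programming: collapse the triangle from the bottom up, so each cell ends up holding the best total achievable from it downward. That is exact (unlike the greedy walk, which commits to the locally larger child and can miss a better path lower down) and visits each cell once, so it also handles Problem 67's hundred rows. A sketch:

        def max_path_total(triangle):
            """Bottom-up DP: starting from the second-to-last row, replace
            each value with itself plus the larger of its two children.
            O(cells) time -- no route enumeration."""
            rows = [row[:] for row in triangle]
            for i in range(len(rows) - 2, -1, -1):
                for j in range(len(rows[i])):
                    rows[i][j] += max(rows[i + 1][j], rows[i + 1][j + 1])
            return rows[0][0]

        small = [[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]
        print(max_path_total(small))  # 23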

    Read the article

  • Pathfinding in Warcraft 1

    - by Valmond
    Dijkstra and A* are nice and popular, but what kind of algorithm was used in Warcraft 1 for pathfinding? I remember that the enemy could get trapped in bowl-like caverns, which means there were (most probably) no full-path calculations from start to end. If I recall correctly, the algorithm could be something like this: A) Move towards the enemy until success or until hitting a wall. B) If blocked by a wall, follow the wall until you can move towards the enemy without being blocked, then go back to A). But I'd like to know, if someone knows :-)
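
    For what it's worth, the described behaviour matches the "Bug" family of algorithms from robotics (head for the target, slide along obstacles when blocked). A toy sketch of that guess -- explicitly not Warcraft 1's actual code:

        def step_toward(pos, target, blocked):
            """One movement step: go straight at the target if possible,
            otherwise try sliding along the wall on either axis.
            `blocked` is a caller-supplied predicate over grid cells."""
            x, y = pos
            tx, ty = target
            dx = (tx > x) - (tx < x)   # sign of the x difference
            dy = (ty > y) - (ty < y)   # sign of the y difference
            if not blocked((x + dx, y + dy)):
                return (x + dx, y + dy)            # move directly toward target
            for alt in ((x + dx, y), (x, y + dy)): # slide along the wall
                if alt != pos and not blocked(alt):
                    return alt
            return pos  # stuck -- consistent with units trapped in "bowls"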

    Read the article

  • Labeling algorithm for points

    - by Qwertie
    I need an algorithm to place horizontal text labels for multiple series of points on the screen (basically I need to show timestamps and other information for a history of moving objects on a map; in general there are multiple data points per object). The text labels should appear close to their points--above, below, or on the right side--but should not overlap other points or text labels. Does anyone know an algorithm/heuristic for this?
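
    Point-feature label placement is NP-hard in general, but a greedy pass over candidate positions (right, above, below) with overlap rejection is a common practical heuristic. A sketch with assumed pixel offsets and axis-aligned label boxes:

        def place_labels(points, label_sizes, offsets=((8, 0), (0, -14), (0, 14))):
            """For each point, try the candidate offsets in order and keep
            the first label box that overlaps no point marker and no
            previously placed box. Boxes are (x, y, w, h)."""
            def overlaps(a, b):
                ax, ay, aw, ah = a; bx, by, bw, bh = b
                return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

            taken = [(px - 2, py - 2, 4, 4) for px, py in points]  # point markers
            placed = {}
            for (px, py), (w, h) in zip(points, label_sizes):
                for ox, oy in offsets:
                    box = (px + ox, py + oy - h / 2, w, h)
                    if not any(overlaps(box, t) for t in taken):
                        taken.append(box)
                        placed[(px, py)] = box
                        break   # placed; unplaceable labels are simply skipped
            return placed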

    Read the article

  • Algorithm to distribute objects in a box (like InDesign, Illustrator, Draw!)

    - by Rafael Almeida
    I have a set of rectangles with their corresponding positions and a big rectangle which serves as the 'bounding box' for them. I would like to know an algorithm that would 'distribute the free space' evenly among the rectangles. Some of you may be familiar with the Distribute Spacing option in Adobe InDesign and similar layout-oriented apps; that is what I'm looking for. I did try looking it up, but I'm not familiar with 'graphical' algorithm terminology, and trying terms relating to 'distribute' mainly yields results about distributed computing. So even the names of such algorithms, or better terms to look up, would be a big help. Finally, the algorithm doesn't need to be exactly the same as InDesign's: pretty much any algorithm that 'distributes' objects inside a region will work, and since I'm striving mainly for visual appeal, the more suggestions the better. =D
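
    The one-dimensional core of Distribute Spacing, applied per axis: sum the rectangle widths, divide the bounding box's remaining free space into equal gaps, and lay the rectangles out in their sorted order. A sketch under those assumptions (the vertical case is identical with heights):

        def distribute_horizontally(rects, box_left, box_right):
            """rects: list of [x, width]; repositions them so the gaps
            between neighbours are all equal inside the bounding box."""
            rects.sort(key=lambda r: r[0])        # keep left-to-right order
            total_width = sum(w for _, w in rects)
            free = (box_right - box_left) - total_width
            gap = free / (len(rects) - 1) if len(rects) > 1 else free / 2
            x = box_left
            for r in rects:
                r[0] = x
                x += r[1] + gap

        boxes = [[0, 10], [12, 30], [70, 5]]
        distribute_horizontally(boxes, 0, 100)  # gaps become (100-45)/2 = 27.5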

    Read the article

  • Cryptanalysis of keylogger logs and config file. Possible?

    - by lost.
    Is there any way encryption on an unidentified file can be broken (the files in question: a config file and log files from the Ardamax keylogger)? These files date back all the way to 2008. I searched everywhere; nothing on Slashdot, nothing on Google. Ardamax Keyviewer? Should I just write to Ardamax? I am at a loss as to what to do. I feel compromised. Has anyone managed to decrypt files with cryptanalysis? More information: there are log files in the folder and a configuration file, "akv.cfg". Is it possible to decrypt the files and maybe get the attacker's email address used to receive the keylogger logs? I've checked ardamax.com. They have a built-in log viewer, but it's unavailable for download. If Super User isn't the proper place to ask, do you know where I might get help?

    Read the article

  • Installing SQL Server 2005 SP2 - Getting error on Analysis Services component

    - by Greg_the_Ant
    At first many of the components didn't install, and I followed this workaround (fixing user/SID mappings in the registry). After that, everything installed successfully except for Analysis Services; I am getting the exact same error message as before on Analysis Services. (Are there perhaps other users installed by SQL Server that I'm not aware of?) Do you guys have any ideas? All of my searches seem to just point to the workaround above, which I already did. Error message from the log:

        Product                    : Analysis Services (MSSQLSERVER)
        Product Version (Previous) : 1399
        Product Version (Final)    :
        Status                     : Failure
        Log File                   : C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix\OLAP9_Hotfix_KB921896_sqlrun_as.msp.log
        Error Number               : 29528
        Error Description          : MSP Error: 29528  The setup has encountered an unexpected error while Setting Internal Properties. The error is: Fatal error during installation.

    Read the article

  • Writing an algorithm on a 2D data set in plain English

    - by Alexandre P. Levasseur
    I have started an introductory Java class, and the material is absolutely horrendous. I have to get excellent grades to be accepted into the master's degree, hence my very beginner question: In my assignment I have to write algorithms (no pseudo-code yet) to solve a board game (Sudoku). Essentially, the notes say that an algorithm is a specification of the input(s), the output(s), and the treatments applied to the input to get the output. My question is about the wording of algorithms, because I could probably code this, but I can't seem to put it on paper in a coherent way. The game has a 9x9 board, and one of the algorithms to write has to find the solution by looking at 3 squares (either horizontal or vertical) and seeing if one of the three sub-squares matches the number you are looking for. If none match, then the number you are looking to place is in one of the other 2 sets of 3 sub-squares (see image to get a better idea). I really can't get my head around how to formulate the solution into the terms described above; or maybe it's just too simple. Here's what I was thinking: Input: a 2-dimensional set of data of size 9 by 9 to be solved, and a number to search for. Output: a 2-dimensional set of data of size 9 by 9, either solved or partially solved. Treatment: scan each set of 3x9 and 9x3 squares. For each line (or column) of a 3x3 square, check if the number matches the line (or column). If it does, move to the next line (or column). If not, proceed to the next 3x3 square in the same line (or column). Rinse and repeat. Does that make sense as an algorithm written in plain English? I'm not looking for an answer to the algorithm per se, but rather for help with the formulation of algorithms in plain English.

    Read the article

  • Algorithm to infer tag hierarchy

    - by Tom
    I'm looking for an algorithm to infer a hierarchy from a set of tagged items. E.g. if the following items have the tags:

        1  a
        2  a,b
        3  a,c
        4  a,c,e
        5  a,b
        6  a,c
        7  d
        8  d,f

    Then I can construct an undirected graph (or graphs) by tallying the node weights and edge weights:

        node weights        edge weights
        a  6                a-b  2
        b  2                a-c  3
        c  3                c-e  1
        d  2                a-e  1  <-- this edge is parallel to a-c plus c-e and not wanted
        e  1                d-f  1
        f  1

    The first problem is how to drop any redundant edges to get to the simplified graph. Note that it's only appropriate to remove the redundant a-e edge in this case because something is tagged as a-c-e; if that wasn't the case and the tag was a-e, that edge would have to remain. I suspect that means the removal of edges can only happen during the construction of the graph, not after everything has been tallied up. What I'd then like to do is identify the direction of the edges to create a directed graph (or graphs) and pick out root nodes to hopefully create a tree (or trees):

        trees
          a        d
         / \       |
        b   c      f
            |
            e

    It seems like it could be a string algorithm -- longest common subsequences/prefixes -- or a tree/graph algorithm, but I am a little stuck since I don't know the correct terminology to search for it.
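
    A heuristic sketch of the whole pipeline (illustrative, not a named algorithm): tally node and edge counts, drop the "outer" edge of any triple that co-occurs in a single item's tag set (the during-construction removal suspected above), then orient each surviving edge from the more frequent tag to the rarer one and keep the strongest candidate as the parent:

        from collections import Counter
        from itertools import combinations

        def infer_hierarchy(tagged_items):
            """tagged_items: iterable of tag-sets. Returns child -> parent."""
            tag_count = Counter()
            edge_count = Counter()
            for tags in tagged_items:
                tag_count.update(tags)
                edge_count.update(frozenset(p) for p in combinations(sorted(tags), 2))

            # any triple {a, c, e} inside one tag-set (ordered by frequency)
            # makes the outer edge a-e redundant: it is implied by a-c plus c-e
            redundant = set()
            for tags in tagged_items:
                ordered = sorted(tags, key=tag_count.get, reverse=True)
                for a, c, e in combinations(ordered, 3):
                    redundant.add(frozenset((a, e)))

            parent = {}
            for edge in edge_count:
                if edge in redundant:
                    continue
                a, b = sorted(edge, key=tag_count.get, reverse=True)
                if b not in parent or tag_count[parent[b]] < tag_count[a]:
                    parent[b] = a   # rarer tag hangs under the more common tag
            return parent

        items = [{'a'}, {'a','b'}, {'a','c'}, {'a','c','e'}, {'a','b'},
                 {'a','c'}, {'d'}, {'d','f'}]
        print(infer_hierarchy(items))  # {'b': 'a', 'c': 'a', 'e': 'c', 'f': 'd'}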

    Read the article
