Search Results

Search found 7490 results on 300 pages for 'algorithm analysis'.

  • Finding the largest subtree in a BST

    - by rakeshr
    Given a binary tree, I want to find the largest subtree in it which is a BST. Naive approach: I have a naive approach in mind where I visit every node of the tree and pass it to an isBST function, keeping track of the number of nodes in each subtree that is a BST. Is there a better approach than this?
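
    One common improvement over calling isBST from every node (worst-case O(n^2)) is a single postorder pass in which each subtree reports its minimum, maximum, and size, so each node is visited exactly once. A minimal sketch, assuming a hypothetical Node class and node values that avoid the Integer.MIN_VALUE/MAX_VALUE sentinels:

        // Hypothetical node type; not from the question.
        class Node {
            int val;
            Node left, right;
            Node(int val) { this.val = val; }
        }

        class LargestBstFinder {
            private int best = 0; // size of the largest BST subtree seen so far

            public int largestBst(Node root) {
                best = 0;
                visit(root);
                return best;
            }

            // Returns {min, max, size}; size == -1 marks a non-BST subtree.
            private int[] visit(Node n) {
                if (n == null) return new int[] { Integer.MAX_VALUE, Integer.MIN_VALUE, 0 };
                int[] l = visit(n.left);
                int[] r = visit(n.right);
                if (l[2] >= 0 && r[2] >= 0 && l[1] < n.val && n.val < r[0]) {
                    int size = l[2] + r[2] + 1;
                    best = Math.max(best, size);
                    return new int[] { Math.min(l[0], n.val), Math.max(r[1], n.val), size };
                }
                return new int[] { 0, 0, -1 }; // not a BST; min/max unused
            }
        }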

  • Clamping a vector to a minimum and maximum?

    - by user146780
    I came across this: t = Clamp(t/d, 0, 1), but I'm not sure how to perform this operation on a vector. What are the steps to clamp a vector if one was writing one's own vector implementation? Thanks. For context, here is the example the line comes from:

        pc = # the point you are coloring now
        p0 = # start point
        p1 = # end point
        v = p1 - p0
        d = Length(v)
        v = Normalize(v)  # or Scale(v, 1/d)
        v0 = pc - p0
        t = Dot(v0, v)
        t = Clamp(t/d, 0, 1)
        color = (start_color * t) + (end_color * (1 - t))
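
    Note that in the snippet above, Clamp acts on the scalar t, so that line only needs a scalar clamp; for a vector type the operation is usually applied per component. A minimal sketch, assuming a hypothetical three-component vector class:

        // Hypothetical vector class for illustration.
        final class Vec3 {
            final double x, y, z;
            Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }

            static double clamp(double v, double min, double max) {
                return Math.max(min, Math.min(max, v));
            }

            // Clamp each component of v to the range [min, max].
            static Vec3 clamp(Vec3 v, double min, double max) {
                return new Vec3(clamp(v.x, min, max),
                                clamp(v.y, min, max),
                                clamp(v.z, min, max));
            }
        }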

  • Question about API functions

    - by davit-datuashvili
    I have a question: we have API functions in Java; can a user create his own function and add it to his Java IDE? For example, I am using NetBeans. Can I create my own function (let's say a binary function or something else) and add it to the NetBeans IDE? Thanks.

  • Aligning music notes using String matching algorithms or Dynamic Programming

    - by Dolphin
    Hi, I need to compare two sets of musical pieces: a playing (taken in MIDI format, with note details extracted and saved in a database table) against sheet music (taken in XML format). When evaluating the playing against the sheet music (i.e. note details: pitch, duration, rhythm), note alignment needs to be done to identify missed/extra/incorrect/swapped notes relative to the reference (sheet music) notes. I have roughly 1800-2500 notes in one piece (it can be even more with polyphonic music; right now I'm doing monophonic). So will I have to load all of these into an array? Will that cause memory overload or stack overflow? There are string matching algorithms like KMP and Boyer-Moore, but note alignment can also be done through dynamic programming. How can I use dynamic programming to approach this? What are the available algorithms? Is it about approximate string matching? Which approach is more productive: string matching algorithms like Boyer-Moore, or dynamic programming? How can I assess which is more effective? I'd greatly appreciate any insight or suggestions. Thanks in advance.
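
    The dynamic-programming technique usually meant here is sequence alignment, i.e. edit distance: KMP and Boyer-Moore find exact occurrences, while alignment tolerates missed, extra, and incorrect notes. An iterative O(n*m) table for 2,500 notes is a few million cells, fine in memory and with no recursion to overflow the stack. A minimal sketch, assuming each note has been reduced to an integer pitch (a simplification; real scoring would also weigh duration and rhythm):

        // Edit distance between a performance and a reference melody.
        public class NoteAlignment {
            public static int editDistance(int[] played, int[] reference) {
                int n = played.length, m = reference.length;
                int[][] d = new int[n + 1][m + 1];
                for (int i = 0; i <= n; i++) d[i][0] = i; // all extra notes
                for (int j = 0; j <= m; j++) d[0][j] = j; // all missed notes
                for (int i = 1; i <= n; i++) {
                    for (int j = 1; j <= m; j++) {
                        int subst = played[i - 1] == reference[j - 1] ? 0 : 1;
                        d[i][j] = Math.min(d[i - 1][j - 1] + subst, // match or incorrect note
                                  Math.min(d[i - 1][j] + 1,         // extra note
                                           d[i][j - 1] + 1));       // missed note
                    }
                }
                return d[n][m];
            }

            public static void main(String[] args) {
                int[] played    = { 60, 62, 64, 65 };
                int[] reference = { 60, 62, 65 };
                System.out.println(editDistance(played, reference)); // 1: one extra note
            }
        }

    Tracing back through the table (rather than just returning the corner value) recovers which notes were missed, extra, or incorrect.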

  • Way to store a large dictionary with low memory footprint + fast lookups (on Android)

    - by BobbyJim
    I'm developing an Android word game app that needs a large (~250,000 word) dictionary available. I need: reasonably fast lookups, e.g. constant time preferable; maybe 200 lookups a second on occasion to solve a word puzzle, and maybe 20 lookups within 0.2 seconds more often to check words the user just spelled. EDIT: Lookups are typically asking "Is [word] in the dictionary?". I'd like to support up to two wildcards in the word as well, but this is easy enough by just generating all the possible letters the wildcards could have been and checking the generated words (i.e. 26 * 26 lookups for a word with two wildcards). As it's a mobile app, using as little memory as possible and requiring only a small initial download for the dictionary data is the top priority. My first naive attempt used Java's HashMap class, which caused an out-of-memory exception. I've looked into the SQLite databases available on Android, but this seems like overkill. What's a good way to do what I need?
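
    The wildcard expansion described above works against any membership structure. A minimal sketch, assuming '?' marks a wildcard (the question doesn't fix a syntax):

        import java.util.Set;

        // Replace each '?' with every letter and probe the dictionary:
        // at most 26 * 26 membership tests for two wildcards.
        public class WildcardLookup {
            public static boolean matches(String pattern, Set<String> dict) {
                int i = pattern.indexOf('?');
                if (i < 0) return dict.contains(pattern);
                for (char c = 'a'; c <= 'z'; c++) {
                    String candidate = pattern.substring(0, i) + c + pattern.substring(i + 1);
                    if (matches(candidate, dict)) return true; // recurses for the next '?'
                }
                return false;
            }
        }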

  • Algorithm question: fuzzy matching of structured data

    - by user86432
    I have a fairly small corpus of structured records sitting in a database. Given a tiny fraction of the information contained in a single record, submitted via a web form (so structured in the same way as the table schema), (let us call it the test record) I need to quickly draw up a list of the records that are the most likely matches for the test record, as well as provide a confidence estimate of how closely the search terms match a record. The primary purpose of this search is to discover whether someone is attempting to input a record that is a duplicate of one in the corpus. There is a reasonable chance that the test record will be a dupe, and a reasonable chance it will not be.

    The records are about 12000 bytes wide and the total count of records is about 150,000. There are 110 columns in the table schema and 95% of searches will be on the top 5% most commonly searched columns. The data is stuff like names, addresses, telephone numbers, and other industry-specific numbers. In both the corpus and the test record it is entered by hand and is semistructured within an individual field.

    You might at first blush say "weight the columns by hand and match word tokens within them", but it's not so easy. I thought so too: if I get a telephone number, I thought that would indicate a perfect match. The problem is that there isn't a single field in the form whose token frequency does not vary by orders of magnitude. A telephone number might appear 100 times in the corpus or 1 time in the corpus. The same goes for any other field. This makes weighting at the field level impractical. I need a more fine-grained approach to get decent matching.

    My initial plan was to create a hash of hashes, the top level being the fieldname. Then I would select all of the information from the corpus for a given field, attempt to clean up the data contained in it, and tokenize the sanitized data, hashing the tokens at the second level, with the tokens as keys and frequency as value. I would use the frequency count as a weight: the higher the frequency of a token in the reference corpus, the less weight I attach to that token if it is found in the test record.

    My first question is for the statisticians in the room: how would I use the frequency as a weight? Is there a precise mathematical relationship between n, the number of records, f(t), the frequency with which a token t appeared in the corpus, the probability o that a record is an original and not a duplicate, and the probability p that the test record is really a record x given that the test and x contain the same t in the same field? How about the relationship for multiple token matches across multiple fields? Since I sincerely doubt that there is, is there anything that gets me close but is better than a completely arbitrary hack full of magic factors?

    Barring that, has anyone got a way to do this? I'm especially keen on suggestions that do not involve maintaining another table in the database, such as a token frequency lookup table :). This is my first post on StackOverflow, thanks in advance for any replies you may see fit to give.
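
    A minimal sketch of the hash-of-hashes weighting outlined above, using the standard IDF-style weight log(N / f(t)) as one defensible choice; whether a more principled formula exists is exactly the open question here:

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class TokenWeights {
            // field name -> (token -> frequency across the corpus)
            private final Map<String, Map<String, Integer>> freq = new HashMap<>();
            private int recordCount = 0;

            // Feed in one corpus record's sanitized tokens, grouped by field.
            public void index(Map<String, List<String>> tokensByField) {
                recordCount++;
                tokensByField.forEach((field, tokens) -> {
                    Map<String, Integer> byToken = freq.computeIfAbsent(field, k -> new HashMap<>());
                    for (String t : tokens) byToken.merge(t, 1, Integer::sum);
                });
            }

            // Rarer tokens score higher; unseen tokens get the maximum weight.
            public double weight(String field, String token) {
                int f = freq.getOrDefault(field, Map.of()).getOrDefault(token, 0);
                return Math.log((double) (recordCount + 1) / (f + 1));
            }
        }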

  • Question: bitonic sequence

    - by davit-datuashvili
    A sequence is bitonic if it monotonically increases and then monotonically decreases, or if it can be circularly shifted to monotonically increase and then monotonically decrease. For example, the sequences (1, 4, 6, 8, 3, -2), (9, 2, -4, -10, -5), and (1, 2, 3, 4) are bitonic, but (1, 3, 12, 4, 2, 10) is not bitonic. Please help me determine whether a given sequence is bitonic.
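
    One way to test this: walking the sequence circularly, a bitonic sequence changes direction (rising to falling or back) at most twice. A minimal sketch, assuming adjacent elements are distinct, as in the examples:

        public class Bitonic {
            public static boolean isBitonic(int[] a) {
                int n = a.length, changes = 0;
                for (int i = 0; i < n; i++) {
                    boolean rising     = a[i] < a[(i + 1) % n];
                    boolean nextRising = a[(i + 1) % n] < a[(i + 2) % n];
                    if (rising != nextRising) changes++; // direction flips here
                }
                return changes <= 2;
            }

            public static void main(String[] args) {
                System.out.println(isBitonic(new int[] { 1, 4, 6, 8, 3, -2 }));  // true
                System.out.println(isBitonic(new int[] { 1, 2, 3, 4 }));         // true
                System.out.println(isBitonic(new int[] { 1, 3, 12, 4, 2, 10 })); // false
            }
        }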

  • k-combinations of a set of integers in ascending size order

    - by Adamski
    Programming challenge: given a set of integers [1, 2, 3, 4, 5], I would like to generate all possible k-combinations in ascending size order in Java; e.g. [1], [2], [3], [4], [5], [1, 2], [1, 3] ... [1, 2, 3, 4, 5]. It is fairly easy to produce a recursive solution that generates all combinations and then sorts them afterwards, but I imagine there's a more efficient way that removes the need for the additional sort.
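
    One way to drop the sort is to emit combinations size by size; the usual recursive generator then produces ascending-size order directly. A minimal sketch:

        import java.util.ArrayList;
        import java.util.List;

        public class Combinations {
            public static <T> List<List<T>> bySize(List<T> items) {
                List<List<T>> out = new ArrayList<>();
                for (int k = 1; k <= items.size(); k++) {
                    build(items, k, 0, new ArrayList<>(), out);
                }
                return out;
            }

            // Standard k-combination recursion, emitting in index order.
            private static <T> void build(List<T> items, int k, int start,
                                          List<T> current, List<List<T>> out) {
                if (current.size() == k) {
                    out.add(new ArrayList<>(current));
                    return;
                }
                for (int i = start; i < items.size(); i++) {
                    current.add(items.get(i));
                    build(items, k, i + 1, current, out);
                    current.remove(current.size() - 1);
                }
            }
        }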

  • Position elements without overlap

    - by eWolf
    I have a number of rectangular elements that I want to position in a 2D space. I calculate an ideal position for each element. Now my problem is that many elements overlap as very often the ideal positions are concentrated in one region. I want to avoid overlap as much as possible (doesn't have to be perfect, though). How can I do this? I've heard physics simulations are suitable for this - is that correct? And can anyone provide an example/tutorial? By the way: I'm using XNA, if you know any .NET library that does exactly this job - tell me!
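
    A minimal sketch of the physics idea (in Java with java.awt.Rectangle for illustration, since the question targets XNA/.NET): repeatedly push apart any two elements that still overlap. The one-pixel step and the iteration count are illustrative assumptions:

        import java.awt.Rectangle;

        public class OverlapResolver {
            public static void relax(Rectangle[] rects, int iterations) {
                for (int it = 0; it < iterations; it++) {
                    for (int i = 0; i < rects.length; i++) {
                        for (int j = i + 1; j < rects.length; j++) {
                            Rectangle a = rects[i], b = rects[j];
                            if (!a.intersects(b)) continue;
                            // nudge each rectangle away from the other's center
                            int dx = Integer.signum((a.x + a.width / 2) - (b.x + b.width / 2));
                            int dy = Integer.signum((a.y + a.height / 2) - (b.y + b.height / 2));
                            if (dx == 0 && dy == 0) dx = 1; // coincident centers: pick a direction
                            a.translate(dx, dy);
                            b.translate(-dx, -dy);
                        }
                    }
                }
            }
        }

    Since each element drifts from its ideal position only while it collides, the result stays close to the computed layout.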

  • Travelling Salesman Problem Constraint Representation

    - by alex25
    Hey! I read a couple of articles and some sample code about how to solve TSP with genetic algorithms, ant colony optimization, etc. But everything I found didn't include time (window) constraints, e.g. "I have to be at customer x before 12am", and assumed symmetry. Can somebody point me in the direction of some sample code or articles that explain how I can add constraints to TSP and how I can represent those in code? Thanks!
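
    One common way to add time windows to a genetic algorithm without changing the tour representation is to penalize violations in the fitness function; asymmetry comes for free if the travel-time matrix is simply not symmetric. A minimal sketch; all names and the penalty weight are illustrative assumptions:

        public class TourFitness {
            // travel[i][j]: travel time from customer i to customer j
            // (need not be symmetric); latest[i]: latest allowed arrival at i.
            public static double cost(int[] tour, double[][] travel, double[] latest) {
                double time = 0, penalty = 0;
                for (int k = 1; k < tour.length; k++) {
                    time += travel[tour[k - 1]][tour[k]];
                    if (time > latest[tour[k]]) {
                        penalty += (time - latest[tour[k]]) * 1000; // large penalty weight
                    }
                }
                return time + penalty; // lower is better
            }
        }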

  • Programming Contest Question: Counting Polyominoes

    - by Martijn Courteaux
    Hi, an example question for a programming contest was to write a program that finds out how many polyominoes are possible with a given number of stones. So for two stones (n = 2) there is only one polyomino:

        XX

    You might think this is a second solution:

        X
        X

    But it isn't: the polyominoes are not unique if you can rotate them. So, for 4 stones (n = 4), there are 7 solutions:

        X    XX   X    X     X    X     X
        X    XX   XX   XX   XX   X     X
        X         X     X   X    XX   XX
        X

    The application has to be able to find the solution for 1 <= n <= 10. PS: Using the list of polyominoes on Wikipedia isn't allowed ;) EDIT: Of course the question is: how to do this in Java, C/C++, C#.
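
    A minimal sketch of one standard counting approach: grow shapes one stone at a time and deduplicate via a canonical form taken over the four rotations (reflections stay distinct, which is what makes the count for n = 4 come out at 7). It is comfortably fast for n <= 10:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        public class Polyominoes {
            public static int count(int n) {
                Set<String> shapes = new HashSet<>();
                shapes.add("0,0"); // the single-stone polyomino
                for (int size = 1; size < n; size++) {
                    Set<String> grown = new HashSet<>();
                    for (String key : shapes) {
                        List<int[]> cells = parse(key);
                        for (int[] c : cells) { // try adding a stone on each side
                            int[][] nbrs = { { c[0] + 1, c[1] }, { c[0] - 1, c[1] },
                                             { c[0], c[1] + 1 }, { c[0], c[1] - 1 } };
                            for (int[] nb : nbrs) {
                                if (contains(cells, nb)) continue;
                                List<int[]> bigger = new ArrayList<>(cells);
                                bigger.add(nb);
                                grown.add(canonical(bigger));
                            }
                        }
                    }
                    shapes = grown;
                }
                return shapes.size();
            }

            // Lexicographically smallest normalized encoding over the 4 rotations.
            private static String canonical(List<int[]> cells) {
                String best = null;
                List<int[]> cur = cells;
                for (int r = 0; r < 4; r++) {
                    String enc = encode(cur);
                    if (best == null || enc.compareTo(best) < 0) best = enc;
                    List<int[]> rot = new ArrayList<>();
                    for (int[] c : cur) rot.add(new int[] { -c[1], c[0] }); // rotate 90 degrees
                    cur = rot;
                }
                return best;
            }

            // Translate so min x/y are 0, then encode cells deterministically.
            private static String encode(List<int[]> cells) {
                int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
                for (int[] c : cells) { minX = Math.min(minX, c[0]); minY = Math.min(minY, c[1]); }
                List<String> parts = new ArrayList<>();
                for (int[] c : cells) parts.add((c[0] - minX) + "," + (c[1] - minY));
                Collections.sort(parts);
                return String.join(";", parts);
            }

            private static List<int[]> parse(String key) {
                List<int[]> cells = new ArrayList<>();
                for (String p : key.split(";")) {
                    String[] xy = p.split(",");
                    cells.add(new int[] { Integer.parseInt(xy[0]), Integer.parseInt(xy[1]) });
                }
                return cells;
            }

            private static boolean contains(List<int[]> cells, int[] c) {
                for (int[] d : cells) if (d[0] == c[0] && d[1] == c[1]) return true;
                return false;
            }

            public static void main(String[] args) {
                for (int n = 1; n <= 4; n++) System.out.println(n + ": " + count(n));
                // 1: 1, 2: 1, 3: 2, 4: 7
            }
        }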

  • Hi, a question about mathematics

    - by davit-datuashvili
    Hi, I have one question. I know the site mathoverflow.com and have posted my question there, but unfortunately no one gave me an answer. If I post it here, can anybody help me? It is not homework; I know somebody will say it is homework and ask what I have tried, but that is not so, I just don't know how to solve it. Please, if it is possible, I will post it here, OK?

  • Inverse relationship of two variables

    - by Jam
    This one is maybe pretty stupid, or I am just exhausted or something, but I just can't seem to solve it. Problem: two variables X and Y, where the value of Y is dependent on the value of X. X can have values ranging from some value to some value (let's say from 0 to 250) and Y can have different values (let's say from 0.1 to 1.0 or something), but it is an inverse relationship. What I mean is: if the value of X is e.g. 250, then the value of Y would be 0.1, and when X decreases towards 0, the value of Y rises up to 1.0. So how should I do it? Let's say I have the function:

        double computeValue(double X) {
            /* computation */
            return Y;
        }

    Also, is there some easy way to make the scaling of the function not so linear? For example, when X rises, Y decreases slower at first but then more rapidly at the end (I don't really know how to say it, but I hope you get it). Thanks in advance for this stupid question :/
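
    A minimal sketch: map X linearly onto the Y range in reverse, then bend the curve with an exponent. With shape = 1 the mapping is linear; shape > 1 makes Y fall slowly at first and faster near the end, as described. The parameter names are illustrative:

        public class InverseMap {
            // x in [0, xMax] maps to [yMin, yMax], high y at low x.
            public static double computeValue(double x, double xMax,
                                              double yMin, double yMax, double shape) {
                double t = Math.pow(x / xMax, shape); // 0..1, nonlinear when shape != 1
                return yMax - t * (yMax - yMin);
            }

            public static void main(String[] args) {
                System.out.println(computeValue(0, 250, 0.1, 1.0, 2));   // 1.0
                System.out.println(computeValue(125, 250, 0.1, 1.0, 2)); // 0.775: still high halfway
                System.out.println(computeValue(250, 250, 0.1, 1.0, 2)); // 0.1
            }
        }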

  • Data structure for a particular problem?

    - by AGeek
    Hi, which data structure can perform insertion, deletion, and searching in O(1) time in the worst case? We may assume the set of elements are integers drawn from a finite set 1, 2, ..., n, and initialization can take O(n) time. I can only think of implementing a hash table; implementing it with trees will not give O(1) time complexity for any of the operations. Or is it possible? Kindly share your views on this, or on any other data structure apart from these. Thanks.
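
    With keys restricted to 1..n, a direct-address table meets the stated bounds: O(n) to initialize, then O(1) worst case for all three operations. A minimal sketch:

        public class DirectAddressSet {
            private final boolean[] present;

            public DirectAddressSet(int n) { present = new boolean[n + 1]; } // O(n) init

            public void insert(int key)    { present[key] = true; }  // O(1)
            public void delete(int key)    { present[key] = false; } // O(1)
            public boolean search(int key) { return present[key]; }  // O(1)
        }

    If even the O(n) initialization has to go, the classic "uninitialized array" trick with two cross-referencing arrays achieves O(1) initialization at the cost of extra space.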

  • How does Amazon's Statistically Improbable Phrases work?

    - by ??iu
    How does something like Statistically Improbable Phrases work? According to Amazon:

    Amazon.com's Statistically Improbable Phrases, or "SIPs", are the most distinctive phrases in the text of books in the Search Inside!™ program. To identify SIPs, our computers scan the text of all books in the Search Inside! program. If they find a phrase that occurs a large number of times in a particular book relative to all Search Inside! books, that phrase is a SIP in that book. SIPs are not necessarily improbable within a particular book, but they are improbable relative to all books in Search Inside!. For example, most SIPs for a book on taxes are tax-related. But because we display SIPs in order of their improbability score, the first SIPs will be on tax topics that this book mentions more often than other tax books. For works of fiction, SIPs tend to be distinctive word combinations that often hint at important plot elements.

    For instance, for Joel's first book, the SIPs are: leaky abstractions, antialiased text, own dog food, bug count, daily builds, bug database, software schedules. One interesting complication is that these are phrases of either 2 or 3 words. This makes things a little more interesting because these phrases can overlap with or contain each other.
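
    A minimal sketch of the idea as described: score each phrase by how much more frequent it is in one book than in the corpus as a whole, and rank by that score. The ratio used here is illustrative, not Amazon's actual formula:

        import java.util.HashMap;
        import java.util.Map;

        public class SipScorer {
            public static Map<String, Double> score(Map<String, Integer> bookCounts, long bookTotal,
                                                    Map<String, Integer> corpusCounts, long corpusTotal) {
                Map<String, Double> scores = new HashMap<>();
                bookCounts.forEach((phrase, count) -> {
                    double inBook   = (double) count / bookTotal;
                    double inCorpus = (double) corpusCounts.getOrDefault(phrase, 1) / corpusTotal;
                    scores.put(phrase, inBook / inCorpus); // higher = more improbable elsewhere
                });
                return scores;
            }
        }

    One way to handle the overlap complication (2-word phrases contained in 3-word ones) is to discount a shorter phrase's count by the occurrences already covered by a longer phrase.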

  • Compact data structure for storing a large set of integral values

    - by Odrade
    I'm working on an application that needs to pass around large sets of Int32 values. The sets are expected to contain ~1,000,000-50,000,000 items, where each item is a database key in the range 0-50,000,000. I expect the distribution of ids in any given set to be effectively random over this range. The operations I need on the set are dirt simple:

    - Add a new value
    - Iterate over all of the values

    There is a serious concern about the memory usage of these sets, so I'm looking for a data structure that can store the ids more efficiently than a simple List<int> or HashSet<int>. I've looked at BitArray, but that can be wasteful depending on how sparse the ids are. I've also considered a bitwise trie, but I'm unsure how to calculate the space efficiency of that solution for the expected data. A Bloom filter would be great, if only I could tolerate the false positives. I would appreciate any suggestions of data structures suitable for this purpose. I'm interested in both out-of-the-box and custom solutions.

    EDIT: To answer your questions:

    - No, the items don't need to be sorted.
    - By "pass around" I mean both pass between methods and serialize and send over the wire. I clearly should have mentioned this.
    - There could be a decent number of these sets in memory at once (~100).
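
    A minimal sketch of the bit-per-id idea (in Java for illustration; the question is .NET, where BitArray plays the same role): ids in 0..50,000,000 cost a fixed ~6 MB, which beats four bytes per item once a set holds more than roughly 1.5 million ids, though it is indeed wasteful for the sparsest sets:

        import java.util.BitSet;
        import java.util.function.IntConsumer;

        public class IdSet {
            private final BitSet bits = new BitSet(50_000_000);

            public void add(int id) { bits.set(id); } // O(1)

            // Iterate ids in ascending order.
            public void forEach(IntConsumer action) {
                for (int id = bits.nextSetBit(0); id >= 0; id = bits.nextSetBit(id + 1)) {
                    action.accept(id);
                }
            }
        }

    Serializing the raw bits keeps the wire format at the same fixed size; for the sparser sets, compressed bitmap formats trade a little CPU for considerably less space.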

  • What's the difference between Paxos and W+R>=N in Cassandra?

    - by user1128016
    Dynamo-like databases (e.g. Cassandra) provide the ability to enforce consistency by means of quorum, i.e. the number of synchronously written replicas (W) and the number of replicas to read (R) should be chosen in such a way that W+R > N, where N is the replication factor. On the other hand, Paxos-based systems like ZooKeeper are also used as consistent fault-tolerant storage. What is the difference between these two approaches? Does Paxos provide guarantees that are not provided by the W+R > N scheme?

  • Make a tree in Scheme

    - by ???
    (define (entry tree) (car tree))
    (define (left-branch tree) (cadr tree))
    (define (right-branch tree) (caddr tree))
    (define (make-tree entry left right) (list entry left right))

    (define (mktree order items_list)
      (cond ((= (length items_list) 1)
             (make-tree (car items_list) '() '()))
            (else
             (insert2 order (car items_list) (mktree order (cdr items_list))))))

    (define (insert2 order x t)
      (cond ((null? t) (make-tree x '() '()))
            ((order x (entry t))
             (make-tree (entry t) (insert2 order x (left-branch t)) (right-branch t)))
            ((order (entry t) x)
             (make-tree (entry t) (left-branch t) (insert2 order x (right-branch t))))
            (else t)))

    The result is:

        (mktree (lambda (x y) (< x y)) (list 7 3 5 1 9 11))
        (11 (9 (1 () (5 (3 () ()) (7 () ()))) ()) ())

    But I'm trying to get:

        (7 (3 (1 () ()) (5 () ())) (9 () (11 () ())))

    Where is the problem?

  • How can I group an array of rectangles into "Islands" of connected regions?

    - by Eric
    The problem: I have an array of java.awt.Rectangles. For those who are not familiar with this class, the important piece of information is that they provide an .intersects(Rectangle b) function. I would like to write a function that takes this array of Rectangles and breaks it up into groups of connected rectangles. Let's say, for example, that these are my rectangles (the constructor takes the arguments x, y, width, height):

        Rectangle[] rects = new Rectangle[] {
            new Rectangle(0, 0, 4, 2), //A
            new Rectangle(1, 1, 2, 4), //B
            new Rectangle(0, 4, 8, 2), //C
            new Rectangle(6, 0, 2, 2)  //D
        };

    A quick drawing shows that A intersects B and B intersects C. D intersects nothing. A tediously drawn piece of ascii art does the job too:

        +-------+ +---+
        ¦A+---+ ¦ ¦ D ¦
        +-+---+-+ +---+
          ¦ B ¦
        +-+---+---------+
        ¦ +---+    C    ¦
        +---------------+

    Therefore, the output of my function should be:

        new Rectangle[][] {
            new Rectangle[] {A, B, C},
            new Rectangle[] {D}
        };

    The failed code: this was my attempt at solving the problem:

        public List<Rectangle> getIntersections(ArrayList<Rectangle> list, Rectangle r) {
            List<Rectangle> intersections = new ArrayList<Rectangle>();
            for (Rectangle rect : list) {
                if (r.intersects(rect)) {
                    list.remove(rect);
                    intersections.add(rect);
                    intersections.addAll(getIntersections(list, rect));
                }
            }
            return intersections;
        }

        public List<List<Rectangle>> mergeIntersectingRects(Rectangle... rectArray) {
            List<Rectangle> allRects = new ArrayList<Rectangle>(rectArray);
            List<List<Rectangle>> groups = new ArrayList<ArrayList<Rectangle>>();
            for (Rectangle rect : allRects) {
                allRects.remove(rect);
                ArrayList<Rectangle> group = getIntersections(allRects, rect);
                group.add(rect);
                groups.add(group);
            }
            return groups;
        }

    Unfortunately, there seems to be an infinite recursion loop going on here. My uneducated guess would be that Java does not like me doing this:

        for (Rectangle rect : allRects) {
            allRects.remove(rect);
            //...
        }

    Can anyone shed some light on the issue?
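
    The guess at the end is on the right track: structurally modifying an ArrayList while a for-each loop iterates it makes the fail-fast iterator throw ConcurrentModificationException. A minimal sketch of a non-recursive alternative, using union-find (disjoint sets) to union every intersecting pair and then group by root:

        import java.awt.Rectangle;
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class RectangleIslands {
            public static List<List<Rectangle>> group(Rectangle[] rects) {
                int n = rects.length;
                int[] parent = new int[n];
                for (int i = 0; i < n; i++) parent[i] = i; // each rect starts alone
                for (int i = 0; i < n; i++)
                    for (int j = i + 1; j < n; j++)
                        if (rects[i].intersects(rects[j]))
                            parent[find(parent, i)] = find(parent, j); // union
                Map<Integer, List<Rectangle>> byRoot = new HashMap<>();
                for (int i = 0; i < n; i++)
                    byRoot.computeIfAbsent(find(parent, i), k -> new ArrayList<>()).add(rects[i]);
                return new ArrayList<>(byRoot.values());
            }

            private static int find(int[] parent, int i) {
                while (parent[i] != i) i = parent[i] = parent[parent[i]]; // path halving
                return i;
            }
        }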

  • Is there a name for the technique of using base-2 numbers to encode a list of unique options?

    - by Lunatik
    Apologies for the rather vague nature of this question; I've never been taught programming, and Google is rather useless to a self-help guy like me in this case, as the key words are pretty ambiguous. I am writing a couple of functions that encode and decode a list of options into a Long so they can easily be passed around the application. You know this kind of thing:

    1 - Apple
    2 - Orange
    4 - Banana
    8 - Plum

    etc. In this case the number 11 would represent Apple, Orange & Plum. I've got it working, but I see this used all the time, so I assume there is a common name for the technique, and no doubt all sorts of best practice and clever algorithms that are at the moment just out of my reach.
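
    The technique is commonly called bit flags, a bitmask, or a bit field: each option owns one bit, OR combines them, AND tests them. A minimal sketch of the encode/decode pair described above; the Fruit enum is illustrative:

        import java.util.EnumSet;
        import java.util.Set;

        public class FruitFlags {
            enum Fruit { APPLE, ORANGE, BANANA, PLUM } // option i owns bit 1L << ordinal()

            static long encode(Set<Fruit> options) {
                long bits = 0;
                for (Fruit f : options) bits |= 1L << f.ordinal();
                return bits;
            }

            static Set<Fruit> decode(long bits) {
                Set<Fruit> options = EnumSet.noneOf(Fruit.class);
                for (Fruit f : Fruit.values())
                    if ((bits & (1L << f.ordinal())) != 0) options.add(f);
                return options;
            }

            public static void main(String[] args) {
                System.out.println(encode(EnumSet.of(Fruit.APPLE, Fruit.ORANGE, Fruit.PLUM))); // 11
                System.out.println(decode(11)); // [APPLE, ORANGE, PLUM]
            }
        }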

  • How do I find all paths through a set of given nodes in a DAG?

    - by Hanno Fietz
    I have a list of items (blue nodes below) which are categorized by the users of my application. The categories themselves can be grouped and categorized themselves. The resulting structure can be represented as a Directed Acyclic Graph (DAG), where the items are sinks at the bottom of the graph's topology and the top categories are sources. Note that while some of the categories might be well defined, a lot are going to be user defined and might be very messy.

    On that structure, I want to perform the following operations:

    - find all items (sinks) below a particular node (all items in Europe)
    - find all paths (if any) that pass through all of a set of n nodes (all items sent via SMTP from example.com)
    - find all nodes that lie below all of a set of nodes (intersection: goyish brown foods)

    The first seems quite straightforward: start at the node, follow all possible paths to the bottom and collect the items there. However, is there a faster approach? Remembering the nodes I have already passed through probably helps avoid unnecessary repetition, but are there more optimizations?

    How do I go about the second one? It seems that the first step would be to determine the height of each node in the set, so as to determine at which one(s) to start, and then find all paths below that which include the rest of the set. But is this the best (or even a good) approach? The graph traversal algorithms listed on Wikipedia all seem to be concerned with either finding a particular node or the shortest or otherwise most effective route between two nodes. I think neither is what I want, or did I just fail to see how this applies to my problem? Where else should I read?
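
    For the first operation, a depth-first search that remembers visited nodes walks shared substructure only once, so the cost is linear in the part of the graph below the start node. A minimal sketch, assuming a simple adjacency-list node type (not from the question):

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        public class DagQueries {
            static class Node {
                final List<Node> children = new ArrayList<>();
            }

            public static Set<Node> sinksBelow(Node start) {
                Set<Node> visited = new HashSet<>();
                Set<Node> sinks = new HashSet<>();
                collect(start, visited, sinks);
                return sinks;
            }

            private static void collect(Node n, Set<Node> visited, Set<Node> sinks) {
                if (!visited.add(n)) return; // already walked this subgraph
                if (n.children.isEmpty()) { sinks.add(n); return; }
                for (Node child : n.children) collect(child, visited, sinks);
            }
        }

    Running this once per node in a set and intersecting the resulting sink sets also answers the third operation.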

  • Constructing colours for maximum contrast

    - by Martin
    I want to draw some items on screen, where each item is in one of N sets. The number of sets changes all the time, so I need to calculate N different colours which are as different as possible (to make it easy to identify what is in which set). So, for example, with N = 2 my results would be black and white. With three I guess I would get all-red, all-green and all-blue. With four, it's less obvious what the correct answer is, and this is where I'm having trouble.
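
    A minimal sketch of one common heuristic: spread the N hues evenly around the HSB colour wheel at full saturation and brightness. It will not reproduce black and white for N = 2; picking truly maximally-distant colours needs a perceptual colour space such as CIELAB:

        import java.awt.Color;

        public class PaletteBuilder {
            public static Color[] build(int n) {
                Color[] palette = new Color[n];
                for (int i = 0; i < n; i++) {
                    palette[i] = Color.getHSBColor((float) i / n, 1.0f, 1.0f); // evenly spaced hues
                }
                return palette;
            }
        }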
