Search Results

Search found 5298 results on 212 pages for 'marching cubes algorithm'.


  • Clamping a vector to a minimum and maximum?

    - by user146780
    I came across this: t = Clamp(t/d, 0, 1), but I'm not sure how to perform this operation on a vector. What are the steps to clamp a vector if one were writing their own vector implementation? Thanks. Example of the clamp in context:

        pc = # the point you are coloring now
        p0 = # start point
        p1 = # end point
        v = p1 - p0
        d = Length(v)
        v = Normalize(v)   # or Scale(v, 1/d)
        v0 = pc - p0
        t = Dot(v0, v)
        t = Clamp(t/d, 0, 1)
        color = (start_color * t) + (end_color * (1 - t))
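    A minimal C# sketch of the vector case, assuming componentwise semantics (the usual convention, e.g. in HLSL/GLSL): clamp each component independently between the corresponding components of the min and max vectors. The Vector3 type here is illustrative, not from any particular library.

        struct Vector3
        {
            public double X, Y, Z;
            public Vector3(double x, double y, double z) { X = x; Y = y; Z = z; }
        }

        static class VectorMath
        {
            // Clamp a scalar into [min, max].
            static double Clamp(double v, double min, double max)
            {
                return v < min ? min : (v > max ? max : v);
            }

            // Clamp each component of v independently.
            public static Vector3 Clamp(Vector3 v, Vector3 min, Vector3 max)
            {
                return new Vector3(
                    Clamp(v.X, min.X, max.X),
                    Clamp(v.Y, min.Y, max.Y),
                    Clamp(v.Z, min.Z, max.Z));
            }
        }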

    Read the article

  • Position elements without overlap

    - by eWolf
    I have a number of rectangular elements that I want to position in a 2D space. I calculate an ideal position for each element. Now my problem is that many elements overlap, because the ideal positions are often concentrated in one region. I want to avoid overlap as much as possible (it doesn't have to be perfect, though). How can I do this? I've heard physics simulations are suitable for this; is that correct? And can anyone provide an example/tutorial? By the way: I'm using XNA, so if you know any .NET library that does exactly this job, tell me!
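    For reference, the simplest physics-style treatment is iterative pairwise separation: start every rectangle at its ideal position, then repeatedly push overlapping pairs apart along the axis of least penetration until things settle. A rough C# sketch under an assumed mutable Rect class (not XNA's Rectangle):

        using System;
        using System.Collections.Generic;

        class Rect { public float X, Y, Width, Height; }

        static class Overlap
        {
            // Repeatedly push overlapping pairs apart along the axis of least
            // penetration; stops when a pass moves nothing or the cap is hit.
            public static void Separate(List<Rect> rects, int maxIterations = 100)
            {
                for (int iter = 0; iter < maxIterations; iter++)
                {
                    bool moved = false;
                    for (int i = 0; i < rects.Count; i++)
                        for (int j = i + 1; j < rects.Count; j++)
                        {
                            Rect a = rects[i], b = rects[j];
                            float dx = (a.X + a.Width / 2) - (b.X + b.Width / 2);
                            float px = (a.Width + b.Width) / 2 - Math.Abs(dx);   // overlap in x
                            float dy = (a.Y + a.Height / 2) - (b.Y + b.Height / 2);
                            float py = (a.Height + b.Height) / 2 - Math.Abs(dy); // overlap in y
                            if (px <= 0 || py <= 0) continue;                    // no overlap
                            if (px < py)
                            {
                                float s = px / 2 * (dx >= 0 ? 1 : -1);
                                a.X += s; b.X -= s;                              // separate horizontally
                            }
                            else
                            {
                                float s = py / 2 * (dy >= 0 ? 1 : -1);
                                a.Y += s; b.Y -= s;                              // separate vertically
                            }
                            moved = true;
                        }
                    if (!moved) break;
                }
            }
        }

    Because every rectangle starts at its ideal position and each pass only moves pairs by the minimum needed, elements tend to stay near where they were wanted.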

    Read the article

  • algorithm q: Fuzzy matching of structured data

    - by user86432
    I have a fairly small corpus of structured records sitting in a database. Given a tiny fraction of the information contained in a single record, submitted via a web form (so structured in the same way as the table schema), let us call it the test record, I need to quickly draw up a list of the records that are the most likely matches for the test record, as well as provide a confidence estimate of how closely the search terms match a record. The primary purpose of this search is to discover whether someone is attempting to input a record that duplicates one in the corpus. There is a reasonable chance that the test record will be a dupe, and a reasonable chance it will not be.

    The records are about 12,000 bytes wide and the total count of records is about 150,000. There are 110 columns in the table schema, and 95% of searches will be on the top 5% most commonly searched columns. The data is stuff like names, addresses, telephone numbers, and other industry-specific numbers. In both the corpus and the test record it is entered by hand and is semistructured within an individual field.

    You might at first blush say "weight the columns by hand and match word tokens within them", but it's not so easy. I thought so too: if I get a telephone number, I thought that would indicate a perfect match. The problem is that there isn't a single field in the form whose token frequency does not vary by orders of magnitude. A telephone number might appear 100 times in the corpus or once in the corpus. The same goes for any other field. This makes weighting at the field level impractical. I need a more fine-grained approach to get decent matching.

    My initial plan was to create a hash of hashes, the top level being the fieldname. Then I would select all of the information from the corpus for a given field, attempt to clean up the data contained in it, and tokenize the sanitized data, hashing the tokens at the second level, with the tokens as keys and frequency as value. I would use the frequency count as a weight: the higher the frequency of a token in the reference corpus, the less weight I attach to that token if it is found in the test record.

    My first question is for the statisticians in the room: how would I use the frequency as a weight? Is there a precise mathematical relationship between n, the number of records, f(t), the frequency with which a token t appeared in the corpus, the probability o that a record is an original and not a duplicate, and the probability p that the test record is really a record x, given that the test and x contain the same t in the same field? How about the relationship for multiple token matches across multiple fields? Since I sincerely doubt that there is, is there anything that gets me close but is better than a completely arbitrary hack full of magic factors? Barring that, has anyone got a way to do this? I'm especially keen on suggestions that do not involve maintaining another table in the database, such as a token frequency lookup table :). This is my first post on StackOverflow; thanks in advance for any replies you may see fit to give.
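    For what it's worth, the textbook starting point for exactly this kind of frequency weighting is inverse document frequency, weight(t) = log(N / f(t)), which is a heuristic rather than the precise probabilistic relationship asked about. A hedged C# sketch of scoring a candidate record against the test record (all data shapes illustrative):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class FuzzyScore
        {
            // Classic inverse document frequency: rare tokens score high.
            public static double Weight(int corpusSize, int tokenFrequency)
            {
                return Math.Log((double)corpusSize / Math.Max(1, tokenFrequency));
            }

            // Sum the IDF weights of tokens the test and candidate share, field by field.
            // tokenCounts[field][token] is the corpus frequency table described above.
            public static double Score(
                int corpusSize,
                Dictionary<string, Dictionary<string, int>> tokenCounts,
                Dictionary<string, HashSet<string>> testTokens,
                Dictionary<string, HashSet<string>> candidateTokens)
            {
                double score = 0;
                foreach (var field in testTokens.Keys)
                {
                    if (!candidateTokens.TryGetValue(field, out var theirs)) continue;
                    foreach (var token in testTokens[field].Intersect(theirs))
                        score += Weight(corpusSize,
                                        tokenCounts[field].TryGetValue(token, out var f) ? f : 1);
                }
                return score;
            }
        }

    Under this scheme a phone number seen once in 150,000 records contributes log(150000) to the score, while one seen 100 times contributes far less, which is the fine-grained behaviour the question is after.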

    Read the article

  • R: Forecast package: Automatic algorithm for composite model involving ETS and AR

    - by phanikishan
    Hey, I would like to write code that automatically selects the best composite model using ETS as well as autoregressive models. What criteria should I base my selection on? Also, if I'm using the auto.arima function from the forecast package in R to deduce the number of AR terms and corresponding coefficients, does my input series necessarily have to be stationary? Or would the value for d be selected automatically, thus returning a non-stationary model? Thanks, Phani

    Read the article

  • Dot Game and Dynamic Programming

    - by Albert Diego
    I'm trying to solve a variant of the dot game with dynamic programming. The regular dot game is played with a line of dots. Each player takes either one or two dots at their respective end of the line, and the person who is left with no dots to take wins. In this version of the game, each dot has a different value. The players take alternate turns, and on each turn a player takes a dot from either end of the line. I want to come up with a way to use dynamic programming to find the maximum amount that the first player is guaranteed to win. I'm having problems wrapping my head around this and trying to write a recurrence for the solution. Any help is appreciated, thanks!
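    For what it's worth, the standard recurrence for this variant treats best(i, j) as the maximum the player to move can guarantee from the remaining dots i..j: take an end, then the opponent moves optimally against you, so you keep the minimum of your two follow-up options. A C# sketch of the bottom-up version (an assumed formulation, filling intervals by increasing length):

        using System;

        static class DotGame
        {
            // Maximum total the player to move can guarantee from dots v[0..n-1].
            public static int MaxGuaranteed(int[] v)
            {
                int n = v.Length;
                if (n == 0) return 0;
                var best = new int[n, n];            // best[i, j], valid for i <= j
                int Get(int i, int j) => i > j ? 0 : best[i, j];
                for (int len = 1; len <= n; len++)
                    for (int i = 0; i + len - 1 < n; i++)
                    {
                        int j = i + len - 1;
                        // Take an end; the opponent then leaves you the worse of your follow-ups.
                        int takeLeft  = v[i] + Math.Min(Get(i + 2, j), Get(i + 1, j - 1));
                        int takeRight = v[j] + Math.Min(Get(i + 1, j - 1), Get(i, j - 2));
                        best[i, j] = Math.Max(takeLeft, takeRight);
                    }
                return best[0, n - 1];
            }
        }

    For example, on (1, 100, 1) this returns 2: whichever end the first player takes, the opponent grabs the 100.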

    Read the article

  • How do I find all paths through a set of given nodes in a DAG?

    - by Hanno Fietz
    I have a list of items (blue nodes below) which are categorized by the users of my application. The categories themselves can be grouped and categorized themselves. The resulting structure can be represented as a Directed Acyclic Graph (DAG), where the items are sinks at the bottom of the graph's topology and the top categories are sources. Note that while some of the categories might be well defined, a lot is going to be user defined and might be very messy. Example: (diagram omitted)

    On that structure, I want to perform the following operations:

    - find all items (sinks) below a particular node (all items in Europe)
    - find all paths (if any) that pass through all of a set of n nodes (all items sent via SMTP from example.com)
    - find all nodes that lie below all of a set of nodes (intersection: goyish brown foods)

    The first seems quite straightforward: start at the node, follow all possible paths to the bottom and collect the items there. However, is there a faster approach? Remembering the nodes I already passed through probably helps avoid unnecessary repetition, but are there more optimizations? How do I go about the second one? It seems that the first step would be to determine the height of each node in the set, so as to determine at which one(s) to start, and then find all paths below that which include the rest of the set. But is this the best (or even a good) approach? The graph traversal algorithms listed at Wikipedia all seem to be concerned with either finding a particular node or the shortest or otherwise most effective route between two nodes. I think neither is what I want, or did I just fail to see how this applies to my problem? Where else should I read?
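    On the first operation, a C# sketch of the memoized traversal (adjacency lists assumed; each node's item set is computed once and shared between converging paths, so the work stays linear in the size of the sub-DAG). The third operation then falls out as the intersection of ItemsBelow over the given set of nodes:

        using System.Collections.Generic;

        static class DagQueries
        {
            // children[n] lists the nodes directly below n; a node with no
            // entry (or an empty list) is a sink, i.e. an item.
            public static HashSet<int> ItemsBelow(
                int node,
                Dictionary<int, List<int>> children,
                Dictionary<int, HashSet<int>> memo)
            {
                if (memo.TryGetValue(node, out var cached)) return cached;
                var result = new HashSet<int>();
                if (!children.TryGetValue(node, out var kids) || kids.Count == 0)
                    result.Add(node);   // sink: the item itself
                else
                    foreach (var child in kids)
                        result.UnionWith(ItemsBelow(child, children, memo));
                memo[node] = result;
                return result;
            }
        }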

    Read the article

  • Finding the largest subtree in a BST

    - by rakeshr
    Given a binary tree, I want to find the largest subtree in it which is a BST. Naive approach: I have a naive approach in mind where I visit every node of the tree and pass it to an isBST function, keeping track of the number of nodes in each subtree that is a BST. Is there a better approach than this?
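    The naive approach is O(n^2) in the worst case. The usual improvement is a single bottom-up pass that returns (min, max, size) for each subtree and marks non-BSTs, giving O(n) overall. A C# sketch, assuming a plain Node class:

        using System;

        class Node { public int Value; public Node Left, Right; }

        static class LargestBst
        {
            // Returns the size of the largest subtree that is a BST, in O(n).
            public static int Find(Node root)
            {
                int best = 0;
                Walk(root, ref best);
                return best;
            }

            // (min, max, size) of the subtree rooted at n; size == -1 marks "not a BST".
            static (int Min, int Max, int Size) Walk(Node n, ref int best)
            {
                if (n == null) return (int.MaxValue, int.MinValue, 0); // empty tree is a BST
                var l = Walk(n.Left, ref best);
                var r = Walk(n.Right, ref best);
                if (l.Size >= 0 && r.Size >= 0 && l.Max < n.Value && n.Value < r.Min)
                {
                    int size = l.Size + r.Size + 1;
                    best = Math.Max(best, size);
                    return (Math.Min(l.Min, n.Value), Math.Max(r.Max, n.Value), size);
                }
                return (0, 0, -1); // not a BST; min/max unused above
            }
        }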

    Read the article

  • incremental way of counting quantiles for large set of data

    - by Gacek
    I need to compute the quantiles for a large set of data. Let's assume we can get the data only in portions (i.e. one row of a large matrix). To compute the Q3 quantile one needs to get all the portions of the data, store them somewhere, then sort them and compute the quantile:

        List<double> allData = new List<double>();
        // this is only an example; in fact the portions of data are not rows of some matrix
        foreach (var row in matrix)
        {
            allData.AddRange(row);
        }
        allData.Sort();
        double p = 0.75 * allData.Count;
        int idQ3 = (int)Math.Ceiling(p) - 1;
        double Q3 = allData[idQ3];

    Now, I would like to find a way of computing this without storing the data in a separate variable. The best solution would be to compute some parameters of mid-results for the first row and then adjust them step by step for the next rows.

    Note: these datasets are really big (ca. 5000 elements in each row). The Q3 can be estimated; it doesn't have to be an exact value. I call the portions of data "rows", but they can have different lengths! Usually it varies not so much (+/- a few hundred samples), but it varies!

    This question is similar to this one: http://stackoverflow.com/questions/1058813/on-line-iterator-algorithms-for-estimating-statistical-median-mode-skewness but I need to compute quantiles. There are also a few articles on this topic, e.g.: http://web.cs.wpi.edu/~hofri/medsel.pdf http://portal.acm.org/citation.cfm?id=347195&dl But before I try to implement these, I wanted to ask whether there are maybe any other, quicker ways of computing the 0.25/0.75 quantiles?
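    One incremental option (a sketch, not a recommendation over the P2-style estimators in the linked papers): a Robbins-Monro stochastic approximation that keeps only the running estimate and a counter, nudging the estimate toward the target quantile on every sample. It is approximate, which matches the relaxation that Q3 need not be exact; the step scale c is an assumption and should be roughly the spread of the data:

        class StreamingQuantile
        {
            readonly double p;   // target quantile, e.g. 0.75 for Q3
            readonly double c;   // step scale; assumed roughly the spread of the data
            double estimate;
            long n;

            public StreamingQuantile(double quantile, double stepScale)
            {
                p = quantile;
                c = stepScale;
            }

            public void Add(double x)
            {
                n++;
                if (n == 1) { estimate = x; return; }
                // Robbins-Monro: the expected update is zero exactly when the
                // fraction of samples below the estimate equals p.
                estimate += (c / n) * (p - (x <= estimate ? 1.0 : 0.0));
            }

            public double Estimate { get { return estimate; } }
        }

    Feeding every element of every row through Add and reading Estimate at the end needs O(1) memory regardless of row lengths.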

    Read the article

  • Solving simultaneous equations

    - by Milo
    Here is my problem: given x, y, z and ratio, where z is known and ratio is known and is a float representing a relative value, I need to find x and y. I know that:

        x / y == ratio
        y - x == z

    What I'm trying to do is make my own scroll pane, and I'm figuring out the scrollbar parameters. So for example, if the scrollbar must be able to scroll 100 values (z) and the thumb must consume 80% of the bar (ratio = 0.8), then x would be 400 and y would be 500. Thanks
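    For the record, substituting x = ratio * y into y - x = z gives y * (1 - ratio) = z, hence y = z / (1 - ratio) and x = ratio * y. A tiny C# sketch:

        using System;

        static class ScrollMath
        {
            // Solve x / y == ratio and y - x == z for x and y.
            public static (double X, double Y) Solve(double ratio, double z)
            {
                if (ratio == 1.0) throw new ArgumentException("ratio must differ from 1");
                double y = z / (1.0 - ratio);   // from y * (1 - ratio) == z
                double x = ratio * y;
                return (x, y);
            }
        }

        // Example from the question: ScrollMath.Solve(0.8, 100) returns (400, 500).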

    Read the article

  • question bitonic sequence

    - by davit-datuashvili
    A sequence is bitonic if it monotonically increases and then monotonically decreases, or if it can be circularly shifted to monotonically increase and then monotonically decrease. For example, the sequences (1, 4, 6, 8, 3, -2), (9, 2, -4, -10, -5), and (1, 2, 3, 4) are bitonic, but (1, 3, 12, 4, 2, 10) is not bitonic. Please help me determine whether a given sequence is bitonic.
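    One way to test this (a sketch assuming adjacent elements are distinct): look at the circular sequence of differences between consecutive elements and count the sign changes going once around; a sequence that is bitonic up to circular shift has at most two. In C#:

        using System;

        static class Bitonic
        {
            // True iff the signs of the circular differences change at most
            // twice; assumes adjacent elements are distinct.
            public static bool IsBitonic(int[] a)
            {
                int n = a.Length;
                if (n <= 2) return true;
                int changes = 0;
                for (int i = 0; i < n; i++)
                {
                    int d1 = Math.Sign(a[(i + 1) % n] - a[i]);
                    int d2 = Math.Sign(a[(i + 2) % n] - a[(i + 1) % n]);
                    if (d1 != d2) changes++;
                }
                return changes <= 2;
            }
        }

    For (1, 3, 12, 4, 2, 10) the circular difference signs are +, +, -, -, +, -, which change four times, so it is correctly rejected.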

    Read the article

  • Way to store a large dictionary with low memory footprint + fast lookups (on Android)

    - by BobbyJim
    I'm developing an Android word game app that needs a large (~250,000 word) dictionary available. I need:

    - Reasonably fast lookups, e.g. constant time preferable; I need to do maybe 200 lookups a second on occasion to solve a word puzzle, and maybe 20 lookups within 0.2 seconds more often to check words the user just spelled. EDIT: lookups are typically asking "is this word in the dictionary?". I'd like to support up to two wildcards in the word as well, but this is easy enough by just generating all possible letters the wildcards could have been and checking the generated words (i.e. 26 * 26 lookups for a word with two wildcards).
    - As it's a mobile app, using as little memory as possible and requiring only a small initial download for the dictionary data is top priority.

    My first naive attempts used Java's HashMap class, which caused an out-of-memory exception. I've looked into using the SQLite databases available on Android, but this seems like overkill. What's a good way to do what I need?
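    One memory-frugal baseline worth considering (sketched in C# for illustration; the same shape applies in Java on Android): keep the 250,000 words sorted in a single array and binary-search it. That is about 18 comparisons per lookup, easily thousands of lookups per second, with no per-entry overhead beyond the strings themselves; a trie or DAWG is the classic next step if the download size must shrink further.

        using System;

        class WordList
        {
            readonly string[] words;   // sorted ordinally, e.g. offline before shipping

            public WordList(string[] sortedWords) { words = sortedWords; }

            public bool Contains(string word)
            {
                // ~log2(250,000) = 18 comparisons per lookup.
                return Array.BinarySearch(words, word, StringComparer.Ordinal) >= 0;
            }
        }

    Wildcard lookups then work exactly as described in the question: expand the wildcard positions and call Contains on each candidate.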

    Read the article

  • Inverse relationship of two variables

    - by Jam
    This one is maybe pretty stupid, or I am just exhausted or something, but I just can't seem to solve it. Problem: two variables X and Y, where the value of Y is dependent on the value of X. X can have values ranging from some value to some value (let's say from 0 to 250), and Y can have different values (let's say from 0.1 to 1.0 or something), but it is an inverse relationship. What I mean is: if the value of X is e.g. 250, then the value of Y would be 0.1, and when X decreases to 0, the value of Y rises to 1.0. So how should I do it? Let's say I have a function:

        double computeValue(double X)
        {
            /* computation */
            return Y;
        }

    Also, is there some easy way to make the scaling of the function not so linear? For example, when X rises, Y decreases slower at first but then more rapidly towards the end (I don't really know how to say it, but I hope you guys got it). Thanks in advance for this stupid question :/
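    A linear inverse mapping is Y = Ymax - (Ymax - Ymin) * (X / Xmax); raising the normalized X to a power k > 1 bends it so Y falls slowly at first and rapidly near the end, which sounds like the shaping asked for. A C# sketch:

        using System;

        static class InverseMap
        {
            // Maps x in [0, xMax] onto [yMin, yMax] inversely:
            // x = 0 gives yMax, x = xMax gives yMin.
            // k = 1 is linear; k > 1 falls slowly at first, then rapidly.
            public static double ComputeValue(double x, double xMax,
                                              double yMin, double yMax,
                                              double k = 1.0)
            {
                double t = Math.Pow(x / xMax, k);   // shaped, normalized position
                return yMax - (yMax - yMin) * t;
            }
        }

        // ComputeValue(250, 250, 0.1, 1.0) == 0.1; ComputeValue(0, 250, 0.1, 1.0) == 1.0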

    Read the article

  • Data Structure for a particular problem??

    - by AGeek
    Hi, which data structure can perform insertion, deletion and searching operations in O(1) time in the worst case? We may assume the set of elements are integers drawn from a finite set 1, 2, ..., n, and initialization can take O(n) time. I can only think of implementing a hash table. Implementing it with trees will not give O(1) time complexity for any of the operations; or is it possible? Kindly share your views on this, or on any other data structure apart from these. Thanks.
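    Given that the keys come from a known range 1..n and O(n) initialization is allowed, a direct-address table (one flag per possible key) achieves worst-case O(1) for all three operations, with no hashing involved. A C# sketch:

        class DirectAddressSet
        {
            readonly bool[] present;   // present[k] == true iff k is in the set

            public DirectAddressSet(int n)   // O(n) initialization, as permitted
            {
                present = new bool[n + 1];
            }

            public void Insert(int k) { present[k] = true; }    // O(1) worst case
            public void Delete(int k) { present[k] = false; }   // O(1) worst case
            public bool Contains(int k) { return present[k]; }  // O(1) worst case
        }

    Note this is O(1) in the worst case, not just expected time as with hashing; the price is O(n) space regardless of how many elements are stored.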

    Read the article

  • BFS algorithm problem

    - by Gorkamorka
    The problem is as follows: a wanderer begins on the grid coordinates (x, y) and wants to reach the coordinates (0, 0). From every gridpoint, the wanderer can go 8 steps north OR 3 steps south OR 5 steps east OR 6 steps west (8N/3S/5E/6W). How can I find the shortest route from (X, Y) to (0, 0) using breadth-first search? Clarifications:

    - Unlimited grid
    - Negative coordinates are allowed
    - A queue (linked list or array) must be used
    - No obstacles present
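    A hedged C# sketch of that search (north taken as +y; a HashSet of visited coordinates stands in for a visited array, since the grid is unbounded). Nodes leave the queue in order of distance, so the depth at which (0, 0) is dequeued is the length of a shortest route:

        using System;
        using System.Collections.Generic;

        static class Walker
        {
            // Minimum number of moves from (x, y) to (0, 0).
            public static int ShortestRoute(int x, int y)
            {
                var start = (x, y);
                var visited = new HashSet<(int, int)> { start };
                var queue = new Queue<((int X, int Y) Pos, int Depth)>();
                queue.Enqueue((start, 0));
                while (queue.Count > 0)
                {
                    var (pos, depth) = queue.Dequeue();
                    if (pos.X == 0 && pos.Y == 0) return depth;
                    foreach (var next in new[] { (pos.X, pos.Y + 8),     // 8 north
                                                 (pos.X, pos.Y - 3),     // 3 south
                                                 (pos.X + 5, pos.Y),     // 5 east
                                                 (pos.X - 6, pos.Y) })   // 6 west
                        if (visited.Add(next))                           // not seen before
                            queue.Enqueue((next, depth + 1));
                }
                return -1; // the queue can only empty on a bounded grid
            }
        }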

    Read the article

  • Aligning music notes using String matching algorithms or Dynamic Programming

    - by Dolphin
    Hi, I need to compare two sets of musical pieces: a playing (taken in MIDI format, with note details extracted and saved in a database table) against sheet music (taken into XML format). When evaluating a playing against sheet music (i.e. note details: pitch, duration, rhythm), note alignment needs to be done to identify missed/extra/incorrect/swapped notes relative to the reference (sheet music) notes. I have around 1800-2500 notes in one piece (it can be even more with polyphonic music; right now I'm doing it for monophonic). So will I have to read all of these into an array? Will that cause memory overload or stack overflow? There are string matching algorithms like KMP and Boyer-Moore, but note alignment can also be done through dynamic programming. How can I use dynamic programming to approach this? What are the available algorithms? Is it about approximate string matching? Which approach is more productive: string matching algorithms like Boyer-Moore, or dynamic programming? How can I assess which is more effective? I'd greatly appreciate any insight or suggestions. Thanks in advance
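    Global alignment by dynamic programming (Needleman-Wunsch, the same recurrence family as edit distance) is the usual fit here: it models missed, extra, and wrong notes directly as gaps and substitutions, which exact matchers like KMP and Boyer-Moore cannot. The score matrix for two 2,500-note monophonic sequences is about 2500 x 2500 ints, roughly 25 MB, and building it iteratively avoids any stack-overflow risk. A C# sketch scoring pitch only (the weights are illustrative):

        using System;

        static class NoteAligner
        {
            const int Match = 2, Mismatch = -1, Gap = -2;   // illustrative weights

            // Needleman-Wunsch global alignment score of two pitch sequences.
            // A traceback over the same matrix recovers which notes were
            // missed (gap in played), extra (gap in reference), or wrong (mismatch).
            public static int Align(int[] reference, int[] played)
            {
                int n = reference.Length, m = played.Length;
                var score = new int[n + 1, m + 1];
                for (int i = 1; i <= n; i++) score[i, 0] = i * Gap;
                for (int j = 1; j <= m; j++) score[0, j] = j * Gap;
                for (int i = 1; i <= n; i++)
                    for (int j = 1; j <= m; j++)
                    {
                        int diag = score[i - 1, j - 1]
                                 + (reference[i - 1] == played[j - 1] ? Match : Mismatch);
                        int up   = score[i - 1, j] + Gap;   // reference note missed
                        int left = score[i, j - 1] + Gap;   // extra played note
                        score[i, j] = Math.Max(diag, Math.Max(up, left));
                    }
                return score[n, m];
            }
        }

    Extending the match test to weigh duration and rhythm alongside pitch is a matter of replacing the equality check with a graded similarity score.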

    Read the article

  • Compact data structure for storing a large set of integral values

    - by Odrade
    I'm working on an application that needs to pass around large sets of Int32 values. The sets are expected to contain ~1,000,000-50,000,000 items, where each item is a database key in the range 0-50,000,000. I expect the distribution of ids in any given set to be effectively random over this range. The operations I need on the set are dirt simple:

    - Add a new value
    - Iterate over all of the values

    There is a serious concern about the memory usage of these sets, so I'm looking for a data structure that can store the ids more efficiently than a simple List<int> or HashSet<int>. I've looked at BitArray, but that can be wasteful depending on how sparse the ids are. I've also considered a bitwise trie, but I'm unsure how to calculate the space efficiency of that solution for the expected data. A Bloom filter would be great, if only I could tolerate the false positives. I would appreciate any suggestions of data structures suitable for this purpose. I'm interested in both out-of-the-box and custom solutions. EDIT: To answer your questions:

    - No, the items don't need to be sorted
    - By "pass around" I mean both pass between methods and serialize and send over the wire. I clearly should have mentioned this.
    - There could be a decent number of these sets in memory at once (~100)
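    One data point on the BitArray concern: a bitmap over the full key range 0-50,000,000 costs a fixed ~6.25 MB per set, which already beats a List<int> (4 bytes per item) once the set exceeds about 1.5 million items; below that density, a sorted array with delta-plus-varint encoding tends to win. A C# sketch of the bitmap with reasonably fast iteration:

        using System.Collections.Generic;

        class IdBitmap
        {
            readonly ulong[] bits;   // one bit per possible id; ~6.25 MB for ids up to 50,000,000

            public IdBitmap(int maxId) { bits = new ulong[maxId / 64 + 1]; }

            public void Add(int id) { bits[id >> 6] |= 1UL << (id & 63); }

            public IEnumerable<int> Values()
            {
                for (int w = 0; w < bits.Length; w++)
                {
                    if (bits[w] == 0) continue;          // skip empty words quickly
                    for (int b = 0; b < 64; b++)
                        if ((bits[w] & (1UL << b)) != 0)
                            yield return (w << 6) | b;
                }
            }
        }

    The fixed size is also convenient for the serialize-and-ship requirement, since the wire format is just the raw words.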

    Read the article

  • how to elegantly duplicate a graph (neural network)

    - by macias
    I have a graph (network) which consists of layers, which contain nodes (neurons). I would like to write a procedure to duplicate the entire graph in the most elegant way possible, i.e. with minimal or no overhead added to the structure of the node or layer. Or, in other words: the procedure could be complex, but the complexity should not "leak" into the structures. They should not be complex just because they are copyable. I wrote the code in C#; so far it looks like this:

    - neuron has an additional field, copy_of, which is a pointer to the neuron it was copied from; this is my additional overhead
    - neuron has a parameterless method Clone()
    - neuron has a method Reconnect(), which exchanges a connection from a "source" neuron (parameter) to a "target" neuron (parameter)
    - layer has a parameterless method Clone(), which simply calls Clone() for all neurons
    - network has a parameterless method Clone(), which calls Clone() for every layer, then iterates over all neurons, creates the neuron=copy_of mappings, and calls Reconnect() to exchange all the "wiring"

    I hope my approach is clear. The question is: is there a more elegant method? I particularly don't like keeping an extra pointer in the neuron class just in case it gets copied! I would like to gather the data in one point (the network's Clone) and then dispose of it completely (the Clone method cannot have an argument, though).
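    A common way to drop the copy_of field entirely: make the old-to-new mapping a local Dictionary inside the network's Clone, clone all neurons first, then rewire the copies by looking each original connection up in the map. The bookkeeping lives only for the duration of the call. A C# sketch under assumed Neuron/Layer shapes:

        using System.Collections.Generic;
        using System.Linq;

        class Neuron
        {
            public List<Neuron> Inputs = new List<Neuron>();
            public Neuron ShallowClone() { return new Neuron(); } // copy weights etc. here
        }

        class Network
        {
            public List<List<Neuron>> Layers = new List<List<Neuron>>();

            public Network Clone()
            {
                // Local old -> new map; no bookkeeping stored on the neurons themselves.
                var map = new Dictionary<Neuron, Neuron>();
                var copy = new Network();
                foreach (var layer in Layers)
                    copy.Layers.Add(layer.Select(n => map[n] = n.ShallowClone()).ToList());
                // Second pass: rewire each copied neuron through the map.
                foreach (var layer in Layers)
                    foreach (var n in layer)
                        map[n].Inputs.AddRange(n.Inputs.Select(src => map[src]));
                return copy;
            }
        }

    This satisfies the stated constraints: Clone takes no argument, and the map is garbage once the method returns.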

    Read the article

  • hi question about mathematics

    - by davit-datuashvili
    Hi, I have one question. I know the site mathoverflow.com and have posted a question there, but unfortunately no one gave me an answer. If I post it here, can anybody help me? It is not homework; I know somebody will say "it is homework, what have you tried", but that is not so. I don't know how to solve it. If it is possible, I will post it here, OK?

    Read the article

  • make tree in scheme

    - by ???
    (define (entry tree) (car tree))
    (define (left-branch tree) (cadr tree))
    (define (right-branch tree) (caddr tree))
    (define (make-tree entry left right) (list entry left right))

    (define (mktree order items_list)
      (cond ((= (length items_list) 1)
             (make-tree (car items_list) '() '()))
            (else
             (insert2 order (car items_list) (mktree order (cdr items_list))))))

    (define (insert2 order x t)
      (cond ((null? t) (make-tree x '() '()))
            ((order x (entry t))
             (make-tree (entry t) (insert2 order x (left-branch t)) (right-branch t)))
            ((order (entry t) x)
             (make-tree (entry t) (left-branch t) (insert2 order x (right-branch t))))
            (else t)))

    The result is:

        (mktree (lambda (x y) (< x y)) (list 7 3 5 1 9 11))
        (11 (9 (1 () (5 (3 () ()) (7 () ()))) ()) ())

    But I'm trying to get:

        (7 (3 (1 () ()) (5 () ())) (9 () (11 () ())))

    Where is the problem?

    Read the article

  • Travelling Salesman Problem Constraint Representation

    - by alex25
    Hey! I read a couple of articles and some sample code about how to solve the TSP with genetic algorithms, ant colony optimization, etc. But everything I found didn't include time (window) constraints, e.g. "I have to be at customer x before 12am", and assumed symmetry. Can somebody point me in the direction of some sample code or articles that explain how I can add constraints to the TSP and how I can represent those in code? Thanks!
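    One standard representation (a sketch, not from any specific paper): keep the tour encoding unchanged and fold the windows into the objective as penalties; simulate the tour, wait when arriving early, and charge a weighted violation when arriving late. Asymmetric travel times drop in unchanged, since the cost matrix need not be symmetric. In C#, with all names illustrative:

        using System;

        class Customer
        {
            public double EarliestArrival, LatestArrival;  // the time window
            public double ServiceTime;
        }

        static class TourCost
        {
            const double PenaltyWeight = 1000.0;  // illustrative; tune to dominate travel cost

            // travel[i, j] = travel time from customer i to j (need not be symmetric);
            // tour is a permutation of customer indices.
            public static double Evaluate(int[] tour, Customer[] c, double[,] travel)
            {
                double time = 0, cost = 0;
                for (int k = 1; k < tour.Length; k++)
                {
                    cost += travel[tour[k - 1], tour[k]];
                    time += travel[tour[k - 1], tour[k]];
                    var cust = c[tour[k]];
                    if (time < cust.EarliestArrival)
                        time = cust.EarliestArrival;   // wait until the window opens
                    else if (time > cust.LatestArrival)
                        cost += PenaltyWeight * (time - cust.LatestArrival); // lateness penalty
                    time += cust.ServiceTime;
                }
                return cost;   // GA fitness can then be 1 / cost, or rank-based
            }
        }

    With the penalty weight large enough, the GA or ACO machinery needs no other changes: infeasible tours simply score badly.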

    Read the article

  • Five unique, random numbers from a subset

    - by tau
    I know similar questions come up a lot and there's probably no definitive answer, but I want to generate five unique random numbers from a subset of numbers that is potentially huge (maybe 0-20, or 0-1,000,000). The only catch is that I don't want to have to run while loops or fill an array. My current method is to simply generate five random numbers from the subset minus its last five numbers. If any of the numbers match each other, then they go to their respective places at the end of the subset. So if the fourth number matches any other number, it will be set to the 4th-from-last number. Does anyone have a method that is "random enough" and doesn't involve costly loops or arrays? Please keep in mind this is a curiosity, not some mission-critical problem. I would appreciate it if everyone didn't post "why are you having this problem?" answers. I am just looking for ideas. Thanks a lot!
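    For the record, Floyd's sampling algorithm draws exactly five random numbers and never touches an array of the full range, though it does keep a five-entry set, so whether it honors the spirit of "no loops or arrays" is a judgment call. A C# sketch:

        using System;
        using System.Collections.Generic;

        static class FloydSample
        {
            // Returns k distinct integers in [0, n), using exactly k random draws.
            public static HashSet<int> Sample(Random rng, int n, int k)
            {
                var chosen = new HashSet<int>();
                for (int j = n - k; j < n; j++)
                {
                    int t = rng.Next(j + 1);           // uniform in [0, j]
                    if (!chosen.Add(t)) chosen.Add(j); // t already taken: take j instead
                }
                return chosen;
            }
        }

        // Example: FloydSample.Sample(new Random(), 1000000, 5)

    Unlike the end-of-subset fallback described above, this produces every 5-element subset with equal probability.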

    Read the article

  • How to make disconnected closed curves connected by adding a shortest path using MATLAB?

    - by user198729
    bwlabel can be used to get the disconnected objects in an image: [L Ne] = bwlabel(image); I want to make the objects connected (though my target is only the contours, i.e. the closed curves, of these objects) by adding a shortest path where necessary. How do I approach this? UPDATE: Or how do I dilate the closed curves so that they become connected? How do I calculate the shortest path between two disconnected closed curves?

    Read the article

  • Echo mysql results in a loop?

    - by Roy D. Porter
    I am using turn.js to make a book. Every div within the 'deathnote' div becomes a new page.

        <div id="deathnote"> //starts book
            <div style="background-image:url(images/coverpage.jpg);"></div> //creates new page
            <div style="background-image:url(images/paper.jpg);"></div> //creates new page
            <div style="background-image:url(images/paper.jpg);"></div> //creates new page
        </div> //ends book

    What I am doing is trying to get 3 'content' (content being a name and cause of death) divs onto 1 page, and then generate a new page. So here is what I want:

        <div id="deathnote"> //starts book
            <div style="background-image:url(images/coverpage.jpg);"></div> //creates new page
            <div style="background-image:url(images/paper.jpg);"></div> //creates new page
            <div style="background-image:url(images/paper.jpg);"> //creates new page but leaves it open
                <div> CONTENT </div>
                <div> CONTENT </div>
                <div> CONTENT </div>
            </div> //ends the page
        </div> //ends book

    Seems simple enough; however, the content is data from a MySQL DB, so I have to echo it in using PHP. Here is what I have so far:

        <div id="deathnote">
            <div style="background-image:url(images/coverpage.jpg);"></div>
            <div style="background-image:url(images/paper.jpg);"></div>
            <div style="background-image:url(images/paper.jpg);"></div>
            <div style="background-image:url(images/paper.jpg);"></div>
            <div style="background-image:url(images/paper.jpg);"></div>
            <div style="background-image:url(images/paper.jpg);"></div>
            <?php
            $pagecount = 0;
            $db = new mysqli('localhost', 'username', 'passw', 'DB');
            if($db->connect_errno > 0){
                die('Unable to connect to database [' . $db->connect_error . ']');
            }
            $sql = <<<SQL
            SELECT * FROM `TABLE`
            SQL;
            if(!$result = $db->query($sql)){
                die('There was an error running the query [' . $db->error . ']');
            }
            //IGNORE ALL OF THE GARBAGE ABOVE. IT IS SIMPLE CONNECTING SCRIPT THAT I KNOW WORKS
            //THE METHOD I AM HAVING TROUBLE WITH IS BELOW
            $pagecount = 0;
            while($row = $result->fetch_assoc()){ //GETS THE VALUE (and makes sure it isn't nothing)
                echo '<div style="background-image:url(images/paper.jpg);">'; //THIS OPENS A NEW PAGE
                while ($pagecount !== 3) { //KEEPS COUNT OF HOW MANY CONTENT DIVS ARE ON THE PAGE
                    while($row = $result->fetch_assoc()){
                        //START A CONTENT DIV
                        echo '<div class="content"><div class="name">' . $row['victim'] . '</div><div class="cod">' . $row['cod'] . '</div></div>';
                        //END A CONTENT DIV
                        $pagecount++; //UP THE PAGE COUNT
                    }
                }
                $pagecount = 0; //PUT IT BACK TO 0
                echo '</div>'; //END PAGE
            }
            $db->close();
            ?>
            <div style="background-image:url(images/backpage.jpg);"></div> //BACK PAGE
        </div>

    At the moment I seem to be causing an infinite loop, so the page won't load. The problem resides within the while loops. Any help is greatly appreciated. Thanks in advance guys. :)

    Read the article
