Search Results

Search found 5298 results on 212 pages for 'marching cubes algorithm'.


  • Bitwise Interval Arithmetic

    - by KennyTM
    I've recently read an interesting thread on the D newsgroup, which basically asks: given two signed integers a ∈ [amin, amax], b ∈ [bmin, bmax], what is the tightest interval of a | b? I'm wondering whether interval arithmetic can be applied to general bitwise operators (assuming infinite bits). Bitwise-NOT and shifts are trivial since they just correspond to −1 − x and 2^n·x. But bitwise-AND/OR are a lot trickier, due to the mix of bitwise and arithmetic properties. Is there a polynomial-time algorithm to compute the intervals of bitwise-AND/OR? Note: assume all bitwise operations run in linear time (in the number of bits), and testing/setting a bit is constant time. The brute-force algorithm runs in exponential time. Because ~(a | b) = ~a & ~b and a ^ b = (a | b) & ~(a & b), solving the bitwise-AND and -NOT problem means bitwise-OR and -XOR are done as well. Although the content of that thread suggests min{a | b} = max(amin, bmin), that is not the tightest bound. Just consider [2, 3] | [8, 9] = [10, 11].
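
    A brute-force check (exponential in the interval widths, as noted above) is still handy for verifying candidate bounds on small ranges; a rough Python sketch, using the [2, 3] | [8, 9] example from the question:

        def or_interval_bruteforce(amin, amax, bmin, bmax):
            # Enumerate every pair; only practical for narrow intervals.
            values = [a | b for a in range(amin, amax + 1)
                            for b in range(bmin, bmax + 1)]
            return min(values), max(values)

        # Reproduces the counterexample above: [2, 3] | [8, 9] = [10, 11],
        # showing max(amin, bmin) = 8 is not a tight lower bound.
        print(or_interval_bruteforce(2, 3, 8, 9))   # (10, 11)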

    Read the article

  • C++ design question on traversing binary trees

    - by user231536
    I have a binary tree T which I would like to copy to another tree. Suppose I have a visit method that gets evaluated at every node: struct visit { virtual void operator() (node* n)=0; }; and I have a visitor algorithm: void visitor(node* t, visit& v) { /* do a preorder traversal using stack or recursion */ if (!t) return; v(t); visitor(t->left, v); visitor(t->right, v); } I have 2 questions: I settled on the functor-based approach because I see that the Boost Graph Library does this (vertex visitors). Also, I tend to repeat the same code to traverse the tree and do different things at each node. Is this a good design to get rid of duplicated code? What other alternative designs are there? How do I use this to create a new binary tree from an existing one? I can keep a stack on the visit functor if I want, but it gets tied to the algorithm in visitor. How would I incorporate postorder traversals here? Another functor class?

    Read the article

  • Bubble Breaker Game Solver better than greedy?

    - by Gregory
    For a mental exercise I decided to try and solve the bubble breaker game found on many cell phones, as well as an example here: Bubble Break Game. The random (N,M,C) board consists of N rows x M columns with C colors. The goal is to get the highest score by picking the sequence of bubble groups that ultimately leads to the highest score. A bubble group is 2 or more bubbles of the same color that are adjacent to each other in either the x or y direction; diagonals do not count. When a group is picked, the bubbles disappear, any holes are filled with bubbles from above first, i.e. shift down, then any holes are filled by shifting right. A bubble group score = n * (n - 1), where n is the number of bubbles in the bubble group. The first algorithm is a simple exhaustive recursive algorithm which explores going through the board row by row and column by column picking bubble groups. Once a bubble group is picked, we create a new board and try to solve that board, recursively descending down. Some of the ideas I am using include normalized memoization: once a board is solved we store the board and the best score in a memoization table. I created a prototype in Python which shows a (2,15,5) board takes 8859 boards to solve in about 3 seconds. A (3,15,5) board takes 12,384,726 boards in 50 minutes on a server. The solver rate is ~3k-4k boards/sec and gradually decreases as the memoization search takes longer. The memoization table grows to 5,692,482 boards, and is hit 6,713,566 times. What other approaches could yield high scores besides the exhaustive search? I don't see any obvious way to divide and conquer, but trending towards larger and larger bubble groups seems to be one approach. Thanks to David Locke for posting the paper link which talks about a window solver which uses a constant-depth lookahead heuristic.
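
    For reference, a rough Python sketch of the two board primitives such a solver needs, finding a same-color group by flood fill and scoring it with n * (n - 1); the column-collapsing step and the search itself are left out:

        def find_group(board, row, col):
            # board: list of rows; board[r][c] is a color, or None for an empty cell.
            color = board[row][col]
            if color is None:
                return set()
            group, stack = set(), [(row, col)]
            while stack:
                r, c = stack.pop()
                if (r, c) in group:
                    continue
                if 0 <= r < len(board) and 0 <= c < len(board[0]) and board[r][c] == color:
                    group.add((r, c))
                    stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
            return group if len(group) >= 2 else set()   # a group needs 2+ adjacent bubbles

        def group_score(group):
            n = len(group)
            return n * (n - 1)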

    Read the article

  • intelligent path truncation/ellipsis for display

    - by peterchen
    I am looking for an existing path truncation algorithm (similar to what the Win32 static control does with SS_PATHELLIPSIS) for a set of paths that should focus on the distinct elements. For example, if my paths are like this: Unit with X/Test 3V/ Unit with X/Test 4V/ Unit with X/Test 5V/ Unit without X/Test 3V/ Unit without X/Test 6V/ Unit without X/2nd Test 6V/ When not enough display space is available, they should be truncated to something like this: ...with X/...3V/ ...with X/...4V/ ...with X/...5V/ ...without X/...3V/ ...without X/...6V/ ...without X/2nd ...6V/ (assuming that an ellipsis generally is shorter than three letters). This is just an example of a rather simple, ideal case (e.g. they'd all end up at different lengths now, and I wouldn't know how to create a good suggestion when a path "Thingie/Long Test/" is added to the pool). There is no given structure to the path elements, they are assigned by the user, but often items will have similar segments. It should work for proportional fonts, so the algorithm should take a measure function (and not call it too heavily) or generate a suggestion list. Data-wise, a typical use case would contain 2..4 path segments and 20 elements per segment. I am looking for previous attempts in that direction, and whether that's solvable with a sensible amount of code or dependencies.

    Read the article

  • reconstructing a tree from its preorder and postorder lists.

    - by NomeN
    Consider the situation where you have two lists of nodes of which all you know is that one is a representation of a preorder traversal of some tree and the other a representation of a postorder traversal of the same tree. I believe it is possible to reconstruct the tree exactly from these two lists, and I think I have an algorithm to do it, but have not proven it. As this will be part of a masters project I need to be absolutely certain that it is possible and correct (mathematically proven). However it will not be the focus of the project, so I was wondering if there is a source out there (i.e. a paper or book) I could quote for the proof. (Maybe in TAOCP? Anybody know the section, possibly?) In short, I need a proven algorithm in a quotable resource that reconstructs a tree from its pre- and postorder traversals. Note: the tree in question will probably not be binary, or balanced, or anything that would make it too easy. Note 2: using only the preorder or the postorder list would be even better, but I do not think it is possible. Note 3: a node can have any number of children. Note 4: I only care about the order of siblings. Left or right does not matter when there is only one child.
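
    For what it's worth, a rough Python sketch of the usual reconstruction idea (a sketch, not the requested proof): pre[0] is the root, each following preorder element starts a child subtree, and that child's extent can be read off from the position of its label in the postorder list. Distinct node labels are assumed.

        def build(pre, post):
            # pre, post: preorder and postorder label lists of the same general tree.
            if not pre:
                return None
            root, children = pre[0], []
            i, j = 1, 0          # next preorder position / start of that child in postorder
            while i < len(pre):
                size = post.index(pre[i], j) - j + 1   # a child's root is last in its own postorder
                children.append(build(pre[i:i + size], post[j:j + size]))
                i += size
                j += size
            return (root, children)

        # Root a with children b and c, where c has a single child d:
        print(build(['a', 'b', 'c', 'd'], ['b', 'd', 'c', 'a']))
        # ('a', [('b', []), ('c', [('d', [])])])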

    Read the article

  • looking for a license key algorithm.

    - by giulio
    There are a lot of questions relating to license keys asked on Stack Overflow, but they don't answer this question. Can anyone provide a simple license key algorithm that is technology independent and doesn't require a diploma in mathematics to understand? The license key algorithm is similar to public key encryption. I just need something simple that can be implemented on any platform, .NET/Java, and uses simple data like characters. Written as pseudo code is perfect. So if a person presents a string, a complementary string can be generated that is the authorisation code. Below is a common scenario that it would be used for. Customer downloads s/w which generates a unique key upon initial startup/installation. S/w runs during trial period. At end of trial period an authorisation key is required. Customer goes to designated web-site, enters their code and gets the authorisation code to enable the s/w, after paying :) Don't be afraid to describe your answer as though you're talking to a 5 yr old, as I am not a mathematician.
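
    For illustration only, a rough Python sketch of the general shape being asked for: the vendor keeps a secret and the authorisation code is a keyed hash (HMAC) of the string the customer presents; any platform with an HMAC/SHA library can reproduce it. The secret and key format here are made up. Note that the verifying secret has to ship inside the client, so unlike real public-key licensing this only keeps honest users honest.

        import hmac, hashlib

        SECRET = b"vendor-only-secret"            # hypothetical; kept out of public docs

        def authorisation_code(installation_key: str) -> str:
            # The code the web-site hands back after payment.
            mac = hmac.new(SECRET, installation_key.encode("utf-8"), hashlib.sha256)
            return mac.hexdigest()[:16].upper()   # truncated so it is easier to type

        def is_valid(installation_key: str, code: str) -> bool:
            # The check the software performs at the end of the trial.
            return hmac.compare_digest(authorisation_code(installation_key), code)

        key = "INSTALL-1234-ABCD"                 # generated on first startup
        print(authorisation_code(key))            # what the customer would be given
        print(is_valid(key, authorisation_code(key)))   # True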

    Read the article

  • Small-o(n^2) implementation of Polynomial Multiplication

    - by AlanTuring
    I'm having a little trouble with this problem that is listed at the back of my book. I'm currently in the middle of test prep but I can't seem to locate anything regarding this in the book. Anyone got an idea? A real polynomial of degree n is a function of the form f(x) = a_n·x^n + … + a_1·x + a_0, where a_n, …, a_1, a_0 are real numbers. In computational situations, such a polynomial is represented by a sequence of its coefficients (a_0, a_1, …, a_n). Assuming that any two real numbers can be added/multiplied in O(1) time, design an o(n^2)-time algorithm to compute, given two real polynomials f(x) and g(x) both of degree n, the product h(x) = f(x)g(x). Your algorithm should **not** be based on the Fast Fourier Transform (FFT) technique. Please note it needs to be small-o(n^2), which means its complexity must be sub-quadratic. The obvious solution that I keep finding is indeed the FFT, but of course I can't use that. There is another method that I have found called convolution, where if you take polynomial A to be a signal and polynomial B to be a filter, A passed through B yields a shifted signal that has been "smoothed" by A, and the result is A*B. This is supposed to work in O(n log n) time. Of course I am completely unsure of the implementation. If anyone has any ideas of how to achieve a small-o(n^2) implementation please do share, thanks.
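
    One standard non-FFT answer to this kind of exercise is Karatsuba's trick: split each polynomial in half and use three half-size products instead of four, giving O(n^log2(3)), roughly O(n^1.585), which is o(n^2). A rough Python sketch over coefficient lists (lowest-degree coefficient first):

        def padd(a, b):
            n = max(len(a), len(b))
            return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) for i in range(n)]

        def psub(a, b):
            n = max(len(a), len(b))
            return [(a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0) for i in range(n)]

        def karatsuba(a, b):
            # a, b: coefficient lists, a[i] is the coefficient of x^i.
            if len(a) <= 1 or len(b) <= 1:        # schoolbook base case
                out = [0] * max(len(a) + len(b) - 1, 1)
                for i, x in enumerate(a):
                    for j, y in enumerate(b):
                        out[i + j] += x * y
                return out
            m = min(len(a), len(b)) // 2
            a_lo, a_hi, b_lo, b_hi = a[:m], a[m:], b[:m], b[m:]
            lo  = karatsuba(a_lo, b_lo)
            hi  = karatsuba(a_hi, b_hi)
            mid = psub(psub(karatsuba(padd(a_lo, a_hi), padd(b_lo, b_hi)), lo), hi)
            out = [0] * (len(a) + len(b) - 1)
            for i, c in enumerate(lo):  out[i] += c
            for i, c in enumerate(mid): out[i + m] += c
            for i, c in enumerate(hi):  out[i + 2 * m] += c
            return out

        print(karatsuba([1, 2], [3, 4]))   # (1 + 2x)(3 + 4x) -> [3, 10, 8]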

    Read the article

  • Intelligent web features, algorithms (people you may follow, similar to you ...)

    - by hilal
    I have 3 main questions about the algorithms in the intelligent web (web 2.0). Here is the book I'm reading http://www.amazon.com/Algorithms-Intelligent-Web-Haralambos-Marmanis/dp/1933988665 and I want to learn the algorithms in more depth. 1. People you may follow (Twitter): how can one determine the nearest result to my requests? Data mining? Which algorithms? 2. How you're connected feature (LinkedIn): simply put, the algorithm works like this: it draws the path between two nodes, let's say between Me and the other person, C. Me - A, B - A connections - C. It is not brute force or anything like other graph algorithms :) 3. Similar to you (Twitter, Facebook): this algorithm is similar to 1. Does it simply take the max(count) of friends in common (Facebook) or the max(count) of followers in common on Twitter, or do they implement some other algorithm? I think the second part is true, because running the loop dict{count, person} for person in contacts: dict.add(count(common(person))) return dict(max) is a silly thing to do on every page refresh. 4. Did you mean (Google): I know that they may implement it with a phonetic algorithm http://en.wikipedia.org/wiki/Phonetic_algorithm simply Soundex http://en.wikipedia.org/wiki/Soundex and here is Google VP of Engineering and CIO Douglas Merrill speaking http://www.youtube.com/watch?v=syKY8CrHkck#t=22m03s What about the first 3 questions? Any ideas are welcome! Thanks
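
    For question 3, a rough Python sketch of the naive "count what we have in common" scoring described above, with the obvious fix of computing it offline rather than on every page refresh; the following dict is a made-up data shape:

        from collections import Counter

        def similar_to(me, following, top=10):
            # following: {user: set of accounts that user follows}  (hypothetical shape)
            scores = Counter()
            for other, follows in following.items():
                if other != me:
                    scores[other] = len(following[me] & follows)   # followees in common
            return [user for user, _ in scores.most_common(top)]

        following = {
            "me":    {"a", "b", "c"},
            "user1": {"a", "b", "x"},
            "user2": {"x", "y"},
        }
        print(similar_to("me", following))   # user1 first: two followees in common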

    Read the article

  • Calculate total batch upload transfer percent with limited information

    - by GONeale
    Hi there, I have a system which uploads to a server file by file and displays a progress bar for the upload of the current file, then underneath it a second progress bar which I want to indicate the percentage of the batch complete across all files queued to upload. The information and algorithms I can work out are: Bytes Sent / Total Bytes To Send = first progress bar (e.g. 512KB of 1024KB (50%)). That works fine. However, supposing I have two other files left to upload, but both file sizes are unknown (as the size is only known once the file is about to commence upload, at which point it is compressed and the file size is determined), how would I go about making my third progress bar? I didn't think this would be possible, as I would need "Total Bytes Sent" / "Total Bytes To Send" to replicate the logic of my first progress bar on a larger scale. However, I did get a version working: "current file number we are on" / "total number of files to send" returns the percentage through the batch, but obviously it will not update incrementally and it's pretty crude. So on further thinking I thought that if I could incorporate the current file % into this algorithm I could perhaps get the correct progress percentage of the batch's current point. I tried this algorithm, but alas to no avail (sorry to any math heads, it's probably quite apparent why it won't work): ("current file number we are on" / "total number of files to send") * ("Bytes Sent" / "Total Bytes To Send"). For example, I thought I was on the right track when testing with this example: 2/3 (2nd of 3 files) = 66% (this is right so far), but then when I added * 0.20 (for indicating only 20% of the 2nd file has uploaded) we went back to 13%. What I need is only a little over 33%! I did try the inverse at 0.80 and a (2/3 * (2/3 * 0.2)). Can this be done without knowing all the bytes in the batch to upload? Please help! Thank you!
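
    One common workaround when the remaining file sizes are unknown is to weight every file equally: batch % = (files already completed + fraction of the current file) / total files. A small Python sketch of that formula; with the example above (2nd of 3 files, 20% uploaded) it gives (1 + 0.2) / 3 = 40%, and it still moves smoothly as each file progresses:

        def batch_percent(files_completed, total_files, bytes_sent, total_bytes):
            # Fraction of the file currently uploading (0.0 until its size is known).
            current = bytes_sent / total_bytes if total_bytes else 0.0
            return 100.0 * (files_completed + current) / total_files

        print(batch_percent(1, 3, 512, 1024))   # halfway through the 2nd of 3 files -> 50.0
        print(batch_percent(1, 3, 205, 1024))   # ~20% through the 2nd of 3 files -> ~40.0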

    Read the article

  • Spaced repetition (SRS) for learning

    - by Fredrik Johansson
    A client has asked me to add a simple spaced repetition algorithm (SRS) for an online-based learning site. But before throwing myself into it, I'd like to discuss it with the community. Basically the site asks the user a bunch of questions (by automatically selecting, say, 10 out of 100 total questions from a database), and the user gives either a correct or incorrect answer. The user's results are then stored in a database, for instance:

    userid  questionid  correctlyanswered  dateanswered
    1       123         0 (no)             2010-01-01 10:00
    1       124         1 (yes)            2010-01-01 11:00
    1       125         1 (yes)            2010-01-01 12:00

    Now, to maximize a user's ability to learn all the answers, I should be able to apply an SRS algorithm so that a user, the next time he takes the quiz, receives questions answered incorrectly more often than questions answered correctly. Also, questions that were previously answered incorrectly but have recently been answered correctly often should occur less often. Has anyone implemented something like this before? Any tips or suggestions? These are the best links I've found: http://en.wikipedia.org/wiki/Spaced_repetition http://www.mnemosyne-proj.org/principles.php http://www.supermemo.com/english/ol/sm2.htm
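
    A minimal sketch of the weighting described above (not SM-2 itself): each question's weight grows with wrong answers and shrinks with correct ones, so recent results dominate, and the next quiz samples questions with probability proportional to that weight. Python, with column names taken from the table above:

        import random
        from collections import defaultdict

        def pick_questions(history, all_question_ids, k=10):
            # history: list of (questionid, correctlyanswered) rows, oldest first.
            weight = defaultdict(lambda: 1.0)      # unseen questions start at a baseline weight
            for qid, correct in history:
                weight[qid] *= 0.5 if correct else 2.0
            ids = list(all_question_ids)
            # Sampling with replacement for brevity; a real picker would deduplicate.
            return random.choices(ids, weights=[weight[q] for q in ids], k=k)

        history = [(123, 0), (124, 1), (125, 1)]   # the example rows above
        print(pick_questions(history, range(100, 200), k=10))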

    Read the article

  • Code golf: combining multiple sorted lists into a single sorted list

    - by Alabaster Codify
    Implement an algorithm to merge an arbitrary number of sorted lists into one sorted list. The aim is to create the smallest working programme, in whatever language you like. For example: input: ((1, 4, 7), (2, 5, 8), (3, 6, 9)) output: (1, 2, 3, 4, 5, 6, 7, 8, 9) input: ((1, 10), (), (2, 5, 6, 7)) output: (1, 2, 5, 6, 7, 10) Note: solutions which concatenate the input lists and then use a language-provided sort function are not in keeping with the spirit of golf, and will not be accepted: sorted(sum(lists,[])) # cheating: out of bounds! Apart from anything else, your algorithm should be (but doesn't have to be) a lot faster! Clearly state the language, any foibles and the character count. Only include meaningful characters in the count, but feel free to add whitespace to the code for artistic / readability purposes. To keep things tidy, suggest improvements in comments or by editing answers where appropriate, rather than creating a new answer for each "revision". EDIT: if I were submitting this question again, I would expand the "no language-provided sort" rule to "don't concatenate all the lists then sort the result". Existing entries which do concatenate-then-sort are actually very interesting and compact, so I won't retroactively introduce a rule they break, but feel free to work to the more restrictive spec in new submissions. Inspired by http://stackoverflow.com/questions/464342/combining-two-sorted-lists-in-python
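
    Not a golf entry, just an illustration of the intended k-way merge: Python's standard library exposes it directly, merging the inputs lazily in O(total log k) instead of concatenating and re-sorting:

        from heapq import merge

        print(list(merge(*((1, 4, 7), (2, 5, 8), (3, 6, 9)))))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]
        print(list(merge(*((1, 10), (), (2, 5, 6, 7)))))         # [1, 2, 5, 6, 7, 10]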

    Read the article

  • Using generics to make an algorithm work on lists of "something" instead of only Strings

    - by Binary255
    Hi, I have a small algorithm which swaps pairs of adjacent characters in a String: class Program { static void Main(string[] args) { String pairSwitchedStr = pairSwitch("some short sentence"); Console.WriteLine(pairSwitchedStr); Console.ReadKey(); } private static String pairSwitch(String str) { StringBuilder pairSwitchedStringBuilder = new StringBuilder(); for (int position = 0; position + 1 < str.Length; position += 2) { pairSwitchedStringBuilder.Append((char)str[position + 1]); pairSwitchedStringBuilder.Append((char)str[position]); } return pairSwitchedStringBuilder.ToString(); } } I would like to make it as generic as possible, possibly using generics. What I'd like to have is something which works with anything that is built up from a list of instances, including strings, arrays and linked lists. I suspect that the solution must use generics, as the algorithm is working on a list of instances of T (where T is ... something). The version of C# isn't of interest; I guess the solution will be nicer if features from C# version 2.0 are used.

    Read the article

  • categorize a set of phrases into a set of similar phrases

    - by Dingo
    I have a few apps that generate textual tracing information (logs) to log files. The tracing information is the typical printf() style - i.e. there are a lot of log entries that are similar (same format argument to printf), but differ where the format string had parameters. What would be an algorithm (URL, books, articles, ...) that will allow me to analyze the log entries and categorize them into several bins/containers, where each bin has one associated format? Essentially, what I would like is to transform the raw log entries into (formatA, arg0 ... argN) instances, where formatA is shared among many log entries. The formatA does not have to be the exact format used to generate the entry (even more so if this makes the algorithm simpler). Most of the literature and web info I found deals with exact matching, max substring matching, or k-difference matching (with k known/fixed ahead of time). Also, it focuses on matching a pair of (long) strings, or a single bin output (one match among all input). My case is somewhat different, since I have to discover what represents a (good-enough) match (generally a sequence of discontinuous strings), and then categorize each input entry into one of the discovered matches. Lastly, I'm not looking for a perfect algorithm, but something simple/easy to maintain. Thanks!
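
    One simple, easy-to-maintain baseline (a sketch, not drawn from the literature mentioned): mask the obviously variable tokens, numbers, hex values, quoted strings, and use the masked string as the bin key, so entries produced by the same printf format usually collapse onto the same template. Python:

        import re
        from collections import defaultdict

        MASKS = [
            (re.compile(r'"[^"]*"'), '"<STR>"'),          # quoted parameters
            (re.compile(r'\b0x[0-9a-fA-F]+\b'), '<HEX>'),
            (re.compile(r'\b\d+\b'), '<NUM>'),
        ]

        def template_of(entry):
            for pattern, token in MASKS:
                entry = pattern.sub(token, entry)
            return entry

        def bin_entries(entries):
            bins = defaultdict(list)    # template -> raw entries (args can be re-extracted per bin)
            for e in entries:
                bins[template_of(e)].append(e)
            return bins

        logs = ['opened file "a.txt" in 12 ms', 'opened file "b.log" in 7 ms', 'retry 3 of 5']
        for template, members in bin_entries(logs).items():
            print(len(members), template)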

    Read the article

  • What does O(log n) mean exactly?

    - by Andreas Grech
    I am currently learning about Big O Notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm proportionally... and the same goes for, for example, quadratic time O(n^2), etc. Even algorithms, such as permutation generators, with O(n!) times, grow by factorials. For example, the following function is O(n) because the algorithm grows in proportion to its input n: f(int n) { int i; for (i = 0; i < n; ++i) printf("%d", i); } Similarly, if there were a nested loop, the time would be O(n^2). But what exactly is O(log n)? For example, what does it mean to say that the height of a complete binary tree is O(log n)? I do know (maybe not in great detail) what a logarithm is, in the sense that log10 100 = 2, but I cannot understand how to identify a function with a logarithmic time.
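
    A standard illustration of logarithmic time: binary search is O(log n) because every comparison halves the remaining input, so doubling n costs only one extra step; the same halving argument is why a complete binary tree with n nodes has height O(log n). A small Python sketch:

        def binary_search(sorted_list, target):
            lo, hi = 0, len(sorted_list) - 1
            steps = 0
            while lo <= hi:
                steps += 1                       # each pass discards half of what is left
                mid = (lo + hi) // 2
                if sorted_list[mid] == target:
                    return mid, steps
                if sorted_list[mid] < target:
                    lo = mid + 1
                else:
                    hi = mid - 1
            return -1, steps

        # A million elements, yet only about 20 comparisons (log2 of 1,000,000 is ~20).
        print(binary_search(list(range(1_000_000)), 765_432))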

    Read the article

  • Need simple advice for graph solving problem

    - by sap
    Hi there, a colleague of mine proposed an exercise to me from an online judge website, which is basically a graph problem about an evacuation plan for a small town. I don't need the answer (nor do I want it), I just need advice on the best approach to solving it, since I'm kind of new to this kind of problem. The problem consists of town buildings with workers, and fallout shelters in case of a nuclear attack. I have to build an algorithm that will assign the workers of each building to one or more fallout shelters, but in a way that some shelters won't become too overcrowded while others remain almost empty (otherwise I would just make the workers go to the nearest one). The problem is here: http://acm.timus.ru/problem.aspx?space=1&num=1237 In case it's offline, here's the Google cached version of it: http://webcache.googleusercontent.com/search?q=cache:t2EPCzezs7AJ:acm.timus.ru/problem.aspx%3Fspace%3D1%26num%3D1237+vladimir+kotov+evacuation+problem&cd=1&hl=pt-PT&ct=clnk&gl=pt What I've done so far is, for each building, get the nearest shelter and move a number of workers from that building equal to the shelter capacity, then move to the next building. But sometimes the number of workers is greater than the shelter capacity; in that case, after I iterate through every building, I'll just iterate again applying the same algorithm until every building has 0 workers in it. Problem is, this is hardly the best way to solve it. Any tip is welcome; please don't feel like I'm asking for the answer, I just want advice pointing me in the right direction. Thanks in advance.
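
    One way to frame it (a sketch of the modelling idea, not a solution to the judge problem): treat it as minimum-cost flow, where buildings supply workers, shelters have capacities, and edge costs reflect distance, so shelters cannot overcrowd but near ones are still preferred. A rough sketch using the networkx library, with a made-up distance function:

        import networkx as nx

        def assign_workers(buildings, shelters, dist):
            # buildings: {name: workers}, shelters: {name: capacity},
            # dist(b, s): per-worker travel cost (hypothetical helper).
            G = nx.DiGraph()
            for b, workers in buildings.items():
                G.add_node(b, demand=-workers)                 # buildings supply workers
            G.add_node("sink", demand=sum(buildings.values())) # everyone must end up sheltered
            for s, cap in shelters.items():
                G.add_edge(s, "sink", capacity=cap, weight=0)  # shelter capacity limit
                for b in buildings:
                    G.add_edge(b, s, capacity=cap, weight=dist(b, s))
            flow = nx.min_cost_flow(G)
            return {b: {s: f for s, f in flow[b].items() if f} for b in buildings}

        buildings = {"B1": 10, "B2": 5}
        shelters = {"S1": 8, "S2": 12}
        print(assign_workers(buildings, shelters, lambda b, s: 1 if b[1] == s[1] else 3))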

    Read the article

  • Practical rules for premature optimization

    - by DougW
    It seems that the phrase "premature optimization" is the buzz-word of the day. For some reason, iPhone programmers in particular seem to think of avoiding premature optimization as a pro-active goal, rather than the natural result of simply avoiding distraction. The problem is, the term is beginning to be applied more and more to cases that are completely inappropriate. For example, I've seen a growing number of people say not to worry about the complexity of an algorithm, because that's premature optimization (e.g. http://stackoverflow.com/questions/2190275/help-sorting-an-nsarray-across-two-properties-with-nssortdescriptor/2191720#2191720). Frankly, I think this is just laziness, and appalling to disciplined computer science. But it has occurred to me that maybe considering the complexity and performance of algorithms is going the way of assembly loop unrolling, and other optimization techniques that are now considered unnecessary. What do you think? Are we at the point now where deciding between an O(n^n) and an O(n!) complexity algorithm is irrelevant? What about O(n) vs O(n*n)? What do you consider "premature optimization"? What practical rules do you use to consciously or unconsciously avoid it? This is a bit vague, but I'm curious to hear other people's opinions on the topic.

    Read the article

  • split line of text

    - by plys
    Hi all, I was wondering if there is an algorithm to split a line into multiple lines, so that the resulting set of lines fits into a square shape rather than a rectangle. Let me give some examples. Input: Hi this is a really long line. Output: Hi this is a really long line Input: a b c d e f Output: a b c d e f Input: This is really such looooooooooooooooooooong line.This is the end. Output: This is really such looooooooooooooooooooong line This is the end. As you can see in the above examples, the input line fits into a wide rectangle, but the output more or less fits into a square shape. Essentially what needs to be done here is to count the number of characters in the line, take the square root of that number, and then put that many characters on each line. But in the above examples, the splitting needs to be done by respecting word boundaries rather than raw character counts. Is there any standard algorithm for this? Any code examples/pointers would be appreciated!
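
    A rough first cut along the lines described: count the characters, aim for a line width near the square root, and let a word-respecting wrapper do the splitting. It will not reproduce the examples above exactly, but it produces a roughly square block. Python:

        import math
        import textwrap

        def squareish(text):
            width = max(1, math.ceil(math.sqrt(len(text))))
            # break_long_words=False keeps whole words; an over-long word such as
            # "looooooooooooooooooooong" simply gets a line of its own.
            return "\n".join(textwrap.wrap(text, width=width, break_long_words=False))

        print(squareish("Hi this is a really long line."))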

    Read the article

  • function objects versus function pointers

    - by kumar_m_kiran
    Hi all, I have two questions related to function objects and function pointers. Question 1: When I read about the different uses of the STL sort algorithm, I see that the third parameter can be a function object; below is an example: class State { public: /* ... */ int population() const; float aveTempF() const; /* ... */ }; struct PopLess : public std::binary_function<State,State,bool> { bool operator ()( const State &a, const State &b ) const { return popLess( a, b ); } }; sort( union, union+50, PopLess() ); Question: now, how does the statement sort(union, union+50, PopLess()) work? PopLess() must be resolved into something like PopLess tempObject.operator(), which would be the same as executing the operator() function on a temporary object. I see this as passing the return value of the overloaded operation, i.e. bool (as in my example), to the sort algorithm. So then, how does the sort function resolve the third parameter in this case? Question 2: Do we derive any particular advantage from using function objects versus function pointers? If we use the function pointer below, is there any disadvantage? inline bool popLess( const State &a, const State &b ) { return a.population() < b.population(); } std::sort( union, union+50, popLess ); /* sort by population */ PS: Both the above references (including the example) are from the book "C++ Common Knowledge: Essential Intermediate Programming" by Stephen C. Dewhurst. I was unable to decode the topic content, thus have posted for help. Thanks in advance for your help.

    Read the article

  • Find the Algorithm that generates the checksum

    - by knivmannen
    I have a sensing device that transmits a 6-byte message along with a 1-byte counter and supposedly a checksum. The data looks something like this:

    ------ DATA ------   Counter   Checksum?
    55 FF 00 00 EC FF      60         1F

    The last four bits in the counter are always set to 0, i.e. those bits are probably not used. The last byte is assumed to be the checksum, since it has a quite peculiar nature: it tends to change randomly as the data changes. Now what I need is to find the algorithm that computes this checksum based on the DATA. What I have tried is all possible CRC-8 polynomials; for each polynomial I have tried to reflect the data, toggle it, initialise it with non-zeroes, etc. I've come to the conclusion that I am not dealing with a normal CRC algorithm. I have also tried some Fletcher and Adler methods without success, and XORed stuff back and forth, but still I have no clue how to generate the checksum. My biggest concern is: how is the counter used??? The same data but with a different counter value generates different checksums. I have tried to include the counter in my computations but without any luck. Here are some other data samples (same layout: 6 data bytes, counter, checksum):

    55 FF 00 00 F0 FF   A0   38
    66 0B EA FF BF FF   C0   CA
    5E 18 EA FF B7 FF   60   BD
    F6 30 16 00 FC FE   10   81

    One more thing that might be worth mentioning is that the last byte in the data only takes on the values FF or FE. Please, if you have any tips or tricks that I may try, post them here; I am truly desperate. Thanks

    Read the article

  • Enumerate all paths in a weighted graph from A to B where path length is between C1 and C2

    - by awmross
    Given two points A and B in a weighted graph, find all paths from A to B where the length of the path is between C1 and C2. Ideally, each vertex should only be visited once, although this is not a hard requirement. I suppose I could use a heuristic to sort the results of the algorithm to weed out "silly" paths (e.g. a path that just visits the same two nodes over and over again). I can think of simple brute force algorithms, but are there any more sophisticated algorithms that will make this more efficient? I can imagine that as the graph grows this could become expensive. In the application I am developing, A and B are actually the same point (i.e. the path must return to the start), if that makes any difference. Note that this is an engineering problem, not a computer science problem, so I can use an algorithm that is fast but not necessarily 100% accurate, i.e. it is OK if it returns most of the possible paths, or if most of the paths returned are within the given length range.
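
    A depth-first search with pruning is a reasonable brute-force baseline (a sketch, assuming positive edge weights): extend a path edge by edge, abandon it once its length exceeds C2, and record it when it reaches B with length at least C1; a path ends as soon as it hits B, which also covers the round-trip case where A and B are the same node. Python:

        def paths_between(graph, a, b, c1, c2):
            # graph: {node: {neighbor: edge_weight}}, all weights assumed positive.
            results = []

            def dfs(node, path, length):
                if length > c2:
                    return                           # prune: positive weights only get longer
                if node == b and len(path) > 1:
                    if length >= c1:
                        results.append((list(path), length))
                    return                           # a path ends as soon as it reaches B
                for nxt, w in graph.get(node, {}).items():
                    if nxt == b or nxt not in path:  # each vertex once; B may close the path
                        path.append(nxt)
                        dfs(nxt, path, length + w)
                        path.pop()

            dfs(a, [a], 0)
            return results

        g = {"A": {"B": 2, "C": 1}, "C": {"B": 2}, "B": {}}
        print(paths_between(g, "A", "B", 2, 3))   # [(['A', 'B'], 2), (['A', 'C', 'B'], 3)]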

    Read the article

  • Machine learning algorithm for data classification.

    - by twk
    Hi all, I'm looking for some guidance about which techniques/algorithms I should research to solve the following problem. I've currently got an algorithm that clusters similar-sounding mp3s using acoustic fingerprinting. In each cluster, I have all the different metadata (song/artist/album) for each file. For that cluster, I'd like to pick the "best" song/artist/album metadata that matches an existing row in my database, or if there is no best match, decide to insert a new row. For a cluster, there is generally some correct metadata, but individual files have many types of problems: artist/songs are completely misnamed, or just slightly misspelled; the artist/song/album is missing, but the rest of the information is there; the song is actually a live recording, but only some of the files in the cluster are labeled as such; there may be very little metadata, in some cases just the file name, which might be artist - song.mp3, or artist - album - song.mp3, or another variation. A simple voting algorithm works fairly well, but I'd like to have something I can train on a large set of data that might pick up more nuances than what I've got right now. Any links to papers or similar projects would be greatly appreciated. Thanks!

    Read the article

  • The mathematics of Schelling's segregation model

    - by Bruce
    For those who don't know the model, you can read this PDF. I want to find the probability that 2 nodes are each other's neighbors when the algorithm converges (i.e. when all nodes are happy). Here's the model in a nutshell. You have a grid (say 10x10). You have nodes of two kinds (red and green), 45 of each, so we have 10 empty spaces. We randomly place the nodes on the grid. Now we scan through this grid (the exact order does not matter according to Schelling). Each node wants a specific percentage of people of the same kind in its Moore neighborhood (say b = 50% for both red and green). We calculate the happiness of each node (a = number of neighbors of the same kind / number of neighbors of a different kind). If a node is unhappy (a < b) it moves to an empty cell where it knows it will be happy. This movement can change the dynamics of the old as well as the new neighborhood. The algorithm converges when all nodes are happy. PS - I am looking for links to any mathematical analysis of Schelling's model.

    Read the article

  • Simple encryption - Sum of Hashes in C

    - by Dogbert
    I am attempting to demonstrate a simple proof of concept with respect to a vulnerability in a piece of code in a game written in C. Let's say that we want to validate a character login. The login is handled by the user choosing n items (let's just assume n=5 for now) from a graphical menu. The items are all medieval themed:

     _______________________________
    |           |           |       |
    |    Bow    |   Sword   | Staff |
    |-----------|-----------|-------|
    |  Shield   |  Potion   | Gold  |
    |___________|___________|_______|

    The user must click on each item, then choose a number for each item. The validation algorithm then does the following: determines which items were selected; drops each string to lowercase (i.e. Bow becomes bow, etc.); calculates a simple string hash for each string (i.e. bow: b=2, o=15, w=23, sum = 2+15+23 = 40); multiplies the hash by the value the user selected for the corresponding item (this new value is called the key); sums together the keys for each of the selected items; this is the final validation hash. IMPORTANT: The validator will accept this hash, along with non-zero multiples of it (i.e. if the final hash equals 1111, then 2222, 3333, 8888, etc. are also valid). So, for example, let's say I select: Bow (1), Sword (2), Staff (10), Shield (1), Potion (6). The algorithm drops each of these strings to lowercase, calculates their string hashes, multiplies each hash by the number selected for that string, then sums these keys together, e.g.: Final_Validation_Hash = 1*HASH(Bow) + 2*HASH(Sword) + 10*HASH(Staff) + 1*HASH(Shield) + 6*HASH(Potion) By application of Euler's Method, I plan to demonstrate that these hashes are not unique, and want to devise a simple application to prove it. In my case, for 5 items, I would essentially be trying to calculate: (B)(y) = (A_1)(x_1) + (A_2)(x_2) + (A_3)(x_3) + (A_4)(x_4) + (A_5)(x_5) where: B is arbitrary; A_j are the selected coefficients/values for each string/category; x_j are the hash values for each string/category; y is the final validation hash (e.g. 1111 above); B, y, A_j, x_j are all discrete-valued, positive, and non-zero (i.e. natural numbers). Can someone either assist me in solving this problem or point me to a similar example (i.e. code, worked-out equations, etc.)? I just need to solve the final step (i.e. (B)(Y) = ...). Thank you all in advance.
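
    A rough Python sketch of the validator as described (string hash = sum of letter positions, key = chosen count times the hash, final hash = sum of keys, with non-zero multiples accepted), plus a brute-force search for a different selection that the validator would also accept; the cap of 7 on the counts is arbitrary, just to keep the search tiny:

        from itertools import product

        ITEMS = ["Bow", "Sword", "Staff", "Shield", "Potion", "Gold"]

        def str_hash(s):
            # bow -> b=2, o=15, w=23 -> 40, as in the question
            return sum(ord(c) - ord('a') + 1 for c in s.lower())

        def final_hash(selection):
            # selection: {item: count chosen by the user}; key = count * HASH(item)
            return sum(count * str_hash(item) for item, count in selection.items())

        original = {"Bow": 1, "Sword": 2, "Staff": 10, "Shield": 1, "Potion": 6}
        target = final_hash(original)

        # Look for a *different* selection whose final hash is a non-zero multiple of
        # the target, since the validator accepts multiples as well.
        for counts in product(range(8), repeat=len(ITEMS)):
            selection = {item: c for item, c in zip(ITEMS, counts) if c}
            h = final_hash(selection)
            if h and h % target == 0 and selection != original:
                print(selection, "also validates against", target)
                break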

    Read the article

  • Determining polygon intersection and containment

    - by Victor Liu
    I have a set of simple (no holes, no self-intersections) polygons, and I need to check that they don't intersect each other (one can be entirely contained in another; that is okay). I can check this by simply checking the per-vertex inside-ness of one polygon versus other polygons. I also need to determine the containment tree, which is the set of relationships that say which polygon contains any given polygon. Since no polygon can intersect any other, then any contained polygon has a unique container; the "next-bigger" one. In other words, if A contains B contains C, then A is the parent of B, and B is the parent of C, and we don't consider A the parent of C. The question: How do I efficiently determine the containment relationships and check the non-intersection criterion? I ask this as one question because maybe a combined algorithm is more efficient than solving each problem separately. The algorithm should take as input a list of polygons, given by a list of their vertices. It should produce a boolean B indicating if none of the polygons intersect any other polygon, and also if B = true, a list of pairs (P, C) where polygon P is the parent of child C. This is not homework. This is for a hobby project I am working on.
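
    Not necessarily the combined algorithm being asked for, but a straightforward O(n^2) sketch of both checks using the shapely library (assuming a third-party geometry dependency is acceptable): reject any pair that overlaps without one containing the other, then make each contained polygon's parent the smallest-area polygon containing it. Note that shapely's contains() treats shared boundaries as not contained, so touching polygons would be rejected here.

        from shapely.geometry import Polygon

        def containment_tree(vertex_lists):
            polys = [Polygon(v) for v in vertex_lists]
            # Non-intersection criterion: any two polygons must be disjoint or nested.
            for i in range(len(polys)):
                for j in range(i + 1, len(polys)):
                    a, b = polys[i], polys[j]
                    if a.intersects(b) and not (a.contains(b) or b.contains(a)):
                        return False, []
            # Parent of each contained polygon = smallest-area polygon that contains it.
            pairs = []
            for i, child in enumerate(polys):
                containers = [j for j, p in enumerate(polys) if j != i and p.contains(child)]
                if containers:
                    parent = min(containers, key=lambda j: polys[j].area)
                    pairs.append((parent, i))
            return True, pairs

        squares = [[(0, 0), (10, 0), (10, 10), (0, 10)],   # A
                   [(1, 1), (9, 1), (9, 9), (1, 9)],       # B, inside A
                   [(2, 2), (3, 2), (3, 3), (2, 3)]]       # C, inside B
        print(containment_tree(squares))                   # (True, [(0, 1), (1, 2)])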

    Read the article

  • How to use the Caret to tell which line it is on in a JTextPane? (Java)

    - by Alex Cheng
    Hi all. Problem: I have a CaretListener and a DocumentListener listening on a JTextPane. I need an algorithm that is able to tell which line the caret is at in the JTextPane. Here's an illustrative example [screenshots omitted; the expected results for the three caret positions shown were: 3rd line, 2nd line, 4th line], and if the algorithm can tell which line the caret is on in the JTextPane, it should be fairly easy to substring whatever is in between the parentheses, as in the picture (the caret is at character m of "metadata"). This is how I divide the entire text that I retrieved from the JTextPane into sentences: String[] lines = textPane.getText().split("\r?\n|\r", -1); The sentences in the textPane are separated with \n. Problem is, how can I use the caret to find out at which position and on which line it is? I know the dot of the caret gives the position it is at, but I can't tell which line it is on. Assuming I know which line the caret is on, I can just do lines[<line number>] and manipulate the string from there. In short: how do I use a CaretListener and/or DocumentListener to know which line the caret is currently at, and retrieve the line for further string manipulation? Please help. Thanks. Do let me know if further clarification is needed. Thanks for your time.

    Read the article
