Search Results

Search found 5070 results on 203 pages for 'algorithm'.

Page 78/203 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • String operations

    - by NEO
    I am trying to find the most repeated word in a string. My code is as follows:

        public class Word {
            private String toWord;
            private int Count;

            public Word(int count, String word) {
                toWord = word;
                Count = count;
            }

            public static void main(String args[]) {
                String str = "my name is neo and my other name is also neo because I am neo";
                String[] str1 = str.split(" ");
                Word w1 = new Word(0, str1[0]);
                LinkedList<Word> list = new LinkedList<Word>();
                list.add(w1);
                ListIterator itr = list.listIterator();
                for (int i = 1; i < str1.length; i++) {
                    while (itr.hasNext()) {
                        if (str1[i].equalsTO(????));
                        else list.add(new Word(0, str1[i]));
                    }

    How do I compare each string from the array str1 to the strings stored in the linked list, and how do I increase the respective count? I then want to print the string with the highest count, but I don't know how to do that either.
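
    One way to look at the problem, sketched in Python rather than the asker's Java: count occurrences in a map keyed by word and take the maximum. The same idea in Java would use a HashMap<String, Integer> instead of a LinkedList of Word objects.

        # Minimal sketch: count words with a dictionary and pick the most frequent one.
        from collections import Counter

        def most_repeated_word(text):
            counts = Counter(text.split())          # word -> number of occurrences
            word, count = counts.most_common(1)[0]  # highest count first
            return word, count

        print(most_repeated_word("my name is neo and my other name is also neo because I am neo"))
        # ('neo', 3)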

    Read the article

  • Graph - strongly connected components

    - by HardCoder1986
    Hello! Is there any fast way to determine the size of the largest strongly connected component in a graph? I mean, like, the obvious approach would mean determining every SCC (could be done using two DFS calls, I suppose) and then looping through them and taking the maximum. I'm pretty sure there has to be some better approach if I only need to have the size of that component and only the largest one, but I can't think of a good solution. Any ideas? Thanks.
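
    A sketch of one way to get only the largest component's size (Kosaraju's two-pass DFS, written iteratively in Python; the graph is assumed to be an adjacency-list dict). The second pass visits one component at a time, so the maximum falls out without storing every component.

        from collections import defaultdict

        def largest_scc_size(graph):
            # graph: {node: [successors]}
            nodes = set(graph) | {v for vs in graph.values() for v in vs}
            reverse = defaultdict(list)
            for u in graph:
                for v in graph[u]:
                    reverse[v].append(u)

            order, seen = [], set()
            for start in nodes:                     # pass 1: record DFS finish order
                if start in seen:
                    continue
                seen.add(start)
                stack = [(start, iter(graph.get(start, ())))]
                while stack:
                    node, it = stack[-1]
                    for v in it:
                        if v not in seen:
                            seen.add(v)
                            stack.append((v, iter(graph.get(v, ()))))
                            break
                    else:
                        order.append(node)
                        stack.pop()

            best, seen = 0, set()
            for u in reversed(order):               # pass 2: flood-fill the reversed graph
                if u in seen:
                    continue
                size, stack = 0, [u]
                seen.add(u)
                while stack:
                    node = stack.pop()
                    size += 1
                    for v in reverse[node]:
                        if v not in seen:
                            seen.add(v)
                            stack.append(v)
                best = max(best, size)
            return best

        print(largest_scc_size({1: [2], 2: [3], 3: [1, 4], 4: [5], 5: [4]}))  # 3 (the cycle 1-2-3)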

    Read the article

  • Find subset with K elements that are closest to each other

    - by Nima
    Given an array of N integers, how can you efficiently find a subset of size K whose elements are closest to each other? Let the closeness of a subset (x1, x2, x3, ..., xk) be defined as the sum of the pairwise absolute differences of its elements (the quantity my brute force below computes).

    Constraints:

        2 <= N <= 10^5
        2 <= K <= N
        The array may contain duplicates and is not guaranteed to be sorted.

    My brute-force solution is very slow for large N, and it doesn't check whether there is more than one solution:

        N = input()
        K = input()
        assert 2 <= N <= 10**5
        assert 2 <= K <= N
        a = []
        for i in xrange(0, N):
            a.append(input())
        a.sort()

        minimum = sys.maxint
        startindex = 0
        for i in xrange(0, N - K + 1):
            last = i + K
            tmp = 0
            for j in xrange(i, last):
                for l in xrange(j + 1, last):
                    tmp += abs(a[j] - a[l])
                    if(tmp > minimum): break
            if(tmp < minimum):
                minimum = tmp
                startindex = i
        # end index = startindex + K?

    Examples:

        N = 7, K = 3
        array = [10, 100, 300, 200, 1000, 20, 30]
        result = [10, 20, 30]

        N = 10, K = 4
        array = [1, 2, 3, 4, 10, 20, 30, 40, 100, 200]
        result = [1, 2, 3, 4]
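
    A sketch of a faster approach, assuming closeness means the sum of pairwise absolute differences (as the brute force computes): after sorting, an optimal subset is always K consecutive elements, and each window's closeness follows from a weighted sum, giving O(N log N + N*K) instead of O(N*K^2).

        def closest_subset(a, k):
            a = sorted(a)
            best_val, best_start = float('inf'), 0
            for start in range(len(a) - k + 1):
                window = a[start:start + k]
                # for a sorted window, sum_{i<j} (w[j] - w[i]) = sum_i w[i] * (2*i - (k - 1))
                val = sum(w * (2 * i - (k - 1)) for i, w in enumerate(window))
                if val < best_val:
                    best_val, best_start = val, start
            return a[best_start:best_start + k]

        print(closest_subset([10, 100, 300, 200, 1000, 20, 30], 3))  # [10, 20, 30]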

    Read the article

  • Accelerometer gravity components

    - by Dvd
    Hi, I know this question has definitely been solved somewhere many times already; please enlighten me if you know where, thanks.

    Quick rundown: I want to compute, from a 3-axis accelerometer, the gravity component on each of its 3 axes. I have used 2-axis free-body diagrams to work out the accelerometer's gravity component in the world X-Z, Y-Z and X-Y axes. But the solution seems slightly off: it's acceptable for the extreme cases when only one accelerometer axis is exposed to gravity, but for a pitch and roll of both 45 degrees the combined total magnitude is greater than gravity (obtained from Xa^2 + Ya^2 + Za^2 = g^2, where Xa, Ya and Za are the accelerometer readings on its X, Y and Z axes).

    More detail: the device is a Nexus One, and it has a magnetic field sensor for azimuth, pitch and roll in addition to the 3-axis accelerometer. In the world's axes (with Z in the same direction as gravity, and either X or Y pointing to the north pole; don't think this matters much?), I assumed my device has a pitch (P) in the Y-Z plane and a roll (R) in the X-Z plane. With that I used simple trig to get:

        Sin(R) = Ax / Gxz
        Cos(R) = Az / Gxz
        Tan(R) = Ax / Az

    There is another such set for the pitch, P. Now I defined gravity to have 3 components in the world's axes: a Gxz that is measurable only in the X-Z plane, a Gyz for Y-Z, and a Gxy for X-Y, with

        Gxz^2 + Gyz^2 + Gxy^2 = 2*G^2

    (the 2*G^2 is because gravity is effectively included twice in this definition). Oh, and the X-Y axis produces something more exotic... I'll explain if required later. From these equations I obtained a formula for Az, and removed the tan operations because I don't know how to handle tan(90) calculations (it's infinity?). So my question is: does anyone know whether I did this right or wrong, or can anyone point me in the right direction? Thanks! Dvd
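
    As a sanity check, here is a sketch (conventions assumed: pitch about the device X axis, roll about the Y axis, angles in radians; signs may differ from Android's) of projecting the world gravity vector into the device frame. However the angles are defined, the three components of a pure rotation of g always satisfy Gx^2 + Gy^2 + Gz^2 = g^2 (never more), so a combined magnitude above g at 45/45 degrees points to an inconsistently defined decomposition rather than sensor error.

        import math

        def gravity_components(pitch, roll, g=9.81):
            gx = -g * math.cos(pitch) * math.sin(roll)
            gy =  g * math.sin(pitch)
            gz =  g * math.cos(pitch) * math.cos(roll)
            return gx, gy, gz

        gx, gy, gz = gravity_components(math.radians(45), math.radians(45))
        print(math.sqrt(gx**2 + gy**2 + gz**2))  # 9.81 even at 45/45 degrees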

    Read the article

  • How to draw/manage a hexagon grid?

    - by W.N.
    I've read this article: generating/creating hexagon grid in C. But it looks like both the author and the answerer have abandoned it. √(hexagonSide - hexagonWidth * hexagonWidth): what are hexagonSide and hexagonWidth? Won't it be < 0 (so the square root can't be calculated)? And can I fit a hexagon into a rectangle? I need to create a grid like this: One more thing: how should I arrange my array to store the data, and how do I find which cells are adjacent to a given cell? I have never been taught about hexagons, so I know nothing about them, but I can easily learn new things, so if you can explain or give me a clue, I may do it myself.
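
    A sketch of one common bookkeeping scheme ("odd-q" offset coordinates are assumed: cells addressed by (col, row), with odd columns shifted half a cell down). Each hexagon does fit inside a bounding rectangle for drawing, and the grid itself can live in an ordinary 2-D array; only the neighbour offsets differ between even and odd columns.

        def hex_neighbors(col, row):
            # six (col, row) neighbours of a cell in an odd-q offset layout
            if col % 2 == 0:
                offsets = [(+1, 0), (+1, -1), (0, -1), (-1, -1), (-1, 0), (0, +1)]
            else:
                offsets = [(+1, +1), (+1, 0), (0, -1), (-1, 0), (-1, +1), (0, +1)]
            return [(col + dc, row + dr) for dc, dr in offsets]

        cols, rows = 7, 5
        grid = [[None] * cols for _ in range(rows)]   # grid[row][col] holds each cell's data
        inside = [(c, r) for c, r in hex_neighbors(0, 0) if 0 <= c < cols and 0 <= r < rows]
        print(inside)   # neighbours of the corner cell that actually exist: [(1, 0), (0, 1)]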

    Read the article

  • Point data structure for a sketching application

    - by bebraw
    I am currently developing a little sketching application based on the HTML5 canvas element. There is one particular problem I haven't yet managed to find a proper solution for. The idea is that the user will be able to manipulate existing stroke data (points) quite freely. This includes pushing point data around (i.e. a magnet tool) and manipulating it at whim otherwise (i.e. altering color). Note that the current brush engine is able to shade by taking existing stroke data into account. It's a quick and dirty solution, as it just iterates the points in the current stroke and checks them against a distance rule. Now the problem is how to do this in a nice manner. It is extremely important to be able to perform efficient queries that return all points within a given canvas coordinate and radius. Other concerns, such as space usage, are secondary to this. I don't mind doing some extra processing between strokes while the user is not painting. Any pointers are welcome. :)
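
    A sketch of the simplest structure that usually fits this access pattern: a uniform grid of buckets (a cell size around the typical query radius is assumed). Radius queries only touch nearby buckets, and the whole index can be rebuilt cheaply between strokes; a quadtree or k-d tree is the usual alternative when point density varies a lot.

        from collections import defaultdict
        from math import floor, hypot

        class PointGrid:
            def __init__(self, cell=32.0):
                self.cell = cell
                self.buckets = defaultdict(list)

            def _key(self, x, y):
                return (floor(x / self.cell), floor(y / self.cell))

            def insert(self, x, y, data=None):
                self.buckets[self._key(x, y)].append((x, y, data))

            def query(self, x, y, radius):
                # inspect only the buckets the query circle can overlap
                cx, cy = self._key(x, y)
                reach = int(radius // self.cell) + 1
                hits = []
                for gx in range(cx - reach, cx + reach + 1):
                    for gy in range(cy - reach, cy + reach + 1):
                        for px, py, data in self.buckets.get((gx, gy), ()):
                            if hypot(px - x, py - y) <= radius:
                                hits.append((px, py, data))
                return hits

        grid = PointGrid(cell=20)
        grid.insert(10, 10, {'color': 'red'})
        grid.insert(200, 200, {'color': 'blue'})
        print(grid.query(0, 0, 25))   # only the nearby red point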

    Read the article

  • Algorithm - combine multiple lists, resulting in unique list and retaining order

    - by hitch
    I want to combine multiple lists of items into a single list, retaining the overall order requirements. For example:

        1: A C E
        2: D E
        3: B A D
        result: B A C D E

    Above, starting with list 1, we have A C E; we then know that D must come before E, and from list 3 we know that B must come before A, and D must come after B and A. If there are conflicting orderings, the first ordering should be used, i.e.:

        1: A C E
        2: B D E
        3: F D
        result: A C B D E F

    List 3 conflicts with list 2, therefore the requirements for list 2 will be used. If ordering requirements mean an item must come before or after another, it doesn't matter if it comes immediately before or after, or at the start or end of the list, as long as the overall ordering is maintained. This is being developed using VB.Net, so a LINQy solution (or any .NET solution) would be nice - otherwise pointers for an approach would be good.
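
    A sketch of one way to do it (Python for brevity; the same shape ports to .NET): treat each adjacent pair in every list as a "must come before" edge, skip any edge that would contradict edges taken from earlier lists, then emit a topological order.

        from collections import deque

        def merge_ordered(lists):
            succ = {}                               # item -> items that must come after it
            nodes = []
            for lst in lists:
                for item in lst:
                    if item not in succ:
                        succ[item] = []
                        nodes.append(item)

            def reaches(src, dst):                  # is there already a path src -> dst?
                stack, visited = [src], set()
                while stack:
                    n = stack.pop()
                    if n == dst:
                        return True
                    if n not in visited:
                        visited.add(n)
                        stack.extend(succ[n])
                return False

            for lst in lists:                       # earlier lists win on conflicts
                for a, b in zip(lst, lst[1:]):
                    if b not in succ[a] and not reaches(b, a):
                        succ[a].append(b)           # add a->b only if it closes no cycle

            indeg = {n: 0 for n in nodes}
            for a in nodes:
                for b in succ[a]:
                    indeg[b] += 1
            queue = deque(n for n in nodes if indeg[n] == 0)
            order = []
            while queue:
                n = queue.popleft()
                order.append(n)
                for b in succ[n]:
                    indeg[b] -= 1
                    if indeg[b] == 0:
                        queue.append(b)
            return order

        print(merge_ordered([list("ACE"), list("DE"), list("BAD")]))   # ['B', 'A', 'C', 'D', 'E']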

    Read the article

  • Trying to generate all sequences of specified numbers up to a max sum

    - by Stecy
    Hi, given the following list of descending unique numbers (3, 2, 1), I wish to generate all the sequences consisting of those numbers up to a maximum sum. Let's say the sum should be below 10. Then the sequences I'm after are:

        3 3 3
        3 3 2 1
        3 3 2
        3 3 1 1 1
        3 3 1 1
        3 3 1
        3 3
        3 2 2 2
        3 2 2 1 1
        3 2 2 1
        3 2 2
        3 2 1 1 1 1
        3 2 1 1 1
        3 2 1 1
        3 2 1
        3 2
        3 1 1 1 1 1 1
        3 1 1 1 1 1
        3 1 1 1 1
        3 1 1 1
        3 1 1
        3 1
        3
        2 2 2 2 1
        2 2 2 2
        2 2 2 1 1 1
        2 2 2 1 1
        2 2 2 1
        2 2 2
        2 2 1 1 1 1 1
        2 2 1 1 1 1
        2 2 1 1 1
        2 2 1 1
        2 2 1
        2 2
        2 1 1 1 1 1 1 1
        2 1 1 1 1 1 1
        2 1 1 1 1 1
        2 1 1 1 1
        2 1 1 1
        2 1 1
        2 1
        2
        1 1 1 1 1 1 1 1 1
        1 1 1 1 1 1 1 1
        1 1 1 1 1 1 1
        1 1 1 1 1 1
        1 1 1 1 1
        1 1 1 1
        1 1 1
        1 1
        1

    I'm sure there's a "standard" way to generate this. I thought of using LINQ but can't figure it out. Also, I am trying a stack-based approach, but I am still not successful. Any ideas?
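
    A sketch of a recursive generator (Python is used for brevity; the enumeration order differs from the listing above, but the same set of sequences comes out). Each step may reuse the current number or move on to a smaller one, which keeps every sequence non-increasing.

        def sequences(numbers, limit):
            # numbers must be sorted in descending order, e.g. (3, 2, 1)
            results = []

            def extend(prefix, total, start):
                if prefix:
                    results.append(list(prefix))
                for i in range(start, len(numbers)):
                    n = numbers[i]
                    if total + n < limit:             # keep the running sum strictly below the limit
                        prefix.append(n)
                        extend(prefix, total + n, i)  # i, not i + 1: the same value may repeat
                        prefix.pop()

            extend([], 0, 0)
            return results

        for seq in sequences((3, 2, 1), 10):
            print(*seq)
        # 52 sequences in total, e.g. "3", "3 3", "3 3 3", "3 3 2", "3 3 2 1", ...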

    Read the article

  • C# - Google-like query engine

    - by MRFerocius
    Guys, hope you are fine. I have to make a web project (very simple). I will have a DB with 2 tables; one table has 2 fields. From the web page I need a Google-like search query. For example, the table has Movie Title and Movie Review, and I need to be able to search those 2 fields like this: "Best Movie" + Action. That is, I need to query the DB for the phrase "Best Movie" together with the optional word ACTION across the 2 fields of the table. Am I clear? :) Does somebody know if this has already been made, whether it's public and free, and where to get it? :) Thanks in advance. EDIT: My concern is how to translate the Google-like symbols ("", +, -, ~) to build a valid query.
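
    A sketch of the first half of the problem, the query-string parsing (a simple tokenizer is assumed, not full Google syntax); the required/optional/excluded terms can then be mapped onto SQL LIKE or full-text CONTAINS clauses over the two columns.

        import re

        def parse_query(q):
            tokens = re.findall(r'([+\-]?)"([^"]+)"|([+\-]?)(\S+)', q)
            required, optional, excluded = [], [], []
            for sign_q, phrase, sign_w, word in tokens:
                sign, term = (sign_q, phrase) if phrase else (sign_w, word)
                if sign == '+':
                    required.append(term)
                elif sign == '-':
                    excluded.append(term)
                else:
                    # quoted phrases are exact matches, bare words are optional
                    (required if phrase else optional).append(term)
            return required, optional, excluded

        print(parse_query('"Best Movie" +Action -horror'))
        # (['Best Movie', 'Action'], [], ['horror'])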

    Read the article

  • Maintaining list order in a binary tree

    - by TheBigO
    Given a sequence of numbers, I want to insert the numbers into a balanced binary tree such that when I do a preorder traversal on the tree, it gives me the sequence back. How can I construct the insert method corresponding to this requirement? Remember that the tree must be balanced, so there isn't a completely trivial solution. I was trying to do this with a modified version of an AVL tree, but I'm not sure if this can work out.
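
    A sketch of one way to see why this is possible (building the whole tree at once rather than via repeated insertion): the first element of the sequence becomes the root, and the remaining elements are split roughly in half between the subtrees, which keeps the tree balanced while preserving the preorder.

        class Node:
            def __init__(self, value, left=None, right=None):
                self.value, self.left, self.right = value, left, right

        def from_preorder(seq):
            if not seq:
                return None
            rest = seq[1:]
            mid = (len(rest) + 1) // 2              # left subtree takes the first half
            return Node(seq[0], from_preorder(rest[:mid]), from_preorder(rest[mid:]))

        def preorder(node):
            return [] if node is None else [node.value] + preorder(node.left) + preorder(node.right)

        seq = [5, 1, 4, 2, 9, 3]
        print(preorder(from_preorder(seq)) == seq)  # True, and the tree has logarithmic height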

    Read the article

  • Better way to summarize data about stop times?

    - by Vimvq1987
    This question is close to this one: http://stackoverflow.com/questions/2947963/find-the-period-of-over-speed. Here's my table:

        Longtitude  Latitude  Velocity  Time
        102         401       40        2010-06-01 10:22:34.000
        103         403       50        2010-06-01 10:40:00.000
        104         405       0         2010-06-01 11:00:03.000
        104         405       0         2010-06-01 11:10:05.000
        105         406       35        2010-06-01 11:15:30.000
        106         403       60        2010-06-01 11:20:00.000
        108         404       70        2010-06-01 11:30:05.000
        109         405       0         2010-06-01 11:35:00.000
        109         405       0         2010-06-01 11:40:00.000
        105         407       40        2010-06-01 11:50:00.000
        104         406       30        2010-06-01 12:00:00.000
        101         409       50        2010-06-01 12:05:30.000
        104         405       0         2010-06-01 11:05:30.000

    I want to summarize the times when the vehicle was stopped (velocity = 0): from when to when it stopped and for how many minutes, how many times it stopped, and how much time it stopped in total. I wrote this query to do it:

        select longtitude, latitude, MIN(time), MAX(time),
               DATEDIFF(minute, MIN(Time), MAX(time)) as Timespan
        from table_1
        where velocity = 0
        group by longtitude, latitude

        select DATEDIFF(minute, MIN(Time), MAX(time)) as minute
        into #temp3
        from table_1
        where velocity = 0
        group by longtitude, latitude

        select COUNT(*) as [number] from #temp
        select SUM(minute) as [totaltime] from #temp3
        drop table #temp

    This query returns:

        longtitude  latitude  (No column name)          (No column name)          Timespan
        104         405       2010-06-01 11:00:03.000   2010-06-01 11:10:05.000   10
        109         405       2010-06-01 11:35:00.000   2010-06-01 11:40:00.000   5

        number
        2

        totaltime
        15

    You can see it works fine, but I really don't like the #temp table. Is there any way to write this query without using a temp table? Thank you.
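
    For comparison, a sketch of the same summary computed in one pass outside SQL (rows of (longitude, latitude, velocity, time) are assumed); in T-SQL the equivalent trick is to wrap the grouped query in a derived table and take COUNT(*) and SUM over it, which removes the need for #temp.

        from datetime import datetime
        from itertools import groupby

        rows = [
            (104, 405, 0, datetime(2010, 6, 1, 11, 0, 3)),
            (104, 405, 0, datetime(2010, 6, 1, 11, 10, 5)),
            (109, 405, 0, datetime(2010, 6, 1, 11, 35, 0)),
            (109, 405, 0, datetime(2010, 6, 1, 11, 40, 0)),
        ]

        zero = sorted((r for r in rows if r[2] == 0), key=lambda r: (r[0], r[1], r[3]))
        stops = []
        for (lon, lat), grp in groupby(zero, key=lambda r: (r[0], r[1])):
            times = [r[3] for r in grp]
            minutes = int((max(times) - min(times)).total_seconds() // 60)
            stops.append((lon, lat, min(times), max(times), minutes))

        print(stops)                                   # per-stop start, end and duration
        print(len(stops), sum(s[4] for s in stops))    # 2 stops, 15 minutes in total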

    Read the article

  • How to scale JPEG image down so that text is clear as possible?

    - by Juha Syrjälä
    I have some JPEG images that I need to scale down to about 80% of the original size. The original image dimensions are about 700px × 1000px. The images contain some computer-generated text and possibly some graphics (similar to what you would find in corporate Word documents). How should I scale the images so that the text stays as legible as possible? Currently we are scaling them down using bicubic interpolation, but that makes the text blurry and foggy.
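
    A sketch of one combination that tends to keep small text readable (Pillow is assumed here purely for illustration): a windowed-sinc (Lanczos) downscale followed by a mild unsharp mask to restore edge contrast. The parameter values are starting points to tune, not recommendations.

        from PIL import Image, ImageFilter

        def scale_for_text(path, out_path, factor=0.8):
            img = Image.open(path)
            new_size = (round(img.width * factor), round(img.height * factor))
            small = img.resize(new_size, Image.Resampling.LANCZOS)   # usually crisper than bicubic for text
            small = small.filter(ImageFilter.UnsharpMask(radius=1, percent=80, threshold=2))
            small.save(out_path, quality=95)

        scale_for_text("page.jpg", "page_small.jpg")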

    Read the article

  • Algorithm to detect repeating/similar strings in a corpus of data -- say email subjects, in Python

    - by RizwanK
    I'm downloading a long list of my email subject lines, with the intent of finding email lists that I was a member of years ago and would want to purge from my Gmail account (which is getting pretty slow). I'm specifically thinking of newsletters that often come from the same address and repeat the product/service/group's name in the subject. I'm aware that I could search/sort by the common occurrence of items from a particular email address (and I intend to), but I'd like to correlate that data with repeating subject lines. Now, many subject lines would fail a string match, but

        "Google Friends : Our latest news"
        "Google Friends : What we're doing today"

    are more similar to each other than a random subject line, as are:

        "Virgin Airlines has a great sale today"
        "Take a flight with Virgin Airlines"

    So, how can I start to automagically extract trends/examples of strings that may be more similar? Approaches I've considered and discarded ("because there must be some better way"):

        - Extracting all the possible substrings and ordering them by how often they show up, and manually selecting relevant ones
        - Stripping off the first word or two and then counting the occurrence of each substring
        - Comparing Levenshtein distance between entries
        - Some sort of string similarity index

    Most of these were rejected for massive inefficiency or the likelihood of requiring a vast amount of manual intervention. I guess I need some sort of fuzzy string matching..? In the end, I can think of kludgy ways of doing this, but I'm looking for something more generic that gets added to my set of tools rather than special-casing for this data set. After this, I'd be matching the occurrence of particular subject strings with 'From' addresses. I'm not sure if there's a good way of building a data structure that represents how likely/not two messages are part of the 'same email list', or of filtering all my email subjects/from addresses into pools of likely 'related' emails and not, but that's a problem to solve after this one. Any guidance would be appreciated.
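
    A sketch of a fuzzy grouping pass using only the standard library (the 0.6 threshold and the "Re:/Fwd:" stripping are assumptions to tune): normalize each subject, then greedily attach it to the first cluster whose representative is similar enough. The resulting clusters can afterwards be cross-checked against the From addresses.

        import re
        from difflib import SequenceMatcher

        def normalize(subject):
            s = re.sub(r'^\s*((re|fwd?)\s*:\s*)+', '', subject, flags=re.I)  # drop Re:/Fwd: prefixes
            return re.sub(r'\s+', ' ', s).strip().lower()

        def cluster_subjects(subjects, threshold=0.6):
            clusters = []                       # each cluster: (representative, [members])
            for subj in subjects:
                norm = normalize(subj)
                for rep, members in clusters:
                    if SequenceMatcher(None, norm, rep).ratio() >= threshold:
                        members.append(subj)
                        break
                else:
                    clusters.append((norm, [subj]))
            return clusters

        subjects = [
            "Google Friends : Our latest news",
            "Google Friends : What we're doing today",
            "Take a flight with Virgin Airlines",
        ]
        for rep, members in cluster_subjects(subjects):
            print(rep, '->', members)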

    Read the article

  • Efficiently solving sparse matrices

    - by anon
    For solving sparse matrices, in general, how big does the matrix have to be (as a rule of thumb) for methods like conjugate gradient to be faster than brute-force solvers (that do not take advantage of sparsity)?
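
    The crossover depends on the sparsity pattern, conditioning and hardware, so the practical answer is to time both on matrices like yours. A sketch (SciPy assumed) of such a comparison on a 1-D Poisson matrix, which is symmetric positive definite as conjugate gradient requires:

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import cg

        n = 1000
        A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')  # tridiagonal SPD matrix
        b = np.ones(n)

        x_dense = np.linalg.solve(A.toarray(), b)   # brute force: O(n^3) time, O(n^2) memory
        x_cg, info = cg(A, b)                       # exploits sparsity; info == 0 means it converged
        print(info, np.linalg.norm(A @ x_cg - b))   # residual of the iterative solution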

    Read the article

  • Graph diffing and versioning tool

    - by hashable
    I am working with a team that edits large DAGs represented as single files. Currently we are unable to work with multiple users concurrently modifying the DAG. Is there a tool (somewhat like the Eclipse SVN plugin) that can do revision control on the file (manage timestamps/revision stamps) to identify incoming/outgoing/conflicting changes (node/link insertion/deletion/modification) and merge changes just like programmers do with source code files? The system should be able to do dependency management also, e.g. an incoming link must not be accepted when one of its two nodes is absent; that is, it should not "break" the existing DAG by allowing partial updates. Is there a framework to do this using generic "Node" and "Link" interfaces? Note: I am aware of Protege and its plugins. They currently do not satisfy my requirements.

    Read the article

  • Convert arbitrary size of byte[] to BigInteger[] and then safely convert back to exactly the same byte[]

    - by PatlaDJ
    I believe a conversion exactly to BigInteger[] would be optimal in my case. Has anyone done this, or found it written in Java, and is willing to share? So imagine I have an arbitrary-size byte[] = {0xff, 0x3e, 0x12, 0x45, 0x1d, 0x11, 0x2a, 0x80, 0x81, 0x45, 0x1d, 0x11, 0x2a, 0x80, 0x81}. How do I convert it to an array of BigIntegers and then recover exactly the original byte array from it safely? Thanks in advance.
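
    A sketch of the round trip in Python (the fixed chunk size and the 0x01 sentinel byte are choices of this sketch, not a standard): the sentinel keeps leading zero bytes from being lost when a chunk is turned back into bytes. The same trick carries over to Java's signed BigInteger(byte[]) constructor.

        def to_bigints(data, chunk=16):
            out = []
            for i in range(0, len(data), chunk):
                part = data[i:i + chunk]
                out.append(int.from_bytes(b'\x01' + part, 'big'))   # sentinel protects leading zeros
            return out

        def from_bigints(nums):
            data = bytearray()
            for n in nums:
                raw = n.to_bytes((n.bit_length() + 7) // 8, 'big')
                data += raw[1:]                                     # drop the sentinel byte again
            return bytes(data)

        original = bytes([0xff, 0x3e, 0x12, 0x45, 0x1d, 0x11, 0x2a, 0x80,
                          0x81, 0x45, 0x1d, 0x11, 0x2a, 0x80, 0x81])
        assert from_bigints(to_bigints(original)) == original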

    Read the article

  • How can we make a class with linked list recursion ...

    - by epsilon_G
    Hi, I'm a newbie in the C++ heaven and OOP, and I'd like to do some things with data structures. I'd like to merge two linked lists. I made a class "List" earlier which contains all I need to program something with a list: Add, Display, etc. The problem is that I programmed the function "Merge2lists", which produces the third list. How can I display the third list in the main program after using "Merge2lists"? Please, my problem is with the class and OOP syntax; can you show me the usage in the main program? In other words, how can I call, from the main program, a function declared in the class that takes pointers? Thanks.

        class Liste
        {
        private:
            struct node
            {
                int elem;
                node *next;
            } *p;

        public:
            LLC();
            void Merge2lists(node* a, node* b, node*& result);
            ~LLC();
        };

        void List::Merge2lists(node* a, node* b, node*& result)
        {
            result = NULL;
            if (a == NULL) { result = b; return; }
            else if (b == NULL) { result = a; return; }
            if (a->elem <= b->elem)
            {
                result = a;
                Merge2lists(a->next, b, result->next);
            }
            else
            {
                result = b;
                Merge2lists(a, b->next, result->next);
            }
            return;
        }

        int main()
        {
            liste a;
            a.Aff_Val(46);
            a.Aff_Val(54);
            a.add_as_first(2);
            a.add_as_first(1);
            a.Display();   /* This displays the elements ... don't care about it, it's easy to do */

            list liste2;
            b.Add(2);
            b.Add(14);
            b.Add(16);

            list result;
            Merge2lists(a, b, result);   /* The problem is here: how can I use this in my program? */
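
    A sketch of the same recursive merge in Python (stand-in Node objects, not the asker's class) just to show the calling pattern: the caller passes the two list heads and gets the merged head back, and display simply walks the result.

        class Node:
            def __init__(self, elem, next=None):
                self.elem, self.next = elem, next

        def merge(a, b):
            # recursive merge of two sorted lists, mirroring Merge2lists
            if a is None:
                return b
            if b is None:
                return a
            if a.elem <= b.elem:
                a.next = merge(a.next, b)
                return a
            b.next = merge(a, b.next)
            return b

        def display(node):
            while node:
                print(node.elem, end=' ')
                node = node.next
            print()

        a = Node(1, Node(2, Node(46, Node(54))))
        b = Node(2, Node(14, Node(16)))
        display(merge(a, b))   # 1 2 2 14 16 46 54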

    Read the article

  • C++ find largest BST in a binary tree

    - by fonjibe
    What is your approach to finding the largest BST inside a binary tree? I refer to this post, where a very good implementation for checking whether a tree is a BST is:

        bool isBinarySearchTree(BinaryTree * n,
                                int min = std::numeric_limits<int>::min(),
                                int max = std::numeric_limits<int>::max())
        {
            return !n || (min < n->value && n->value < max
                          && isBinarySearchTree(n->l, min, n->value)
                          && isBinarySearchTree(n->r, n->value, max));
        }

    It is quite easy to implement a solution to find whether a tree contains a binary search tree. I think the following method does it:

        bool includeSomeBST(BinaryTree* n) {
            if (!isBinarySearchTree(n)) {
                if (!isBinarySearchTree(n->left))
                    return isBinarySearchTree(n->right);
            }
            else return true;
            else return true;
        }

    But what if I want the largest BST? This is my first idea:

        BinaryTree largestBST(BinaryTree* n) {
            if (isBinarySearchTree(n))
                return n;
            if (!isBinarySearchTree(n->left)) {
                if (!isBinarySearchTree(n->right))
                    if (includeSomeBST(n->right))
                        return largestBST(n->right);
                    else if (includeSomeBST(n->left))
                        return largestBST(n->left);
                    else return NULL;
                else return n->right;
            }
            else return n->left;
        }

    But it doesn't actually return the largest one. I struggle to make the comparison; how should it take place? Thanks.
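
    A sketch of the usual single-pass answer (Python, with a stand-in Node class): a post-order walk returns, for each subtree, whether it is a BST, its size, its value range, and the best size seen so far, so the largest BST subtree is found in O(n) without re-running the validity check from every node.

        import math

        class Node:
            def __init__(self, value, left=None, right=None):
                self.value, self.left, self.right = value, left, right

        def largest_bst(node):
            # returns (is_bst, size, lo, hi, best_size_seen_below)
            if node is None:
                return True, 0, math.inf, -math.inf, 0
            lb, ls, llo, lhi, lbest = largest_bst(node.left)
            rb, rs, rlo, rhi, rbest = largest_bst(node.right)
            if lb and rb and lhi < node.value < rlo:
                size = ls + rs + 1
                return True, size, min(llo, node.value), max(rhi, node.value), size
            return False, ls + rs + 1, 0, 0, max(lbest, rbest)

        # node 7 sits in the right subtree of 10, which breaks the BST property at the root
        root = Node(10, Node(5, Node(1), Node(8)), Node(15, Node(7)))
        print(largest_bst(root)[4])   # 3: the subtree rooted at 5 (nodes 1, 5, 8)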

    Read the article

  • How to calculate this string-dissimilarity function efficiently?

    - by ybungalobill
    Hello, I was looking for a string metric that has the property that moving around large blocks in a string won't affect the distance very much. So "helloworld" is close to "worldhello". Obviously Levenshtein distance and longest common subsequence don't fulfill this requirement. Using Jaccard distance on the set of n-grams gives good results but has other drawbacks (it's a pseudometric, and higher n results in a higher penalty for changing a single character).

    [original research] As I thought about it, what I'm looking for is a function f(A,B) such that f(A,B)+1 equals the minimum number of blocks that one has to divide A into (A1 ... An), apply a permutation to the blocks, and get B:

        f("hello", "hello") = 0
        f("helloworld", "worldhello") = 1   // hello world -> world hello
        f("abba", "baba") = 2               // ab b a -> b ab a
        f("computer", "copmuter") = 3       // co m p uter -> co p m uter

    This can be extended to A and B that aren't necessarily permutations of each other: any additional character that can't be matched is counted as one additional block.

        f("computer", "combuter") = 3       // com uter -> com uter, unmatched: p and b

    Observing that instead of counting blocks we can count the number of pairs of indices that are taken apart by a permutation, we can write f(A,B) formally as:

        f(A,B) = min { C(P) | P : |A| → |B|, P is bijective, ∀i ∈ dom(P): A[P(i)] = B[P(i)] }
        C(P)   = |A| + |B| - |dom(P)| - |{ i | i, i+1 ∈ dom(P) and P(i)+1 = P(i+1) }| - 1

    The problem is... guess what... that I'm not able to calculate this in polynomial time. Can someone suggest a way to do this efficiently? Or perhaps point me to an already known metric that exhibits similar properties?

    Read the article

  • How to notice unusual news activity

    - by ??iu
    Suppose you were able to keep track of the news mentions of different entities, like say "Steve Jobs" and "Steve Ballmer". What are ways you could tell whether the number of mentions per entity in a given time period was unusual relative to their normal frequency of appearance? I imagine that for a more popular person like Steve Jobs an increase of about 50% might be unusual (an increase from 1000 to 1500), while for a relatively unknown CEO an increase of 1000% in a given day could be possible (an increase from 2 to 200). If you didn't have a way of scaling, your unusualness index could be dominated by unheard-ofs getting their 15 minutes of fame.
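
    A sketch of the simplest scaling that addresses this (a per-entity history of daily counts is assumed; the 3-sigma threshold and the variance floor are knobs, not recommendations): score each day against that entity's own mean and spread, so a 50% jump on a large baseline and a 100x jump on a tiny one can both register.

        from statistics import mean, stdev

        def is_unusual(history, today, threshold=3.0):
            if len(history) < 2:
                return False
            mu, sigma = mean(history), stdev(history)
            sigma = max(sigma, 1.0)               # floor so near-constant tiny baselines don't divide by ~0
            return (today - mu) / sigma > threshold

        print(is_unusual([1000, 1100, 950, 1020], 1500))   # True: big name, 50% jump
        print(is_unusual([2, 1, 3, 2], 200))               # True: unknown, 100x jump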

    Read the article

  • Question: bitonic sequence

    - by davit-datuashvili
    A sequence is bitonic if it monotonically increases and then monotonically decreases, or if it can be circularly shifted to monotonically increase and then monotonically decrease. For example, the sequences (1, 4, 6, 8, 3, -2), (9, 2, -4, -10, -5), and (1, 2, 3, 4) are bitonic, but (1, 3, 12, 4, 2, 10) is not. Please help me determine whether a given sequence is bitonic.
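
    A sketch of one check (strict increases and decreases assumed, i.e. no equal neighbours): walking around the sequence circularly, a bitonic sequence changes direction at most twice.

        def is_bitonic(a):
            n = len(a)
            if n < 2:
                return True
            rising = [a[(i + 1) % n] > a[i] for i in range(n)]               # direction of each circular step
            changes = sum(rising[i] != rising[(i + 1) % n] for i in range(n))
            return changes <= 2

        print(is_bitonic([1, 4, 6, 8, 3, -2]))    # True
        print(is_bitonic([9, 2, -4, -10, -5]))    # True (a circular shift of an increase-then-decrease)
        print(is_bitonic([1, 3, 12, 4, 2, 10]))   # False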

    Read the article
