Search Results

Search found 8262 results on 331 pages for 'optimization algorithm'.

  • question about merge algorithm

    - by davit-datuashvili
    Hi, I have a question. I know this question may sound like nonsense, but bear with me. I have code that merges two sorted arrays into one sorted array. Here is the code in Java:

        public class Merge {
            public static void main(String[] args) {
                int a[] = new int[]{7, 14, 23, 30, 35, 40};
                int b[] = new int[]{5, 8, 9, 11, 50, 67, 81};
                int c[] = new int[a.length + b.length];
                int al = 0;
                int bl = 0;
                int cl = 0;
                while (al < a.length && bl < b.length)
                    if (a[al] < b[bl])
                        c[cl++] = a[al++];
                    else
                        c[cl++] = b[bl++];
                while (al < a.length)
                    c[cl++] = a[al++];
                while (bl < b.length)
                    c[cl++] = b[bl++];
                for (int j = 0; j < c.length; j++) {
                    System.out.println(c[j]);
                }
            }
        }

    My question is: why does it not work if we write {} braces here, around the body of the first while (al < a.length && bl < b.length) loop?
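
    As a side note, here is a minimal sketch (an assumption about what was meant, since the tail of the question was cut off) of that first merge loop with explicit braces; written this way it compiles and produces the same merged output, so the braces themselves are not a problem as long as the if/else stays inside the loop body:

        // Hypothetical reconstruction of "the first while loop with {} braces".
        while (al < a.length && bl < b.length) {
            if (a[al] < b[bl]) {
                c[cl++] = a[al++];
            } else {
                c[cl++] = b[bl++];
            }
        }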

  • Algorithm To Select Most Popular Places from Database

    - by Russell C.
    We have a website that contains a database of places. For each place, our users are able to take one of the following actions, which we record:

      - VIEW - view its profile
      - RATING - rate it on a scale of 1-5 stars
      - REVIEW - review it
      - COMPLETED - mark that they've been there
      - WISH LIST - mark that they want to go there
      - FAVORITE - mark that it's one of their favorites

    In our database table of places, each place has a count of the number of times each action above was taken, as well as the average rating given by users:

        views  ratings  avg_rating  completed  wishlist  favorite

    What we want to be able to do is generate lists of the top places using the above information. Ideally, we would like to generate this list with a relatively simple SQL query, without needing to do any legwork to calculate additional fields or stack-rank places against one another. That said, since we only have about 50,000 places, we could run a nightly cron job to calculate some fields, such as rankings in different categories, if it would make a meaningful difference in the overall results. I'd appreciate suggestions on how we should think about bubbling the best places to the top, which criteria we should weight more heavily, and, given that, what the MySQL query would need to look like in order to select the top 10 places. One thing to note: at this time we are less concerned with how recently a place became popular, so looking at the aggregate information is fine and more recent data doesn't need to be weighted more heavily. Thanks in advance for your help and advice!
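
    For illustration, a minimal Java sketch of one common approach: combine the per-place counters into a single weighted score and sort by it. The weights below are assumptions, not recommendations, and would need tuning; a Bayesian-style prior is used so a single 5-star rating does not outrank a place with hundreds of ratings.

        // Sketch of a weighted popularity score for one place.
        // Field names mirror the columns in the question; the weights are made up.
        class PlaceScore {
            static double score(long views, long ratings, double avgRating,
                                long completed, long wishlist, long favorite) {
                // Bayesian-average the rating toward a global mean of 3.0,
                // using a prior weight of 10 "virtual" ratings.
                double bayesRating = (avgRating * ratings + 3.0 * 10) / (ratings + 10);
                // Log-scale the raw counters so huge view counts don't dominate.
                return 1.0 * Math.log1p(views)
                     + 3.0 * Math.log1p(ratings) * (bayesRating / 5.0)
                     + 4.0 * Math.log1p(completed)
                     + 2.0 * Math.log1p(wishlist)
                     + 5.0 * Math.log1p(favorite);
            }
        }

    The same expression translates into an ORDER BY clause in MySQL (using LOG(1 + views) and so on), or into a nightly job that writes a precomputed score column.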

  • Bewildering SegFault involving STL sort algorithm.

    - by just_wes
    Hello everybody, I am completely perplexed by a seg fault that I seem to be creating. I have:

        vector<unsigned int> words;

    and a global variable

        string input;

    I define my custom compare function:

        bool wordncompare(unsigned int f, unsigned int s) {
            int n = k;
            while (((f < input.size()) && (s < input.size())) && (input[f] == input[s])) {
                if ((input[f] == ' ') && (--n == 0)) {
                    return false;
                }
                f++;
                s++;
            }
            return true;
        }

    When I run the code:

        sort(words.begin(), words.end());

    the program exits smoothly. However, when I run the code:

        sort(words.begin(), words.end(), wordncompare);

    I generate a SegFault deep within the STL. The GDB back-trace looks like this:

        #0 0x00007ffff7b79893 in std::string::size() const () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/libstdc++.so.6
        #1 0x0000000000400f3f in wordncompare (f=90, s=0) at text_gen2.cpp:40
        #2 0x000000000040188d in std::__unguarded_linear_insert<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, unsigned int, bool (*)(unsigned int, unsigned int)> (__last=..., __val=90, __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:1735
        #3 0x00000000004018df in std::__unguarded_insertion_sort<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, bool (*)(unsigned int, unsigned int)> (__first=..., __last=..., __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:1812
        #4 0x0000000000402562 in std::__final_insertion_sort<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, bool (*)(unsigned int, unsigned int)> (__first=..., __last=..., __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:1845
        #5 0x0000000000402c20 in std::sort<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, bool (*)(unsigned int, unsigned int)> (__first=..., __last=..., __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:4822
        #6 0x00000000004012d2 in main (argc=1, args=0x7fffffffe0b8) at text_gen2.cpp:70

    I have similar code in another program, but in that program I am using a vector instead of vector. For the life of me I can't figure out what I'm doing wrong. Thanks!
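
    One property sort comparators are generally required to have, in the STL and in Java alike, is a strict weak ordering; a predicate that reports two equal keys as "less than" each other violates that contract. Without claiming that is the cause here, and purely as an illustration of the contract, here is a hedged Java sketch of a first-k-words, offset-based comparator written as a signed comparison (the class name, fields and end-of-string handling are assumptions, not code from the question):

        import java.util.*;

        // Illustration of the comparator contract: return a signed ordering
        // (negative / zero / positive), never "true" for two equal keys.
        // Compares two offsets into a shared text by their first k words.
        class WordPrefixComparator implements Comparator<Integer> {
            private final String input;  // shared text, mirroring the question's global
            private final int k;         // number of words to compare

            WordPrefixComparator(String input, int k) {
                this.input = input;
                this.k = k;
            }

            @Override
            public int compare(Integer a, Integer b) {
                int f = a, s = b, n = k;
                while (f < input.length() && s < input.length()
                        && input.charAt(f) == input.charAt(s)) {
                    if (input.charAt(f) == ' ' && --n == 0) return 0; // first k words equal
                    f++;
                    s++;
                }
                if (f >= input.length() && s >= input.length()) return 0;
                if (f >= input.length()) return -1;  // shorter suffix sorts first
                if (s >= input.length()) return 1;
                return Character.compare(input.charAt(f), input.charAt(s));
            }
        }

    With a List<Integer> of word-start offsets it would be used as words.sort(new WordPrefixComparator(input, k)).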

  • algorithm to find overlaps

    - by Gary
    Hey, basically I've got some structs of type Ship which are going to go on a board that can have a variable width and height. The information about the ships is read in from a file, and I just need to know the best way to make sure that none of the ships overlap. Here is the structure of Ship:

        int x       // x position of first part of ship
        int y       // y position of first part of ship
        char dir    // direction of the ship, either 'N', 'S', 'E' or 'W'
        int length  // length of the ship

    Also, what would be a good way to handle the directions? Something cleaner than using a switch statement with a different condition for each direction. Any help would be greatly appreciated!
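
    For illustration, a minimal Java sketch of one common approach (the class and method names are assumptions): expand every ship into the set of board cells it covers, using a small direction-to-delta lookup instead of a switch, and report an overlap as soon as a cell repeats.

        import java.util.*;

        class OverlapCheck {
            // Direction deltas replace a per-direction switch statement.
            static final Map<Character, int[]> DELTAS = Map.of(
                'N', new int[]{0, -1},
                'S', new int[]{0, 1},
                'E', new int[]{1, 0},
                'W', new int[]{-1, 0});

            record Ship(int x, int y, char dir, int length) {}
            record Point(int x, int y) {}

            // Returns true if any two ships share a board cell.
            static boolean anyOverlap(List<Ship> ships) {
                Set<Point> occupied = new HashSet<>();
                for (Ship ship : ships) {
                    int[] d = DELTAS.get(ship.dir());
                    for (int i = 0; i < ship.length(); i++) {
                        Point cell = new Point(ship.x() + i * d[0], ship.y() + i * d[1]);
                        if (!occupied.add(cell)) return true;   // cell already used
                    }
                }
                return false;
            }
        }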

  • Can anyone simplify this Algorithm for me?

    - by Titan
    Basically I just want to check whether one time period overlaps with another. A null end date means "until infinity". Can anyone shorten this for me, as it's quite hard to read at times? Cheers.

        public class TimePeriod
        {
            public DateTime StartDate { get; set; }
            public DateTime? EndDate { get; set; }

            public bool Overlaps(TimePeriod other)
            {
                // Means it overlaps
                if (other.StartDate == this.StartDate
                    || other.EndDate == this.StartDate
                    || other.StartDate == this.EndDate
                    || other.EndDate == this.EndDate)
                    return true;

                if (this.StartDate > other.StartDate)
                {
                    // Negative
                    if (this.EndDate.HasValue)
                    {
                        if (this.EndDate.Value < other.StartDate) return true;
                        if (other.EndDate.HasValue && this.EndDate.Value < other.EndDate.Value) return true;
                    }
                    // Negative
                    if (other.EndDate.HasValue)
                    {
                        if (other.EndDate.Value > this.StartDate) return true;
                        if (this.EndDate.HasValue && other.EndDate.Value > this.EndDate.Value) return true;
                    }
                    else return true;
                }
                else if (this.StartDate < other.StartDate)
                {
                    // Negative
                    if (this.EndDate.HasValue)
                    {
                        if (this.EndDate.Value > other.StartDate) return true;
                        if (other.EndDate.HasValue && this.EndDate.Value > other.EndDate.Value) return true;
                    }
                    else return true;
                    // Negative
                    if (other.EndDate.HasValue)
                    {
                        if (other.EndDate.Value < this.StartDate) return true;
                        if (this.EndDate.HasValue && other.EndDate.Value < this.EndDate.Value) return true;
                    }
                }
                return false;
            }
        }
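
    For illustration, a hedged Java sketch of the usual simplification: two periods overlap exactly when each one starts no later than the other one ends, with a null end date treated as infinitely far in the future. The class and method names are assumptions, and whether touching endpoints should count as an overlap is a policy choice; here they do.

        import java.time.LocalDate;

        class TimePeriod {
            final LocalDate start;
            final LocalDate end;   // null means "until infinity"

            TimePeriod(LocalDate start, LocalDate end) {
                this.start = start;
                this.end = end;
            }

            boolean overlaps(TimePeriod other) {
                boolean thisStartsBeforeOtherEnds =
                        other.end == null || !this.start.isAfter(other.end);
                boolean otherStartsBeforeThisEnds =
                        this.end == null || !other.start.isAfter(this.end);
                return thisStartsBeforeOtherEnds && otherStartsBeforeThisEnds;
            }
        }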

  • Algorithm for data alignment of float arrays in Java

    - by Derk
    I have two float arrays representing y values in a line chart. Now I want to align these two charts. Are there any existing algorithms for aligning the two arrays? A very simple example:

        a: 2.5 1.3 1.6 4.2 3.6
        b: 3.3 1.4 2.5 1.3 1.6

    Now after alignment it should be:

        a:         2.5 1.3 1.6 4.2 3.6
        b: 3.3 1.4 2.5 1.3 1.6
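
    For illustration, a minimal Java sketch of one straightforward way to do this (the method name is an assumption): try every relative shift of b against a, score each shift by the mean squared difference over the overlapping region, and keep the best one. Cross-correlation or dynamic time warping would be the heavier-duty alternatives.

        class SeriesAlign {
            // Returns the shift of b relative to a (positive = b starts later)
            // that minimises the mean squared difference on the overlap.
            // A real implementation would also require a minimum overlap length.
            static int bestShift(float[] a, float[] b) {
                int bestShift = 0;
                double bestScore = Double.MAX_VALUE;
                for (int shift = -(b.length - 1); shift <= a.length - 1; shift++) {
                    double sum = 0;
                    int count = 0;
                    for (int i = 0; i < b.length; i++) {
                        int j = i + shift;              // index into a
                        if (j < 0 || j >= a.length) continue;
                        double d = a[j] - b[i];
                        sum += d * d;
                        count++;
                    }
                    if (count == 0) continue;
                    double score = sum / count;
                    if (score < bestScore) {
                        bestScore = score;
                        bestShift = shift;
                    }
                }
                return bestShift;
            }
        }

    On the example above, the best shift is -2: the last three values of b line up with the first three values of a.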

  • Need to devise a number crunching algorithm

    - by Ravi Gupta
    I stumbled upon this question: 7 to the power 7 is 823543. Which higher power of 7 ends with 823543? How should I go about it? The approach I came up with is very slow: it keeps multiplying by 7 and checks the last 6 digits of the result for a match. I tried with Lou's code:

        int x = 1;
        for (int i = 3; i <= 100000000; i = i + 4) {
            x = (x * 7) % 1000000;
            System.out.println("i=" + i + " x= " + x);
            if (x == 823543) {
                System.out.println("Ans " + i);
            }
        }

    And the CPU sounds like a pressure cooker, but I couldn't get the answer :(
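
    For illustration, a hedged Java sketch of the straightforward brute force, keeping the running value reduced modulo 10^6 so only the last six digits are ever stored, and keeping the exponent in step with the multiplications (one multiplication per exponent, unlike the loop above, where the exponent counter and the number of multiplications drift apart):

        public class PowerOfSevenSuffix {
            public static void main(String[] args) {
                final int MOD = 1_000_000;      // keep only the last 6 digits
                final int TARGET = 823_543;     // 7^7
                long x = 7;                     // current value of 7^k mod 10^6
                for (int k = 2; k <= 10_000_000; k++) {
                    x = (x * 7) % MOD;
                    if (x == TARGET && k > 7) {
                        System.out.println("7^" + k + " also ends with " + TARGET);
                        break;                  // remove the break to list further exponents
                    }
                }
            }
        }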

  • Automatic tracking algorithm

    - by nico
    Hi everyone, I'm trying to write a simple tracking routine to track some points in a movie. Essentially I have a series of 100-frame-long movies showing some bright spots on a dark background. I have ~100-150 spots per frame, and they move over the course of the movie. I would like to track them, so I'm looking for an efficient (but possibly not overkill to implement) routine to do that. A few more details:

      - The spots are a few (e.g. 5x5) pixels in size.
      - The movements are not big. A spot generally does not move more than 5-10 pixels from its original position, and the movements are generally smooth.
      - The "shape" of these spots is generally fixed; they don't grow or shrink, BUT they become less bright as the movie progresses.
      - The spots don't move in a particular direction. They can move right, then left, then right again.
      - The user will select a region around each spot, and then this region will be tracked, so I do not need to automatically find the points.

    As the videos are b/w, I thought I should rely on brightness. For instance, I thought I could move the region around and calculate the correlation of the region's area in the previous frame with that in the various positions in the next frame. I understand that this is a quite naïve solution, but do you think it may work? Does anyone know specific algorithms that do this? It doesn't need to be super fast; as long as it is accurate I'm happy. Thank you, nico
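
    For illustration, a minimal Java sketch of exactly that idea (template matching by brightness; the names and the use of sum of squared differences instead of correlation are assumptions): take the selected region from the previous frame as a template and slide it over a small search window in the next frame, keeping the offset with the best score.

        class SpotTracker {
            // frame[y][x] holds pixel brightness. Returns {dx, dy} of the best match
            // of the template (top-left at tx,ty, size w x h, assumed fully inside
            // the previous frame) within +/- radius pixels in the next frame.
            static int[] track(int[][] prev, int[][] next, int tx, int ty, int w, int h, int radius) {
                long best = Long.MAX_VALUE;
                int bestDx = 0, bestDy = 0;
                for (int dy = -radius; dy <= radius; dy++) {
                    for (int dx = -radius; dx <= radius; dx++) {
                        long ssd = 0;
                        for (int y = 0; y < h && ssd != Long.MAX_VALUE; y++) {
                            for (int x = 0; x < w; x++) {
                                int py = ty + y, px = tx + x;     // template pixel
                                int ny = py + dy, nx = px + dx;   // candidate pixel
                                if (ny < 0 || nx < 0 || ny >= next.length || nx >= next[0].length) {
                                    ssd = Long.MAX_VALUE;         // candidate falls off the frame
                                    break;
                                }
                                long d = prev[py][px] - next[ny][nx];
                                ssd += d * d;
                            }
                        }
                        if (ssd < best) { best = ssd; bestDx = dx; bestDy = dy; }
                    }
                }
                return new int[]{bestDx, bestDy};
            }
        }

    Because the spots dim over time, a normalised correlation score (subtracting each patch's mean and dividing by its standard deviation) would be a natural refinement over raw squared differences.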

  • An algorithm Problem

    - by Vignesh
    For coverage purposes, I have a set of run-time variables from my program execution. I get them from a series of executions (automated testing), i.e. it's a vector<vector<var, value>>. I also have a limited set of variables with expected values, from which I generate combinations; that is, I have a vector<vector<var, value>> that is smaller than the execution vector. Now I need to compare them and tell which of the combinations I generated were exactly executed in one of the tests. My algorithm is O(n^4). Is there any way to bring it down, something like set intersection? I'm using Java, and Vectors because of thread safety.
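
    For illustration, a hedged Java sketch of the set-based idea (the types and names are assumptions about the data described above): turn each execution into a hash set of its (var, value) pairs once, then a generated combination matches an execution exactly when the execution's set contains every pair of the combination. That replaces the nested scans with near-constant-time lookups, roughly O(executions x combinations x combination size).

        import java.util.*;

        class CoverageMatch {
            record Binding(String var, String value) {}   // one (var, value) pair

            // For each combination, reports whether some execution contains all its pairs.
            static List<Boolean> matches(List<List<Binding>> executions,
                                         List<List<Binding>> combinations) {
                List<Set<Binding>> executionSets = new ArrayList<>();
                for (List<Binding> execution : executions) {
                    executionSets.add(new HashSet<>(execution));
                }
                List<Boolean> result = new ArrayList<>();
                for (List<Binding> combination : combinations) {
                    boolean covered = executionSets.stream()
                            .anyMatch(set -> set.containsAll(combination));
                    result.add(covered);
                }
                return result;
            }
        }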

  • Algorithm to produce Cartesian product of arrays in depth-first order

    - by Yuri Gadow
    I'm looking for an example of how, in Ruby, a C-like language, or pseudocode, to create the Cartesian product of a variable number of integer arrays, each of differing length, and step through the results in a particular order. So given [1,2,3], [1,2,3], [1,2,3]:

        [1, 1, 1]
        [2, 1, 1]
        [1, 2, 1]
        [1, 1, 2]
        [2, 2, 1]
        [1, 2, 2]
        [2, 1, 2]
        [2, 2, 2]
        [3, 1, 1]
        [1, 3, 1]
        etc.

    instead of the typical result I've seen (including the example I give below):

        [1, 1, 1]
        [2, 1, 1]
        [3, 1, 1]
        [1, 2, 1]
        [2, 2, 1]
        [3, 2, 1]
        [1, 3, 1]
        [2, 3, 1]
        etc.

    The problem with this order is that the third position isn't explored at all until all combinations of the first two have been tried. In the code that uses this, that means that even though the right answer is generally (the much larger equivalent of) 1,1,2, it will examine a few million possibilities instead of just a few thousand before finding it. I'm dealing with result sets of one million to hundreds of millions, so generating them and then sorting isn't doable here and would defeat the reason for ordering them as in the first example, which is to find the correct answer sooner and so break out of the Cartesian product generation earlier. Just in case it helps clarify any of the above, here's how I do this now (this has correct results and the right performance, but not the order I want, i.e., it creates results as in the second listing above):

        def cartesian(a_of_a)
          a_of_a_len = a_of_a.size
          result = Array.new(a_of_a_len)
          j, k, a2, a2_len = nil, nil, nil, nil
          i = 0
          while 1 do
            j, k = i, 0
            while k < a_of_a_len
              a2 = a_of_a[k]
              a2_len = a2.size
              result[k] = a2[j % a2_len]
              j /= a2_len
              k += 1
            end
            return if j > 0
            yield result
            i += 1
          end
        end

    UPDATE: I didn't make it very clear that I'm after a solution where all the combinations of 1,2 are examined before 3 is added in, then all 3 and 1, then all 3, 2 and 1, then all 3, 2. In other words, explore all earlier combinations "horizontally" before "vertically". The precise order in which those possibilities are explored, i.e., 1,1,2 or 2,1,1, doesn't matter, just that all 2 and 1 are explored before mixing in 3, and so on.
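
    For illustration, a hedged Java sketch (rather than Ruby, and with made-up names) of one way to get this "horizontal before vertical" order: enumerate the product level by level, where level m may use only the first m+1 elements of each array and must use the element at index m in at least one position; everything built purely from lower indices has already been emitted at an earlier level.

        import java.util.Arrays;
        import java.util.function.Consumer;

        public class DepthFirstProduct {
            // Emits tuples level by level: every tuple built only from the first m
            // elements of each array appears before any element at index m is used.
            static void product(int[][] arrays, Consumer<int[]> emit) {
                int maxLen = 0;
                for (int[] a : arrays) maxLen = Math.max(maxLen, a.length);
                for (int level = 0; level < maxLen; level++) {
                    emitLevel(arrays, level, 0, new int[arrays.length], false, emit);
                }
            }

            private static void emitLevel(int[][] arrays, int level, int pos, int[] idx,
                                          boolean usedLevel, Consumer<int[]> emit) {
                if (pos == arrays.length) {
                    if (usedLevel) {               // skip tuples already emitted at a lower level
                        int[] tuple = new int[idx.length];
                        for (int k = 0; k < idx.length; k++) tuple[k] = arrays[k][idx[k]];
                        emit.accept(tuple);
                    }
                    return;
                }
                int cap = Math.min(level, arrays[pos].length - 1);
                for (int i = 0; i <= cap; i++) {
                    idx[pos] = i;
                    emitLevel(arrays, level, pos + 1, idx, usedLevel || i == level, emit);
                }
            }

            public static void main(String[] args) {
                int[][] arrays = { {1, 2, 3}, {1, 2, 3}, {1, 2, 3} };
                product(arrays, tuple -> System.out.println(Arrays.toString(tuple)));
            }
        }

    On the [1,2,3] x [1,2,3] x [1,2,3] example this prints [1,1,1], then the seven tuples built from 1 and 2, and only then the tuples containing 3.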

  • Is using a FSM a good design for general text parsing?

    - by eSKay
    I am reading a file that is filled with hex numbers. I have to identify a particular pattern, say "aaad" (without quotes), in it. Every time I see the pattern, I generate some data to some other file. This is a very common case in designing programs - parsing and looking for a particular pattern. I have designed it as a finite state machine and structured it in C using a switch-case to change states. This was the first implementation that occurred to me.

    DESIGN: Are there better designs possible?

    IMPLEMENTATION: Do you see any problems with using a switch-case as I mentioned?
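
    For illustration, a hedged Java sketch of such a switch-driven state machine for the pattern "aaad" (the class and method names are assumptions); a table-driven transition array, or a ready-made matcher such as KMP, are the usual alternatives once the pattern set grows.

        // A tiny finite state machine that scans a character stream for "aaad".
        // The state is simply how many characters of the pattern have matched so far.
        class PatternFsm {
            static int countMatches(CharSequence text) {
                int state = 0;     // 0..3 = length of the matched prefix of "aaad"
                int matches = 0;
                for (int i = 0; i < text.length(); i++) {
                    char c = text.charAt(i);
                    switch (state) {
                        case 0, 1, 2 -> state = (c == 'a') ? state + 1 : 0;
                        case 3 -> {
                            if (c == 'd') { matches++; state = 0; }
                            else state = (c == 'a') ? 3 : 0;   // "aaaa" still ends in "aaa"
                        }
                    }
                }
                return matches;
            }
        }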

  • Algorithm for scoring user activity

    - by ManBugra
    I have an application where users can:

      - write reviews about products
      - add comments to products
      - up/down vote reviews
      - up/down vote comments

    Every up/down vote is recorded in a db table. What I want to do now is create a ranking of the most active users in the last 4 weeks. Of course, good reviews should be weighted more than good comments, but also, e.g., 10 good comments should be weighted more than just one good review. Example:

        // reviews created in recent 4 weeks
        // format: [ upVoteCount, downVoteCount ]
        var reviews = [ [120,23], [32,12], [12,0], [23,45] ];

        // comments created in recent 4 weeks
        // format: [ upVoteCount, downVoteCount ]
        var comments = [ [1,2], [322,1], [0,0], [0,45] ];

        // create weight vector
        // format: [ reviewWeight, commentsWeight ]
        var weight = [0.60, 0.40];

        // signature: activities..., activityWeight
        var userActivityScore = score(reviews, comments, weight);

        ... update user table ...

        List<Users> users = "from users u order by u.userActivityScore desc";

    What would a fair scoring function look like? What could an implementation of the score() function look like? How could a weight g be added to the function so that reviews are weighted more heavily? And what would such a function look like if, for example, votes for pictures were added?
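
    For illustration, a hedged Java sketch of one possible score() (the weights, the per-item net-vote formula and the names are all assumptions): score each item by its net votes, sum per activity type, and combine the per-type sums with the weight vector, so that many decent comments can still outrank a single good review.

        class ActivityScore {
            // votes[i] = { upVoteCount, downVoteCount } for one review or comment
            static double typeScore(int[][] votes) {
                double sum = 0;
                for (int[] v : votes) {
                    sum += Math.max(0, v[0] - v[1]);   // net votes, ignore negative items
                }
                return sum;
            }

            // weights = { reviewWeight, commentWeight }, e.g. { 0.60, 0.40 }
            static double score(int[][] reviews, int[][] comments, double[] weights) {
                return weights[0] * typeScore(reviews)
                     + weights[1] * typeScore(comments);
            }

            public static void main(String[] args) {
                int[][] reviews  = { {120, 23}, {32, 12}, {12, 0}, {23, 45} };
                int[][] comments = { {1, 2}, {322, 1}, {0, 0}, {0, 45} };
                System.out.println(score(reviews, comments, new double[]{0.60, 0.40}));
            }
        }

    Adding a new activity type (for example, picture votes) would then just mean another typeScore() term and another entry in the weight vector.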

  • Discovering a functional algorithm from a mutable one

    - by Garrett Rowe
    This isn't necessarily a Scala question; it's a design question that has to do with avoiding mutable state, functional thinking, and that sort of thing. It just happens that I'm using Scala. Given this set of requirements:

      - Input comes from an essentially infinite stream of random numbers between 1 and 10.
      - Final output is either SUCCEED or FAIL.
      - There can be multiple objects 'listening' to the stream at any particular time, and they can begin listening at different times, so they may all have a different concept of the 'first' number; therefore listeners to the stream need to be decoupled from the stream itself.

    Pseudocode:

        if (first number == 1) SUCCEED
        else if (first number >= 9) FAIL
        else {
          first = first number
          rest = rest of stream
          for each (n in rest) {
            if (n == 1) FAIL
            else if (n == first) SUCCEED
            else continue
          }
        }

    Here is a possible mutable implementation:

        sealed trait Result
        case object Fail extends Result
        case object Succeed extends Result
        case object NoResult extends Result

        class StreamListener {
          private var target: Option[Int] = None

          def evaluate(n: Int): Result = target match {
            case None =>
              if (n == 1) Succeed
              else if (n >= 9) Fail
              else {
                target = Some(n)
                NoResult
              }
            case Some(t) =>
              if (n == t) Succeed
              else if (n == 1) Fail
              else NoResult
          }
        }

    This will work, but it smells to me. StreamListener.evaluate is not referentially transparent, and the use of the NoResult token just doesn't feel right. It does have the advantage, though, of being clear and easy to use/code. Besides, there has to be a functional solution to this, right? I've come up with 2 other possible options:

      - Having evaluate return a (possibly new) StreamListener, but this means I would have to make Result a subtype of StreamListener, which doesn't feel right.
      - Letting evaluate take a Stream[Int] as a parameter and letting the StreamListener be in charge of consuming as much of the Stream as it needs to determine failure or success. The problem I see with this approach is that the class that registers the listeners should query each listener after each number is generated and take appropriate action immediately upon failure or success. With this approach, I don't see how that could happen, since each listener forces evaluation of the Stream until it completes evaluation. There is no concept here of a single number generation.

    Is there any standard Scala/FP idiom I'm overlooking here?

  • Creating combinations that have no more one intersecting element

    - by khuss
    I am looking to create a special type of combination in which no two sets have more than one intersecting element. Let me explain with an example. Say we have a 9-letter set that contains A, B, C, D, E, F, G, H, and I. If you create the standard non-repeating combinations of three letters, you will have 9C3 sets. These will contain sets like ABC, ABD, BCD, etc. I am looking to create sets that have at most 1 common letter. So in this example we would get the following sets: ABC, ADG, AEI, AFH, BEH, BFG, BDI, CFI, CDH, CEG, DEF, and GHI - note that if you take any two of these sets, they share no more than 1 letter. What would be a good way to generate such sets? It should be a scalable solution, so that I can do it for a set of 1000 letters with a subset size of 4. Any help is highly appreciated. Thanks

  • Binary Search Tree for specific intent

    - by Luís Guilherme
    We all know there are plenty of self-balancing binary search trees (BSTs), the most famous being the red-black tree and the AVL tree. It might be useful to take a look at AA trees and scapegoat trees too. I want to do deletions, insertions and searches, like any other BST. However, it will be common to delete all values in a given range, or to delete whole subtrees. So:

      - I want to insert, search and remove values in O(log n) (balanced tree).
      - I would like to delete a subtree, keeping the whole tree balanced, in O(log n) (worst-case or amortized).
      - It might be useful to delete several values in a row before rebalancing the tree.
      - I will most often insert 2 values at once; however, this is not a rule (just a tip in case there is a tree data structure that takes this into account).

    Is there a variant of AVL or RB that helps me with this? Scapegoat trees look more like this, but they would also need some changes; can anyone who has experience with them share some thoughts? More precisely, which balancing procedure and/or removal procedure would help keep these actions time-efficient?

  • mysql/algorithm: Weighting an average to accentuate differences from the mean

    - by Sai Emrys
    This is for a new feature on http://cssfingerprint.com (see /about for general info). The feature looks up the sites you've visited in a database of site demographics, and tries to guess what your demographic stats are based on that. All my demographics are in 0..1 probability format, not ratios or absolute numbers or the like. Essentially, you have a large number of data points that each pull you towards their own demographics. However, just taking the average is poor, because it means that by adding in a lot of generic data, the number goes down. For example, suppose you've visited sites S0..S50. All except S0 are 48% female; S0 is 100% male. If I'm guessing your gender, I want to have a value close to 100%, not just the 49% that a straight average would give. Also, consider that most demographics (i.e. everything other than gender) do not have an average of 50%. For example, the average probability of having kids 0-17 is ~37%. The more a given site's demographics differ from this average (e.g. maybe it's a site for parents, or for child-free people), the more it should count in my guess of your status. What's the best way to calculate this? For extra credit: what's the best way to calculate this that is also cheap and easy to do in MySQL?
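
    For illustration, a hedged Java sketch of one standard way to get this behaviour (a naive-Bayes-style combination in log-odds space, which is my assumption rather than anything taken from the question): convert each site's probability to log-odds, sum the deviations from the population average, and convert back. Generic sites near the average contribute almost nothing, while an extreme site like S0 pulls the estimate hard in its direction.

        class DemographicGuess {
            // probs: per-site probability of the trait (0..1); prior: population average.
            static double combine(double[] probs, double prior) {
                double eps = 1e-6;                       // keep log() away from 0 and 1
                double priorLogOdds = Math.log(prior / (1 - prior));
                double logOdds = priorLogOdds;
                for (double p : probs) {
                    double q = Math.min(1 - eps, Math.max(eps, p));
                    // add how far this site pulls us away from the prior, in log-odds
                    logOdds += Math.log(q / (1 - q)) - priorLogOdds;
                }
                return 1.0 / (1.0 + Math.exp(-logOdds)); // back to a probability
            }

            public static void main(String[] args) {
                double[] sites = new double[51];
                java.util.Arrays.fill(sites, 0.52);      // S1..S50: 48% female = 52% male
                sites[0] = 1.0;                          // S0: 100% male
                System.out.println(combine(sites, 0.5)); // prints a value close to 1.0
            }
        }

    The inner sum translates roughly to SQL as SUM(LOG(p / (1 - p)) - LOG(prior / (1 - prior))) over the visited sites, which keeps it cheap to compute in MySQL.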

  • Using static vs. member find method on a STL set?

    - by B Johnson
    I am using a set because I want the quick-lookup property of a sorted container such as a set. I am wondering if I have to use the find member function to get the benefit of the sorted container, or whether I can also use the non-member find from the STL algorithms. My hunch is that the free-standing version will do a linear search instead of the binary search that I want.

  • open source gossip-based membership protocol?

    - by Aaron
    I am looking for a library that I can plug into a distributed application and that implements any gossip-based membership protocol. Such a library would allow me to send/receive membership lists, merge received membership lists, etc. Even better would be a library that implements a protocol with O(log n) performance guarantees. Does anyone know of an open source library like this? It doesn't need to meet all of the aforementioned requirements; even something partially implemented would be helpful.

  • algorithm for checking addresses for matches?

    - by user151841
    I'm working on a survey program where people will be given promotional considerations the first time they fill out a survey. In a lot of scenarios, the only way we can stop people from cheating the system and getting a promotion they don't deserve is to check street address strings against each other.

    I was looking at using Levenshtein distance to give me a number that measures similarity, and considering addresses below a certain threshold to be duplicates. However, if someone were looking to game the system, they could easily write "S 5th St" instead of "South Fifth Street", and Levenshtein would consider those strings to be very different. So then I was thinking of converting all strings to a 'standard address form', i.e. 'South' becomes 's', 'Fifth' becomes '5th', etc. Then I started thinking this is hopeless, and too much effort to get working robustly. Is it? I'm working with PHP/MySQL, so I have the limitations inherent in that system.
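
    For illustration, a hedged Java sketch of the two-step idea described above (the abbreviation table is a tiny made-up sample, not a real USPS list): normalise each address by lower-casing it and mapping common words to a canonical short form, then compare the normalised strings with Levenshtein distance.

        import java.util.*;

        class AddressMatch {
            static final Map<String, String> CANON = Map.of(
                "south", "s", "north", "n", "east", "e", "west", "w",
                "street", "st", "avenue", "ave", "fifth", "5th", "first", "1st");

            static String normalize(String address) {
                StringBuilder out = new StringBuilder();
                for (String token : address.toLowerCase().split("[\\s.,]+")) {
                    out.append(CANON.getOrDefault(token, token)).append(' ');
                }
                return out.toString().trim();
            }

            // Classic two-row dynamic-programming edit distance.
            static int levenshtein(String a, String b) {
                int[] prev = new int[b.length() + 1];
                int[] curr = new int[b.length() + 1];
                for (int j = 0; j <= b.length(); j++) prev[j] = j;
                for (int i = 1; i <= a.length(); i++) {
                    curr[0] = i;
                    for (int j = 1; j <= b.length(); j++) {
                        int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                        curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1), prev[j - 1] + cost);
                    }
                    int[] tmp = prev; prev = curr; curr = tmp;
                }
                return prev[b.length()];
            }

            public static void main(String[] args) {
                String a = normalize("S 5th St");
                String b = normalize("South Fifth Street");
                System.out.println(a + " vs " + b + " -> distance " + levenshtein(a, b));
            }
        }

    In the PHP/MySQL stack, the same normalisation could be done in PHP before storing, and PHP's built-in levenshtein() function could then do the comparison on the normalised strings.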

  • Algorithm for bitwise fiddling

    - by EquinoX
    If I have a 32-bit binary number and I want to replace its lower 16 bits with a 16-bit number that I have, keeping the upper 16 bits, to produce a new binary number, how can I do this using simple bitwise operators? For example, the 32-bit binary number is:

        1010 0000 1011 1111 0100 1000 1010 1001

    and the lower 16 bits I have are:

        0000 0000 0000 0001

    so the result is:

        1010 0000 1011 1111 0000 0000 0000 0001

    How can I do this?
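
    For illustration, a minimal Java sketch of the standard mask-and-combine idiom (the class and method names are assumptions): clear the low half with an AND mask, then OR in the replacement.

        public class ReplaceLowHalf {
            static int replaceLow16(int original, int low16) {
                // keep the upper 16 bits of original, take the lower 16 bits from low16
                return (original & 0xFFFF0000) | (low16 & 0x0000FFFF);
            }

            public static void main(String[] args) {
                int original = 0b1010_0000_1011_1111_0100_1000_1010_1001;
                int low16    = 0b0000_0000_0000_0001;
                System.out.println(Integer.toBinaryString(replaceLow16(original, low16)));
                // prints 10100000101111110000000000000001
            }
        }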
