Search Results

Search found 6716 results on 269 pages for 'distributed algorithm'.


  • Sorting a 2 dimensional array on multiple columns

    - by Anon
    I need to sort a two-dimensional array of doubles on multiple columns using either C or C++. Could someone point me to the algorithm that I should use, or to an existing library (perhaps Boost?) that has this functionality? I have a feeling that writing a recursive function may be the way to go, but I am too lazy to write out the algorithm or implement it myself if it has been done elsewhere. :-) Thanks
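
    A minimal sketch of the multi-column comparison idea (shown in Python for brevity; in C++ the same thing is a single std::sort call with a comparator that compares the chosen columns in priority order):

        # Hypothetical data: sort rows by column 2 ascending, then column 0 descending.
        rows = [[3.1, 0.5, 2.0], [1.0, 9.9, 2.0], [4.2, 0.1, 1.5]]
        rows.sort(key=lambda row: (row[2], -row[0]))
        print(rows)   # [[4.2, 0.1, 1.5], [3.1, 0.5, 2.0], [1.0, 9.9, 2.0]]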

    Read the article

  • Drawing Directed Acyclic Graphs: Using DAG property to improve layout/edge routing?

    - by Robert Fraser
    Hi. Laying out the vertices of a DAG in tree form (i.e. vertices with no in-edges on top, vertices dependent only on those on the next level up, etc.) is rather simple. However, is there a simple algorithm for doing this that minimizes edge crossings? (For some graphs it may be impossible to eliminate edge crossings completely.) A picture says a thousand words: the original post showed two drawings of the same DAG, asking for an algorithm that would suggest the layout with few crossings instead of the tangled one.
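
    Exact crossing minimization is NP-hard, but the usual layered (Sugiyama-style) drawing pipeline gets good results with the barycenter heuristic: once vertices are assigned to levels, reorder each level so that every vertex sits near the average position of its neighbors on the level above, and sweep up and down a few times. A rough sketch of one downward pass, assuming levels and a predecessor map are already computed:

        def barycenter_pass(levels, preds):
            """levels: list of lists of vertex ids, top level first.
            preds[v]: vertices on the previous level with an edge into v.
            Reorders each level in place by the mean position of its predecessors."""
            for k in range(1, len(levels)):
                pos = {v: i for i, v in enumerate(levels[k - 1])}
                def barycenter(v):
                    ps = [pos[p] for p in preds.get(v, []) if p in pos]
                    return sum(ps) / len(ps) if ps else 0.0
                levels[k].sort(key=barycenter)

    Real implementations alternate downward and upward passes until the crossing count stops improving.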

    Read the article

  • how to swap array-elements to transfer the array from a column-like into a row-like representation

    - by Christian Ammer
    For example, the array a1, a2, a3, b1, b2, b3, c1, c2, c3, d1, d2, d3 represents the following table:

        a1, b1, c1, d1
        a2, b2, c2, d2
        a3, b3, c3, d3

    Now I would like to bring the array into the following form: a1, b1, c1, d1, a2, b2, c2, d2, a3, b3, c3, d3. Does an algorithm exist which takes the array (in the first form) and the dimensions of the table as input arguments and transfers the array into the second form? I thought of an algorithm which doesn't need to allocate additional memory; instead I think it should be possible to do the job with element-swap operations.
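
    This is an in-place transpose: the input stores the table column by column, and the desired output stores it row by row. One standard trick is that the element at index i must move to index (i * cols) mod (rows*cols - 1) (the last element stays put), and that permutation can be applied cycle by cycle with O(1) extra space. A sketch of the idea, assuming the table dimensions are known:

        def col_major_to_row_major(a, rows, cols):
            """Rearrange a (length rows*cols, column-major) into row-major order in place."""
            n = rows * cols
            if n < 3:
                return a
            m = n - 1
            dest = lambda i: (i * cols) % m      # where the element at index i must go
            for start in range(1, m):
                # Process each permutation cycle exactly once: only from its smallest index.
                i = dest(start)
                while i > start:
                    i = dest(i)
                if i < start:
                    continue
                # Rotate the cycle, carrying one element at a time.
                tmp, i = a[start], start
                while True:
                    j = dest(i)
                    a[j], tmp = tmp, a[j]
                    i = j
                    if i == start:
                        break
            return a

        data = ["a1", "a2", "a3", "b1", "b2", "b3", "c1", "c2", "c3", "d1", "d2", "d3"]
        print(col_major_to_row_major(data, 3, 4))
        # ['a1', 'b1', 'c1', 'd1', 'a2', 'b2', 'c2', 'd2', 'a3', 'b3', 'c3', 'd3']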

    Read the article

  • Detect session hang and kill it

    - by Jack Juiceson
    Hi all, I have an ASP.NET page that runs a certain algorithm and returns its output. I was wondering what happens, and how to handle the case, where the algorithm goes into an infinite loop because of a bug. It will hog the CPU, and other sessions will be served very slowly. I would love to have a way to tell IIS: if processing Algo.aspx takes more than 5 seconds, kill it, or something like that. Thanks in advance.

    Read the article

  • Algorithm performance

    - by william007
    I am testing an algorithm with different parameters on a computer, and I notice that the performance fluctuates for each parameter. Say the first run takes 20 ms, the second 5 ms, and the third 4 ms, yet the algorithm should do the same work each of those three times. I am using the Stopwatch class from the C# library to measure the time; is there a better way to measure the performance without being subject to those fluctuations?
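
    The usual cure is to warm up first (so JIT compilation and cold caches do not count against the first run), repeat the measurement many times, and report the minimum or median rather than a single reading. A rough sketch of the procedure (shown with Python's timeit for brevity; the same warm-up/repeat/aggregate pattern applies with C#'s Stopwatch):

        import timeit

        def benchmark(func, repeat=5, number=100):
            """Run func `number` times per trial, run `repeat` trials,
            and return the best per-call time; the minimum is the least
            noisy estimate of the code's intrinsic cost."""
            func()                                            # warm-up, not measured
            trials = timeit.repeat(func, repeat=repeat, number=number)
            return min(trials) / number

        print(benchmark(lambda: sorted(range(1000))))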

    Read the article

  • Python - Is a dictionary slow to find frequency of each character?

    - by psihodelia
    I am trying to find the frequency of each symbol in any given text, using an algorithm of O(n) complexity. My algorithm looks like this:

        s = len(text)
        P = 1.0 / s
        freqs = {}
        for char in text:
            try:
                freqs[char] += P
            except KeyError:
                freqs[char] = P

    but I doubt that this dictionary method is fast enough, because it depends on the underlying implementation of the dictionary methods. Is this the fastest method?
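
    Dictionary lookups are amortized O(1), so the approach above is already O(n); only the constant factor is in question. A common refinement is to count occurrences with collections.Counter (whose counting loop runs in C) and divide once at the end, roughly:

        from collections import Counter

        def frequencies(text):
            n = len(text)
            counts = Counter(text)                    # one pass over the text
            return {ch: c / n for ch, c in counts.items()}

        print(frequencies("abracadabra"))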

    Read the article

  • Number of 0/1 colorings of an m x n rectangle which have no monochromatic subrectangles with both dimensions greater than 1

    - by acbruptenda
    An m x n rectangular matrix is given, and each cell is to be filled with the colour 0 or 1. I have to find the number of colorings possible such that there is no monochromatic (same-colour) sub-rectangle both of whose dimensions are greater than 1 (e.g. 2x2, 2x3, 4x3). I have found a slightly different version of the problem here, but it said nothing about the algorithm, so I am looking for an algorithm here!
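
    Any larger monochromatic rectangle contains a monochromatic one spanning just two rows and two columns, so it is enough to forbid those. A brute-force counter for small grids is useful as a reference against a smarter column-by-column DP; the sketch below assumes the combinatorial reading in which the two rows and two columns need not be adjacent (restrict to adjacent indices if only contiguous blocks are forbidden):

        from itertools import combinations, product

        def count_colorings(m, n):
            """Count 0/1 colorings of an m x n grid with no two rows and two
            columns whose four intersection cells share one colour.
            Exponential in m*n; only for validating a faster algorithm."""
            count = 0
            for bits in product((0, 1), repeat=m * n):
                grid = [bits[r * n:(r + 1) * n] for r in range(m)]
                ok = True
                for r1, r2 in combinations(range(m), 2):
                    for c1, c2 in combinations(range(n), 2):
                        if grid[r1][c1] == grid[r1][c2] == grid[r2][c1] == grid[r2][c2]:
                            ok = False
                            break
                    if not ok:
                        break
                count += ok
            return count

        print(count_colorings(3, 3))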

    Read the article

  • What is the best way to find the period of a (repeating) list in Mathematica?

    - by Arnoud Buzing
    What is the best way to find the period in a repeating list? For example,

        a = {4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2}

    has the repeating block {4, 5, 1, 2, 3}, with the remainder {4, 5, 1, 2} matching but incomplete. The algorithm should be fast enough to handle longer cases, like:

        b = RandomInteger[10000, {100}];
        a = Join[b, b, b, b, Take[b, 27]]

    The algorithm should return $Failed if there is no repeating pattern like the above.
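
    The defining property is simply that the list equals itself shifted by p, i.e. a[[i]] == a[[i + p]] wherever both indices exist; the smallest such p is the period, and in Mathematica the shift test is Drop[a, p] === Drop[a, -p]. A sketch of the search (written in Python for brevity), requiring at least one full repetition so that a non-repeating list fails:

        def find_period(seq):
            """Smallest p with seq[i] == seq[i + p] for all valid i.
            Demands p <= len(seq) // 2 (one full repetition); returns None
            (the $Failed case) if no such p exists."""
            n = len(seq)
            for p in range(1, n // 2 + 1):
                if all(seq[i] == seq[i + p] for i in range(n - p)):
                    return p
            return None

        print(find_period([4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2]))   # -> 5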

    Read the article

  • unique substrings using suffix tree

    - by user1708762
    For a given string S of length n, an optimal algorithm for finding all unique substrings of S can't do better than O(n^2), since there can be that many distinct substrings; so the best algorithm will have complexity O(n^2). From what I have read, this can be implemented by creating a suffix tree for S, and the suffix tree for S can be created in O(n) time. Now, my question is: how can we use the suffix tree for S to get all the unique substrings of S in O(n^2)?
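
    The suffix-tree answer, in words: every distinct substring corresponds to exactly one position along an edge of the tree, so a DFS that emits, for each edge, every prefix of the edge label appended to the path spelled so far enumerates each unique substring exactly once, which is O(n^2) output in total. For comparison, a naive baseline that builds the same set directly (correct but with worse constants and memory, useful only for checking small cases) might look like:

        def unique_substrings_naive(s):
            """All distinct substrings of s; O(n^2) substrings, each built in O(n)."""
            return {s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)}

        print(sorted(unique_substrings_naive("abab")))
        # ['a', 'ab', 'aba', 'abab', 'b', 'ba', 'bab']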

    Read the article

  • SSLCipherSuite - disable weak encryption, cbc cipher and md5 based algorithm

    - by John
    A developer recently ran a PCI scan with TripWire against our LAMP server. They identified several issues and instructed the following to correct them:

    Problem: SSL server supports weak encryption for SSLv3, TLSv1. Solution: add the following rule to httpd.conf: SSLCipherSuite ALL:!aNULL:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM

    Problem: SSL server supports CBC ciphers for SSLv3, TLSv1. Solution: disable any cipher suites using CBC ciphers.

    Problem: SSL server supports a weak MAC algorithm for SSLv3, TLSv1. Solution: disable any cipher suites using MD5-based MAC algorithms.

    I tried searching Google for a comprehensive tutorial on how to construct an SSLCipherSuite directive to meet my requirements, but I didn't find anything I could understand. I see examples of SSLCipherSuite directives, but I need an explanation of what each component of the directive does. Even in the directive SSLCipherSuite ALL:!aNULL:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM, I don't understand, for example, what !LOW means. Can someone either a) tell me the SSLCipherSuite directive that will meet my needs, or b) point me to a resource that clearly explains each segment of an SSLCipherSuite directive and how to construct one?

    Read the article

  • What is the difference between these two find algorithms? [migrated]

    - by Joe
    I have these two Find algorithms, which look the same to me. Can anyone explain why they are actually different?

        Find(x):
            if x.parent = x then
                return x
            else
                return Find(x.parent)

    versus

        Find(x):
            if x.parent = x then
                return x
            else
                x.parent <- Find(x.parent)
                return x.parent

    I interpret the first one as int i = 0; return i++; and the second one as int i = 0; int tmp = i++; return tmp; which are exactly the same to me.
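
    Both versions return the same representative, but the second one also rewrites x.parent on the way back up: this is union-find path compression, which flattens the tree so that later Find calls on the same elements become almost O(1). A sketch of the difference, assuming the forest is stored as a parent array:

        def find_plain(parent, x):
            # Walks to the root and leaves the tree unchanged; long chains stay long.
            while parent[x] != x:
                x = parent[x]
            return x

        def find_compressed(parent, x):
            # Walks to the root and re-points every visited node directly at it.
            if parent[x] != x:
                parent[x] = find_compressed(parent, parent[x])
            return parent[x]

        parent = [0, 0, 1, 2, 3]          # the chain 4 -> 3 -> 2 -> 1 -> 0
        find_compressed(parent, 4)
        print(parent)                     # [0, 0, 0, 0, 0]: the chain is now flat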

    Read the article

  • Running Awk command on a cluster

    - by alex
    How do you execute a Unix shell command (an awk script, a pipe, etc.) on a cluster in parallel (step 1) and collect the results back to a central node (step 2)? Hadoop seems to be huge overkill, with its 600k LOC, and its performance is terrible (it takes minutes just to initialize a job). I don't need shared memory, or something like MPI/OpenMP, since I don't need to synchronize or share anything, and I don't need a distributed VM or anything that complex. Google's Sawzall seems to work only with Google's proprietary MapReduce API, and the distributed shell packages I found failed to compile. There must be a simple way to run a data-centric batch job on a cluster, something as close as possible to the native OS, maybe using Unix RPC calls. I liked rsync's simplicity, but it seems to update remote nodes sequentially, and as far as I know you can't use it to execute scripts. Switching to Plan 9 or some other network-oriented OS looks like another overkill. I'm looking for a simple, distributed way to run awk scripts or similar, as close as possible to the data, with minimal initialization overhead, in a nothing-shared, nothing-synchronized fashion. Thanks, Alex
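
    In the absence of a framework, the minimal version of steps 1 and 2 is an SSH fan-out: run the command on every node in parallel and gather each node's stdout centrally (tools like GNU parallel or pdsh package the same idea). A rough sketch, assuming passwordless SSH, a hypothetical host list, and data already living on each node:

        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        HOSTS = ["node01", "node02", "node03"]        # hypothetical node names
        CMD = "awk '{ sum += $3 } END { print sum }' /data/part-*"

        def run_on(host):
            # ssh runs the command remotely; its stdout comes back over the connection.
            result = subprocess.run(["ssh", host, CMD], capture_output=True, text=True)
            return host, result.stdout.strip()

        with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
            for host, output in pool.map(run_on, HOSTS):
                print(host, output)                   # step 2: collect centrally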

    Read the article

  • What algorithms do "the big ones" use to cluster news?

    - by marco92w
    I want to cluster texts for a news website. At the moment I use this algorithm to find the related articles, but I found out that PHP's similar_text() gives very good results too. What sort of algorithms do "the big ones" (Google News, Topix, Techmeme, Wikio, Megite, etc.) use? Of course, you don't know exactly how the algorithms work; it's secret. But maybe someone knows approximately the way they work? The algorithm I use at the moment is very slow: it only compares two articles at a time, so to get the relations between 5,000 articles you need about 12,500,000 comparisons, which is quite a lot. Are there alternatives that reduce the number of necessary comparisons? [I'm not looking for improvements to my algorithm.] What do "the big ones" do? I'm sure they don't always compare one article to another, 12,500,000 times for 5,000 news items. It would be great if somebody could say something about this topic.
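
    The standard way to avoid all-pairs comparison is to represent each article as a set of terms or shingles (typically TF-IDF weighted) and use an inverted index, so that only articles sharing at least one informative term are ever compared; locality-sensitive hashing (MinHash/SimHash) pushes this further for very large feeds. A sketch of the inverted-index pruning step:

        from collections import defaultdict
        from itertools import combinations

        def candidate_pairs(docs, max_doc_freq=0.1):
            """docs: {doc_id: set of terms}.  Returns pairs of documents sharing
            at least one term that is rare enough to be informative, so most of
            the ~12.5 million possible pairs are never examined."""
            index = defaultdict(set)
            for doc_id, terms in docs.items():
                for term in terms:
                    index[term].add(doc_id)
            limit = max(2, int(max_doc_freq * len(docs)))
            pairs = set()
            for ids in index.values():
                if len(ids) <= limit:                 # skip terms that occur everywhere
                    pairs.update(combinations(sorted(ids), 2))
            return pairs

    Only the pairs returned here need the expensive similarity computation.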

    Read the article

  • What Algorithm will Find New Longtail Keywords for *keyword* in PPC

    - by Becci
    I am looking for the algorithm (or combination of them) that would allow someone to find new longtail PPC search phrases based on, say, one core keyword. E.g. #1: word word corekeyword. E.g. #2: word corekeyword word. Google's keyword tool allows a limited number, mostly of the e.g. #1 shape (https://adwords.google.com.au/select/KeywordToolExternal). I also know of other PPC apps that allow more volume than Google AdWords' keyword tool, but I want to find other combinations that mention the core keyword and then naturally sort them by highest search volume. Working example of an exact match: core keyword "copywriter" (40,500 searches a month). Google will serve up "become a copywriter" (480 searches globally per month in English). But if I specifically look up "how to become a copywriter" (720 searches a month), this exact longtail keyword phrase has 300 more searches than the 3-word version spat out by Google. I want the algorithm to find any other highly searched exact longtails like "how to become a copywriter", simply because it would save significant money to find other longtail keywords after your campaign has been running and made Google lots of money. I don't want a concatenation algorithm (I already have one of those), because, hypothetically, I don't know in advance what the keywords I want to find will be. Any gurus out there? Becci

    Read the article

  • looking for a license key algorithm.

    - by giulio
    There are a lot of questions relating to license keys asked on Stack Overflow, but they don't answer this question. Can anyone provide a simple license key algorithm that is technology independent and doesn't require a diploma in mathematics to understand? The license key algorithm is similar to public key encryption. I just need something simple that can be implemented on any platform (.NET/Java) and uses simple data like characters; preferably no byte translations required. So if a person presents a string, a complementary string can be generated that is the authorisation code. Below is a common scenario it would be used for:

    1. The customer downloads the software, which generates a unique key upon initial startup/installation.
    2. The software runs during a trial period.
    3. At the end of the trial period, an authorisation key is required.
    4. The customer goes to a designated web site, enters their code, and gets back the authorisation code that enables the software, after paying. :)

    Don't be afraid to describe your answer as though you're talking to a 5-year-old, as I am not a mathematician. I just need a decent basic algorithm; we're not launching nukes... NB: please, no philosophy on encryption, nor on who Diffie-Hellman is. I just need a basic solution.
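
    A common lightweight scheme that fits this flow is a keyed hash rather than true public-key cryptography: the vendor keeps a secret, and the authorisation code is an HMAC of the customer's installation string, truncated and rendered in readable characters. A hedged sketch (the secret, code length, and grouping are arbitrary choices; note that if the software must verify the code offline, the secret has to ship inside it, which is the usual weakness of this approach compared with public-key signing):

        import base64
        import hashlib
        import hmac

        SECRET = b"vendor-private-secret"            # known only to the licensing web site

        def authorisation_code(installation_key, length=20):
            """Derive a human-typable code from the key the software generated."""
            digest = hmac.new(SECRET, installation_key.encode("utf-8"), hashlib.sha256).digest()
            code = base64.b32encode(digest).decode("ascii").rstrip("=")[:length]
            return "-".join(code[i:i + 5] for i in range(0, length, 5))

        def is_valid(installation_key, code):
            return hmac.compare_digest(authorisation_code(installation_key), code)

        print(authorisation_code("MACHINE-1234-ABCD"))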

    Read the article

  • How to keep only duplicates efficiently?

    - by Marc Eaddy
    Given an STL vector, I'd like an algorithm that outputs only the duplicates, in sorted order, e.g.,

        INPUT : { 4, 4, 1, 2, 3, 2, 3 }
        OUTPUT: { 2, 3, 4 }

    The algorithm is trivial, but the goal is to make it as efficient as std::unique(). My naive implementation modifies the container in-place:

        void keep_duplicates(vector<int>* pv)
        {
            // Sort (in-place) so we can find duplicates in linear time
            sort(pv->begin(), pv->end());

            vector<int>::iterator it_start = pv->begin();
            while (it_start != pv->end())
            {
                size_t nKeep = 0;

                // Find the next different element
                vector<int>::iterator it_stop = it_start + 1;
                while (it_stop != pv->end() && *it_start == *it_stop)
                {
                    nKeep = 1; // This gets set redundantly
                    ++it_stop;
                }

                // If the element is a duplicate, keep only the first one (nKeep=1).
                // Otherwise, the element is not duplicated, so erase it (nKeep=0).
                it_start = pv->erase(it_start + nKeep, it_stop);
            }
        }

    If you can make this more efficient, elegant, or general, please let me know. For example, a custom sorting algorithm, or copying elements in the second loop to eliminate the erase() call.

    Read the article

  • Bubble sort algorithm implementations (Haskell vs. C)

    - by kingping
    Hello. I have written two implementations of the bubble sort algorithm, one in C and one in Haskell. The Haskell implementation:

        module Main where

        main = do
            contents <- readFile "./data"
            print "Data loaded. Sorting.."
            let newcontents = bubblesort contents
            writeFile "./data_new_ghc" newcontents
            print "Sorting done"

        bubblesort list = sort list [] False

        rev  = reverse -- separated. To see
        rev2 = reverse -- who calls the routine

        sort (x1:x2:xs) acc _ | x1 > x2 = sort (x1:xs) (x2:acc) True
        sort (x1:xs) acc flag = sort xs (x1:acc) flag
        sort [] acc True = sort (rev acc) [] False
        sort _ acc _ = rev2 acc

    I compared the two implementations by running both on a file of size 20 KiB. The C implementation took about a second; the Haskell one took about 1 min 10 sec. I also profiled the Haskell application:

        Compile for profiling:  C:\Temp> ghc -prof -auto-all -O --make Main
        Profile:                C:\Temp> Main.exe +RTS -p

    and got these results. This is pseudocode of the algorithm:

        procedure bubbleSort( A : list of sortable items ) defined as:
            do
                swapped := false
                for each i in 0 to length(A) - 2 inclusive do:
                    if A[i] > A[i+1] then
                        swap( A[i], A[i+1] )
                        swapped := true
                    end if
                end for
            while swapped
        end procedure

    I wonder if it's possible to make the Haskell implementation work faster without changing the algorithm (there are actually a few tricks that would make it faster, but neither implementation has those optimizations).

    Read the article

  • AStar in a specific case in C#

    - by KiTe
    Hello. For an internship, I have to use the A* algorithm in the following case: the unit shape is a square of height and width 1; we can travel within zones represented by rectangles, but we can't travel outside these predefined areas; and we can go from one rectangle to another through a door, represented by a segment on the corresponding square edge. Here are the two designs I have already tried, neither of which satisfied my boss:

    1. A Door class which contains the locations of the two separated squares and the door's orientation (top, left, bottom, right); a Map class which contains a door list, a rectangle list representing the walkable areas, and a 2D array representing the ground squares (carrying additional information through an enumeration); and classes for the A* algorithm (Node, AStar).

    2. A MapCase class, which holds information about the cell's effect and its doors through an enumeration (with the [Flags] attribute set, so several pieces of information can be combined per cell); a Map class which contains only a 2D array of MapCase objects; and the classes for the A* algorithm (still Node and AStar).

    The second version is better than the first (less useless calculation, better map class architecture), but my boss is still not satisfied with my mapping class architecture. The A* and Node classes are good and easily maintainable, so I don't think I need to explain them in more depth for now. So here is my question: does somebody have a good idea for implementing A* with this problem specification (walkable rectangles, a square unit area, travelling through doors)? He said that a grid view of the problem (so a 2D array) shouldn't be the correct way to solve it. I hope I've been clear in describing my problem. Thanks, KiTe
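
    One common way to drop the grid entirely is to make the doors themselves the graph: each door (plus the start and goal points) becomes a node, two nodes are connected when they border the same rectangle, edge weights are straight-line distances, and standard A* runs over that small graph. A hedged sketch of the idea (door positions are taken as single points, e.g. segment midpoints, which only approximates the true shortest path):

        import heapq
        import math

        def astar(neighbors, start, goal, pos):
            """neighbors(node) -> adjacent nodes; pos[node] -> (x, y)."""
            dist = lambda a, b: math.dist(pos[a], pos[b])
            frontier = [(dist(start, goal), 0.0, start)]   # (f = g + h, g, node)
            best = {start: 0.0}
            while frontier:
                _, g, node = heapq.heappop(frontier)
                if node == goal:
                    return g
                for nxt in neighbors(node):
                    ng = g + dist(node, nxt)
                    if ng < best.get(nxt, float("inf")):
                        best[nxt] = ng
                        heapq.heappush(frontier, (ng + dist(nxt, goal), ng, nxt))
            return None

        # Two nodes are neighbours when the rectangles they belong to overlap;
        # a door belongs to both rooms it joins, start/goal to the room containing them.
        rooms_of = {"start": {"A"}, "door1": {"A", "B"}, "door2": {"B", "C"}, "goal": {"C"}}
        pos = {"start": (0, 0), "door1": (4, 1), "door2": (8, 1), "goal": (12, 0)}
        neighbors = lambda n: [m for m in rooms_of if m != n and rooms_of[m] & rooms_of[n]]
        print(astar(neighbors, "start", "goal", pos))      # total path length via the doors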

    Read the article

  • Fast, very lightweight algorithm for camera motion detection?

    - by Ertebolle
    I'm working on an augmented reality app for iPhone that involves a very processor-intensive object recognition algorithm (pushing the CPU to 100%, it can get through maybe 5 frames per second), and in an effort both to save battery power and to make the whole thing less "jittery" I'm trying to come up with a way to run that object recognizer only when the user is actually moving the camera around. My first thought was to simply use the iPhone's accelerometer/gyroscope, but in testing I found that people would very often move the iPhone at a consistent enough attitude and velocity that there wouldn't be any way to tell it was still in motion. So that left the option of analyzing the actual video feed and detecting movement in that. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer; I can get it to an acceptable framerate if I lower the depth levels / downsample the image / track fewer points, but then accuracy suffers and it starts to miss some large movements and trigger on small hand-shaking-y ones. So my question is: is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects, I don't even need to know which direction the camera is moving; all I really need is a way to feed something two frames of video and have it tell me how far apart they are.
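
    If all that's needed is a single "how much did the view change" number, plain frame differencing on a heavily downsampled grayscale image is about as cheap as it gets, far lighter than any optical flow. A sketch with NumPy (the subsampling step and the threshold are arbitrary and would need tuning; on the device the same few operations can be run directly on the luminance plane):

        import numpy as np

        def motion_score(prev_frame, curr_frame, step=8):
            """Mean absolute difference between coarsely subsampled grayscale frames.
            Returns a value in [0, 255]; compare it to a tuned threshold to decide
            whether the camera moved enough to re-run the expensive recognizer."""
            a = prev_frame[::step, ::step].astype(np.int16)
            b = curr_frame[::step, ::step].astype(np.int16)
            return float(np.abs(a - b).mean())

        prev = np.zeros((480, 640), dtype=np.uint8)
        curr = np.full((480, 640), 30, dtype=np.uint8)
        print(motion_score(prev, curr))               # 30.0: every sampled pixel changed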

    Read the article

  • Need help implementing this algorithm with Hadoop MapReduce

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I'm searching for someone to implement it for me, though it is true that I really need a lot of help! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it this way. EDIT (I did not clarify this in the previous version): the data set is on a Hadoop cluster, and I should make a MapReduce implementation of the algorithm. I was reading about MapReduce and thought that I would first do the standard implementation and it would then be more or less easy to do it with MapReduce. That didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here, briefly, is pseudocode of my algorithm:

        LIST termList   (there is a method that creates this list from a Lucene index)
        FOLDER topFolder

        INPUT topFolder
        IF it is a folder and not empty
            list files (there are 30 sub folders inside)
            FOR EACH sub folder
                GET file "CheckedFile.txt"
                analyze(CheckedFile)
            ENDFOR
        END IF

        Method ANALYZE(CheckedFile)
            read CheckedFile
            WHILE CheckedFile has next line
                GET line
                FOR (loops through termList)
                    GET third word from line
                    IF third word = term from list
                        append whole line to string buffer
                    ENDIF
                ENDFOR
            END WHILE
            OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called a new file has to be created, and I understood that MapReduce has difficulty writing to many outputs? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to doing it, obviously I do not know enough and I am STUCK! Please help.
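
    One way to see the MapReduce shape of this (a sketch, not necessarily the form the assignment requires): treat all the CheckedFile.txt lines as the input, let the mapper key each matching line by the folder it came from, and let the reducer concatenate the lines per key, so the "many output files" become one reduce group per folder. With Hadoop Streaming the two sides can be small scripts; the environment variable holding the current input path is map_input_file in older Hadoop versions and mapreduce_map_input_file in newer ones, so check your release:

        #!/usr/bin/env python
        # mapper.py -- reads input lines on stdin, emits "folder <TAB> line" for matches.
        import os
        import sys

        TERMS = set(line.strip() for line in open("terms.txt"))     # shipped with -files

        folder = os.path.dirname(os.environ.get("map_input_file", "unknown"))

        for line in sys.stdin:
            words = line.split()
            if len(words) >= 3 and words[2] in TERMS:
                print(folder + "\t" + line.rstrip())

        #!/usr/bin/env python
        # reducer.py -- lines arrive grouped and sorted by key; join them per folder.
        import sys

        current, buffered = None, []
        for line in sys.stdin:
            key, _, value = line.rstrip("\n").partition("\t")
            if key != current and current is not None:
                print(current + "\t" + " | ".join(buffered))
                buffered = []
            current = key
            buffered.append(value)
        if current is not None:
            print(current + "\t" + " | ".join(buffered))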

    Read the article

  • How to get a list of "fastest miles" from a set of GPS points

    - by santiagobasulto
    I'm trying to solve a weird problem; maybe you guys know of some algorithm that takes care of this. I have data for a cargo freight truck and want to extract some information. Suppose I've got a list of sorted points that I get from the GPS; that's the route for that truck:

        [
          { "lng": "-111.5373066", "lat": "40.7231711", "time": "1970-01-01T00:00:04Z", "elev": "1942.1789265256325" },
          { "lng": "-111.5372056", "lat": "40.7228762", "time": "1970-01-01T00:00:07Z", "elev": "1942.109892409177" }
        ]

    Now, what I want to get is a list of the "fastest miles". An example: given the points A, B, C, D, E, F, the distance from point A to point B is 1 mile, and the truck took 10:32 minutes; from point B to point D is another mile, and the truck took 10 minutes; and so on. So I need a list sorted by time, similar to:

        B -> D: 10:00
        A -> B: 10:32
        D -> F: 11:02

    Do you know any efficient algorithm that lets me calculate that? Thank you all. PS: I'm using Python.

    EDIT: I've got the distance; I know how to calculate it, and there are plenty of posts on doing that. What I need is an algorithm to tokenize by mile and get the speed from that. Having a distance function is not helpful enough:

        results = {}
        for point in points:
            aux_points = points.takeWhile(point > n)  # This doesn't exist, just trying to be simple
            for aux_point in aux_points:
                d = distance(point, aux_point)
                if d == 1_MILE:
                    time_elapsed = time(point, aux_point)
                    results[time_elapsed] = (point, aux_point)

    I'm still doing some pretty inefficient calculations.
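
    With the cumulative distance along the track precomputed, a single sweep with a binary search finds, for each starting fix, the first fix at least one mile ahead and the elapsed time of that window; sorting the windows by duration gives the "fastest miles". A sketch assuming a dist(a, b) function for consecutive fixes and a seconds(p) timestamp accessor (both hypothetical helpers):

        from bisect import bisect_left

        MILE = 1609.344  # metres

        def fastest_miles(points, dist, seconds):
            """Return (duration_seconds, start_index, end_index) for every window
            that just covers one mile, fastest window first."""
            cum = [0.0]                                  # cumulative metres up to point i
            for a, b in zip(points, points[1:]):
                cum.append(cum[-1] + dist(a, b))
            windows = []
            for i in range(len(points)):
                j = bisect_left(cum, cum[i] + MILE)      # first fix >= one mile further on
                if j < len(points):
                    windows.append((seconds(points[j]) - seconds(points[i]), i, j))
            return sorted(windows)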

    Read the article

  • Django: Applying Calculations To A Query Set

    - by TheLizardKing
    I have a QuerySet that I wish to pass to a generic view for pagination:

        links = Link.objects.annotate(votes=Count('vote')).order_by('-created')[:300]

    This is my "hot" page, which lists my 300 latest submissions (10 pages of 30 links each). I now want to sort this QuerySet by the algorithm that Hacker News uses:

        (p - 1) / (t + 2)^1.5

        p = votes minus submitter's initial vote
        t = age of submission in hours

    Because applying this algorithm over the entire database would be pretty costly, I am content with just the last 300 submissions. My site is unlikely to be the next Digg/Reddit, so while scalability is a plus, it is not required. My question is: how do I iterate over my QuerySet and sort it by the above algorithm? For reference, here are my applicable models:

        class Link(models.Model):
            category = models.ForeignKey(Category, blank=False, default=1)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)
            modified = models.DateTimeField(auto_now=True)
            url = models.URLField(max_length=1024, unique=True, verify_exists=True)
            name = models.CharField(max_length=512)

            def __unicode__(self):
                return u'%s (%s)' % (self.name, self.url)

        class Vote(models.Model):
            link = models.ForeignKey(Link)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)

            def __unicode__(self):
                return u'%s vote for %s' % (self.user, self.link)

    Notes: I don't have "downvotes", so the mere presence of a Vote row indicates a vote for a particular link by a particular user.
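
    Since the queryset is capped at 300 rows anyway, the simplest route is to evaluate it and sort in Python, applying the formula per link. A hedged sketch (it assumes created is timezone-aware; with naive datetimes, use datetime.now() without a timezone, and adjust the "minus the submitter's vote" term to match how the initial vote is counted):

        from datetime import datetime, timezone

        def hot_score(link, now=None):
            now = now or datetime.now(timezone.utc)
            age_hours = (now - link.created).total_seconds() / 3600.0
            return (link.votes - 1) / (age_hours + 2) ** 1.5

        links = Link.objects.annotate(votes=Count('vote')).order_by('-created')[:300]
        hot_links = sorted(links, key=hot_score, reverse=True)   # pass this list to the view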

    Read the article

  • Geohashing - recursively find neighbors of neighbors

    - by itsme
    I am looking for an elegant algorithm to recursively find neighbors of neighbors with the geohashing algorithm (http://www.geohash.org). Basically, take a central geohash and get the first 'ring' of same-size hashes around it (8 elements); then, in the next step, get the next ring around the first, and so on. Have you heard of an elegant way to do so? Brute force could be to take each neighbor and get their neighbors, simply ignoring the massive overlap. Finding the neighbors around one central geohash has been solved many times (here e.g. in Ruby: http://github.com/masuidrive/pr_geohash/blob/master/lib/pr_geohash.rb).

    Edit for clarification: the current solution passes in a center key and a direction, like this (with corresponding lookup tables):

        def adjacent(geohash, dir)
          base, lastChr = geohash[0..-2], geohash[-1,1]
          type = (geohash.length % 2)==1 ? :odd : :even
          if BORDERS[dir][type].include?(lastChr)
            base = adjacent(base, dir)
          end
          base + BASE32[NEIGHBORS[dir][type].index(lastChr),1]
        end

    (extract from Yuichiro MASUI's lib). I say this approach will get ugly soon, because the directions get ugly once we are in ring two or three. Ideally the algorithm would simply take two parameters, the center area and the distance: 0 meaning the center geohash only (["u0m"]), 1 meaning the first ring of 8 same-size geohashes around it ([["u0t", "u0w"], ["u0q", "u0n"], ["u0j", "u0h"], ["u0k", "u0s"]]), 2 meaning the second ring of 16 areas around the first, and so on. Do you see any way to deduce the 'rings' from the bits in an elegant way?
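
    Given only the single-step adjacent(hash, dir) primitive, ring k can be built without recursion or overlap: step k times to the ring's top-left corner, then walk the perimeter (2k steps east, 2k south, 2k west, 2k north), collecting 8k cells. A sketch, assuming adjacent takes compass directions (rename them to whatever keys your lookup tables use):

        def ring(center, k, adjacent):
            """Cells exactly k steps (Chebyshev distance) from center."""
            if k == 0:
                return [center]
            cell = center
            for _ in range(k):                        # walk to the north-west corner
                cell = adjacent(adjacent(cell, "n"), "w")
            cells = []
            for direction in ("e", "s", "w", "n"):    # trace the square perimeter
                for _ in range(2 * k):
                    cells.append(cell)
                    cell = adjacent(cell, direction)
            return cells

        def rings(center, distance, adjacent):
            """Ring 0 (the center) through ring `distance`, as a list of lists."""
            return [ring(center, k, adjacent) for k in range(distance + 1)]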

    Read the article

  • Determining the chances of an event occurring when it hasn't occurred yet

    - by sanity
    A user visits my website at time t, and they may or may not click on a particular link I care about; if they do, I record the fact that they clicked the link and also the duration since t at which they clicked it; call this d. I need an algorithm that allows me to create a class like this:

        class ClickProbabilityEstimate {
            public void reportImpression(long id);
            public void reportClick(long id);
            public double estimateClickProbability(long id);
        }

    Every impression gets a unique id, and this is used when reporting a click to indicate which impression the click belongs to. I need an algorithm that will return a probability, based on how much time has passed since an impression was reported, that the impression will still receive a click, given how long previous clicks took. Clearly one would expect this probability to decrease over time if there is still no click. If necessary, we can set an upper bound beyond which we consider the click probability to be 0 (e.g. if it has been an hour since the impression occurred, we can be pretty sure there won't be a click). The algorithm should be both space and time efficient, and hopefully make as few assumptions as possible while being elegant. Ease of implementation would also be nice. Any ideas?
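
    One assumption-light approach is purely empirical: keep the observed delays of past clicks and a count of impressions that expired unclicked; then for an impression with elapsed time t, Bayes gives P(click | no click yet) = p*S(t) / (p*S(t) + 1 - p), where p is the overall click rate and S(t) is the fraction of clicked impressions whose click arrived later than t. A hedged sketch (the horizon and the prior used before any data arrives are arbitrary choices):

        import bisect
        import time

        class ClickProbabilityEstimate:
            def __init__(self, horizon=3600.0):
                self.horizon = horizon              # past this, an impression counts as a non-click
                self.impressions = {}               # id -> impression timestamp
                self.click_delays = []              # sorted delays of observed clicks
                self.expired_without_click = 0

            def report_impression(self, imp_id):
                self.impressions[imp_id] = time.time()

            def report_click(self, imp_id):
                t0 = self.impressions.pop(imp_id, None)
                if t0 is not None:
                    bisect.insort(self.click_delays, time.time() - t0)

            def _expire_old(self):
                now = time.time()
                stale = [i for i, t0 in self.impressions.items() if now - t0 > self.horizon]
                for i in stale:
                    del self.impressions[i]
                    self.expired_without_click += 1

            def estimate_click_probability(self, imp_id):
                self._expire_old()
                t0 = self.impressions.get(imp_id)
                if t0 is None:
                    return 0.0                      # expired or unknown: past the upper bound
                clicks = len(self.click_delays)
                total = clicks + self.expired_without_click
                if total == 0:
                    return 0.5                      # no history yet: uninformative prior
                elapsed = time.time() - t0
                p_click = clicks / total
                later = clicks - bisect.bisect_right(self.click_delays, elapsed)
                survival = later / clicks if clicks else 0.0
                joint = p_click * survival
                return joint / (joint + (1.0 - p_click))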

    Read the article

