Search Results

Search found 7490 results on 300 pages for 'algorithm analysis'.


  • Algorithm for bitwise fiddling

    - by EquinoX
    I have a 32-bit binary number and a 16-bit number, and I want to produce a new number by replacing the lower 16 bits of the 32-bit number with my 16-bit number while keeping its upper 16 bits. How can I do this using simple bitwise operators? For example, the 32-bit binary number is 1010 0000 1011 1111 0100 1000 1010 1001, the lower 16 bits I have are 0000 0000 0000 0001, and the result should be 1010 0000 1011 1111 0000 0000 0000 0001. How can I do this?
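    The usual answer is a mask-and-or: clear the low 16 bits with a mask, then OR in the replacement. A minimal sketch in Python (variable names are illustrative):

        def replace_low16(value, low16):
            # Keep the upper 16 bits of value, then OR in the new lower 16 bits.
            return (value & 0xFFFF0000) | (low16 & 0xFFFF)

        original = 0b10100000101111110100100010101001
        low = 0b0000000000000001
        print(format(replace_low16(original, low), "032b"))
        # 10100000101111110000000000000001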


  • Analysis Services with Excel as front end - is it possible to get the nicer UI that PowerPivot provides?

    - by AJM
    I have been looking into PowerPivot and concluded that for "self-service BI" and ad hoc building of cubes it has its uses. In particular, I like the enhanced UI that you get from using PowerPivot rather than just a PivotTable hooked up to an Analysis Services data source. However, it seems that hooking PowerPivot up to an existing Analysis Services cube is not a solution for "organisational BI". It is not always desirable to suck millions of rows into Excel at once, and the interface between PowerPivot and Analysis Services is very poor in my book. Hence the question: can an existing Analysis Services solution get the enhanced UI features that PowerPivot brings, without using PowerPivot as the design tool? If PowerPivot is aimed at self-service/personal BI, it seems bizarre that its UI is better than that of bigger/more costly Analysis Services solutions.


  • Better algorithm for estimating download time

    - by Scott Smith
    We've all seen the running download-time estimate that initially says something like "7 days" but keeps dropping wildly (e.g. "23 hours", "45 minutes", "1 min. 50 sec", etc.) with each successive estimate as the chunks are downloaded. To avoid these initial (alarming) estimates, there are techniques one could try, like suppressing display of the first n estimates, or waiting for the delta between estimates to drop below some threshold before you start displaying them, but none of these seems like a general, robust solution. There are corner cases involving too few samples, or samples that really are wildly varying... I think I recall a general solution for this kind of thing in mathematics (statistics?) that reduced or eliminated these wild values. Does anyone know?
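    The half-remembered technique is most likely smoothing, e.g. an exponentially weighted moving average of the transfer rate, which damps early and transient spikes. A minimal sketch in Python (alpha and the sample data are illustrative):

        def update_eta(bytes_remaining, rate_sample, avg_rate, alpha=0.1):
            # Blend each new rate sample into a running average of the rate.
            if avg_rate is None:
                avg_rate = rate_sample            # first sample seeds the average
            else:
                avg_rate = alpha * rate_sample + (1 - alpha) * avg_rate
            eta = bytes_remaining / avg_rate if avg_rate > 0 else float("inf")
            return eta, avg_rate

        avg = None
        for remaining, rate in [(10_000_000, 50_000), (9_500_000, 120_000), (9_000_000, 100_000)]:
            eta, avg = update_eta(remaining, rate, avg)
            print(round(eta), "seconds remaining")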


  • Algorithm for finding similar users through a join table

    - by Gdeglin
    I have an application where users can select a variety of interests from around 300 possible interests. Each selected interest is stored in a join table containing the columns user_id and interest_id. Typical users select around 50 interests out of the 300. I would like to build a system where users can find the top 20 users that have the most interests in common with them. Right now I am able to accomplish this using the following query:

        SELECT i2.user_id, count(i2.interest_id) AS count
        FROM interests_users AS i1, interests_users AS i2
        WHERE i1.interest_id = i2.interest_id AND i1.user_id = 35
        GROUP BY i2.user_id
        ORDER BY count DESC
        LIMIT 20;

    However, this query takes approximately 500 milliseconds to execute with 10,000 users and 500,000 rows in the join table. All indexes and database configuration settings have been tuned to the best of my ability. I have also tried avoiding the use of joins altogether using the following query:

        select user_id, count(interest_id) count
        from interests_users
        where interest_id in (13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,508)
        group by user_id
        order by count desc
        limit 20;

    But this one is even slower (~800 milliseconds). How could I best lower the time that I can gather this kind of data to below 100 milliseconds? I have considered putting this data into a graph database like Neo4j, but I am not sure if that is the easiest solution or if it would even be faster than what I am currently doing.
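    At this scale the whole join table fits comfortably in application memory, so one direction worth prototyping before reaching for a graph database is to load the (user_id, interest_id) pairs once and compute overlaps with set intersections. A sketch in Python (rows stands in for the fetched table):

        from collections import defaultdict
        import heapq

        def top_similar(rows, target_user, k=20):
            interests = defaultdict(set)          # user_id -> set of interest_ids
            for user_id, interest_id in rows:
                interests[user_id].add(interest_id)
            target = interests[target_user]
            scores = ((len(target & ids), uid)
                      for uid, ids in interests.items() if uid != target_user)
            return heapq.nlargest(k, scores)      # [(shared_count, user_id), ...]

        rows = [(35, 13), (35, 14), (42, 13), (42, 14), (7, 14)]
        print(top_similar(rows, 35, k=2))         # [(2, 42), (1, 7)]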


  • Algorithm to split an array into N groups based on item index (should be something simple)

    - by serg
    I feel that it should be something very simple and obvious, but I have been stuck on this for the last half hour and can't move on. All I need is to split an array of elements into N groups based on element index. For example, we have an array of 30 elements [e1, e2, ..., e30] that has to be divided into N=3 groups like this:

        group1: [e1, ..., e10]
        group2: [e11, ..., e20]
        group3: [e21, ..., e30]

    I came up with a nasty mess like this for N=3 (pseudo-language; I left the multiplications by 0 and 1 in just for clarification):

        for (i = 0; i < array_size; i++) {
            if (i >= 0*(array_size/3) && i < 1*(array_size/3)) {
                print "group1";
            } else if (i >= 1*(array_size/3) && i < 2*(array_size/3)) {
                print "group2";
            } else if (i >= 2*(array_size/3) && i < 3*(array_size/3)) {
                print "group3";
            }
        }

    But what would be the proper general solution? Thanks.
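    The chain of ifs collapses into a single arithmetic expression; a sketch of the general form in Python:

        def group_index(i, array_size, n_groups):
            # Groups get consecutive runs of roughly array_size / n_groups
            # elements; sizes differ by at most one if it doesn't divide evenly.
            return i * n_groups // array_size

        array_size, n_groups = 30, 3
        for i in range(array_size):
            print(i, "-> group", group_index(i, array_size, n_groups) + 1)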


  • Sorting data by relevance, from multiple tables

    - by Oden
    Hey, how is it possible to sort data from multiple tables by relevance? My table structure is the following: I have 3 tables in my database; one table contains the names of solar systems, the second, for example, the names of planets, and there is one more table which is a connection between solar systems and planets. If I want to get the data of a planet which is in the Milky Way, I post this data to the server, and it gives me a multi-dimensional array which contains: the Milky Way, with every planet in it; and every planet whose name contains the string "Milky Way" (maybe that's a bad example because I don't think there's even one planet with this name, but the main concept stands). But I want to put the most relevant results at the top of the array (for the relevance I would check the description of the results or something like that). So, how would you do that kind of data sorting?
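    One common pattern is to compute an explicit relevance score per result and sort on it; a minimal sketch in Python (the scoring weights are arbitrary examples):

        def relevance(item, query):
            q = query.lower()
            score = 0
            if item["name"].lower() == q:
                score += 10                              # exact name match
            elif q in item["name"].lower():
                score += 5                               # partial name match
            if q in item.get("description", "").lower():
                score += 1                               # mentioned in description
            return score

        items = [
            {"name": "Earth", "description": "a planet in the Milky Way"},
            {"name": "Milky Way", "description": "our galaxy"},
        ]
        items.sort(key=lambda it: relevance(it, "Milky Way"), reverse=True)
        print([it["name"] for it in items])              # ['Milky Way', 'Earth']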


  • algorithm advice for finding maximum items within a time period

    - by darren
    Hi everyone. I have a database schema that is similar to the following:

        | User | Event       | Date
        |------|-------------|-----------
        | 111  | Walked dog  | 2009-10-1
        | 222  | Walked dog  | 2009-10-2
        | 333  | Fed Fish    | 2009-10-5
        | 222  | Did Laundry | 2009-10-6
        | 111  | Fed Fish    | 2009-10-7
        | 111  | Walked dog  | 2009-10-18
        | 222  | Walked dog  | 2009-10-19
        | 111  | Fed Fish    | 2009-10-21

    I would like to produce a query that returns the maximum number of times a user performs some action within a time period. For example, given a time period of 5 days, what is the maximum number of times user 111 walked the dog? The most obvious solution would be to start at some zero point and move forward one day at a time, summing up the 5-day periods along the way, then taking the maximum total out of all the 5-day windows. That approach seems incredibly costly, however. I would appreciate any suggestions you may have.
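    Since only dates that actually have events matter, a two-pointer sliding window over the sorted event dates avoids stepping day by day. A sketch in Python (assuming the rows are already filtered to one user and one event type, and that a 5-day window covers dates d through d+4):

        from datetime import date

        def max_in_window(dates, window_days=5):
            dates = sorted(dates)
            best = left = 0
            for right in range(len(dates)):
                # Advance left until the window spans fewer than window_days days.
                while (dates[right] - dates[left]).days >= window_days:
                    left += 1
                best = max(best, right - left + 1)
            return best

        walks_111 = [date(2009, 10, 1), date(2009, 10, 18)]
        print(max_in_window(walks_111))   # 1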


  • A graph-based tuple merge?

    - by user1644030
    I have paired values in tuples that are related matches (and technically still in CSV files). Neither of the paired values are necessarily unique. tupleAB = (A####, B###), (A###, B###), (A###, B###)... tupleBC = (B####, C###), (B###, C###), (B###, C###)... tupleAC = (A####, C###), (A###, C###), (A###, C###)... My ideal output would be a dictionary with a unique ID and a list of "reinforced" matches. The way I try to think about it is in a graph-based context. For example, if: tupleAB[x] = (A0001, B0012) tupleBC[y] = (B0012, C0230) tupleAC[z] = (A0001, C0230) This would produce: output = {uniquekey0001, [A0001, B0012, C0230]} Ideally, this would also be able to scale up to more than three tuples (for example, adding a "D" match that would result in an additional three tuples - AD, BD, and CD - and lists of four items long; and so forth). In regards to scaling up to more tuples, I am open to having "graphs" that aren't necessarily fully connected, i.e., every node connected to every other node. My hunch is that I could easily filter based on the list lengths. I am open to any suggestions. I think, with a few cups of coffee, I could work out a brute force solution, but I thought I'd ask the community if anyone was aware of a more elegant solution. Thanks for any feedback.
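    Treating each ID as a node and each pair as an edge turns this into finding connected components, which scales to any number of tuple files and tolerates groups that are not fully connected. A minimal breadth-first sketch in Python (the sample pairs are the ones from the question):

        from collections import defaultdict, deque

        def connected_components(pairs):
            graph = defaultdict(set)
            for a, b in pairs:
                graph[a].add(b)
                graph[b].add(a)
            seen, components = set(), []
            for start in graph:
                if start in seen:
                    continue
                queue, comp = deque([start]), []
                seen.add(start)
                while queue:
                    node = queue.popleft()
                    comp.append(node)
                    for nbr in graph[node] - seen:
                        seen.add(nbr)
                        queue.append(nbr)
                components.append(sorted(comp))
            return {"uniquekey%04d" % i: c for i, c in enumerate(components, 1)}

        pairs = [("A0001", "B0012"), ("B0012", "C0230"), ("A0001", "C0230")]
        print(connected_components(pairs))
        # {'uniquekey0001': ['A0001', 'B0012', 'C0230']}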


  • looking for a set union find algorithm

    - by Mig
    I have thousands of lines of 1 to 100 numbers; every line defines a group of numbers and a relationship among them. I need to get the sets of related numbers. A little example: if I have these 7 lines of data:

        T1 T2
        T3
        T4
        T5
        T6 T1
        T5 T4
        T3 T4

    I need a not-so-slow algorithm to work out that the sets here are:

        T1 T2 T6  (because T1 is related to T2 in line 1 and T1 is related to T6 in line 5)
        T3 T4 T5  (because T5 is with T4 in line 6 and T3 is with T4 in line 7)

    But when you have very big sets it is painfully slow to search for a T(x) in every big set, do unions of sets, etc. Do you have a hint for doing this in a not-so-brute-force manner? I'm trying to do this in Python. Thanks
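    The standard tool for this is a disjoint-set (union-find) structure with path compression, which makes each union and lookup nearly constant amortized time instead of scanning every set. A sketch in Python using the example above:

        def find(parent, x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path compression
                x = parent[x]
            return x

        def group_lines(lines):
            parent = {}
            for line in lines:
                items = line.split()
                for item in items:
                    parent.setdefault(item, item)
                # Union every item on the line with the first one.
                for item in items[1:]:
                    parent[find(parent, item)] = find(parent, items[0])
            sets = {}
            for item in parent:
                sets.setdefault(find(parent, item), []).append(item)
            return list(sets.values())

        lines = ["T1 T2", "T3", "T4", "T5", "T6 T1", "T5 T4", "T3 T4"]
        print(group_lines(lines))   # [['T1', 'T2', 'T6'], ['T3', 'T4', 'T5']]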


  • Operant conditioning algorithm?

    - by Ken
    What's the best way to implement real time operant conditioning (supervised reward/punishment-based learning) for an agent? Should I use a neural network (and what type)? Or something else? I want the agent to be able to be trained to follow commands like a dog. The commands would be in the form of gestures on a touchscreen. I want the agent to be able to be trained to follow a path (in continuous 2D space), make behavioral changes on command (modeled by FSM state transitions), and perform sequences of actions. The agent would be in a simulated physical environment.


  • decoding algorithm wanted

    - by Horace Ho
    I receive encoded PDF files regularly. The encoding works like this: the PDFs can be displayed correctly in Acrobat Reader, but if you select all, copy the text via Acrobat Reader, and paste it into a text editor, you see that the content is encoded. Examples: 13579 -> 3579; hello -> jgnnq. It's basically an offset (maybe a swap) of ASCII characters. The question is how I can find the offset automatically when I have access to only a few samples. I cannot be sure that the encoding offset stays the same. All I know is that some text will usually (if not always) show up inside the PDF, e.g. "Name:", "Summary:", "Total:". Thank you!
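    Given known plaintext like "Name:", recovering a constant offset is a matter of sliding the known string over the encoded text until one position yields a uniform character-by-character shift. A sketch in Python (assuming a single fixed offset per file):

        def find_offset(encoded, known_plaintext):
            # Return the constant ASCII offset if some window of the encoded
            # text differs from the known plaintext by a uniform shift.
            n = len(known_plaintext)
            for start in range(len(encoded) - n + 1):
                window = encoded[start:start + n]
                shifts = {ord(c) - ord(p) for c, p in zip(window, known_plaintext)}
                if len(shifts) == 1:
                    return shifts.pop()
            return None

        def decode(encoded, offset):
            return "".join(chr(ord(c) - offset) for c in encoded)

        encoded = "jgnnq"                      # "hello" shifted by +2, as above
        offset = find_offset(encoded, "hello")
        print(offset, decode(encoded, offset)) # 2 hello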


  • Efficient algorithm for Next button on a MySQL result set

    - by David Grayson
    I have a website that lets people view rows in a table (each row is a picture). There are more than 100,000 rows. You can view different subsets of the rows, and you can view them with different sort orders. While you are viewing one of the rows, you can click the "Next" or "Previous" buttons to go to the next/previous row in the list. How would you implement the "Next" and "Previous" features of the website? More specifically, if you have an arbitrary query that returns a list of up to 100,000+ rows, and you know some information about the current row someone is viewing, how do you determine the NEXT row efficiently? Here is the pseudo-code of the solution I came up with when the website was young; it worked well when there were only 1000 rows, but now that there are 100,000 rows I think it is eating up too much memory:

        int nextRowId(string query, int currentRowId) {
            array allRowIds = mysql_query(query);  // Takes up a lot of memory!
            int currentIndex = (index of currentRowId in allRowIds);  // Takes time!
            return allRowIds[currentIndex + 1];
        }

    While you are thinking about this problem, remember that the website can store more information about the current row than just its ID (for example, the position of the current row in the result set), and this information can be used as a hint to help determine the ID of the next row. Edit: Sorry for not mentioning this earlier, but this isn't just a static website: rows can often be added to the list, and rows can be re-ordered in the list. (Much more rarely, rows can be removed from the list.) I think that I should worry about that kind of thing, but maybe you can convince me otherwise.
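    The usual fix is keyset (seek) pagination: rather than materialising the whole ID list, ask the database for the single row that sorts immediately after the current one. A sketch in Python with an illustrative schema (a pictures table ordered by (taken_at, id); the sort key plus the unique ID must fully determine the order):

        NEXT_ROW_SQL = """
            SELECT id FROM pictures
            WHERE (taken_at, id) > (%s, %s)
            ORDER BY taken_at, id
            LIMIT 1
        """

        def next_row_id(cursor, current_taken_at, current_id):
            # With an index on (taken_at, id) this touches a single row.
            cursor.execute(NEXT_ROW_SQL, (current_taken_at, current_id))
            row = cursor.fetchone()
            return row[0] if row else None    # None: already at the last row

    This also behaves sensibly when rows are inserted or re-ordered between clicks, since the query always answers "what follows the current row now" instead of trusting a stale position hint. (Older MySQL versions may not use an index for the row-constructor comparison; expanding it into explicit OR conditions is the portable fallback.)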


  • Algorithm Question Maximize Average of Functions

    - by paradoxperfect
    Hello, I have a set of N functions each denoted by Fi(h). Each function returns some value when given an h. I'm trying to figure out a way to maximize the average of all of the functions given some total H value. For example, say each function represents a grade on an assignment. If I spend h hours on assignment i, I will get g = Fi(h) as my grade. I'm given H hours to finish all of the assignments. I want to maximize my average grade for all assignments. Can anyone point me in the right direction to figure this out?
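    If hours are allocated in whole units, this is the classic resource-allocation dynamic programme: track the best total grade achievable with h hours over the assignments seen so far (maximizing the total also maximizes the average, since N is fixed). A sketch in Python with made-up grade functions:

        import math

        def allocate(fs, total_hours):
            # best[h] = max total grade using h hours over functions so far.
            best = [0.0] * (total_hours + 1)
            for f in fs:
                best = [max(best[h - spent] + f(spent) for spent in range(h + 1))
                        for h in range(total_hours + 1)]
            return best[total_hours]

        fs = [lambda h: 10 * math.sqrt(h),   # diminishing returns
              lambda h: 5 * h,               # linear payoff
              lambda h: 20 * math.sqrt(h)]
        print(round(allocate(fs, 10), 1))    # best total grade for H = 10

    If hours are continuous and each Fi has diminishing returns (is concave), the same idea becomes "spend each marginal hour wherever it buys the most grade", which is essentially a greedy/Lagrangian argument.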


  • Algorithm to split an article without breaking the reading flow or HTML code

    - by Victor Stanciu
    Hello, I have a very large database of articles of varying lengths. The articles have HTML elements in them, and I have to insert some ads (simple <script> elements) in the body of each article when it is displayed (I know, I hate ads that interrupt my reading too). Now, the problem is that each ad must be inserted at about the same position in each article. The simplest solution is to split the article at a fixed number of characters (without breaking words) and insert the ad code. This, however, runs the risk of inserting the ad in the middle of an HTML tag. I could go the regex way, but I was thinking about the following solution, using JS:

    1. Establish a character count threshold, for example "the ad should be inserted at about 200 characters".
    2. Set accepted deviations in each direction, say -20, +20 characters.
    3. Loop through each text node inside the article and, while doing so, keep count of the total number of characters so far.
    4. Once the count exceeds the threshold, make the following decision:
       4.1. If the count exceeds the threshold by a value lower than the positive accepted deviation (for example, 17 characters), insert the ad code just after the current text node.
       4.2. If the count is greater than the sum of the threshold and the deviation, roll back to the previous text node and make the same decision, only this time use the previous count and check whether it is lower than the difference between the threshold and the deviation; if not, insert the ad between the current node and the previous one.
       4.3. If 4.1 and 4.2 fail (which means that the previous node reached too low a character count and the current node too high a one), insert the ad after however many characters are needed inside the current element.

    I know it's convoluted, but it's the first thing out of my mind, and it has the advantage that, by trying to insert the ad between text nodes, perhaps it will not break the flow of the article as badly as it would if I just stuck it in (like in the final 4.3 case). Here is some pseudo-code I put together; I don't trust my English-explaining skills:

        threshold = 200
        deviation = 20
        current_count = 0

        for each node in article_nodes {
            previous_count = current_count
            current_count = current_count + node.length

            if current_count < threshold {
                continue  // next iteration
            }

            if current_count > threshold + deviation {
                if previous_count < threshold - deviation {
                    // insert ad in current node
                } else {
                    // insert ad between the current and previous nodes
                }
            } else {
                // insert ad after the current node
            }
            break
        }

    Am I over-complicating things, or am I missing a simpler, more elegant solution?


  • what is the easiest way to do this function in C#?

    - by From.ME.to.YOU
    Hello. Let's say that we have a [5,5] array:

        01,02,03,04,05
        06,07,08,09,10
        11,12,13,14,15
        16,17,18,19,20
        21,22,23,24,25

    The user sends 2 values to the function (start, searchFor), for example (13, 25). The function should search for that value in this way:

        07,08,09
        12,  ,14
        17,18,19

    If the value isn't found at this level, it goes a level higher:

        01,02,03,04,05
        06,  ,  ,  ,10
        11,  ,  ,  ,15
        16,  ,  ,  ,20
        21,22,23,24,25

    If the array is bigger than this and the value still isn't found, it goes another level higher. Thanks for your help
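    The question asks for C#, but the shape of the algorithm is the same in any language: visit square rings of growing radius around the start cell. A language-agnostic sketch in Python:

        def ring_search(grid, start, target):
            rows, cols = len(grid), len(grid[0])
            r0, c0 = start
            if grid[r0][c0] == target:
                return (r0, c0)
            for radius in range(1, max(rows, cols)):
                for r in range(r0 - radius, r0 + radius + 1):
                    for c in range(c0 - radius, c0 + radius + 1):
                        # Only cells on the ring's border that lie inside the grid.
                        if (max(abs(r - r0), abs(c - c0)) == radius
                                and 0 <= r < rows and 0 <= c < cols
                                and grid[r][c] == target):
                            return (r, c)
            return None

        grid = [[5 * r + c + 1 for c in range(5)] for r in range(5)]
        print(ring_search(grid, (2, 2), 25))   # (4, 4); the value 13 sits at (2, 2)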


  • algorithm to find Best 8 minute window in a 1 hour run

    - by Arun
    I have a requirement like this: an activity runs for an hour or more, and I need to find the best 8-minute window in which some parameter was at its maximum. Say there is a value x that changes every second; if my activity runs for one hour, I get 3600 values of x. I need to find the best continuous 8-minute interval in which x was the highest among all the others. If I only look at fixed frames, say from the 0th minute to the 8th minute, there may be another frame, like 0.4 to 8.4, where it was higher. The granularity is one second; every second needs to be considered. Basically, I need the peak 8-minute window where x was maximum. Please help me with the design.
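    Assuming "best" means the largest total of x over the window, a fixed-size sliding window finds it in a single pass over the 3600 samples: add the sample entering the window and subtract the one leaving it. A sketch in Python:

        def best_window(samples, window=480):     # 480 s = 8 minutes
            current = sum(samples[:window])
            best_sum, best_start = current, 0
            for i in range(window, len(samples)):
                current += samples[i] - samples[i - window]   # slide by 1 second
                if current > best_sum:
                    best_sum, best_start = current, i - window + 1
            return best_start, best_sum

        samples = [1] * 3600
        samples[1000:1480] = [5] * 480            # plant a hot 8-minute stretch
        print(best_window(samples))               # (1000, 2400)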


  • Radial Grid Search Algorithm

    - by grey
    I'm sure there's a clean way to do this, but I'm probably not using the right keywords to find it. So let's say I have a grid. Starting from a position on the grid, return all of the grid coordinates that fall within a given distance. So I call something like getCoordinates(currentPosition, distance), and for each coordinate, starting from the initial position, add all cardinal directions, and then add the spaces around those, and so forth until the distance is reached. I imagine that on a grid this would look like a diamond.
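    The diamond is exactly the set of cells within a given Manhattan distance of the start, so it can be enumerated directly with no flood fill. A sketch in Python:

        def get_coordinates(position, distance):
            x0, y0 = position
            cells = []
            for dx in range(-distance, distance + 1):
                budget = distance - abs(dx)        # steps remaining for the y axis
                for dy in range(-budget, budget + 1):
                    cells.append((x0 + dx, y0 + dy))
            return cells

        print(get_coordinates((0, 0), 1))
        # [(-1, 0), (0, -1), (0, 0), (0, 1), (1, 0)]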


  • Algorithm for Determining Variations of Differing Lengths

    - by joseph.ferris
    I have four objects - for the sake of argument, let's say that they are the following letters: A B C D. I need to calculate the number of variations that can be made of these under the following two conditions: no repetition, and objects are position-agnostic. Taking the above, this means that with a four-object sequence I can have only one sequence that matches the criteria (since order is not considered for uniqueness): ABCD. There are four variations for a three-object combination from the four-object pool: ABC, ABD, ACD, and BCD. There are six variations for a two-object combination from the four-object pool: AB, AC, AD, BC, BD, and CD. And the simplest case, taking them one at a time: A, B, C, and D. I swear that this was something covered in school, many, many years ago - and probably forgotten since I didn't think I would use it. :-) I am anticipating that factorials will come into play, but just trying to force an equation is not working. Any advice would be appreciated.
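    These counts are the binomial coefficients: the number of ways to choose k of n objects when order does not matter is C(n, k) = n! / (k! * (n - k)!), which gives 4, 6, 4, 1 for k = 1..4 here. A quick check in Python:

        from math import comb    # comb(n, k) = n! // (k! * (n - k)!)

        for k in range(1, 5):
            print(k, comb(4, k))
        # 1 4
        # 2 6
        # 3 4
        # 4 1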


  • sql server replication algorithm.

    - by reggie
    Anyone know how the underlying replication model in SQL Server works? Does it essentially depend on UTC datetime values to determine whether something is new, or does it keep a table of all the changes (like a table of tableID+rowID pairs that have changed)? I am building my own "replication" system and was planning on using the dates to know what to replicate. Then I started wondering what would happen if the date got off on the computer for some reason. The obvious choice is to keep a log of the changes as you go, and once you replicate those changes, remove them from the log. But that's a lot of extra work compared with just checking dates. I figure that if SQL Server replication works by just checking the dates, then that should be good enough for me. Any wisdom here? Thanks


  • Algorithm for generating an array of non-equal costs for a transport problem optimization

    - by Carlos
    I have an optimizer that solves a transportation problem, using a cost matrix of all the possible paths. The optimizer works fine, but if two of the costs are equal, the solution contains one more path than the minimum number of paths. (Think of it as load-balancing routers: if two routes have the same cost, you'll use them both.) I would like the minimum number of routes, and to do that I need a cost matrix that doesn't have two costs that are equal within a certain tolerance. At the moment, I'm passing the cost matrix through a baking function which tests every entry for equality against each of the other entries and moves it by a fixed percentage if it matches. However, this approach seems to require N^2 comparisons, and if the starting values are all the same, the last cost will end up r^N times bigger (r is the arbitrary fixed percentage). Also there is the problem that by multiplying by the percentage you can end up on top of another value. So the problem seems to have an element of recursion, or at least repeated checking, which bloats the code. The current implementation is basically not very good (I won't paste my GOTO-using code here for you all to mock), and I'd like to improve it. Is there a name for what I'm after, and is there a standard implementation? Example: {1,1,2,3,4,5} (tol = 0.05) becomes {1,1.05,2,3,4,5}
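    Sorting first turns the N^2 comparisons into a single pass over neighbours, and nudging each value relative to the previously kept value can never land on top of another entry, because the results stay strictly increasing. A sketch in Python matching the question's example:

        def spread_costs(costs, tol=0.05):
            # Process values in ascending order, raising any value within
            # tol of its predecessor to just above that predecessor.
            order = sorted(range(len(costs)), key=lambda i: costs[i])
            result = list(costs)
            for prev, cur in zip(order, order[1:]):
                if result[cur] <= result[prev] * (1 + tol):
                    result[cur] = result[prev] * (1 + tol)
            return result

        print(spread_costs([1, 1, 2, 3, 4, 5]))   # [1, 1.05, 2, 3, 4, 5]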


  • K-nearest neighbour with closed-loop dimensions

    - by Tomas
    Hi, I've got a K-nearest neighbour problem where some of the dimensions are closed loops. For example one is 'time of day' and I'm matching for similarity so 'very early morning' is close to 'late evening', you can't just make it a linear scale from 'very early morning' at one end to 'late evening' at the other. How can I represent this in the data model? Is there an established way to handle this or a way to work around it?
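    Two standard ways to handle a cyclic dimension: compute the wrap-around distance for it directly in a custom metric, or embed the value on a circle with sin/cos so that any off-the-shelf Euclidean KNN treats it correctly. A sketch of both in Python:

        import math

        def circular_diff(a, b, period=24.0):
            # Smallest separation between two times of day on a 24 h loop.
            d = abs(a - b) % period
            return min(d, period - d)

        print(circular_diff(23.5, 0.5))   # 1.0, not 23.0

        def embed(hour, period=24.0):
            # Late evening and very early morning land near each other.
            angle = 2 * math.pi * hour / period
            return (math.cos(angle), math.sin(angle))

        print(embed(23.5), embed(0.5))    # nearby points on the unit circle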


  • Any better algorithm possible here?

    - by Cupidvogel
    I am trying to solve this problem in Python. Noting that only the first kiss requires the alternation, and that any kiss which is not part of the chain caused by the first kiss can very well have a hug on the 2nd next person, this is the code I have come up with. It is just a simple mathematical calculation: no looping, no iteration, nothing. But I am still getting a timed-out message. Any way to optimize it?

        import psyco
        psyco.full()

        testcase = int(raw_input())
        for i in xrange(0, testcase):
            n = int(raw_input())
            if n % 2:
                m = n / 2
                ans = 2 + 4 * (2**m - 1)
                ans = ans % 1000000007
                print ans
            else:
                m = n / 2 - 1
                ans = 2 + 2**(n / 2) + 4 * (2**m - 1)
                ans = ans % 1000000007
                print ans
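    Since only the residue modulo 1000000007 is printed, building the full 2**m before reducing is the likely bottleneck for large n; Python's three-argument pow performs the exponentiation modulo the third argument directly. A sketch of the same formulas with modular exponentiation (Python 3 syntax):

        MOD = 1000000007

        def answer(n):
            if n % 2:
                m = n // 2
                return (2 + 4 * (pow(2, m, MOD) - 1)) % MOD
            m = n // 2 - 1
            return (2 + pow(2, n // 2, MOD) + 4 * (pow(2, m, MOD) - 1)) % MOD

        for n in [1, 2, 3, 10]:
            print(n, answer(n))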


  • Algorithm complexity question

    - by Itsik
    During a recent job interview, I was asked to give a solution to the following problem: given a string s (without spaces) and a dictionary, return the words in the dictionary that compose the string. For example, s=peachpie, dic={peach, pie}, result={peach, pie}. I will ask the decision variation of this problem: if s can be composed of words in the dictionary, return yes; otherwise return no. My solution to this was backtracking (written in Java):

        public static boolean words(String s, Set<String> dictionary) {
            if ("".equals(s))
                return true;
            for (int i = 0; i <= s.length(); i++) {
                String pre = prefix(s, i);  // returns s[0..i-1]
                String suf = suffix(s, i);  // returns s[i..s.len]
                if (dictionary.contains(pre) && words(suf, dictionary))
                    return true;
            }
            return false;
        }

        public static void main(String[] args) {
            Set<String> dic = new HashSet<String>();
            dic.add("peach");
            dic.add("pie");
            dic.add("1");
            System.out.println(words("peachpie1", dic));  // true
            System.out.println(words("peachpie2", dic));  // false
        }

    What is the time complexity of this solution? I'm calling recursively in the for loop, but only for the prefixes that are in the dictionary. Any ideas?
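    In the worst case the backtracking is exponential: with s = "aaa...a" and a dictionary like {a, aa, aaa} the same suffixes are re-solved over and over. Memoizing on the start index means each suffix is solved at most once, bringing it down to O(n^2) prefix checks. A sketch of the memoized decision variant (in Python for brevity):

        from functools import lru_cache

        def composable(s, dictionary):
            words = frozenset(dictionary)

            @lru_cache(maxsize=None)
            def from_index(i):
                if i == len(s):
                    return True
                # Try every prefix s[i:j]; each start index is solved once.
                return any(s[i:j] in words and from_index(j)
                           for j in range(i + 1, len(s) + 1))

            return from_index(0)

        print(composable("peachpie1", {"peach", "pie", "1"}))  # True
        print(composable("peachpie2", {"peach", "pie", "1"}))  # False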

