Search Results

Search found 8262 results on 331 pages for 'optimization algorithm'.


  • Find the point whose sum of distances to a set of other points is minimal

    - by Pawel Markowski
    I have one set (X) of points (not very big, let's say 1-20 points) and a second, much larger set of points (Y). I need to choose the point from Y whose sum of distances to all points in X is minimal. I came up with the idea of treating X as the vertices of a polygon, finding the centroid of that polygon, and then choosing the point from Y nearest to the centroid. But I'm not sure whether the centroid minimizes the sum of distances to the vertices of the polygon, so I'm not sure whether this is a good approach. Is there an algorithm for solving this problem? The points are defined by geographical coordinates.
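
    Since X is tiny (1-20 points) and the answer must be a member of Y anyway, a brute-force scan is usually enough: for each candidate in Y, sum its distances to every point of X and keep the minimum, which is O(|Y|·|X|). (Note the centroid minimizes the sum of squared distances; the point minimizing the plain sum is the geometric median, which can differ, so the centroid shortcut is only an approximation.) A minimal sketch, assuming great-circle (haversine) distance since the points are geographical coordinates; the sample coordinates are illustrative only:

        from math import radians, sin, cos, asin, sqrt

        def haversine_km(p, q):
            # Great-circle distance between two (lat, lon) pairs, in kilometres.
            lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
            a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
            return 2 * 6371 * asin(sqrt(a))

        def best_candidate(X, Y):
            # O(|Y| * |X|): pick the point of Y with the smallest total distance to X.
            return min(Y, key=lambda y: sum(haversine_km(y, x) for x in X))

        X = [(52.0, 21.0), (52.2, 20.9)]
        Y = [(52.1, 20.95), (51.0, 19.0)]
        print(best_candidate(X, Y))   # (52.1, 20.95)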

    Read the article

  • Who owes who money optimisation problem

    - by Francis
    Say you have n people who each owe each other money. In general it should be possible to reduce the number of transactions that need to take place, i.e. if X owes Y £4 and Y owes X £8, then Y only needs to pay X £4 (1 transaction instead of 2). This becomes harder when X owes Y, but Y owes Z, who owes X as well. I can see that you can easily collapse one particular cycle. It helps me to think of it as a fully connected graph, with each node labelled with the amount that person owes. The problem seems to be NP-complete, but what kind of optimisation algorithm could I use, nevertheless, to reduce the total number of transactions? It doesn't have to be that efficient, as n is quite small for me.
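
    Finding the true minimum number of transactions is hard, but a heuristic that works well for small n is to collapse everyone to a single net balance and then repeatedly match the largest creditor with the largest debtor; assuming the balances net to zero, this settles everything in at most n-1 transactions. A minimal sketch with illustrative names:

        def settle(balances):
            # balances: {person: net balance}; positive = is owed money, negative = owes money.
            # Greedy pairing of largest creditor with largest debtor: at most n-1 transactions.
            creditors = sorted((amt, p) for p, amt in balances.items() if amt > 0)
            debtors = sorted((amt, p) for p, amt in balances.items() if amt < 0)
            transactions = []
            while creditors and debtors:
                camt, creditor = creditors.pop()     # most-owed person
                damt, debtor = debtors.pop(0)        # person owing the most (most negative)
                paid = min(camt, -damt)
                transactions.append((debtor, creditor, paid))
                if camt > paid:
                    creditors.append((camt - paid, creditor))
                    creditors.sort()
                if -damt > paid:
                    debtors.append((damt + paid, debtor))
                    debtors.sort()
            return transactions

        # X owes Y 4 and Y owes X 8  ->  net: X is owed 4, Y owes 4  ->  one transaction.
        print(settle({'X': 4, 'Y': -4}))             # [('Y', 'X', 4)]
        print(settle({'X': 5, 'Y': -3, 'Z': -2}))    # [('Y', 'X', 3), ('Z', 'X', 2)]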

    Read the article

  • Can knowing C actually hurt the code you write in higher level languages?

    - by Jurily
    The question seems settled, beaten to death even. Smart people have said smart things on the subject. To be a really good programmer, you need to know C. Or do you? I was enlightened twice this week. The first one made me realize that my assumptions don't go further than my knowledge behind them, and given the complexity of software running on my machine, that's almost non-existent. But what really drove it home was this Slashdot comment: The end result is that I notice the many naive ways in which traditional C "bare metal" programmers assume that higher level languages are implemented. They make bad "optimization" decisions in projects they influence, because they have no idea how a compiler works or how different a good runtime system may be from the naive macro-assembler model they understand. Then it hit me: C is just one more abstraction, like all others. Even the CPU itself is only an abstraction! I've just never seen it break, because I don't have the tools to measure it. I'm confused. Has my mind been mutilated beyond recovery, like Dijkstra said about BASIC? Am I living in a constant state of premature optimization? Is there hope for me, now that I realized I know nothing about anything? Is there anything to know, even? And why is it so fascinating, that everything I've written in the last five years might have been fundamentally wrong? To sum it up: is there any value in knowing more than the API docs tell me? EDIT: Made CW. Of course this also means now you must post examples of the interpreter/runtime optimizing better than we do :)

    Read the article

  • faster implementation of sum (for Codility test)

    - by Oscar Reyes
    How can the following simple implementation of sum be made faster?

        private long sum(int[] a, int begin, int end) {
            if (a == null) {
                return 0;
            }
            long r = 0;
            for (int i = begin; i < end; i++) {
                r += a[i];
            }
            return r;
        }

    EDIT Some background is in order. Reading the latest entry on Coding Horror, I came to this site: http://codility.com which has this interesting programming test. Anyway, I got 60 out of 100 on my submission, and basically (I think) it is because of this implementation of sum, since the parts where I failed are the performance parts. I'm getting TIME_OUT_ERRORs. So, I was wondering if an optimization in the algorithm is possible. No built-in functions or assembly would be allowed. This may be done in C, C++, C#, Java or pretty much any other language. EDIT As usual, mmyers was right. I did profile the code and saw that most of the time was spent in that function, but I didn't understand why. So what I did was throw away my implementation and start with a new one. This time I've got an optimal solution [according to San Jacinto, O(n) - see comments to MSN below]. This time I got 81% on Codility, which I think is good enough. The problem is that it didn't take me the 30 mins but around 2 hrs, but I guess that still leaves me as a good programmer, for I could work on the problem until I found an optimal solution. Here's my result. I never understood what those "combinations of..." are, nor how to test "extreme_first".
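
    The loop itself is already O(end - begin); what usually causes the timeouts is calling it for many (begin, end) ranges over the same array. The standard fix is to precompute prefix sums once in O(n) and answer each range sum in O(1). A minimal sketch of that idea (the exact Codility task isn't shown here, so this is the general technique rather than a drop-in answer):

        def build_prefix(a):
            # prefix[i] = sum of a[0:i]; built once in O(n).
            prefix = [0] * (len(a) + 1)
            for i, v in enumerate(a):
                prefix[i + 1] = prefix[i] + v
            return prefix

        def range_sum(prefix, begin, end):
            # Sum of a[begin:end] in O(1).
            return prefix[end] - prefix[begin]

        a = [3, 1, 4, 1, 5, 9, 2, 6]
        prefix = build_prefix(a)
        print(range_sum(prefix, 2, 6))   # 4 + 1 + 5 + 9 = 19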

    Read the article

  • Finding perfect numbers in C# (optimization)

    - by paradox
    I coded up a program in C# to find perfect numbers within a certain range as part of a programming challenge. However, I realized it is very slow when calculating perfect numbers upwards of 10000. Are there any methods of optimization that exist for finding perfect numbers? My code is as follows:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        namespace ConsoleTest
        {
            class Program
            {
                public static List<int> FindDivisors(int inputNo)
                {
                    List<int> Divisors = new List<int>();
                    for (int i = 1; i < inputNo; i++)
                    {
                        if (inputNo % i == 0)
                            Divisors.Add(i);
                    }
                    return Divisors;
                }

                public static void Main(string[] args)
                {
                    const int limit = 100000;
                    List<int> PerfectNumbers = new List<int>();
                    List<int> Divisors = new List<int>();
                    for (int i = 1; i < limit; i++)
                    {
                        Divisors = FindDivisors(i);
                        if (i == Divisors.Sum())
                            PerfectNumbers.Add(i);
                    }
                    Console.Write("Output =");
                    for (int i = 0; i < PerfectNumbers.Count; i++)
                    {
                        Console.Write(" {0} ", PerfectNumbers[i]);
                    }
                    Console.Write("\n\n\nPress any key to continue . . . ");
                    Console.ReadKey(true);
                }
            }
        }
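
    The dominant cost above is FindDivisors doing roughly n trial divisions for every number tested. A standard improvement is to test divisors only up to the square root and add both members of each divisor pair, cutting the per-number work to O(√n). A minimal sketch of that idea (shown in Python; the same change maps directly onto FindDivisors):

        from math import isqrt

        def proper_divisor_sum(n):
            # Sum of proper divisors using divisor pairs up to sqrt(n): O(sqrt(n)).
            if n == 1:
                return 0
            total = 1                      # 1 divides everything (n itself is excluded)
            for d in range(2, isqrt(n) + 1):
                if n % d == 0:
                    total += d
                    other = n // d
                    if other != d:         # don't count a square root twice
                        total += other
            return total

        perfect = [n for n in range(2, 100000) if proper_divisor_sum(n) == n]
        print(perfect)   # [6, 28, 496, 8128]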

    Read the article

  • sql-server performance optimization by removing print statements

    - by AG
    We're going through a round of sql-server stored procedure optimizations. The one recommendation we've found that clearly applies for us is 'SET NOCOUNT ON' at the top of each procedure. (Yes, I've seen the posts that point out issues with this depending on what client objects you run the stored procedures from but these are not issues for us.) So now I'm just trying to add in a bit of common sense. If the benefit of SET NOCOUNT ON is simply to reduce network traffic by some small amount every time, wouldn't it also make sense to turn off all the PRINT statements we have in the stored procedures that we only use for debugging? I can't see how it can hurt performance. OTOH, it's a bit of a hassle to implement due to the fact that some of the print statements are the only thing within else clauses, so you can't just always comment out the one line and be done. The change carries some amount of risk so I don't want to do it if it isn't going to actually help. But I don't see eliminating print statements mentioned anywhere in articles on optimization. Is that because it is so obvious no one bothers to mention it?

    Read the article

  • Simplification / optimization of GPS track

    - by GreyCat
    I've got a GPS track produced by gpxlogger(1) (supplied as a client for gpsd). The GPS receiver updates its coordinates every second; gpxlogger's logic is very simple, it writes down the location (lat, lon, ele) and a timestamp (time) received from GPS every n seconds (n = 3 in my case). After several hours of logging, gpxlogger has saved a GPX file several megabytes long that includes several thousand points. Afterwards, I try to plot this track on a map and use it with OpenLayers. It works, but several thousand points make using the map a sloppy and slow experience. I understand that having several thousand points is suboptimal. There are myriads of points that can be deleted without losing almost anything: when several points make up roughly a straight line and we're moving at roughly constant speed between them, we can keep just the first and last point and throw away everything else. I thought of using gpsbabel for such a track simplification / optimization job, but, alas, its simplification filter works only on routes, i.e. it analyzes only the geometrical shape of the path, without timestamps (i.e. without checking that the speed was roughly constant). Is there some ready-made utility / library / algorithm available to optimize tracks? Or maybe I'm missing some clever option in gpsbabel?
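
    A rough sketch of the filter described above — keep a point only if dropping it would lose a bend or a change of speed — is below; the thresholds and the flat-earth distance approximation are illustrative only. (The usual geometry-only simplifier for tracks is the Ramer-Douglas-Peucker algorithm, which could replace the straightness test here.)

        def simplify(track, max_dev=0.0001, max_speed_ratio=1.3):
            # track: list of (lat, lon, t) points, t in seconds, in logging order.
            # Keep a point only if dropping it would hide a bend or a change of speed.
            # Treats lat/lon as planar coordinates: a rough approximation for short segments.
            if len(track) < 3:
                return list(track)

            def point_line_dev(p, a, b):
                (x, y), (x1, y1), (x2, y2) = p[:2], a[:2], b[:2]
                dx, dy = x2 - x1, y2 - y1
                if dx == 0 and dy == 0:
                    return ((x - x1) ** 2 + (y - y1) ** 2) ** 0.5
                return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / (dx * dx + dy * dy) ** 0.5

            def speed(a, b):
                d = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
                return d / max(b[2] - a[2], 1e-9)

            kept = [track[0]]
            for i in range(1, len(track) - 1):
                prev, cur, nxt = kept[-1], track[i], track[i + 1]
                straight = point_line_dev(cur, prev, nxt) <= max_dev
                s1, s2 = speed(prev, cur), speed(cur, nxt)
                steady = s1 <= max_speed_ratio * s2 and s2 <= max_speed_ratio * s1
                if not (straight and steady):
                    kept.append(cur)
            kept.append(track[-1])
            return kept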

    Read the article

  • Optimization of a c++ matrix/bitmap class

    - by Andrew
    I am looking for a 2D matrix (or bitmap) class which is flexible but also offers fast element access. A flexible class should allow you to choose the dimensions at runtime, and would look something like this (simplified):

        class Matrix
        {
        public:
            Matrix(int w, int h) : data(new int[w*h]), width(w) {}
            void SetElement(int x, int y, int val) { data[x+y*width] = val; }
            // ...
        private:
            // symbols
            int width;
            int* data;
        };

    A faster, often proposed solution using templates is (simplified):

        template <int W, int H>
        class TMatrix
        {
        public:
            TMatrix() : data(new int[W*H]) {}
            void SetElement(int x, int y, int val) { data[x+y*W] = val; }
        private:
            int* data;
        };

    This is faster because the width can be "inlined" into the generated code, which the first solution does not allow. However, it is not very flexible, as you can no longer change the size at runtime. So my question is: is there a way to tell the compiler to generate faster code (as in the template solution) when the size is fixed in the code, and flexible code when it is runtime-dependent? I tried to achieve this by writing "const" wherever possible. I tried it with gcc and VS2005, but with no success. This kind of optimization would be useful for many other similar cases.

    Read the article

  • Tetris Piece Rotation Algorithm

    - by coppercoder
    What are the best algorithms (and explanations) for representing and rotating the pieces of a tetris game? I always find the piece rotation and representation schemes confusing. Most tetris games seem to use a naive "remake the array of blocks" at each rotation: http://www.codeplex.com/Project/ProjectDirectory.aspx?ProjectSearchText=tetris However, some use pre-built encoded numbers and bit shifting to represent each piece: http://www.codeplex.com/wintris Is there a method to do this using mathematics (not sure that would work on a cell based board)?
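
    If a piece is stored as a small square grid (3x3 or 4x4 cells), rotation needs no per-piece tables at all: a 90° clockwise turn is just "reverse the row order, then transpose" (equivalently new[x][N-1-y] = old[y][x]). A minimal sketch, using an illustrative S-piece:

        def rotate_cw(piece):
            # 90-degree clockwise rotation of a square grid: reverse rows, then transpose.
            return [list(row) for row in zip(*piece[::-1])]

        s_piece = [
            [0, 1, 1],
            [1, 1, 0],
            [0, 0, 0],
        ]
        for row in rotate_cw(s_piece):
            print(row)
        # [0, 1, 0]
        # [0, 1, 1]
        # [0, 0, 1]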

    Read the article

  • Probability algorithm: Finding probable correct item in a list (e.g. John, John, Jon)

    - by Andrew White
    Hi, take for example the list (L): John, John, John, John, Jon. We presume one item is correct (e.g. John in this case), and want to give a probability that it is correct. First (and good!) attempt: MostFrequentItem(L).Count / L.Count (e.g. 4/5 or 80% likelihood). But consider these two cases: (John, John, Jon, Jonny) and (John, John, Jon, Jon). I want the likelihood of the correct item being John to be higher in the first list! I know I have to count the second-most-frequent item and compare them. Any ideas? This is really busting my brain! Thx, Andrew
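
    One simple heuristic (a judgement call rather than a standard algorithm) is to score the leader against the runner-up instead of against the whole list, with a little smoothing so a missing runner-up doesn't produce a hard 100%. A rough sketch:

        from collections import Counter

        def best_with_confidence(items):
            counts = Counter(items).most_common()
            best, best_n = counts[0]
            second_n = counts[1][1] if len(counts) > 1 else 0
            # Leader vs runner-up, with +1 smoothing so a unanimous list isn't a flat 100%.
            confidence = (best_n + 1) / (best_n + second_n + 2)
            return best, confidence

        print(best_with_confidence(["John", "John", "Jon", "Jonny"]))  # ('John', 0.6)
        print(best_with_confidence(["John", "John", "Jon", "Jon"]))    # tie: 0.5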

    Read the article

  • Mysql Algorithm for Determining Closest Colour Match

    - by buggedcom
    I'm attempting to create a true mosaic application. At the moment I have one mosaic image, i.e. the one the mosaic is based on, and about 4000 images from my iPhoto library that act as the image library. I have already done my research and analysed the mosaic image. I've converted it into 64x64 slices, each of 8 pixels. I've calculated the average colour for each slice and ascertained the r, g, b and brightness (Luminance (perceived option 1) = 0.299*R + 0.587*G + 0.114*B) values. I have done the same for each of the image library photos. The mosaic slices table looks like this: slice_id, slice_image_id, slice_slice_id, slice_image_column, slice_image_row, slice_colour_hex, slice_rgb_red, slice_rgb_blue, slice_rgb_green, slice_rgb_brightness. The image library table looks like this: upload_id, upload_file, upload_colour_hex, upload_rgb_red, upload_rgb_green, upload_rgb_blue, upload_rgb_brightness. So basically I'm reading the image slices from the slices table into PHP and then pulling out the appropriate images from the library table based on the colour hexes. My trouble is that I've been at this too long and have probably had too many energy drinks, so I'm not concentrating properly: I can't figure out how to pick the nearest colour neighbour if the exact hex code doesn't exist. Any ideas on the perfect query? NB: I know pulling out the slices one by one is not ideal, however the mosaic is only rebuilt periodically, so a sudden burst in the MySQL load doesn't really bother me; however, if there is a way to pull the images out all at once that would also be a massive bonus.
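
    The usual way to pick the nearest colour when there is no exact hex match is to minimise the squared distance in RGB space, (upload_rgb_red - R)^2 + (upload_rgb_green - G)^2 + (upload_rgb_blue - B)^2, where R, G, B are the slice's values; in MySQL that expression can go straight into an ORDER BY ... LIMIT 1. A minimal sketch of the same calculation in Python, with illustrative data:

        def nearest_image(target_rgb, library):
            # library: list of (upload_id, (r, g, b)); returns the id with the smallest
            # squared Euclidean distance to target_rgb in RGB space.
            tr, tg, tb = target_rgb
            return min(
                library,
                key=lambda row: (row[1][0] - tr) ** 2 + (row[1][1] - tg) ** 2 + (row[1][2] - tb) ** 2,
            )[0]

        library = [(1, (250, 250, 240)), (2, (10, 20, 30)), (3, (128, 128, 128))]
        print(nearest_image((120, 130, 125), library))   # 3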

    Read the article

  • Working out global tab order algorithmically?

    - by Mrgreen
    We have a proprietary system where we can configure fields on individual forms. However, these fields have a global tab order (we cannot specify one per form). We have a bunch of forms (35 in total) which share a lot of different fields. Each form has a specific tab/edit order that needs to be configured. Example:

        Form 1 has fields A,B,C,D in that order.
        Form 2 has fields E,F,G,A in that order.
        Form 3 has fields E,B,H,I in that order.

    The global tab order would be E,F,G,A,B,C,D,H,I. Notice how A needs to come before B yet after G. Is there any easy way to work this out using the tab order lists for each form? I need to merge this tab order information into a single global tab order list. I have over 200 fields in total and it is near impossible to do by hand.
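
    This is a topological sort: every form contributes "comes before" edges between consecutive fields, and any ordering consistent with all of those edges is a valid global tab order (if the forms contradict each other there is a cycle and no order exists). A minimal sketch using Kahn's algorithm, with the three example forms:

        from collections import defaultdict, deque

        def global_tab_order(forms):
            # forms: list of field lists, each in its required tab order.
            succ, indeg = defaultdict(set), defaultdict(int)
            fields = {f for form in forms for f in form}
            for form in forms:
                for a, b in zip(form, form[1:]):
                    if b not in succ[a]:
                        succ[a].add(b)
                        indeg[b] += 1
            queue = deque(sorted(f for f in fields if indeg[f] == 0))
            order = []
            while queue:
                f = queue.popleft()
                order.append(f)
                for g in sorted(succ[f]):
                    indeg[g] -= 1
                    if indeg[g] == 0:
                        queue.append(g)
            if len(order) != len(fields):
                raise ValueError("forms give contradictory tab orders (cycle)")
            return order

        print(global_tab_order([list("ABCD"), list("EFGA"), list("EBHI")]))
        # ['E', 'F', 'G', 'A', 'B', 'C', 'H', 'D', 'I'] -- one valid order;
        # the E,F,G,A,B,C,D,H,I from the question is another.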

    Read the article

  • Algorithm shortest path between all points

    - by Jeroen
    Hi, suppose I have 10 points. I know the distance between each pair of points. I need to find the shortest possible route passing through all points. I have tried a couple of algorithms (Dijkstra, Floyd-Warshall, ...) and they all give me the shortest path between start and end, but they don't make a route that includes all points. Permutations work fine, but they are too resource-expensive. What algorithms can you advise me to look into for this problem? Or is there a documented way to do this with the above mentioned algorithms? Tnx, Jeroen
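
    Visiting every point exactly once with minimal total distance is the travelling-salesman problem (path version), which is why shortest-path algorithms like Dijkstra and Floyd-Warshall don't apply directly. For around 10 points, the Held-Karp dynamic programme over subsets beats raw permutations: roughly O(n^2 * 2^n) work instead of O(n!). A minimal sketch, assuming a symmetric distance matrix and a route that starts at point 0 and may end anywhere (run it once per start, or add the closing edge, for other variants):

        from itertools import combinations

        def shortest_route(dist):
            # Held-Karp: dp[(subset_bits, j)] = (cost, prev) of the cheapest path that
            # starts at point 0, visits every point in the subset, and ends at j.
            n = len(dist)
            dp = {(1, 0): (0, None)}
            for size in range(2, n + 1):
                for subset in combinations(range(n), size):
                    if 0 not in subset:
                        continue
                    bits = sum(1 << i for i in subset)
                    for j in subset:
                        if j == 0:
                            continue
                        prev_bits = bits ^ (1 << j)
                        dp[(bits, j)] = min(
                            (dp[(prev_bits, k)][0] + dist[k][j], k)
                            for k in subset
                            if k != j and (prev_bits, k) in dp
                        )
            full = (1 << n) - 1
            cost, end = min((dp[(full, j)][0], j) for j in range(1, n))
            route, bits = [end], full          # walk predecessor links backwards
            while end != 0:
                _, prev = dp[(bits, end)]
                bits ^= 1 << end
                route.append(prev)
                end = prev
            return cost, route[::-1]

        dist = [[0, 2, 9, 10],
                [2, 0, 6, 4],
                [9, 6, 0, 8],
                [10, 4, 8, 0]]
        print(shortest_route(dist))            # (14, [0, 1, 3, 2])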

    Read the article

  • algorithm analysis - orders of growth question

    - by cchampion
    I'm studying orders of growth: "big oh", "big omega", and "big theta". Since I can't type the symbols for these I will denote them as follows: ORDER = big oh, OMEGA = big omega, THETA = big theta. For example I'll say n = ORDER(n^2) to mean that the function n is in the order of n^2 (n grows at most as fast as n^2). For the most part I understand these:

        n = ORDER(n^2)            // n grows at most as fast as n^2
        n^2 = OMEGA(n)            // n^2 grows at least as fast as n
        8n^2 + 1000 = THETA(n^2)  // same order of growth

    OK, here comes the example that confuses me: what is n(n+1) vs n^2? I realize that n(n+1) = n^2 + n; I would say it has the same order of growth as n^2, therefore I would say n(n+1) = THETA(n^2). But my question is: would it also be correct to say n(n+1) = ORDER(n^2)? Please help, because this is confusing to me. Thanks.
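
    One way to see that both statements are correct is a short worked bound: for n >= 1,

        n^2 <= n(n+1) = n^2 + n <= 2n^2

    so n(n+1) is squeezed between 1*n^2 and 2*n^2. That gives n(n+1) = THETA(n^2), and since THETA membership means being in both ORDER and OMEGA, it is also correct to write n(n+1) = ORDER(n^2) (and n(n+1) = OMEGA(n^2)).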

    Read the article

  • Looking for ideas for a simple pattern matching algorithm to run on a microcontroller

    - by pic_audio
    I'm working on a project to recognize simple audio patterns. I have two data sets, each made up of between 4 and 32 note/duration pairs. One set is predefined, the other comes from an incoming data stream. The lengths of the two strongly correlated data sets are often different, but roughly the same "shape". My goal is to come up with some sort of ranking of how well the two data sets correlate/match. I have converted the incoming frequencies to pitch and shifted the incoming data stream's pitch so that its average pitch matches that of the predefined data set. I also stretch/compress the incoming data set's durations to match the overall duration of the predefined set. Here are two graphical examples of data that should be ranked as strongly correlated: http://s2.postimage.org/FVeG0-ee3c23ecc094a55b15e538c3a0d83dd5.gif (Sorry, as a new user I couldn't directly post images.) I'm doing this on an 8-bit microcontroller so resources are minimal. Speed is less of an issue; a second or two of processing isn't a deal breaker. It wouldn't surprise me if there is an obvious solution, I've just been staring at the problem too long. Any ideas? Thanks in advance...
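
    One common way to rank how well two sequences of slightly different lengths line up is dynamic time warping (DTW): fill a small table with the cheapest alignment cost, and the final cell is the match score (lower = better). It is O(len_a * len_b) additions, tiny for 4-32 element sequences even on an 8-bit part (on the micro you would use integer costs and a large integer sentinel instead of floats). A minimal sketch over (pitch, duration) pairs; the equal weighting of pitch and duration error is an illustrative guess:

        def dtw_score(a, b, duration_weight=1):
            # a, b: lists of (pitch, duration) pairs.  Lower result = better match.
            INF = float("inf")
            rows, cols = len(a) + 1, len(b) + 1
            dp = [[INF] * cols for _ in range(rows)]
            dp[0][0] = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    cost = (abs(a[i - 1][0] - b[j - 1][0])
                            + duration_weight * abs(a[i - 1][1] - b[j - 1][1]))
                    dp[i][j] = cost + min(dp[i - 1][j],      # skip an element of a
                                          dp[i][j - 1],      # skip an element of b
                                          dp[i - 1][j - 1])  # match the two elements
            return dp[-1][-1]

        pattern = [(60, 4), (62, 2), (64, 2), (60, 4)]
        incoming = [(60, 4), (62, 2), (63, 2), (64, 2), (60, 4)]   # extra, slightly-off note
        print(dtw_score(pattern, incoming))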

    Read the article

  • What is an efficient way to write a password cracking algorithm (Python)

    - by Luminance
    This problem might be relatively simple, but I'm given two text files. One text file contains all the passwords encrypted via crypt.crypt in Python. The other contains over 400k+ normal dictionary words. The assignment gives three different functions that transform a string: one produces all the different permutations of capitalization, one transforms letters into look-alike numbers (e.g. G - 6, B - 8), and one reverses the string. Given the 10-20 encrypted passwords in the password file, what is the most efficient way, in Python, to run those functions on every dictionary word in the words file? It is given that all those words, when transformed in some way, will encrypt to a password in the password file. Here is the function which checks if a given string, when encrypted, is the same as the encrypted password passed in:

        import crypt

        def check_pass(plaintext, encrypted):
            crypted_pass = crypt.crypt(plaintext, encrypted)
            if crypted_pass == encrypted:
                return True
            else:
                return False

    Thanks in advance.

    Read the article

  • Algorithm for converting hierarchical flat data (w/ ParentID) into sorted flat list w/ indentation levels

    - by eagle
    I have the following structure:

        MyClass {
            guid ID
            guid ParentID
            string Name
        }

    I'd like to create an array which contains the elements in the order they should be displayed in a hierarchy (e.g. according to their "left" values), as well as a hash which maps the guid to the indentation level. For example:

        ID  Name      ParentID
        ------------------------
        1   Cats      2
        2   Animal    NULL
        3   Tiger     1
        4   Book      NULL
        5   Airplane  NULL

    This would essentially produce the following objects:

        // Array is an array of all the elements sorted by the way you would
        // see them in a fully expanded tree
        Array[0] = "Airplane"
        Array[1] = "Animal"
        Array[2] = "Cats"
        Array[3] = "Tiger"
        Array[4] = "Book"

        // IndentationLevel is a hash of GUIDs to indentation levels.
        IndentationLevel["1"] = 1
        IndentationLevel["2"] = 0
        IndentationLevel["3"] = 2
        IndentationLevel["4"] = 0
        IndentationLevel["5"] = 0

    For clarity, this is what the hierarchy looks like:

        Airplane
        Animal
          Cats
            Tiger
        Book

    I'd like to iterate through the items the least number of times possible. I also don't want to create a hierarchical data structure; I'd prefer to use arrays, hashes, stacks, or queues. The two objectives are: store a hash of ID to indentation level, and sort the list that holds all the objects according to their "left" values. When I get the list of elements, they are in no particular order. Siblings should be ordered by their Name property. Update: This may seem like I haven't tried coming up with a solution myself and simply want others to do the work for me. However, I have tried coming up with three different solutions, and I've gotten stuck on each. One reason might be that I've tried to avoid recursion (maybe wrongly so). I'm not posting the partial solutions I have so far since they are incorrect and may badly influence the solutions of others.
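
    Since siblings only need ordering by Name and each element only needs a depth, one pass to group children by ParentID plus a depth-first walk with an explicit stack (no recursion) produces both outputs; the only super-linear cost is sorting the siblings. A minimal sketch using the sample rows above:

        def flatten(items):
            # items: list of (id, name, parent_id); parent_id is None for roots.
            children = {}
            for id_, name, parent in items:
                children.setdefault(parent, []).append((name, id_))
            for siblings in children.values():
                siblings.sort()                       # siblings ordered by Name

            order, indent = [], {}
            stack = [(name, id_, 0) for name, id_ in reversed(children.get(None, []))]
            while stack:
                name, id_, depth = stack.pop()
                order.append(name)
                indent[id_] = depth
                for child_name, child_id in reversed(children.get(id_, [])):
                    stack.append((child_name, child_id, depth + 1))
            return order, indent

        rows = [(1, "Cats", 2), (2, "Animal", None), (3, "Tiger", 1),
                (4, "Book", None), (5, "Airplane", None)]
        print(flatten(rows))
        # (['Airplane', 'Animal', 'Cats', 'Tiger', 'Book'], {5: 0, 2: 0, 1: 1, 3: 2, 4: 0})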

    Read the article

  • Algorithm for performing decentralized search in social networks

    - by Jack
    I want to find out all the existing decentralized algorithms that exploit the structural properties of social networks. So far I know the following algorithms:

        1) Best connected search - Adamic et al
        2) Random walk (does not exploit any structural property, but is still decentralized)
        3) Hamming distance search
        4) Weak/strong tie search

    Any help would be appreciated

    Read the article

  • Fast file search algorithm for IP addresses

    - by Dave Jarvis
    Question: What is the fastest way to find if an IP address exists in a file that contains IP addresses sorted as:

        219.93.88.62
        219.94.181.87
        219.94.193.96
        220.1.72.201
        220.110.162.50
        220.126.52.187
        220.126.52.247

    Constraints:

        No database (e.g., MySQL, PostgreSQL, Oracle, etc.).
        Infrequent pre-processing is allowed (see possibilities section).
        Would be nice not to have to load the file on each query (131 KB).
        Uses under 5 megabytes of disk space.

    File details: one IP address per line, 9500+ lines.

    Possible solutions: create a directory hierarchy (radix tree?) then use is_dir() (sadly, this uses 87 megabytes).
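
    Given a sorted ~9,500-line, 131 KB file, the simplest approach within the constraints is to load it once, normalise each address, and binary-search per query: O(log n) lookups, no database, and far under 5 MB. A minimal sketch using Python's bisect (the same idea ports to PHP as a hand-written binary search); the filename is illustrative:

        import bisect

        def load_ips(path):
            # Load once (131 KB is nothing), normalise to integer tuples, and sort.
            with open(path) as f:
                return sorted(tuple(int(part) for part in line.split("."))
                              for line in f if line.strip())

        def contains(ips, ip):
            # O(log n) binary search per query.
            key = tuple(int(part) for part in ip.split("."))
            i = bisect.bisect_left(ips, key)
            return i < len(ips) and ips[i] == key

        ips = load_ips("ips.txt")               # hypothetical filename
        print(contains(ips, "220.1.72.201"))    # True if present in the file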

    Read the article

  • Algorithm and data structure learning resources for dynamic programming

    - by Pranav
    I'm learning dynamic programming now, and while I know the theory well, designing DP algorithms for new problems is still difficult. This is what I would really like now: a book or a website which poses a problem that can be solved by dynamic programming, with the solution and an explanation available, which I would like to see if I can't solve the problem even after butting my head against it for a few hours. Is there some resource that provides this sort of thing for several categories of algorithms - like graph algorithms, dynamic programming, etc.? P.S. I considered TopCoder, but the solutions there are not really appropriate for learning to implement efficient solutions.

    Read the article

  • box stacking in graph theory

    - by mozhdeh
    Please help me find a good solution for this problem. We have n boxes, each with 3 dimensions. We can orient them, and we want to stack them on top of one another to get the maximum height. We can put a box on top of another box only if its two base dimensions (width and length) are smaller than the corresponding dimensions of the box below. For example, a box with dimensions w*d*h can be oriented as (h*d, d*h, w*d, d*w, h*w, w*h). Please help me solve this using graph theory.
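
    In graph terms this becomes a longest-path problem on a DAG: one node per box orientation, an edge from u to v when v's base fits strictly inside u's base, and node weight = height. Sorting nodes by base area gives a topological order, so a simple DP finds the heaviest path - the classic box-stacking DP. A minimal sketch, assuming (as in the textbook version) that orientations may be reused; restricting each physical box to a single use needs extra bookkeeping:

        from itertools import permutations

        def max_stack_height(boxes):
            # boxes: list of (w, d, h) triples.
            # One node per orientation, sorted by base area; dp[i] = best stack height
            # whose bottom element is orientation i.
            orientations = set()
            for box in boxes:
                for w, d, h in permutations(box):
                    orientations.add((max(w, d), min(w, d), h))   # canonical base: w >= d
            nodes = sorted(orientations, key=lambda o: o[0] * o[1], reverse=True)

            dp = [h for _, _, h in nodes]
            for i in range(1, len(nodes)):
                for j in range(i):
                    fits = nodes[i][0] < nodes[j][0] and nodes[i][1] < nodes[j][1]
                    if fits:
                        dp[i] = max(dp[i], dp[j] + nodes[i][2])
            return max(dp)

        print(max_stack_height([(4, 6, 7), (1, 2, 3), (4, 5, 6), (10, 12, 32)]))   # 60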

    Read the article
