Search Results

Search found 1889 results on 76 pages for 'evolutionary algorithms'.


  • Parallelism in .NET – Introduction

    - by Reed
    Parallel programming is something that every professional developer should understand, but is rarely discussed or taught in detail in a formal manner.  Software users are no longer content with applications that lock up the user interface regularly, or take large amounts of time to process data unnecessarily.  Modern development requires the use of parallelism.  There are no longer any excuses for us as developers. Learning to write parallel software is challenging.  It requires more than reading that one chapter on parallelism in our programming language book of choice…

    Today’s systems are no longer getting faster with each generation; in many cases, newer computers are actually slower than previous generation systems.  Modern hardware is shifting towards conservation of power, with processing scalability coming from having multiple computer cores, not faster and faster CPUs.  Our CPU frequencies no longer double on a regular basis, but Moore’s Law is still holding strong.  Now, however, instead of scaling transistors in order to make processors faster, hardware manufacturers are scaling the transistors in order to add more discrete hardware processing threads to the system.

    This changes how we should think about software.  In order to take advantage of modern systems, we need to redesign and rewrite our algorithms to work in parallel.  As with any design domain, it helps tremendously to have a common language, as well as a common set of patterns and tools.

    For .NET developers, this is an exciting time for parallel programming.  Version 4 of the .NET Framework is adding the Task Parallel Library.  This has been back-ported to .NET 3.5 SP1 as part of the Reactive Extensions for .NET, and is available for use today in both .NET 3.5 and .NET 4.0 beta.

    In order to fully utilize the Task Parallel Library and parallelism, both in .NET 4 and previous versions, we need to understand the proper terminology.  For this series, I will provide an introduction to some of the basic concepts in parallelism, and relate them to the tools available in .NET.

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel.  Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria.  This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately.  The Task Parallel Library has tools to assist in this synchronization.

    The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data.  Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset:

        double min = double.MaxValue;
        foreach(var item in collection)
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        }

    This seems like a good candidate for parallelization, but there is a problem here.  If we just wrap this into a call to Parallel.ForEach, we’ll introduce a critical race condition, and get the wrong answer.  Let’s look at what happens here:

        // Buggy code! Do not use!
        double min = double.MaxValue;
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        });

    This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously.  Two threads may perform the check at the same time, and set the wrong value for min.  Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run.  If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively.  If element 1 happens to set the variable first, then element 2 sets the min variable, we’ll detect a min value of 2 instead of 1.  This can lead to wrong answers.

    Unfortunately, fixing this, with the Parallel.ForEach call we’re using, would require adding locking.  We would need to rewrite this like:

        // Safe, but slow
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            lock(syncObject)
                min = System.Math.Min(min, value);
        });

    This will potentially add a huge amount of overhead to our calculation.  Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation.  The problem is the lock statement – any time you use lock(object), you’re almost assuring reduced performance in a parallel situation.
    This leads to two observations I’ll make: when parallelizing a routine, try to avoid locks; that being said, always add any and all required synchronization to avoid race conditions.  These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible.  Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time.

    However, this isn’t the only way to design this routine to implement this algorithm.  Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE).  Processing Element is the standard term for a hardware element which can process and execute instructions.  This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core.  The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger “chunks” which get processed on different threads via the ThreadPool.  This helps reduce the threading overhead, and helps the overall speed.  In general, the Parallel class will only use one thread per PE in the system.

    Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design.  We can parallelize our algorithm more effectively by approaching it differently.  Because the basic aggregation we are doing here (Min) is commutative and associative, we do not need to perform this in a given order.  We knew this to be true already – otherwise, we wouldn’t have been able to parallelize this routine in the first place.  With this in mind, we can treat each thread’s work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, “merge” together the results.

    This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>.  The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal).  The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item.  Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results.

    To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one.  The most basic version of this function is declared as:

        public static ParallelLoopResult ForEach<TSource, TLocal>(
            IEnumerable<TSource> source,
            Func<TLocal> localInit,
            Func<TSource, ParallelLoopState, TLocal, TLocal> body,
            Action<TLocal> localFinally
        )

    The first delegate (the localInit argument) is defined as Func<TLocal>.  This delegate initializes our local state.  It should return some object we can use to track the results of a single thread’s operations.

    The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.
    This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal).  It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed.

    The third delegate (the localFinally argument) is defined as Action<TLocal>.  This delegate is passed our local state after it’s been processed by all of the elements this thread will handle.  This is where you can merge your final results together.  This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you’ll only have to synchronize once per thread, which is an ideal situation.

    Now that I’ve explained how this works, let’s look at the code:

        // Safe, and fast!
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(
            collection,
            // First, we provide a local state initialization delegate.
            () => double.MaxValue,
            // Next, we supply the body, which takes the original item, loop state,
            // and local state, and returns a new local state
            (item, loopState, localState) =>
            {
                double value = item.PerformComputation();
                return System.Math.Min(localState, value);
            },
            // Finally, we provide an Action<TLocal>, to "merge" results together
            localState =>
            {
                // This requires locking, but it's only once per used thread
                lock(syncObject)
                    min = System.Math.Min(min, localState);
            }
        );

    Although this is a bit more complicated than the previous version, it is now both thread-safe, and has minimal locking.  This same approach can be used by Parallel.For, although now, it’s Parallel.For<TLocal>.  When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results.  Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation.  The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add.

    By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization.  Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.
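
    To make that Interlocked variant concrete, here is a minimal sketch (not the MSDN sample itself) of summing with Parallel.For, thread-local state, and Interlocked.Add; the data array is invented for illustration:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class InterlockedSumSketch
        {
            static void Main()
            {
                long[] data = new long[1000000];
                for (int i = 0; i < data.Length; i++) data[i] = i;

                long total = 0;

                Parallel.For(0, data.Length,
                    // localInit: each thread starts with its own private running sum
                    () => 0L,
                    // body: accumulate into the thread-local sum, no locking needed here
                    (i, loopState, localSum) => localSum + data[i],
                    // localFinally: merge each thread's sum atomically, once per thread
                    localSum => Interlocked.Add(ref total, localSum));

                Console.WriteLine(total); // 499999500000
            }
        }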

    Read the article

  • How do comparison sites work?

    - by Vijay
    Need your thoughts on how these comparison sites actually work. Sites like Junglee.com and policybazaar.com, and many others like these, provide comparisons of products, fares, etc. grabbed from different websites. I had read a little about it, and what I found is: these sites use feeds of the sites' data; these sites use APIs which are actually provided by those sites; and for sites which offer neither of these two possibilities, the comparison sites use a web crawler to crawl their data. This is what I have found out. If you think there is more to it, please do give your own views. But I want to know this for learning purposes, and a little out of curiosity: how do they actually match the crawled data, the feeds, and the rest so that there is no duplication? What is the process or algorithm for it? And where should I go to learn these concepts? References for books, articles or anything else are welcome.
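
    As a rough illustration of the matching/deduplication step (a sketch of one common approach, not how any particular site does it): normalize each offer's title into a canonical key and group records from different sources under that key; fuzzier techniques (edit distance, token sets, trained matchers) build on the same idea. The product names below are made up for the example.

        using System;
        using System.Linq;
        using System.Text.RegularExpressions;

        class ProductMatchingSketch
        {
            // Build a crude canonical key: lower-case, strip punctuation, collapse whitespace.
            static string CanonicalKey(string title) =>
                Regex.Replace(Regex.Replace(title.ToLowerInvariant(), @"[^a-z0-9 ]", " "), @"\s+", " ").Trim();

            static void Main()
            {
                // (source, product title) pairs as they might arrive from feeds, APIs, or crawlers.
                var offers = new[]
                {
                    ("SiteA", "Canon EOS-1500D (Black)"),
                    ("SiteB", "canon eos 1500d black"),
                    ("SiteC", "Nikon D3500 Kit"),
                };

                // Offers that normalize to the same key are treated as the same product.
                foreach (var group in offers.GroupBy(o => CanonicalKey(o.Item2)))
                    Console.WriteLine($"{group.Key}: {string.Join(", ", group.Select(o => o.Item1))}");
            }
        }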

    Read the article

  • Parsing scripts that use curly braces

    - by Keikoku
    To get an idea of what I'm doing, I am writing a python parser that will parse directx .x text files. The problem I have deals with how the files are formatted. Although I'm writing it in python, I'm looking for general algorithms for dealing with this sort of parsing. .x files define data using templates. The format of a template is

        template_name {
            [some_data]
        }

    The goal I have is to parse the file line-by-line and whenever I come across a template, I will deal with it accordingly. My initial approach was to check if a line contains an opening or closing brace. If it's an open brace, then I will check what the template name is. Now the catch here is that the open brace doesn't have to occur on the same line as the template name. It could just as well be

        template_name
        {
            [some_data]
        }

    So if I were to use my "open brace exists" criteria, it won't work for any files that use the latter format. A lot of languages also use curly braces (though I'm not sure when people would be parsing the scripts themselves), so I was wondering if anyone knows how to accurately get the template name (or, in some other languages, it could just as well be a function name, though there aren't any keywords to look for).
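
    One general way around this (a sketch in C#, though the idea is language-independent and not specific to the .x grammar): tokenize the whole stream instead of inspecting single lines, remember the most recent identifier, and treat it as the block's name whenever an opening brace appears, tracking nesting with a stack. The tiny input below is invented for illustration.

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        class BraceParserSketch
        {
            static void Main()
            {
                // The name and its brace may or may not share a line; tokenizing makes that irrelevant.
                string input = "Mesh {\n  0.0, 1.0;\n}\nFrameTransformMatrix\n{\n  1.0;\n}";

                // Split into identifiers, braces, and "other" tokens.
                var tokens = Regex.Matches(input, @"[A-Za-z_][A-Za-z0-9_]*|[{}]|[^\s{}]+");

                var nameStack = new Stack<string>();
                string lastIdentifier = null;

                foreach (Match t in tokens)
                {
                    if (t.Value == "{")
                    {
                        nameStack.Push(lastIdentifier ?? "<anonymous>");
                        Console.WriteLine($"enter {nameStack.Peek()} (depth {nameStack.Count})");
                    }
                    else if (t.Value == "}")
                    {
                        Console.WriteLine($"leave {nameStack.Pop()}");
                    }
                    else if (Regex.IsMatch(t.Value, @"^[A-Za-z_]"))
                    {
                        lastIdentifier = t.Value; // remember the most recent identifier
                    }
                }
            }
        }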

    Read the article

  • How do graphics programmers deal with rendering vertices that don't change the image?

    - by canisrufus
    So, the title is a little awkward. I'll give some background, and then ask my question.

    Background: I work as a web GIS application developer, but in my spare time I've been playing with map rendering and improving data interchange formats. I work only in 2D space. One interesting issue I've encountered is that when you're rendering a polygon at a small scale (zoomed way out), many of the vertices are redundant. An extreme case would be that you have a polygon with 500,000 vertices that only takes up a single pixel. If you're sending this data to the browser, it would make sense to omit ~499,999 of those vertices. One way we achieve that is by rendering an image on a server and sending it as a PNG: voila, it's a point. Sometimes, though, we want data sent to the browser where it can be rendered with SVG (or canvas, or webgl) so that it can be interactive.

    The problem: It turns out that, using modern geographic data sets, it's very easy to overload SVG's rendering abilities. In an effort to cope with those limitations, I'm trying to figure out how to visually losslessly reduce a data set for a given scale and map extent (and, if necessary, for a known map pixel width and height). I got a great reduction in data size just using the Douglas-Peucker algorithm, and I believe I was able to get it to keep the polygons true to within one pixel. Unfortunately, Douglas-Peucker doesn't preserve topology, so it changed how borders between polygons got rendered. I couldn't readily find other algorithms to try out and adapt to the purpose, but I don't have much CS/algorithm background and might not recognize them if I saw them.
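
    For reference, here is a minimal sketch of the Douglas-Peucker simplification mentioned above (plain 2D, tolerance in coordinate units). It also hints at why independently simplified polygons stop sharing borders: each ring picks its own points to keep. Topology-preserving approaches typically simplify each shared arc once and rebuild the polygons from the arcs, which is the approach TopoJSON takes.

        using System;
        using System.Collections.Generic;

        class DouglasPeuckerSketch
        {
            // Perpendicular distance from point p to the line through a and b.
            static double PerpDistance((double X, double Y) p, (double X, double Y) a, (double X, double Y) b)
            {
                double dx = b.X - a.X, dy = b.Y - a.Y;
                double len = Math.Sqrt(dx * dx + dy * dy);
                if (len == 0) return Math.Sqrt((p.X - a.X) * (p.X - a.X) + (p.Y - a.Y) * (p.Y - a.Y));
                return Math.Abs(dy * p.X - dx * p.Y + b.X * a.Y - b.Y * a.X) / len;
            }

            static List<(double X, double Y)> Simplify(List<(double X, double Y)> pts, double tolerance)
            {
                if (pts.Count < 3) return new List<(double X, double Y)>(pts);

                // Find the point farthest from the chord between the endpoints.
                int index = 0;
                double maxDist = 0;
                for (int i = 1; i < pts.Count - 1; i++)
                {
                    double d = PerpDistance(pts[i], pts[0], pts[pts.Count - 1]);
                    if (d > maxDist) { maxDist = d; index = i; }
                }

                // If nothing deviates more than the tolerance, the chord alone is kept.
                if (maxDist <= tolerance)
                    return new List<(double X, double Y)> { pts[0], pts[pts.Count - 1] };

                // Otherwise keep the farthest point and recurse on both sides of it.
                var left = Simplify(pts.GetRange(0, index + 1), tolerance);
                var right = Simplify(pts.GetRange(index, pts.Count - index), tolerance);
                left.RemoveAt(left.Count - 1); // avoid duplicating the split point
                left.AddRange(right);
                return left;
            }

            static void Main()
            {
                var line = new List<(double X, double Y)> { (0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9) };
                Console.WriteLine(string.Join(" ", Simplify(line, 1.0)));
            }
        }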

    Read the article

  • Why is Quicksort called "Quicksort"?

    - by Darrel Hoffman
    The point of this question is not to debate the merits of this over any other sorting algorithm - certainly there are many other questions that do this. This question is about the name. Why is Quicksort called "Quicksort"? Sure, it's "quick", most of the time, but not always. The possibility of degenerating to O(N^2) is well known. There are various modifications to Quicksort that mitigate this problem, but the ones which bring the worst case down to a guaranteed O(n log n) aren't generally called Quicksort anymore. (e.g. Introsort). I just wonder why of all the well-known sorting algorithms, this is the only one deserving of the name "quick", which describes not how the algorithm works, but how fast it (usually) is. Mergesort is called that because it merges the data. Heapsort is called that because it uses a heap. Introsort gets its name from "Introspective", since it monitors its own performance to decide when to switch from Quicksort to Heapsort. Similarly for all the slower ones - Bubblesort, Insertion sort, Selection sort, etc. They're all named for how they work. The only other exception I can think of is "Bogosort", which is really just a joke that nobody ever actually uses in practice. Why isn't Quicksort called something more descriptive, like "Partition sort" or "Pivot sort", which describe what it actually does? It's not even a case of "got here first". Mergesort was developed 15 years before Quicksort. (1945 and 1960 respectively according to Wikipedia) I guess this is really more of a history question than a programming one. I'm just curious how it got the name - was it just good marketing?

    Read the article

  • How to Detect Sprites in a SpriteSheet?

    - by IAE
    I'm currently writing a Sprite Sheet Unpacker such as Alferd Spritesheet Unpacker. Now, before this is sent to gamedev, this isn't necessarily about games. I would like to know how to detect a sprite within a spritesheet, or more abstractly, a shape inside of an image. Given a sprite sheet, I want to detect and extract all individual sprites. I've followed the algorithm detailed in Alferd's Blog Post, which goes like:

    1. Determine the predominant color and dub it the BackgroundColor.
    2. Iterate over each pixel and check ColorAtXY == BackgroundColor.
    3. If false, we've found a sprite. Keep going right until we find a BackgroundColor again, backtrack one, go down and repeat until a BackgroundColor is reached. Create a box from location to ending location.
    4. Repeat this until all sprites are boxed up.
    5. Combine overlapping boxes (or those within a very short distance).
    6. The resulting non-overlapping boxes should contain the sprites.

    This implementation is fine, especially for small sprite sheets. However, I find the performance too poor for larger sprite sheets and I would like to know what algorithms or techniques can be leveraged to speed up the finding of sprites. A second implementation I considered, but have not tested yet, is to find the first pixel, then use a backtracking algorithm to find every connected pixel. This should find a contiguous sprite (it breaks down if the sprite is something like an explosion where particles are no longer part of the main sprite). The cool thing is that I can immediately remove a detected sprite from the sprite sheet. Any other suggestions?
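
    The connected-pixel idea is essentially a flood fill per sprite. Here is a minimal sketch (a toy 2D int array stands in for the image, and 0 stands in for the detected background color): scan for a non-background pixel, flood fill its 4-connected region, record the bounding box, and mark the region visited so it is never scanned again.

        using System;
        using System.Collections.Generic;

        class SpriteFinderSketch
        {
            // Finds bounding boxes of 4-connected non-background regions in a pixel grid.
            static List<(int MinX, int MinY, int MaxX, int MaxY)> FindSprites(int[,] pixels, int background)
            {
                int h = pixels.GetLength(0), w = pixels.GetLength(1);
                var visited = new bool[h, w];
                var boxes = new List<(int MinX, int MinY, int MaxX, int MaxY)>();

                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++)
                    {
                        if (visited[y, x] || pixels[y, x] == background) continue;

                        // BFS flood fill from this pixel, growing a bounding box as we go.
                        int minX = x, minY = y, maxX = x, maxY = y;
                        var queue = new Queue<(int, int)>();
                        queue.Enqueue((y, x));
                        visited[y, x] = true;

                        while (queue.Count > 0)
                        {
                            var (cy, cx) = queue.Dequeue();
                            minX = Math.Min(minX, cx); maxX = Math.Max(maxX, cx);
                            minY = Math.Min(minY, cy); maxY = Math.Max(maxY, cy);

                            foreach (var (ny, nx) in new[] { (cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1) })
                            {
                                if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
                                if (visited[ny, nx] || pixels[ny, nx] == background) continue;
                                visited[ny, nx] = true;
                                queue.Enqueue((ny, nx));
                            }
                        }
                        boxes.Add((minX, minY, maxX, maxY));
                    }
                return boxes;
            }

            static void Main()
            {
                int[,] sheet =
                {
                    { 0, 1, 1, 0, 0, 2 },
                    { 0, 1, 1, 0, 0, 2 },
                    { 0, 0, 0, 0, 0, 0 },
                };
                foreach (var b in FindSprites(sheet, 0))
                    Console.WriteLine(b); // (1, 0, 2, 1) and (5, 0, 5, 1)
            }
        }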

    Read the article

  • Grading an algorithm: Readability vs. Compactness

    - by amiregelz
    Consider the following question in a test \ interview: Implement the strcpy() function in C: void strcpy(char *destination, char *source); The strcpy function copies the C string pointed by source into the array pointed by destination, including the terminating null character. Assume that the size of the array pointed by destination is long enough to contain the same C string as source, and does not overlap in memory with source. Say you were the tester, how would you grade the following answers to this question?

    1)

        void strcpy(char *destination, char *source)
        {
            while (*source != '\0')
            {
                *destination = *source;
                source++;
                destination++;
            }
            *destination = *source;
        }

    2)

        void strcpy(char *destination, char *source)
        {
            while (*(destination++) = *(source++))
                ;
        }

    The first implementation is straightforward - it is readable and programmer-friendly. The second implementation is shorter (one line of code) but less programmer-friendly; it's not so easy to understand the way this code is working, and if you're not familiar with the operator precedence at play in this code then it's a problem. I'm wondering if the second answer would show more complexity and more advanced thinking, in the tester's eyes, even though both algorithms behave the same, and although code readability is considered to be more important than code compactness. It seems to me that since making an algorithm this compact is more difficult to implement, it will show a higher level of thinking as an answer in a test. However, it is also possible that a tester would consider the second answer not good because it's not readable. I would also like to mention that this is not specific to this example, but general for code readability vs. compactness when implementing an algorithm, specifically in tests \ interviews.

    Read the article

  • Analyzing Memory Usage: Java vs C++ Negligible?

    - by Anthony
    How does the memory usage of an integer object written in Java compare\contrast with the memory usage of an integer object written in C++? Is the difference negligible? No difference? A big difference? I'm guessing it's the same because an int is an int regardless of the language (?) The reason why I asked this is because I was reading about the importance of knowing when a program's memory requirements will prevent the programmer from solving a given problem. What fascinated me is the amount of memory required for creating a single Java object. Take, for example, an integer object. Correct me if I'm wrong, but a Java Integer object requires 24 bytes of memory: 4 bytes for its int instance variable, 16 bytes of overhead (reference to the object's class, garbage collection info & synchronization info), and 4 bytes of padding. As another example, a Java array (which is implemented as an object) requires 48+ bytes: 24 bytes of header info, 16 bytes of object overhead, 4 bytes for length, 4 bytes for padding, plus the memory needed to store the values. How do these memory usages compare with the same code written in C++? I used to be oblivious about the memory usage of the C++ and Java programs I wrote, but now that I'm beginning to learn about algorithms, I'm having a greater appreciation for the computer's resources.

    Read the article

  • What is wrong with my logic for the divide and conquer algorithm for Closest pair problem?

    - by Programming Noob
    I have been following Coursera's course on Algorithms and came up with a thought about the divide-and-conquer algorithm for the closest pair problem that I want clarified. As per Prof Roughgarden's algorithm (which you can see here if you're interested): for a given set of points P, of which we have two copies - sorted in the X and Y directions - Px and Py, the algorithm can be given as closestPair(Px,Py):

    1. Divide the points into a left half, Q, and a right half, R, and form sorted copies of both halves along the x and y directions - Qx, Qy, Rx, Ry.
    2. Let closestPair(Qx,Qy) be points p1 and q1.
    3. Let closestPair(Rx,Ry) be p2, q2.
    4. Let delta be the minimum of dist(p1,q1) and dist(p2,q2).
    5. This is the unfortunate case: let p3, q3 be the closestSplitPair(Px,Py,delta).
    6. Return the best result.

    Now, the clarification that I want is related to step 5. I should say this beforehand: what I'm suggesting is barely any improvement at all, but if you're still interested, read ahead. Prof R says that since the points are already sorted in the X and Y directions, to find the best pair in step 5, we need to iterate over points in the strip of width 2*delta, starting from bottom to top, and in the inner loop we need only 7 comparisons. Can this be bettered to just one? How I think it is possible seemed a little difficult to explain in plain text, so I drew a diagram and wrote it on paper and uploaded it here. Since no one else came up with it, I'm pretty sure there's some error in my line of thought. But I have literally been thinking about this for HOURS now, and I just HAD to post this. It's all that is in my head. Can someone point out where I'm going wrong?
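
    For context, here is a minimal sketch of the step-5 split-pair scan as usually presented: keep only the points within delta of the dividing line, walk them in y order, and compare each one against at most the next 7. The point values in Main are made up.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class ClosestSplitPairSketch
        {
            static double Dist((double X, double Y) p, (double X, double Y) q) =>
                Math.Sqrt((p.X - q.X) * (p.X - q.X) + (p.Y - q.Y) * (p.Y - q.Y));

            // Py: all points sorted by y, midX: x-coordinate of the dividing line,
            // delta: best distance from the two recursive calls. Returns the best
            // split-pair distance (or delta if no split pair beats it).
            static double ClosestSplitPairDistance(List<(double X, double Y)> Py, double midX, double delta)
            {
                // Only points within delta of the dividing line can form a closer split pair.
                var strip = Py.Where(p => Math.Abs(p.X - midX) < delta).ToList();

                double best = delta;
                for (int i = 0; i < strip.Count; i++)
                    // Each strip point needs comparing against at most the next 7 (in y order).
                    for (int j = i + 1; j < strip.Count && j <= i + 7; j++)
                        best = Math.Min(best, Dist(strip[i], strip[j]));

                return best;
            }

            static void Main()
            {
                var Py = new List<(double X, double Y)> { (0.9, 0), (1.1, 0.1), (0.5, 2), (1.6, 2.2) };
                Console.WriteLine(ClosestSplitPairDistance(Py, 1.0, 0.5)); // ~0.2236
            }
        }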

    Read the article

  • Beginners guide to developing optimization software

    - by Florenc
    I am a novice in "serious" programming, i.e. applications that deal with real-life applications and software projects that go beyond school assignments. My interests include optimization, operations research, algorithms, and lately I discovered how much I do like software design/development/engineering. I have already developed some simple desktop applications for some "famous" problems like TSP using heuristic approaches, a VRP solver (in progress), and so on. While developing this kind of software I actually used basic concepts taught at school, such as object-oriented analysis and design. But I found these courses rather elementary and quite boring (for my expectations). So I decided to go a little further and start developing "real" software (and this is where I realized how important and interesting software engineering/design is.) Now, here's my issue: I can not find a "study guide" for developing software of this kind. Currently, there are numerous resources out there (books, websites, tutorials) on designing and developing complex IS, web applications, and smartphone apps, but I can't find a book, for example, entitled "optimization software development". Definitely, someone could claim that "design patterns apply to software in general", but that's not my point. My point is that I could simply use my imagination for "simple" implementations, but what happens when my imagination can not go further? In other words, I'm looking for a guide/path to bridge the gap between: Mathematics-Algorithm Design-Software Engineering-Optimization-Software development

    Read the article

  • Trying to sort the coefficients of the polynomial (z-a)(z-b)(z-c)...(z-n) into a vector

    - by pajamas
    So I have a factored polynomial of the form (z-a)(z-b)(z-c)...(z-n) for n an even positive integer. Thus the coefficient of z^k for 0 <= k < n will be the sum of all distinct (n-k)-element products taken from the set {a,b,...,n}, multiplied by (-1)^(n-k) (equal to (-1)^k here since n is even); I hope that makes sense, please ask if you need more clarification. I'm trying to put these coefficients into a row vector with the first column containing the constant coefficient (which would be abc...n) and the last column containing the coefficient for z^n (which would be 1). I imagine there is a way to brute force this with a ton of nested loops, but I'm hoping there is a more efficient way. This is being done in Matlab (which I'm not that familiar with) and I know Matlab has a ton of algorithms and functions, so maybe it's got something I can use. Can anyone think of a way to do this? Example: (z-1)(z-2)(z-3) = z^3 - (1 + 2 + 3)z^2 + (1*2 + 1*3 + 2*3)z - 1*2*3 = z^3 - 6z^2 + 11z - 6. Note that this example is n=3, which is odd, but n=4 would have taken too long to do by hand. Edit: Let me know if you think this would be better posted at TCS or Math Stack Exchange.
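
    For what it's worth, Matlab's built-in poly does exactly this from a vector of roots (poly([1 2 3]) returns [1 -6 11 -6], highest power first), so no explicit loops are needed there. The underlying algorithm is just repeated convolution: start from the constant polynomial 1 and multiply in one factor (z - r) at a time. A sketch of that loop, written here in C# and emitting coefficients in ascending order (constant term first, as asked):

        using System;

        class PolyFromRootsSketch
        {
            // Returns coefficients c[0..n] with c[k] the coefficient of z^k,
            // for the polynomial (z - roots[0]) * ... * (z - roots[n-1]).
            static double[] CoefficientsFromRoots(double[] roots)
            {
                var coeffs = new double[roots.Length + 1];
                coeffs[0] = 1.0; // start with the constant polynomial 1

                for (int m = 0; m < roots.Length; m++)
                {
                    // Multiply the current polynomial by (z - roots[m]):
                    // new[k] = old[k-1] - roots[m] * old[k], working from high powers down.
                    for (int k = m + 1; k >= 1; k--)
                        coeffs[k] = coeffs[k - 1] - roots[m] * coeffs[k];
                    coeffs[0] = -roots[m] * coeffs[0];
                }
                return coeffs;
            }

            static void Main()
            {
                // (z-1)(z-2)(z-3) = -6 + 11z - 6z^2 + z^3
                Console.WriteLine(string.Join(", ", CoefficientsFromRoots(new double[] { 1, 2, 3 })));
            }
        }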

    Read the article

  • Count unique visitors by group of visited places

    - by Mathieu
    I'm facing the problem of counting the unique visitors of groups of places. Here is the situation: I have visitors that can visit places. For example, that can be internet users visiting web pages, or customers going to restaurants. A visitor can visit as many places as he wishes, and a place can be visited by several visitors. A visitor can come to the same place several times. The places belong to groups. A group can obviously contain several places, and places can belong to several groups. Given that, for each visitor, we can have a list of visited places, how can I have the number of unique visitors per group of places? Example: I have visitors A, B, C and D; and I have places x, y and z. I have these visiting lists:

        [ A -> [x,x,y,x], B -> [], C -> [z,z], D -> [y,x,x,z] ]

    Getting the number of unique visitors per place is quite easy:

        [ x -> 2, // A and D visited x
          y -> 2, // A and D visited y
          z -> 2 ] // C and D visited z

    But if I have these groups:

        [ G1 -> [x,y,z], G2 -> [x,z], G3 -> [x,y] ]

    How can I have this information?

        [ G1 -> 3, // A, C and D visited x or y or z
          G2 -> 3, // A, C and D visited x or z
          G3 -> 2 ] // A and D visited x or y

    Additional notes: There are so many places that it is not possible to store information about every possible group. It's not a problem if approximations are made; I don't need 100% precision. Having a fast algorithm that tells me that there were 12345 visits in a group instead of 12543 is better than a slow algorithm telling the exact number. Let's say there can be ~5% deviation. Is there an algorithm or class of algorithms that addresses this type of problem?
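
    If exact counts are acceptable at your scale, the direct approach is one set of distinct visitors per place, with a group's count being the size of the union of its places' sets. Since you explicitly allow ~5% error, the usual trick is to replace each per-place set with a mergeable cardinality sketch such as HyperLogLog, which keeps the union step cheap and the memory per place small. A small exact-set sketch using the example data above:

        using System;
        using System.Collections.Generic;

        class GroupVisitorsSketch
        {
            static void Main()
            {
                // Visits per visitor (repeat visits included), as in the example above.
                var visits = new Dictionary<string, string[]>
                {
                    ["A"] = new[] { "x", "x", "y", "x" },
                    ["B"] = new string[0],
                    ["C"] = new[] { "z", "z" },
                    ["D"] = new[] { "y", "x", "x", "z" },
                };

                var groups = new Dictionary<string, string[]>
                {
                    ["G1"] = new[] { "x", "y", "z" },
                    ["G2"] = new[] { "x", "z" },
                    ["G3"] = new[] { "x", "y" },
                };

                // One set of distinct visitors per place. At large scale, each of these
                // sets would be replaced by a mergeable sketch such as HyperLogLog.
                var visitorsPerPlace = new Dictionary<string, HashSet<string>>();
                foreach (var visit in visits)
                    foreach (var place in visit.Value)
                    {
                        if (!visitorsPerPlace.TryGetValue(place, out var set))
                            visitorsPerPlace[place] = set = new HashSet<string>();
                        set.Add(visit.Key);
                    }

                // A group's unique-visitor count is the size of the union of its places' sets.
                foreach (var group in groups)
                {
                    var union = new HashSet<string>();
                    foreach (var place in group.Value)
                        if (visitorsPerPlace.TryGetValue(place, out var set))
                            union.UnionWith(set);
                    Console.WriteLine($"{group.Key} -> {union.Count}"); // G1 -> 3, G2 -> 3, G3 -> 2
                }
            }
        }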

    Read the article

  • Applying the Knuth-Plass algorithm (or something better?) to read two books with different length and amount of chapters in parallel

    - by user147133
    I have a Bible reading plan that covers the whole Bible in 180 days. For most of the time, I read 5 chapters in the Old Testament and 1 or 2 (1.5) chapters in the New Testament each day. The problem is that some chapters are longer than others (for example Psalm 119, which is 7 times longer than an average chapter in the Bible), and the plan I'm following doesn't take that into account. I end up with some days having a lot more to read than others. I thought I could use programming to make myself a better plan. I have a data structure with a list of all chapters in the Bible and their length in number of lines. (I found that the number of lines is the best criterion, but it could have been the number of verses or the number of words as well.) I then started to think about this problem as a line wrap problem. Think of a chapter like a word, a day like a line and the whole plan as a paragraph. The "length" of a word (a chapter) is the number of lines in that chapter. I could then generate the best possible reading plan by applying a simplified Knuth-Plass algorithm to find the best breakpoints. This works well if I want to read the Bible from beginning to end. But I want to read a little from the New Testament each day in parallel with the Old Testament. Of course I can run the Knuth-Plass algorithm on the Old Testament first, then on the New Testament, and get two separate plans. But those plans merged is not an optimal plan. Worst-case days (days with extra much reading) in the New Testament plan will randomly occur on the same days as the worst-case days in the Old Testament. Since the New Testament has about 180*1.5 chapters, the plan is generally to read one chapter the first day, two the second, one the third, etc... And I would like the plan for the Old Testament to compensate for this alternating length. So I will need a new and better algorithm, or I will have to use the Knuth-Plass algorithm in a way that I've not figured out. I think this could be an interesting and challenging nut for people interested in algorithms, so I wanted to see if any of you have a good solution in mind.
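
    One way to attack this without full Knuth-Plass (a sketch, with made-up chapter lengths): a "minimum badness" dynamic program that splits a fixed sequence of chapters into a fixed number of days, minimizing the squared deviation of each day's total from the ideal per-day amount. The parallel New Testament track could then be folded in by subtracting each day's known NT load from that day's ideal before computing the badness, so heavy NT days get lighter OT assignments.

        using System;
        using System.Collections.Generic;

        class ReadingPlanSketch
        {
            // Splits chapterLengths into `days` consecutive runs, minimizing the sum of
            // squared deviations of each day's total from the ideal per-day amount.
            // This is the simplified "minimum badness" dynamic program, not full Knuth-Plass.
            static List<int> PlanBreaks(int[] chapterLengths, int days)
            {
                int n = chapterLengths.Length;
                var prefix = new long[n + 1];
                for (int i = 0; i < n; i++) prefix[i + 1] = prefix[i] + chapterLengths[i];
                double ideal = (double)prefix[n] / days;

                var cost = new double[days + 1, n + 1];
                var back = new int[days + 1, n + 1];
                for (int d = 0; d <= days; d++)
                    for (int i = 0; i <= n; i++)
                        cost[d, i] = double.PositiveInfinity;
                cost[0, 0] = 0;

                for (int d = 1; d <= days; d++)
                    for (int i = d; i <= n; i++)          // first i chapters covered by the first d days
                        for (int j = d - 1; j < i; j++)   // day d reads chapters j..i-1
                        {
                            double dayTotal = prefix[i] - prefix[j];
                            double badness = (dayTotal - ideal) * (dayTotal - ideal);
                            if (cost[d - 1, j] + badness < cost[d, i])
                            {
                                cost[d, i] = cost[d - 1, j] + badness;
                                back[d, i] = j;
                            }
                        }

                // Recover the breakpoints (the index where each day starts).
                var breaks = new List<int>();
                for (int d = days, i = n; d >= 1; d--) { breaks.Add(back[d, i]); i = back[d, i]; }
                breaks.Reverse();
                return breaks;
            }

            static void Main()
            {
                // Hypothetical chapter lengths (in lines) split across 3 days.
                int[] lengths = { 30, 25, 40, 10, 176, 22, 28 };
                Console.WriteLine(string.Join(", ", PlanBreaks(lengths, 3))); // 0, 4, 5
            }
        }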

    Read the article

  • Generic Adjacency List Graph implementation

    - by DmainEvent
    I am trying to come up with a decent adjacency list graph implementation so I can start tooling around with all kinds of graph problems and algorithms, like traveling salesman and other problems... But I can't seem to come up with a decent implementation. This is probably because I am trying to dust the cobwebs off my data structures class. But what I have so far... and this is implemented in Java... is basically an edgeNode class that has a generic type and a weight, in the event the graph is indeed weighted. public class edgeNode<E> { private E y; private int weight; //... getters and setters as well as constructors... } I have a graph class that has a list of edges, a value for the number of vertices, an int value for the number of edges, as well as a boolean value for whether or not it is directed. This brings up my first question: if the graph is indeed directed, shouldn't I have a value in my edgeNode class? Or would I just need to add another vertex to my LinkedList? That would imply that a directed graph is 2X as big as an undirected graph, wouldn't it? public class graph { private List<edgeNode<?>> edges; private int nVertices; private int nEdges; private boolean directed; //... getters and setters as well as constructors... } Finally, does anybody have a standard way of initializing their graph? I was thinking of reading in a pipe-delimited file but that is so 1997. public graph GenereateGraph(boolean directed, String file){ List<edgeNode<?>> edges; graph g; try{ int count = 0; String line; FileReader input = new FileReader("C:\\Users\\derekww\\Documents\\JavaEE Projects\\graphFile"); BufferedReader bufRead = new BufferedReader(input); line = bufRead.readLine(); count++; edges = new ArrayList<edgeNode<?>>(); while(line != null){ line = bufRead.readLine(); Object edgeInfo = line.split("|")[0]; int weight = Integer.parseInt(line.split("|")[1]); edgeNode<String> e = new edgeNode<String>((String) edges.add(e); } return g; } catch(Exception e){ return null; } } I guess when I am adding edges, if the boolean is true I would be adding a second edge. So far, this all depends on the file I write. So if I wrote a file with the following vertices and weights...

        Buffalo | 18
        Pittsburgh | 20
        New York | 15
        D.C | 45

    I would obviously load them into my list of edges, but how can I represent one vertex connected to the other... and so on... I would need the opposite vertices? Say I was representing highways connected to each city, weighted and un-directed (each edge is bi-directional with weights in some fictional distance unit)... Would my implementation be the best way to do that? I found this tutorial online, Graph Tutorial, that has a connector object. This appears to me to be a collection of vertices pointing to each other. So you would have A and B each with their weights and so on, and you would add this to a list and this list of connectors to your graph... That strikes me as somewhat cumbersome and a little dismissive of the adjacency list concept? Am I wrong and that is a novel solution? This is all inspired by Steve Skiena's Algorithm Design Manual. Which I have to say is pretty good so far. Thanks for any help you can provide.
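
    Sticking with the C# used elsewhere on this page (the idea maps directly to Java), here is a minimal adjacency-list sketch. It also answers the directed/undirected question: the graph type is not different, an undirected edge is simply recorded in both endpoints' lists, so an undirected edge contributes two adjacency entries while a directed one contributes one.

        using System;
        using System.Collections.Generic;

        // A minimal generic adjacency-list graph: one edge list per vertex.
        class Graph<T>
        {
            private readonly Dictionary<T, List<(T To, int Weight)>> adjacency =
                new Dictionary<T, List<(T To, int Weight)>>();

            public bool Directed { get; }

            public Graph(bool directed) => Directed = directed;

            public void AddVertex(T v)
            {
                if (!adjacency.ContainsKey(v)) adjacency[v] = new List<(T To, int Weight)>();
            }

            public void AddEdge(T from, T to, int weight)
            {
                AddVertex(from);
                AddVertex(to);
                adjacency[from].Add((to, weight));
                // An undirected edge is stored from both endpoints: two adjacency entries,
                // not a second edge object or a bigger graph.
                if (!Directed) adjacency[to].Add((from, weight));
            }

            public IEnumerable<(T To, int Weight)> Neighbors(T v) => adjacency[v];
        }

        class Demo
        {
            static void Main()
            {
                var g = new Graph<string>(directed: false);
                // Hypothetical highway distances between cities, pipe-delimited file or not.
                g.AddEdge("Buffalo", "Pittsburgh", 18);
                g.AddEdge("Pittsburgh", "New York", 20);

                foreach (var (to, w) in g.Neighbors("Pittsburgh"))
                    Console.WriteLine($"Pittsburgh -> {to} ({w})");
            }
        }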

    Read the article

  • Multidimensional multiple-choice knapsack problem: find a feasible solution

    - by Onheiron
    My assignment is to use local search heuristics to solve the Multidimensional multiple-choice knapsack problem, but to do so I first need to find a feasible solution to start with. Here is an example problem with what I tried so far.

    Problem:

                     R1  R2  R3
        RESOURCES:    8   8   8
        GROUPS:
        G1:  11.0     3   2   2
             12.0     1   1   3
        G2:  20.0     1   1   3
              5.0     2   3   2
        G3:  10.0     2   2   3
             30.0     1   1   3

    Sorting strategies: To find a starting feasible solution for my local search I decided to ignore maximization of gains and just try to fit the resource requirements. I decided to sort the choices (strategies) in each group by comparing their "distance" from the multidimensional space origin, thus calculating SQRT(R1^2 + R2^2 + ... + RN^2). I felt like this was a keen solution as it somehow privileged those choices with resource usages closer to each other (e.g. R1:2 R2:2 R3:2 < R1:1 R2:2 R3:3) even if the total sum is the same. Doing so and selecting the best choice from each group proved sufficient to find a feasible solution for many [30] different benchmark problems, but of course I knew it was just luck. So I came up with the problem presented above, which sorts like this:

                     R1  R2  R3
        RESOURCES:    8   8   8
        GROUPS:
        G1:  12.0     1   1   3   < select this
             11.0     3   2   2
        G2:  20.0     1   1   3   < select this
              5.0     2   3   2
        G3:  30.0     1   1   3   < select this
             10.0     2   2   3

    And it is not feasible, because the resource consumption is R1:3, R2:3, R3:9. The easy solution is to pick one of the second-best choices in group 1 or 2, so I'll need some kind of iteration (local search[?]) to find the starting feasible solution for my local search solution. Here are the options I came up with.

    Option 1: iterate choices. I tried to find a way to iterate all the choices with a specific order, something like

        G1 G2 G3
         1  1  1
         2  1  1
         1  2  1
         1  1  2
         2  2  1
         ...

    believing that feasible solutions won't be that far away from the unfeasible one I start with, and thus the number of iterations will stay quite low. Does this make any sense? If yes, how can I iterate the choices (grouped combinations) of each group keeping "as near as possible" to the previous iteration?

    Option 2: Change the comparison term. I tried to think how to find a better variable to sort the choices on. I thought of a measure of how "precious" a resource is based on supply and demand, so that a higher demand of a more precious resource will push you down the list, but this didn't help at all. Also I thought there probably isn't going to be such a comparison variable which assures me a feasible solution at first strike. Is there such a variable? If not, is there a better sorting criterion anyway?

    Option 3: implement any known sub-optimal fast solving algorithm. Unfortunately I could not find any such algorithms online. Any suggestion?
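
    One simple repair heuristic to try before anything fancier (a sketch, not a known named algorithm, with the group data hard-coded from the example above): start from the per-group choices you already selected, and while any resource is over capacity, switch the choice in whichever single group yields the largest drop in total overuse. It can stall in a local optimum, at which point a random restart or your Option 1 enumeration would take over.

        using System;

        class FeasibilityRepairSketch
        {
            static readonly int[] Capacity = { 8, 8, 8 };

            // Groups[g][c] = resource requirements of choice c in group g (example data above).
            static readonly int[][][] Groups =
            {
                new[] { new[] { 3, 2, 2 }, new[] { 1, 1, 3 } },   // G1: 11.0, 12.0
                new[] { new[] { 1, 1, 3 }, new[] { 2, 3, 2 } },   // G2: 20.0, 5.0
                new[] { new[] { 2, 2, 3 }, new[] { 1, 1, 3 } },   // G3: 10.0, 30.0
            };

            // Total amount by which the selection exceeds capacity, summed over resources.
            static int Overuse(int[] selection)
            {
                int total = 0;
                for (int r = 0; r < Capacity.Length; r++)
                {
                    int used = 0;
                    for (int g = 0; g < Groups.Length; g++) used += Groups[g][selection[g]][r];
                    total += Math.Max(0, used - Capacity[r]);
                }
                return total;
            }

            static void Main()
            {
                // Start from the distance-sorted picks: (1,1,3) in every group.
                int[] selection = { 1, 0, 1 };

                while (Overuse(selection) > 0)
                {
                    int bestGroup = -1, bestChoice = -1, bestOveruse = Overuse(selection);
                    for (int g = 0; g < Groups.Length; g++)
                        for (int c = 0; c < Groups[g].Length; c++)
                        {
                            if (c == selection[g]) continue;
                            int old = selection[g];
                            selection[g] = c;          // try the swap
                            int o = Overuse(selection);
                            selection[g] = old;        // undo it
                            if (o < bestOveruse) { bestOveruse = o; bestGroup = g; bestChoice = c; }
                        }

                    if (bestGroup < 0) { Console.WriteLine("stuck; restart needed"); return; }
                    selection[bestGroup] = bestChoice; // commit the best single-group swap
                }

                Console.WriteLine("feasible: " + string.Join(", ", selection));
            }
        }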

    Read the article

  • Parallelism in .NET – Part 12, More on Task Decomposition

    - by Reed
    Many tasks can be decomposed using a Data Decomposition approach, but often, this is not appropriate.  Frequently, decomposing the problem into distinctive tasks that must be performed is a more natural abstraction. However, as I mentioned in Part 1, Task Decomposition tends to be a bit more difficult than data decomposition, and can require a bit more effort.  Before we begin parallelizing our algorithm based on the tasks being performed, we need to decompose our problem, and take special care of certain considerations such as ordering and grouping of tasks.

    Up to this point in this series, I’ve focused on parallelization techniques which are most appropriate when a problem space can be decomposed by data.  Using PLINQ and the Parallel class, I’ve shown how problem spaces where there is a collection of data, and each element needs to be processed, can potentially be parallelized. However, there are many other routines where this is not appropriate.  Often, instead of working on a collection of data, there is a single piece of data which must be processed using an algorithm or series of algorithms.  Here, there is no collection of data, but there may still be opportunities for parallelism.

    As I mentioned before, in cases like this, the approach is to look at your overall routine, and decompose your problem space based on tasks.  The idea here is to look for discrete “tasks,” individual pieces of work which can be conceptually thought of as a single operation. Let’s revisit the example I used in Part 1, an application startup path.  Say we want our program, at startup, to do a bunch of individual actions, or “tasks”.  The following is our list of duties we must perform right at startup:

    1. Display a splash screen
    2. Request a license from our license manager
    3. Check for an update to the software from our web server
    4. If an update is available, download it
    5. Setup our menu structure based on our current license
    6. Open and display our main, welcome Window
    7. Hide the splash screen

    The first step in Task Decomposition is breaking up the problem space into discrete tasks. This, naturally, can be abstracted as seven discrete tasks.  In the serial version of our program, if we were to diagram this, the general process would appear as a single chain of these steps, one after another.

    These tasks, obviously, provide some opportunities for parallelism.  Before we can parallelize this routine, we need to analyze these tasks, and find any dependencies between tasks.  In this case, our dependencies include:

    - The splash screen must be displayed first, and as quickly as possible.
    - We can’t download an update before we see whether one exists.
    - Our menu structure depends on our license, so we must check for the license before setting up the menus.
    - Since our welcome screen will notify the user of an update, we can’t show it until we’ve downloaded the update.
    - Since our welcome screen includes menus that are customized based off the licensing, we can’t display it until we’ve received a license.
    - We can’t hide the splash until our welcome screen is displayed.

    By listing our dependencies, we start to see the natural ordering that must occur for the tasks to be processed correctly. The second step in Task Decomposition is determining the dependencies between tasks, and ordering tasks based on their dependencies. Looking at these tasks, and looking at all the dependencies, we quickly see that even a simple decomposition such as this one can get quite complicated.
    In order to simplify the problem of defining the dependencies, it’s often a useful practice to group our tasks into larger, discrete tasks.  The goal when grouping tasks is that you want to make each task “group” have as few dependencies as possible to other tasks or groups, and then work out the dependencies within that group.  Typically, this works best when any external dependency is based on the “last” task within the group when it’s ordered, although that is not a firm requirement.  This process is often called Grouping Tasks.  In our case, we can easily group together tasks, effectively turning this into four discrete task groups:

    1. Show our splash screen – This needs to be left as its own task.  First, multiple things depend on this task, mainly because we want this to start before any other action, and start as quickly as possible.

    2. Check for Update and Download the Update if it Exists – These two tasks logically group together.  We know we only download an update if the update exists, so that naturally follows.  This task has one dependency as an input, and other tasks only rely on the final task within this group.

    3. Request a License, and then Setup the Menus – Here, we can group these two tasks together.  Although we mentioned that our welcome screen depends on the license returned, it also depends on setting up the menu, which is the final task here.  Setting up our menus cannot happen until after our license is requested.  By grouping these together, we further reduce our problem space.

    4. Display welcome and hide splash – Finally, we can display our welcome window and hide our splash screen.  This task group depends on all three previous task groups – it cannot happen until all three of the previous groups have completed.

    By grouping the tasks together, we reduce our problem space, and can naturally see a pattern for how this process can be parallelized.  The diagram below shows one approach: the orange boxes show each task group, with each task represented within.  We can, now, effectively take these tasks, and run a large portion of this process in parallel, including the portions which may be the most time consuming.  We’ve now created two parallel paths which our process execution can follow, hopefully speeding up the application startup time dramatically.

    The main point to remember here is that, when decomposing your problem space by tasks, you need to:

    1. Define each discrete action as an individual Task
    2. Discover dependencies between your tasks
    3. Group tasks based on their dependencies
    4. Order the tasks and groups of tasks
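
    A minimal sketch of how these four groups could be wired up with the Task Parallel Library (the helper method names are invented placeholders, and console messages stand in for UI work): each group becomes a Task, dependencies are expressed with ContinueWith/WhenAll, and the two middle groups run in parallel.

        using System;
        using System.Threading.Tasks;

        class StartupSketch
        {
            static void Main()
            {
                // Group 1: splash screen, started first.
                Task splash = Task.Run(() => Console.WriteLine("Splash shown"));

                // Group 2: check for an update, then download it if one exists.
                Task<bool> update = splash.ContinueWith(_ =>
                {
                    bool available = CheckForUpdate();
                    if (available) DownloadUpdate();
                    return available;
                });

                // Group 3: request a license, then build the menus from it.
                Task license = splash.ContinueWith(_ =>
                {
                    string lic = RequestLicense();
                    SetupMenus(lic);
                });

                // Group 4: the welcome window needs groups 1-3; hiding the splash needs the welcome window.
                Task welcome = Task.WhenAll(splash, update, license)
                                   .ContinueWith(_ => Console.WriteLine("Welcome window shown"));
                welcome.ContinueWith(_ => Console.WriteLine("Splash hidden")).Wait();
            }

            // Placeholder implementations, invented for the sketch.
            static bool CheckForUpdate() => false;
            static void DownloadUpdate() { }
            static string RequestLicense() => "standard";
            static void SetupMenus(string license) { }
        }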

    Read the article

  • Map Generation Algorithms for Minecraft Clone

    - by Danjen
    I'm making a Minecraft clone for the sake of it (with some inspiration from Dwarf Fortress) and had a few questions about the way the world generation is handled. Things I want it to cover:

    - Biomes such as hills, mountains, forests, etc.
    - Caves/caverns/tunnels
    - Procedural (so it stretches to infinity... is wrap-around a possibility?)
    - Breaking the map into smaller chunks
    - Moddable (i.e., new terrain types)
    - Multiplayer compatible

    In particular, I've seen things such as Perlin Noise, Heightmaps, and Marching Cubes thrown around. These are like different tools to use, but I don't know when or why I would use them. Are there any other techniques that are useful for map generation? I realize this is borderline subjective and open-ended, but I am looking for some more insight into the processes involved.
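
    As a small illustration of how "infinite", chunked, multiplayer-friendly generation usually hangs together (a sketch, not how Minecraft itself does it): derive every chunk deterministically from a world seed plus the chunk coordinates, so any client or server that knows the seed can regenerate any chunk on demand without storing it. Here a tiny hash-based value noise stands in for Perlin noise to build a heightmap for a 16x16 chunk.

        using System;

        class ChunkGenSketch
        {
            const int ChunkSize = 16;

            // Deterministic pseudo-random value in [0, 1) for an integer coordinate and seed.
            static double Hash(long seed, long x, long z)
            {
                ulong h = (ulong)seed * 0x9E3779B97F4A7C15UL ^ (ulong)x * 0xC2B2AE3D27D4EB4FUL ^ (ulong)z * 0x165667B19E3779F9UL;
                h ^= h >> 29; h *= 0xBF58476D1CE4E5B9UL; h ^= h >> 32;
                return (h & 0xFFFFFFFF) / (double)0x100000000;
            }

            // Smoothly interpolated "value noise" sampled on a coarse lattice (spacing = 8 blocks).
            static double Noise(long seed, double x, double z)
            {
                long x0 = (long)Math.Floor(x / 8), z0 = (long)Math.Floor(z / 8);
                double tx = x / 8 - x0, tz = z / 8 - z0;
                double Lerp(double a, double b, double t) => a + (b - a) * (t * t * (3 - 2 * t));
                double top = Lerp(Hash(seed, x0, z0), Hash(seed, x0 + 1, z0), tx);
                double bottom = Lerp(Hash(seed, x0, z0 + 1), Hash(seed, x0 + 1, z0 + 1), tx);
                return Lerp(top, bottom, tz);
            }

            // Heights for the chunk at chunk coordinates (cx, cz); identical on every machine
            // that uses the same seed, which is what makes chunked/multiplayer generation work.
            static int[,] GenerateChunk(long seed, long cx, long cz)
            {
                var heights = new int[ChunkSize, ChunkSize];
                for (int x = 0; x < ChunkSize; x++)
                    for (int z = 0; z < ChunkSize; z++)
                    {
                        double wx = cx * ChunkSize + x, wz = cz * ChunkSize + z;
                        heights[x, z] = 60 + (int)(20 * Noise(seed, wx, wz)); // base level 60, amplitude 20
                    }
                return heights;
            }

            static void Main()
            {
                var chunk = GenerateChunk(seed: 12345, cx: 0, cz: 0);
                Console.WriteLine($"height at (0,0): {chunk[0, 0]}");
            }
        }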

    Read the article

  • Recommened design pattern to handle multiple compression algorithms for a class hierarchy

    - by sgorozco
    For all you OOD experts. What would be the recommended way to model the following scenario? I have a certain class hierarchy similar to the following one:

        class Base { ... }
        class Derived1 : Base { ... }
        class Derived2 : Base { ... }
        ...

    Next, I would like to implement different compression/decompression engines for this hierarchy. (I already have code for several strategies that best handle different cases, like file compression, network stream compression, legacy system compression, etc.) I would like the compression strategy to be pluggable and chosen at runtime, however I'm not sure how to handle the class hierarchy. Currently I have a tightly-coupled design that looks like this:

        interface ICompressor
        {
            byte[] Compress(Base instance);
        }

        class Strategy1Compressor : ICompressor
        {
            byte[] Compress(Base instance)
            {
                // Common compression guts for Base class
                ...
                if( instance is Derived1 )
                {
                    // Compression guts for Derived1 class
                }
                if( instance is Derived2 )
                {
                    // Compression guts for Derived2 class
                }
                // Additional compression logic to handle other class derivations
                ...
            }
        }

    As it is, whenever I add a new derived class inheriting from Base, I would have to modify all compression strategies to take into account this new class. Is there a design pattern that allows me to decouple this, and allow me to easily introduce more classes to the Base hierarchy and/or additional compression strategies?
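
    One direction worth considering (a sketch of the Visitor idea, with hypothetical member names): give the hierarchy a single Accept hook and let each compressor implement one method per concrete type. Adding a compression strategy is then just a new visitor; the trade-off is that adding a new derived class still touches every visitor, which is the classic Visitor tension. If new derived classes are the more frequent change, a registry mapping runtime types to small per-type serializers, composed by each strategy, inverts that trade-off.

        using System;

        // The hierarchy exposes one double-dispatch hook.
        abstract class Base
        {
            public abstract byte[] Accept(ICompressorVisitor visitor);
        }

        class Derived1 : Base
        {
            public int Value;
            public override byte[] Accept(ICompressorVisitor visitor) => visitor.Compress(this);
        }

        class Derived2 : Base
        {
            public string Name = "";
            public override byte[] Accept(ICompressorVisitor visitor) => visitor.Compress(this);
        }

        // One method per concrete type; each compression strategy implements this interface.
        interface ICompressorVisitor
        {
            byte[] Compress(Derived1 instance);
            byte[] Compress(Derived2 instance);
        }

        // A stand-in "strategy": real ones would do file/network/legacy-specific work.
        class FileCompressor : ICompressorVisitor
        {
            public byte[] Compress(Derived1 instance) => BitConverter.GetBytes(instance.Value);
            public byte[] Compress(Derived2 instance) => System.Text.Encoding.UTF8.GetBytes(instance.Name);
        }

        class Demo
        {
            static void Main()
            {
                Base obj = new Derived1 { Value = 42 };
                ICompressorVisitor strategy = new FileCompressor(); // chosen at runtime
                Console.WriteLine(obj.Accept(strategy).Length);     // 4
            }
        }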

    Read the article

  • most efficient AABB vs Ray collision algorithms

    - by Asher Einhorn
    Is there a known 'most efficient' algorithm for AABB vs Ray collision detection? I recently stumbled across Arvo's AABB vs Sphere collision algorithm, and I am wondering if there is a similarly noteworthy algorithm for this. One must-have condition for this algorithm is that I need to have the option of querying the result for the distance from the ray's origin to the point of collision. Having said this, if there is another, faster algorithm which does not return distance, then in addition to posting one that does, also posting that algorithm would be very helpful indeed. Please also state what the function's return argument is, and how you use it to return distance or a 'no-collision' case. For example, does it have an out parameter for the distance as well as a bool return value? Or does it simply return a float with the distance, vs a value of -1 for no collision? (For those that don't know: AABB = Axis Aligned Bounding Box)
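
    The usual answer here is the "slab" test: intersect the ray with the two parallel planes of the box on each axis and keep the running overlap of the entry/exit intervals. A minimal sketch, returning a bool plus the entry distance through an out parameter as discussed above (using the divide-by-direction form; production code usually precomputes 1/direction and handles zero components explicitly):

        using System;

        class RayAabbSketch
        {
            // Returns true if the ray origin + t*dir (t >= 0) hits the box; distance is the entry t.
            static bool RayIntersectsAabb(
                (double X, double Y, double Z) origin,
                (double X, double Y, double Z) dir,
                (double X, double Y, double Z) boxMin,
                (double X, double Y, double Z) boxMax,
                out double distance)
            {
                double tMin = 0.0;                 // start of the overlap interval (clamped to the ray start)
                double tMax = double.MaxValue;     // end of the overlap interval

                double[] o = { origin.X, origin.Y, origin.Z };
                double[] d = { dir.X, dir.Y, dir.Z };
                double[] lo = { boxMin.X, boxMin.Y, boxMin.Z };
                double[] hi = { boxMax.X, boxMax.Y, boxMax.Z };

                for (int axis = 0; axis < 3; axis++)
                {
                    // Entry/exit distances for the two planes (the "slab") on this axis.
                    double t1 = (lo[axis] - o[axis]) / d[axis];
                    double t2 = (hi[axis] - o[axis]) / d[axis];
                    if (t1 > t2) (t1, t2) = (t2, t1);

                    tMin = Math.Max(tMin, t1);
                    tMax = Math.Min(tMax, t2);
                    if (tMin > tMax) { distance = -1; return false; } // slabs no longer overlap
                }

                distance = tMin;
                return true;
            }

            static void Main()
            {
                bool hit = RayIntersectsAabb((0, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1), out double t);
                Console.WriteLine($"{hit} at distance {t}"); // True at distance 4
            }
        }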

    Read the article

  • Why Are the Search Algorithms Still So Stupid?

    Anyway, it was back in '96 that I first tried Google (except I think they called it something else back then). It was as if the clouds broke open and the sun shone through. The results were stunningly similar to the object of my query. Prior to Google, you had these directories, the biggest among them Yahoo!

    Read the article
