Search Results

Search found 6559 results on 263 pages for 'parallel foreach'.

Page 12 of 263

  • C#: LINQ vs foreach - Round 1.

    - by James Michael Hare
    So I was reading Peter Kellner's blog entry on Resharper 5.0 and its LINQ refactoring and thought that was very cool. But that raised a point I had always been curious about in my head -- which is a better choice: manual foreach loops or LINQ?

    The answer is not really clear-cut. There are two sides to any code cost argument: performance and maintainability. The first of these is obvious and quantifiable. Given any two pieces of code that perform the same function, you can run them side-by-side and see which piece of code performs better. Unfortunately, this is not always a good measure. Well-written assembly language outperforms well-written C++ code, but you lose a lot in maintainability, which creates a big technical debt load that is hard to offset as the application ages. In contrast, higher-level constructs make the code more brief and easier to understand, hence reducing technical cost.

    Now, obviously in this case we're not talking two separate languages; we're comparing doing something manually in the language versus using a higher-order set of IEnumerable extensions that are in the System.Linq library.

    Well, before we discuss any further, let's look at some sample code and the numbers. First, let's take a look at the foreach loop and the LINQ expression. This is just a simple find comparison:

        // find implemented via LINQ
        public static bool FindViaLinq(IEnumerable<int> list, int target)
        {
            return list.Any(item => item == target);
        }

        // find implemented via standard iteration
        public static bool FindViaIteration(IEnumerable<int> list, int target)
        {
            foreach (var i in list)
            {
                if (i == target)
                {
                    return true;
                }
            }

            return false;
        }

    Okay, looking at this from a maintainability point of view, the LINQ expression is definitely more concise (8 lines down to 1) and is very readable in intention. You don't have to actually analyze the behavior of the loop to determine what it's doing.

    So let's take a look at performance metrics from 100,000 iterations of these methods on a List<int> of varying sizes filled with random data. For this test, we fill a target array with 100,000 random integers and then run the exact same pseudo-random targets through both searches.

        List<T> On 100,000 Iterations
        Method      Size     Total (ms)  Per Iteration (ms)  % Slower
        Any         10       26          0.00046             30.00%
        Iteration   10       20          0.00023             -
        Any         100      116         0.00201             18.37%
        Iteration   100      98          0.00118             -
        Any         1000     1058        0.01853             16.78%
        Iteration   1000     906         0.01155             -
        Any         10,000   10,383      0.18189             17.41%
        Iteration   10,000   8843        0.11362             -
        Any         100,000  104,004     1.8297              18.27%
        Iteration   100,000  87,941      1.13163             -

    The LINQ expression is running about 17% slower for average-size collections and worse for smaller collections. Presumably, this is due to the overhead of the state machine used to track the iterators for the yield returns in the LINQ expressions, which seems about right in a tight loop such as this.

    So what about other LINQ expressions? After all, Any() is one of the more trivial ones. I decided to try the TakeWhile() algorithm, using a Count() to get the position stopped, like the sample Pete was using in his blog that Resharper refactored for him into LINQ:

        // Linq form
        public static int GetTargetPosition1(IEnumerable<int> list, int target)
        {
            return list.TakeWhile(item => item != target).Count();
        }

        // traditionally iterative form
        public static int GetTargetPosition2(IEnumerable<int> list, int target)
        {
            int count = 0;

            foreach (var i in list)
            {
                if (i == target)
                {
                    break;
                }

                ++count;
            }

            return count;
        }

    Once again, the LINQ expression is much shorter, easier to read, and should be easier to maintain over time, reducing the cost of technical debt. So I ran these through the same test data:

        List<T> On 100,000 Iterations
        Method      Size     Total (ms)  Per Iteration (ms)  % Slower
        TakeWhile   10       41          0.00041             128%
        Iteration   10       18          0.00018             -
        TakeWhile   100      171         0.00171             88%
        Iteration   100      91          0.00091             -
        TakeWhile   1000     1604        0.01604             94%
        Iteration   1000     825         0.00825             -
        TakeWhile   10,000   15765       0.15765             92%
        Iteration   10,000   8204        0.08204             -
        TakeWhile   100,000  156950      1.5695              92%
        Iteration   100,000  81635       0.81635             -

    Wow! I expected some overhead due to the state machines iterators produce, but 90% slower? That seems a little heavy to me. So then I thought, well, what if TakeWhile() is not the right tool for the job? The problem is TakeWhile() returns each item for processing using yield return, whereas our foreach loop really doesn't care about the item beyond using it as a stop condition to evaluate. So what if that back and forth with the iterator state machine is the problem? Well, we can quickly create an (albeit ugly) lambda that uses Any() along with a count in a closure (if a LINQ guru knows a better way PLEASE let me know!). After all, this is more consistent with what we're trying to do: we're trying to find the first occurrence of an item and halt once we find it, we just happen to be counting on the way. This mostly matches Any().

        // a new method that uses linq but evaluates the count in a closure.
        public static int TakeWhileViaLinq2(IEnumerable<int> list, int target)
        {
            int count = 0;
            list.Any(item =>
                {
                    if (item == target)
                    {
                        return true;
                    }

                    ++count;
                    return false;
                });
            return count;
        }

    Now how does this one compare?

        List<T> On 100,000 Iterations
        Method         Size     Total (ms)  Per Iteration (ms)  % Slower
        TakeWhile      10       41          0.00041             128%
        Any w/Closure  10       23          0.00023             28%
        Iteration      10       18          0.00018             -
        TakeWhile      100      171         0.00171             88%
        Any w/Closure  100      116         0.00116             27%
        Iteration      100      91          0.00091             -
        TakeWhile      1000     1604        0.01604             94%
        Any w/Closure  1000     1101        0.01101             33%
        Iteration      1000     825         0.00825             -
        TakeWhile      10,000   15765       0.15765             92%
        Any w/Closure  10,000   10802       0.10802             32%
        Iteration      10,000   8204        0.08204             -
        TakeWhile      100,000  156950      1.5695              92%
        Any w/Closure  100,000  108378      1.08378             33%
        Iteration      100,000  81635       0.81635             -

    Much better! It seems that the overhead of TakeWhile() returning each item and updating the state in the state machine is drastically reduced by using Any(), since Any() simply iterates forward until it finds the value we're looking for -- which is all this particular task needs.

    So the lesson there is: make sure when you use a LINQ expression you're choosing the best expression for the job, because if you're doing more work than you really need, you'll have a slower algorithm. But this is true of any choice of algorithm or collection in general.

    Even with the Any() with the count in the closure it is still about 30% slower, but let's consider that angle carefully. For a list of 100,000 items, it was the difference between roughly 1.01 ms and 0.82 ms per iteration in a List<T>. That's really not that bad at all in the grand scheme of things. Even running at 90% slower with TakeWhile(), for the vast majority of my projects, an extra millisecond to save potential errors in the long term and improve maintainability is a small price to pay. And if your typical list is 1000 items or less, we're talking only microseconds worth of difference.

    It's like they say: 90% of your performance bottlenecks are in 2% of your code, so over-optimizing almost never pays off. So personally, I'll take the LINQ expression wherever I can, because it will be easier to read and maintain (thus reducing technical debt) and I can rely on Microsoft's developers to have coded and unit tested those algorithms fully for me instead of relying on a developer to code the loop logic correctly.

    If something's 90% slower, yes, it's worth keeping in mind, but it's really not until you start getting orders of magnitude slower (10x, 100x, 1000x) that alarm bells should really go off. And if I ever do need that last millisecond of performance? Well then I'll optimize JUST THAT problem spot. To me it's worth it for the readability, speed-to-market, and maintainability.
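    For readers who want to reproduce numbers of this kind, here is a minimal sketch of such a timing harness (the list size, seed, and class name are assumptions; this is not the author's actual test code):

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;

        class FindBenchmarkSketch
        {
            const int Iterations = 100000;

            static void Main()
            {
                var random = new Random(12345);
                // one list to search and one array of pseudo-random targets,
                // reused unchanged for both methods
                List<int> list = Enumerable.Range(0, 10000).Select(_ => random.Next(20000)).ToList();
                int[] targets = Enumerable.Range(0, Iterations).Select(_ => random.Next(20000)).ToArray();

                int foundByLinq = 0;
                var watch = Stopwatch.StartNew();
                foreach (int target in targets)
                {
                    if (list.Any(item => item == target)) foundByLinq++;
                }
                watch.Stop();
                Console.WriteLine("Any():     {0} ms total ({1} hits)", watch.ElapsedMilliseconds, foundByLinq);

                int foundByLoop = 0;
                watch = Stopwatch.StartNew();
                foreach (int target in targets)
                {
                    foreach (int i in list)
                    {
                        if (i == target) { foundByLoop++; break; }
                    }
                }
                watch.Stop();
                Console.WriteLine("Iteration: {0} ms total ({1} hits)", watch.ElapsedMilliseconds, foundByLoop);
            }
        }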

    Read the article

  • How can I create a horizontal table in a single foreach loop in MVC?

    - by GenericTypeTea
    Is there any way, in ASP.NET MVC, to condense the following code to a single foreach loop?

        <table class="table">
            <tr>
                <td>Name</td>
                <% foreach (var item in Model) { %>
                    <td><%= item.Name %></td>
                <% } %>
            </tr>
            <tr>
                <td>Item</td>
                <% foreach (var item in Model) { %>
                    <td><%= item.Company %></td>
                <% } %>
            </tr>
        </table>

    Where the model is a simple object:

        public class SomeObject
        {
            public virtual string Name { get; set; }
            public virtual string Company { get; set; }
        }

    This would output a table as follows:

        Name    | Bob     | Sam     | Bill | Steve |
        Company | Builder | Fireman | MS   | Apple |

    I know I could probably use an extension method to write out each row, but is it possible to build all rows using a single iteration over the model? This is a follow-on from this question, as I'm unhappy with my accepted answer and cannot believe I've provided the best solution.
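    One way to get down to a single pass is to build both rows' cells in one loop and emit them afterwards. A rough sketch of the idea (not the accepted answer from the linked thread; the StringBuilder names are mine):

        <table class="table">
            <%
                var nameCells = new System.Text.StringBuilder();
                var companyCells = new System.Text.StringBuilder();
                // single iteration over the model, accumulating both rows
                foreach (var item in Model)
                {
                    nameCells.Append("<td>" + Html.Encode(item.Name) + "</td>");
                    companyCells.Append("<td>" + Html.Encode(item.Company) + "</td>");
                }
            %>
            <tr><td>Name</td><%= nameCells %></tr>
            <tr><td>Company</td><%= companyCells %></tr>
        </table>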

    Read the article

  • In ParallelPython, a method of an object ( object.func() ) fails to manipulate a variable of an object ( object.value )

    - by mehmet.ali.anil
    With Parallel Python, I am trying to convert my old serial code to parallel. It heavily relies on objects that have methods that change that object's variables. A stripped example in which I omit the syntax in favor of simplicity:

        class Network:
            self.adjacency_matrix = [ ... ]
            self.state = [ ... ]
            self.equilibria = [ ... ]
            ...
            def populate_equilibria(self):
                # this function takes every possible value that self.state can be in,
                # runs the boolean dynamical system,
                # and writes an integer within self.equilibria for each self.state
                # doesn't return anything

    I call this method as:

        j1 = jobserver.submit(net2.populate_equilibria, (), (), ("numpy as num"))

    The job is submitted, and I know that a long computation takes place, so I assume my code is run. The problem is, I am new to Parallel Python. I was expecting that, when the method is called, the variable net2.equilibria would be written accordingly, and I would get a revised object (net2). That is how my code works: independent objects with methods that act upon the object's variables. Rather, though the computation is apparent, and reasonably timed, the variable net2.equilibria remains unchanged. It is as if PP only takes the function and the object, computes it elsewhere, but never returns the object, so I am left with the old one. What am I missing? Thanks in advance.

    Read the article

  • Would this method work to scale out SQL queries?

    - by David
    I have a database containing a single huge table. At the moment a query can take anything from 10 to 20 minutes and I need that to go down to 10 seconds. I have spent months trying different products like GridSQL. GridSQL works fine, but it uses its own parser, which does not have all the needed features. I have also optimized my database in various ways without getting the speedup I need. I have a theory on how one could scale out queries, meaning that I utilize several nodes to run a single query in parallel. The idea is to take an incoming SQL query and simply run it, exactly as it is, on all the nodes. When the results are returned to a coordinator node, the same query is run on the union of the result sets. I realize that an aggregate function like average needs to be rewritten into a count and a sum for the nodes, and that the coordinator divides the sum of the sums by the sum of the counts to get the average. What kinds of problems could not easily be solved using this model? I believe one issue would be the count distinct function. Edit: I am getting so many nice suggestions, but none have addressed the method.
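    For reference, the AVG recombination step described above amounts to something like the following coordinator-side sketch (the class name, node values, and layout are made up purely for illustration):

        using System;
        using System.Linq;

        class AvgRecombinationSketch
        {
            static void Main()
            {
                // each node returns its partial SUM and COUNT for the same query
                var partials = new[]
                {
                    new { Sum = 1250.0, Count = 100L },   // node 1
                    new { Sum =  980.0, Count =  80L },   // node 2
                };

                // the coordinator divides the sum of the sums by the sum of the counts
                double globalAverage = partials.Sum(p => p.Sum) / partials.Sum(p => p.Count);
                Console.WriteLine(globalAverage);
            }
        }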

    Read the article

  • Parallel Computing Platform Developer Lab

    This is an exciting announcement that I must share: "Microsoft Developer & Platform Evangelism, in collaboration with the Microsoft Parallel Computing Platform product team, is hosting a developer lab at the Platform Adoption Center on April 12-15, 2010.  This event is for Microsoft Partners and Customers seeking to incorporate either .NET Framework 4 or Visual C++ 2010 parallelism features into their new or existing applications, and to gain expertise with new Visual Studio 2010 tools..."

    Read the article

  • White Paper on Parallel Processing

    - by Andrew Kelly
    I just ran across what I think is a newly published white paper on parallel processing in SQL Server 2008 R2. The date is October 2010, but this is the first time I have seen it, so I am not 100% sure how new it really is. Maybe you have seen it already, but if not I recommend taking a look. So far I haven't had time to read it extensively, but from a cursory look it appears to be quite informative, and this is one of the areas least understood by a lot of DBAs. It was authored by Don Pinto...(read more)

    Read the article

  • BPM Parallel Multi Instance sub processes by Niall Commiskey

    - by JuergenKress
    Here is a very simple scenario: An order with lines is processed. The OrderProcess accepts in an order with its attendant lines. The Fulfillment process is called for each order line. We do not have many order lines, and the processing is simple, so we run this in parallel. Let's look at the definition of the Multi Instance sub-process - Read the full article here.

    Read the article

  • Looking for parallel programming problem

    - by Chris Lieb
    I am trying to come up with a problem that is easily solvable in a parallel manner and that requires communication between threads, for a test. I also am trying to avoid problems that require random waits, which rules out dining philosophers and producer-consumer (bounded buffer), two of the classics. My goal is for the student to be able to write the program in less than 20-30 minutes in front of a computer, not knowing of the problem beforehand. (This is to prevent preparation more than to come up with something novel.) I am trying to stress the communication aspect of the program, though the multi-threaded nature is also important. Does anyone have some ideas? Edit: I'm using Google Go for the language and testing comprehension of the goroutines/channels combo vs an actors library that I authored.

    Read the article

  • Optimum Number of Parallel Processes

    - by System Down
    I just finished coding a (basic) ray tracer in C# for fun and for the learning experience. Now I want to further that learning experience. It seems to me that ray tracing is a prime candidate for parallel processing, which is something I have very little experience in. My question is this: how do I know the optimum number of concurrent processes to run? My first instinct tells me: it depends on how many cores my processor has, but like I said I'm new to this and I may be neglecting something.
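    A common starting point (a sketch under the assumption that the work is CPU-bound and that a row of pixels is a sensible unit of work, not a definitive answer) is one worker per logical core, which the TPL exposes directly:

        using System;
        using System.Threading.Tasks;

        class RenderSketch
        {
            static void Main()
            {
                // Environment.ProcessorCount reports the number of logical cores;
                // for CPU-bound work like ray tracing it is a reasonable default,
                // then measure and adjust from there.
                int workers = Environment.ProcessorCount;

                var options = new ParallelOptions { MaxDegreeOfParallelism = workers };
                Parallel.For(0, 1080, options, scanline =>
                {
                    // TraceScanline(scanline);  // hypothetical per-scanline work
                });
            }
        }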

    Read the article

  • Parallel Threading in Multi-Language Software?

    - by Smarty Twiti
    I'm developing a piece of software that contains many modules/daemons running in a parallel manner, and what I'm looking for is how to implement that. I cannot simply use threads, because some of those modules/daemons may be implemented in other languages (C, Java, C#...). For example, I'm using C for hooking messages exchanged between the Windows kernel and top-level applications, and Java/C# to use some free library to simply parse XML (for example) or to accept and execute commands over the network. This could be done in C, but just to improve productivity... Finally, for the GUI I'm using Ultimate++ (C++), which acts as the main process that calls and monitors (activate/deactivate/get state) all the other modules/daemons through an interface. I admit that developing each module/daemon in a separate language greatly facilitates maintenance, but above all I am obliged to do it this way. What is the best-practice way to do this? All help will be appreciated.
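    One common pattern for this kind of setup is to run each foreign-language module as its own OS process and have the controlling process start, watch, and stop it. A sketch only (the module name and arguments are invented, C# is used just for illustration, and the same idea applies from the C++/Ultimate++ side):

        using System;
        using System.Diagnostics;

        class ModuleControllerSketch
        {
            static void Main()
            {
                var daemon = new Process
                {
                    StartInfo = new ProcessStartInfo
                    {
                        FileName = "java",
                        Arguments = "-jar XmlCommandDaemon.jar",   // hypothetical Java module
                        UseShellExecute = false
                    },
                    EnableRaisingEvents = true
                };
                daemon.Exited += (s, e) => Console.WriteLine("module stopped - restart or flag it");

                daemon.Start();        // activate
                // ... exchange commands over a socket, named pipe, or stdin/stdout ...
                daemon.Kill();         // deactivate
            }
        }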

    Read the article

  • Why doesn't my Perl code work when I put it in a foreach loop?

    - by foxhop
    This code outputs the scalars in the row array properly:

        $line = "This is my favorite test";
        @row = split(/ /, $line);
        print $row[0];
        print $row[1];

    The same code inside a foreach loop doesn't print any scalar values:

        foreach $line (@lines) {
            @row = split(/ /, $line);
            print $row[0];
            print $row[1];
        }

    What could cause this to happen? I am new to Perl, coming from Python. I need to learn Perl for my new position.

    Read the article

  • Does a lambda in List.ForEach lead to memory leaks and performance problems?

    - by Monomachus
    I have a problem which I could solve using something like this:

        sortedElements.ForEach((XElement el) => PrintXElementName(el, i++));

    This means that in ForEach I have a lambda which permits using parameters like int i. I like that way of doing it, but I read somewhere that anonymous methods and delegates with lambdas lead to a lot of memory leaks, because each time the lambda is executed something is instantiated but is not released. Something like that. Could you please tell me if this is true in this situation, and if it is, why?
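    For what it's worth, a closure over a local such as i compiles down to roughly the following (a simplified sketch, not the exact compiler output): the local is lifted into a small heap-allocated class, the lambda body becomes a method on it, and that object is garbage collected like any other, so there is no leak, only a small allocation per closure created.

        using System;
        using System.Xml.Linq;

        // rough stand-in for what the compiler generates for the captured local i
        class CapturedState
        {
            public int i;

            public void Invoke(XElement el)
            {
                // stands in for PrintXElementName(el, i++) from the question
                Console.WriteLine(el.Name + " " + i++);
            }
        }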

    Read the article

  • Perl, Array called by scalar doesn't work in foreach loop.

    - by foxhop
    This code outputs the scalars in the row array properly:

        $line = "This is my favorite test";
        @row = split(/ /, $line);
        print $row[0];
        print $row[1];

    The same code inside a foreach loop doesn't print any scalar values:

        foreach $line (@lines) {
            @row = split(/ /, $line);
            print $row[0];
            print $row[1];
        }

    What could cause this to happen? I am new to Perl, coming from Python. I need to learn Perl for my new position.

    Read the article

  • How to Create an Array from a Single Variable and Form Multiple Foreach Loops?

    - by matphoto
    I'm fairly proficient in HTML/CSS, but very new when it comes to the likes of PHP and JavaScript. I've jumped headfirst into coding WordPress shortcodes (basically PHP functions), and so far, through trial and error and seemingly endless browser refreshes, I've been able to figure everything out. I just hit a huge wall though, hence why I'm here. Basically, I'm trying to give an attribute a list of values like:

        attr="23, 95, 136, etc"

    The function then needs to take that attribute variable and create an array with it:

        $arr = array($attr);

    To me that seems as if it would work, but the array takes the whole list as one value instead. After doing that, I want to create a foreach loop that parses each number from the list, possibly through yet another foreach loop if possible, and returns a section of code for each one, and I'm not quite sure how to pull that off either. Any feedback would be greatly appreciated.

    Read the article

  • Consecutive versus Parallel NUnit Testing

    - by Jacobm001
    My office has roughly ~300 webpages that should be tested on a fairly regular basis. I'm working with NUnit, Selenium, and C# in Visual Studio 2010. I used this framework as a basis, and I do have a few working tests. The problem I'm running into is that when I run the entire suite, a random test (or tests) will fail in each run. If they're run individually, they will all pass. My guess is that NUnit is trying to run all 7 tests at the same time and the browser can't support this, for obvious reasons. Watching the browser visually, this does seem to be the case. Looking at the screenshot below, I need to figure out a way in which the tests under Index_Tests are run sequentially, not in parallel.

    Errors:

        Selenium2.OfficeClass.Tests.Index_Tests.index_4:
            OpenQA.Selenium.NoSuchElementException : Unable to locate element: {"method":"id","selector":"textSelectorName"}
        Selenium2.OfficeClass.Tests.Index_Tests.index_7:
            OpenQA.Selenium.NoSuchElementException : Unable to locate element: {"method":"id","selector":"textSelectorName"}

    Example with one test:

        using OpenQA.Selenium;
        using NUnit.Framework;

        namespace Selenium2.OfficeClass.Tests
        {
            [TestFixture]
            public class Index_Tests : TestBase
            {
                public IWebDriver driver;

                [TestFixtureSetUp]
                public void TestFixtureSetUp()
                {
                    driver = StartBrowser();
                }

                [TestFixtureTearDown]
                public void TestFixtureTearDown()
                {
                    driver.Quit();
                }

                [Test]
                public void index_1()
                {
                    OfficeClass index = new OfficeClass(driver);
                    index.Navigate("http://url_goeshere");
                    index.SendKeyID("txtFiscalYear", "input");
                    index.SendKeyID("txtIndex", "");
                    index.SendKeyID("txtActivity", "input");
                    index.ClickID("btnDisplay");
                }
            }
        }

    Read the article

  • C++ Parallel Asynchonous task

    - by Doodlemeat
    I am currently building a randomly generated terrain game where terrain is created automatically around the player. I am experiencing lag when the generation process is active, as I am running quite heavy tasks with post-processing and creating physics bodies. Then it came to mind to use a parallel asynchronous task to do the post-processing for me. But I have no idea how I am going to do that. I have searched for C++ std::async but I believe that is not what I want. In the examples I found, a task returned something. I want the task to change objects in the main program. This is what I want:

        // Main program
        // Chunks that need to be processed.
        // NOTE! These chunks are already generated, but need post-processing only!
        std::vector<Chunk*> unprocessedChunks;

    And then my task could look something like this, running like a loop constantly checking if there are chunks to process:

        // Asynced task
        if (unprocessedChunks.size() > 0)
        {
            processChunk(unprocessedChunks.pop());
        }

    I know it's not as easy as I wrote it, but it would be a huge help for me if you could push me in the right direction. In Java, I could type something like this:

        asynced_task = startAsyncTask(new PostProcessTask());

    And that task would run until I do this:

        asynced_task.cancel();

    Read the article

  • Is it possible to distribute STDIN over parallel processes?

    - by Erik
    Given the following example input on STDIN: foo bar bar baz === qux bla === def zzz yyy Is it possible to split it on the delimiter (in this case '===') and feed it over stdin to a command running in parallel? So the example input above would result in 3 parallel processes (for example a command called do.sh) where each instance received a part of the data on STDIN, like this: do.sh (instance 1) receives this over STDIN: foo bar bar baz do.sh (instance 2) receives this over STDIN: qux bla do.sh (instance 3) receives this over STDIN: def zzz yyy I suppose something like this is possible using xargs or GNU parallel, but I do not know how.

    Read the article

  • Where has my parallel port gone? ioperm(888,1,1) returns -1.

    - by marcusw
    I have an old Dell Dimension 8200 running Gentoo which I use solely to control various things using the parallel port. After shutting it down a few weeks ago, I started it up again today and tried to access the parallel port like I usually do. Unfortunately, my code bombed out when it tried to call ioperm(888,1,1) to grab the parallel port which returned an error code of -1. There have been no changes to the system be it hardware or software, no updates, no tweaking, no dropping the case, no over-amping the data pins, nothing. The port and the software have been working fine for months with no changes, and were working fine when I shut it down last. Running my code with root privileges changes nothing. What is breaking this and how can I fix it?

    Read the article

  • Parallel port no longer accessible even though no changes to system.

    - by marcusw
    I have an old Dell Dimension 8200 running Gentoo which I use solely to control various things using the parallel port. After shutting it down a few weeks ago, I started it up again today and tried to access the parallel port like I usually do. Unfortunately, my code bombed out when it tried to call ioperm(888,1,1) to grab the parallel port which returned an error code of -1. There have been no changes to the system be it hardware or software, no updates, no tweaking, no dropping the case, no over-amping the data pins, nothing. The port and the software have been working fine for months with no changes, and were working fine when I shut it down last. Running my code with root privileges changes nothing. What is breaking this and how can I fix it?

    Read the article

  • Foreach loop with 2d array of objects

    - by Jacob Millward
    I'm using a 2D array of objects to store data about tiles, or "blocks", in my gameworld. I initialise the array, fill it with data and then attempt to invoke the draw method of each object:

        foreach (Block block in blockList)
        {
            block.Draw(spriteBatch);
        }

    I end up with an exception being thrown: "Object reference is not set to an instance of an object". What have I done wrong?

    EDIT: This is the code used to define the array:

        Block[,] blockList;

    Then:

        blockList = new Block[screenRectangle.Width, screenRectangle.Height];

        // Fill with dummy data
        for (int x = 0; x <= screenRectangle.Width / texture.Width; x++)
        {
            for (int y = 0; y <= screenRectangle.Height / texture.Width; y++)
            {
                if (y >= screenRectangle.Height / (texture.Width * 2))
                {
                    blockList[x, y] = new Block(1, new Rectangle(x * 16, y * 16, texture.Width, texture.Height), texture);
                }
                else
                {
                    blockList[x, y] = new Block(0, new Rectangle(x * 16, y * 16, texture.Width, texture.Height), texture);
                }
            }
        }
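    A likely explanation, judging only from the snippet above (so treat this as a guess): the array is allocated with pixel dimensions (screenRectangle.Width x screenRectangle.Height) but only filled up to the tile counts (Width / texture.Width and Height / texture.Width), so most slots are still null when the foreach reaches them. Reusing the question's own identifiers, a null check makes this easy to confirm:

        foreach (Block block in blockList)
        {
            if (block == null)
            {
                continue;   // unfilled slot; sizing the array by tile count instead avoids these
            }

            block.Draw(spriteBatch);
        }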

    Read the article

  • Is there a more elegant way to act on the first and last items in a foreach enumeration than count++

    - by Edward Tanguay
    Is there a more elegant way to act on the first and last items when iterating through a foreach loop than incrementing a separate counter and checking it each time? For instance, the following code outputs:

        >>> [line1], [line2], [line3], [line4] <<<

    which requires knowing when you are acting on the first and last item. Is there a more elegant way to do this now in C# 3 / C# 4? It seems like I could use .Last() or .First() or something like that.

        using System;
        using System.Collections.Generic;
        using System.Text;

        namespace TestForNext29343
        {
            class Program
            {
                static void Main(string[] args)
                {
                    StringBuilder sb = new StringBuilder();
                    List<string> lines = new List<string> { "line1", "line2", "line3", "line4" };
                    int index = 0;
                    foreach (var line in lines)
                    {
                        if (index == 0)
                            sb.Append(">>> ");
                        sb.Append("[" + line + "]");
                        if (index < lines.Count - 1)
                            sb.Append(", ");
                        else
                            sb.Append(" <<<");
                        index++;
                    }
                    Console.WriteLine(sb.ToString());
                    Console.ReadLine();
                }
            }
        }
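    For this particular output, the first/last bookkeeping can be avoided entirely by joining the items. A small sketch using the same sample data (the ToArray() call keeps it working on .NET 3.5 as well as 4):

        using System;
        using System.Collections.Generic;

        class JoinSketch
        {
            static void Main()
            {
                var lines = new List<string> { "line1", "line2", "line3", "line4" };

                // emits: >>> [line1], [line2], [line3], [line4] <<<
                Console.WriteLine(">>> [" + string.Join("], [", lines.ToArray()) + "] <<<");
            }
        }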

    Read the article

  • Foreach is crashing script, but no errors reported.

    - by ILMV
    So I've created this Smarty function to get images from my Flickr photostream using SimplePie... simple really, or so it should be. The problem I'm having is that the foreach will crash the script; this doesn't happen if I put an exit after the closing foreach, but of course because of that the rest of my script doesn't execute. The problem also completely subsides if I remove the foreach. I've tested it and it's not the contents of the foreach, but the loop itself. Error reporting is turned on but I don't get any errors; I also tried messing with the memory_limit, with no luck. Anyone know why this foreach is killing my script? Thanks!

        function smarty_function_flickr ($params, &$smarty)
        {
            require_once('system/library/SimplePie/simplepie.inc');
            require_once('system/library/SimplePie/idn/idna_convert.class.php');

            $flickr = new flickr();

            /**
             * Set up SimplePie with all default values using shorthand syntax.
             */
            $feed = new SimplePie($params['feed'], 'system/library/SimplePie/cache', '600');
            $feed->handle_content_type();

            /**
             * What sizes should we use?
             * Choices: square, thumb, small, medium, large.
             */
            $thumb = 'square';
            $full = 'medium';

            $output = array();
            $counter = 0;

            // If I comment this foreach out the problem subsides,
            // I know it is not the code within the foreach
            foreach ($feed->get_items() as $item)
            {
                $url = $flickr->image_from_description($item->get_description());
                $output[$counter]['title'] = $item->get_title();
                $output[$counter]['image'] = $flickr->select_image($url, $full);
                $output[$counter]['thumb'] = $flickr->select_image($url, $thumb);
                $counter++;
            }

            // Set template variables and template
            $smarty->assign('flickr', $output);
            $smarty->display('forms/'.$params['template'].'.tpl');
        }

    Read the article

  • C# [Mono]: MPAPI vs MPI.NET vs ?

    - by Olexandr
    Hi. I'm working on a college project: I have to develop a distributed computing system. And I decided to do some research to make this task fun :) I've found the MPAPI and MPI.NET libraries. Yes, they are .NET libraries (Mono, in my case). Why .NET? I was choosing between Ada, C++ and C#, and I chose C# because of the lower development time. My goals are: Simplicity; Performance; Cluster computing. So, what to choose - MPAPI or MPI.NET or something else?

    Read the article
