Search Results

Search found 6559 results on 263 pages for 'parallel foreach'.

Page 15/263 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Parallel prologue and epilogue in Grid Engine

    - by ajdecon
    We have a cluster being used to run MPI jobs for a customer. Previously this cluster used Torque as the scheduler, but we are transitioning to Grid Engine 6.2u5 (for some other features). Unfortunately, we are having trouble duplicating some of our maintenance scripts in the Grid Engine environment. In Torque, we have a prologue.parallel script which is used to carry out an automated health-check on the node. If this script returns a fail condition, Torque will helpfully offline the node and re-queue the job to use a different group of nodes. In Grid Engine, however, the queue "prolog" only runs on the head node of the job. We can manually run our prologue script from the startmpi.sh initialization script, for the mpi parallel environment; but I can't figure out how to detect a fail condition and carry out the same "mark offline and requeue" procedure. Any suggestions?
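
    If it helps, a rough sketch (an editor's illustration, not from the post) of a health-checking Grid Engine prolog: Grid Engine treats exit code 99 from a prolog as "reschedule the job", and qmod -d can take the host's queue instances offline (the health-check path is hypothetical):

        #!/bin/sh
        # re-use the health check from the Torque setup (hypothetical path)
        if ! /opt/cluster/healthcheck.sh; then
            qmod -d "*@$(hostname)"   # take this host's queue instances offline
            exit 99                   # 99 tells sge_execd to re-queue the job
        fi
        exit 0

    Note this still only fires on the master node of the job, which is exactly the limitation being asked about for the slave nodes.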

    Read the article

  • Networked Parallel Port in Linux / KVM / QEMU

    - by korkman
    What I want to use is the "-parallel" tcp or udp option from KVM / QEMU, but I can't seem to find any server for this client. I'm not serving a printer but a hardware dongle. I checked ser2net, which does provide "/dev/lp0" sharing, but it doesn't seem to work for KVM / QEMU. I suspect KVM / QEMU requires "/dev/parport0". I did rmmod lp, modprobe ppdev, linked ser2net to parport0, but it didn't work out. Perhaps ser2net is not suited for this. I tried socat as well, and I tried netcat. No success. Does anyone know of any KVM / QEMU compatible parallel port server? Or did any of netcat, socat or ser2net work for you?

    Read the article

  • Using Parallel Extensions with ThreadStatic attribute. Could it leak memory?

    - by the-locster
    I'm using Parallel Extensions fairly heavily and I've just now encountered a case where using thread-local storage might be sensible to allow re-use of objects by worker threads. As such I was looking at the ThreadStatic attribute, which marks a static field/variable as having a unique value per thread. It seems to me that it would be unwise to use PE with the ThreadStatic attribute without any guarantee of thread re-use by PE. That is, if threads are created and destroyed to some degree, would the variables (and thus the objects they point to) remain in thread-local storage for some indeterminate amount of time, thus causing a memory leak? Or perhaps the thread storage is tied to the threads and disposed of when the threads are disposed? But then you still potentially have threads in a pool that are long-lived and that accumulate thread-local storage from the various pieces of code the threads are used for. Is there a better approach to obtaining thread-local storage with PE? Thank you.
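
    For reference, a minimal sketch (not from the post) of the .NET 4 alternative: ThreadLocal<T> in System.Threading gives per-thread values whose lifetime is tied to a disposable instance rather than to the thread, so pooled threads don't accumulate values from unrelated code (Process is a hypothetical consumer):

        using (var buffer = new ThreadLocal<StringBuilder>(() => new StringBuilder(1024)))
        {
            Parallel.For(0, 1000, i =>
            {
                var sb = buffer.Value;        // each worker thread gets its own instance
                sb.Length = 0;                // reset for re-use
                sb.Append("item ").Append(i);
                Process(sb.ToString());       // hypothetical per-item work
            });
        }   // Dispose releases the per-thread values for collection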

    Read the article

  • How does Task Parallel Library scale on a terminal server or in a web application?

    - by Lasse V. Karlsen
    I understand that the TPL uses work-stealing queues for its tasks when I execute things like Parallel.For and similar constructs. If I understand this correctly, the construct will spin up a number of tasks, where each will start processing items. If one of the tasks completes its allotted items, it will start stealing items from the other tasks which haven't yet completed theirs. This solves the problem where items 1-100 are cheap to process and items 101-200 are costly, and one of the two tasks would otherwise just sit idle until the other completed. (I know this is a simplified explanation.) However, how will this scale on a terminal server or in a web application (assuming we use TPL in code that would run in the web app)? Can we risk saturating the CPUs with tasks just because there are N instances of our application running side by side? Is there any information on this topic that I should read? I've yet to find anything in particular, but that doesn't mean there is none.
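
    A hedged sketch of the usual mitigation: cap each request's parallelism with ParallelOptions so that N application instances running side by side don't each try to claim every core (the 2 is illustrative, not a recommendation; items and ProcessItem are hypothetical):

        var options = new ParallelOptions { MaxDegreeOfParallelism = 2 };
        Parallel.For(0, items.Count, options, i =>
        {
            ProcessItem(items[i]);
        });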

    Read the article

  • how to generate uncorrelated random numbers in repeated calls in parallel?

    - by user1446948
    I want to write a function which will be repeatedly called by other functions many times. Inside this function a lot of random numbers are generated, and that part is to be done in parallel. For a single run, the seed can be chosen differently for each thread, so that the random numbers are uncorrelated. However, if this function is called a second time, it seems that the random numbers will repeat unless the seed is changed again during the later calls. So my question is: is there a good way to generate the random numbers, or to reset the seed, so that the numbers generated by repeated calls to this function and by different threads are really random? Thank you.
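
    One common pattern, sketched here in C# (the post doesn't name a language): draw every thread's seed from a single master generator that is never reset, so seeds stay distinct across threads and across repeated calls:

        static readonly Random seeder = new Random();          // seeded once per process
        static readonly ThreadLocal<Random> rng = new ThreadLocal<Random>(() =>
        {
            lock (seeder) { return new Random(seeder.Next()); } // distinct seed per thread
        });

        static double NextRandom()
        {
            return rng.Value.NextDouble();   // safe from any thread, any number of calls
        }

    The same idea ports to C or Fortran thread pools; for stronger guarantees, look at generators designed for parallel streams (e.g. leapfrogging or counter-based RNGs).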

    Read the article

  • Limiting DOPs – Who rules over whom?

    - by jean-pierre.dijcks
    I've gotten a couple of questions from Dan Morgan and figured I'd start to answer them in this way. Dan is running a big system with Database Resource Manager (DBRM), and he is trying to make sure the system doesn't go crazy (remember, end users are never, ever crazy!) on very high DOPs. Q: How do I control statements with very high DOPs driven from user hints in queries? A: The best way to do this is to work with DBRM and impose limits on consumer groups. The Max DOP setting you can set in DBRM allows you to overwrite the hint. Now let's go into some more detail here. Assume my object has PARALLEL 64 as its setting (for simplicity we assume there is a single object - and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query). Assume that the query that selects something cool from that table lives in a consumer group with a max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime. Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM, and DBRM overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints. Q: What happens if I set parallel_max_servers higher than processes (i.e. the max number of processes allowed to run on my machine)? A: Processes rules. It is important to understand that processes is fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter, you should get a warning in the alert log stating it cannot take effect. As a follow-up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number; parallel_max_servers will prevent this. And since parallel_max_servers should be lower than max processes, you can never go over either...
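
    For the record, a hedged PL/SQL sketch of the DBRM cap mentioned above (plan and consumer group names are made up):

        BEGIN
          DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
          DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
            plan                         => 'MY_PLAN',
            group_or_subplan             => 'REPORTING_GROUP',
            new_parallel_degree_limit_p1 => 32);   -- the Max DOP cap
          DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
          DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
        END;
        /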

    Read the article

  • Why is PLINQ slower than LINQ for this code?

    - by Rob Packwood
    First off, I am running this on a dual-core 2.66GHz machine. I am not sure if I have the .AsParallel() call in the correct spot. I tried it directly on the range variable too and that was still slower. I don't understand why... Here are my results:
    Process non-parallel 1000 took 146 milliseconds
    Process parallel 1000 took 156 milliseconds
    Process non-parallel 5000 took 5187 milliseconds
    Process parallel 5000 took 5300 milliseconds
    using System; using System.Collections.Generic; using System.Diagnostics; using System.Linq; namespace DemoConsoleApp { internal class Program { private static void Main() { ReportOnTimedProcess( () => GetIntegerCombinations(), "non-parallel 1000"); ReportOnTimedProcess( () => GetIntegerCombinations(runAsParallel: true), "parallel 1000"); ReportOnTimedProcess( () => GetIntegerCombinations(5000), "non-parallel 5000"); ReportOnTimedProcess( () => GetIntegerCombinations(5000, true), "parallel 5000"); Console.Read(); } private static List<Tuple<int, int>> GetIntegerCombinations( int iterationCount = 1000, bool runAsParallel = false) { IEnumerable<int> range = Enumerable.Range(1, iterationCount); IEnumerable<Tuple<int, int>> integerCombinations = from x in range from y in range select new Tuple<int, int>(x, y); return runAsParallel ? integerCombinations.AsParallel().ToList() : integerCombinations.ToList(); } private static void ReportOnTimedProcess( Action process, string processName) { var stopwatch = new Stopwatch(); stopwatch.Start(); process(); stopwatch.Stop(); Console.WriteLine("Process {0} took {1} milliseconds", processName, stopwatch.ElapsedMilliseconds); } } }
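
    A likely explanation, with a hedged sketch: tacking .AsParallel() onto the finished query parallelizes only the consumption of an already-sequential enumerable, so you pay partitioning overhead without parallelizing the nested loop itself. Putting it on the range variable doesn't help either, because range is declared IEnumerable<int>, so the query expression still binds to the sequential LINQ operators. Parallelizing the outer source inside the query changes the whole pipeline:

        IEnumerable<Tuple<int, int>> integerCombinations =
            from x in range.AsParallel()   // partition the outer range across cores
            from y in range
            select new Tuple<int, int>(x, y);

    Even then, the per-element work (allocating a Tuple) is so cheap that coordination overhead may still dominate on two cores.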

    Read the article

  • Use Extension method to write cleaner code

    - by Fredrik N
    This blog post will show you, step by step, how to refactor some code to be more readable (at least in my opinion). Patrik Löwnedahl gave me some of the ideas when we were talking about making code much cleaner. The following is a simple application that holds a list of movies (Normal and Transfer). The task of the application is to calculate the total price of each kind of movie and also display the price of each movie. class Program { enum MovieType { Normal, Transfer } static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfNormalMovie = 0; int totalPriceOfTransferMovie = 0; foreach (var movie in movies) { if (movie == MovieType.Normal) { totalPriceOfNormalMovie += 2; Console.WriteLine("$2"); } else if (movie == MovieType.Transfer) { totalPriceOfTransferMovie += 3; Console.WriteLine("$3"); } } } private static IEnumerable<MovieType> GetMovies() { return new List<MovieType>() { MovieType.Normal, MovieType.Transfer, MovieType.Normal }; } } In the code above I'm using an enum, a good way to add types (isn't it? ;)). I also use one foreach loop to calculate the prices; the loop has a condition statement to check what kind of movie was added to the list of movies. I reuse the single foreach only to increase performance, letting it do two things (isn't that smart of me?! ;)). First of all, I can admit I'm not a big fan of enums. Enums often result in ugly condition statements and can be hard to maintain (if a new type is added, we need to check all the code in our app to see if we use the enum somewhere else). I don't often care about pre-optimization when writing code (of course I have performance in mind). I'd rather use two foreach loops and let each do one thing instead of two. So, based on what I don't like and on Martin Fowler's Refactoring catalog, I'm going to refactor this code into what I would call more elegant and cleaner code. First of all, I'm going to use Split Loop to make sure each foreach does one thing, not two; it will result in two foreach loops. (Don't worry about performance here; if that results in bad performance you can refactor later, but computers are so fast today that iterating through a list is not often time consuming.) Note: the foreach actually does four things; I will come to this later.
var movies = GetMovies(); int totalPriceOfNormalMovie = 0; int totalPriceOfTransferMovie = 0; foreach (var movie in movies) { if (movie == MovieType.Normal) { totalPriceOfNormalMovie += 2; Console.WriteLine("$2"); } } foreach (var movie in movies) { if (movie == MovieType.Transfer) { totalPriceOfTransferMovie += 3; Console.WriteLine("$3"); } } To remove the condition statement we can use the Where extension method, which is added to IEnumerable<T> and lives in the System.Linq namespace: foreach (var movie in movies.Where( m => m == MovieType.Normal)) { totalPriceOfNormalMovie += 2; Console.WriteLine("$2"); } foreach (var movie in movies.Where( m => m == MovieType.Transfer)) { totalPriceOfTransferMovie += 3; Console.WriteLine("$3"); } The above code will still do two things: calculate the total price, and display the price of each movie. I will not take care of that at the moment; instead I will focus on the enum and try to remove it. One way to remove an enum is by using Replace Conditional with Polymorphism. So I will create two classes, one base class called Movie, and one called MovieTransfer.
The Movie class will have a property called Price; the Movie will now hold the price:   public class Movie { public virtual int Price { get { return 2; } } } public class MovieTransfer : Movie { public override int Price { get { return 3; } } } The following code has no enum and will use the new Movie classes instead (note that a MovieTransfer is also a Movie, so the normal movies are filtered with an exact type check rather than "is Movie", which would match both): class Program { static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfNormalMovie = 0; int totalPriceOfTransferMovie = 0; foreach (var movie in movies.Where( m => m.GetType() == typeof(Movie))) { totalPriceOfNormalMovie += movie.Price; Console.WriteLine(movie.Price); } foreach (var movie in movies.Where( m => m is MovieTransfer)) { totalPriceOfTransferMovie += movie.Price; Console.WriteLine(movie.Price); } } private static IEnumerable<Movie> GetMovies() { return new List<Movie>() { new Movie(), new MovieTransfer(), new Movie() }; } } If you take a look at the foreach now, you can see it still actually does two things: calculates the price and displays the price.
We can do some more refactoring here by using the Sum extension method to calculate the total price of the movies:   static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfNormalMovie = movies.Where(m => m.GetType() == typeof(Movie)) .Sum(m => m.Price); int totalPriceOfTransferMovie = movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); foreach (var movie in movies.Where( m => m.GetType() == typeof(Movie))) Console.WriteLine(movie.Price); foreach (var movie in movies.Where( m => m is MovieTransfer)) Console.WriteLine(movie.Price); } Now that the Movie object holds the price, there is no need to use two separate foreach loops to display the prices of the movies in the list, so we can use only one instead: foreach (var movie in movies) Console.WriteLine(movie.Price); If we want to increase the Maintainability Index we can use Extract Method to move the Sum of the prices into two separate methods.
The name of each method will explain what we are doing: static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); foreach (var movie in movies) Console.WriteLine(movie.Price); } private static int TotalPriceOfMovieTransfer(IEnumerable<Movie> movies) { return movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); } private static int TotalPriceOfMovie(IEnumerable<Movie> movies) { return movies.Where(m => m.GetType() == typeof(Movie)) .Sum(m => m.Price); } Now to the last thing: I love the ForEach method of List<T>, but IEnumerable<T> doesn't have it, so I created my own ForEach extension. Here is the code of the ForEach extension method: public static class LoopExtensions { public static void ForEach<T>(this IEnumerable<T> values, Action<T> action) { Contract.Requires(values != null); Contract.Requires(action != null); foreach (var v in values) action(v); } } I will now replace the foreach by using this ForEach method: static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); movies.ForEach(m => Console.WriteLine(m.Price)); } The ForEach on the movies will now display the price of each movie, but maybe we also want to display the name of the movie etc., so we can use Extract Method to move the lambda expression into a method instead, and let the method explain what we are displaying: movies.ForEach(DisplayMovieInfo); private static void DisplayMovieInfo(Movie movie) { Console.WriteLine(movie.Price); }
Now the refactoring is done! Here is the complete code:   class Program { static void Main(string[] args) { var movies = GetMovies(); int totalPriceOfMovie = TotalPriceOfMovie(movies); int totalPriceOfTransferMovie = TotalPriceOfMovieTransfer(movies); movies.ForEach(DisplayMovieInfo); } private static void DisplayMovieInfo(Movie movie) { Console.WriteLine(movie.Price); } private static int TotalPriceOfMovieTransfer(IEnumerable<Movie> movies) { return movies.Where(m => m is MovieTransfer) .Sum(m => m.Price); } private static int TotalPriceOfMovie(IEnumerable<Movie> movies) { return movies.Where(m => m.GetType() == typeof(Movie)) .Sum(m => m.Price); } private static IEnumerable<Movie> GetMovies() { return new List<Movie>() { new Movie(), new MovieTransfer(), new Movie() }; } } public class Movie { public virtual int Price { get { return 2; } } } public class MovieTransfer : Movie { public override int Price { get { return 3; } } } public static class LoopExtensions { public static void ForEach<T>(this IEnumerable<T> values, Action<T> action) { Contract.Requires(values != null); Contract.Requires(action != null); foreach (var v in values) action(v); } } I think the new code is much cleaner than the first one, and I love the ForEach extension on IEnumerable<T>; I can use it for different kinds of things, for example: movies.Where(m => m is MovieTransfer) .ForEach(DoSomething); By using the Where and ForEach extension methods, some if statements can be removed, which makes the code much cleaner. But beauty is in the eye of the beholder.
What would you have done differently? What do you think would make the first example in this blog post look much cleaner than my result? Comments are welcome! If you want to know when I publish a new blog post, you can follow me on twitter: http://www.twitter.com/fredrikn

    Read the article

  • Ant: foreach loop

    - by user305801
    I'm trying to use the foreach loop in an Ant script but I get the message: Problem: failed to create task or type foreach. Cause: The name is undefined. I don't understand why this doesn't work - I thought foreach was a standard task that would be part of the latest version of Ant (1.8), not a 3rd-party library. <target name="parse"> <echo message="The first five letters of the alphabet are:"/> <foreach param="instance" list="a,b,c,d,e"> </foreach> </target>
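
    Worth noting: foreach is not a core Ant task in any version, including 1.8; it comes from the ant-contrib project and must be declared before use. A hedged sketch (the jar location is an assumption), which also supplies the target attribute that foreach requires:

        <taskdef resource="net/sf/antcontrib/antlib.xml">
          <classpath>
            <pathelement location="lib/ant-contrib-1.0b3.jar"/>
          </classpath>
        </taskdef>

        <target name="parse">
          <echo message="The first five letters of the alphabet are:"/>
          <foreach list="a,b,c,d,e" param="instance" target="echo.instance"/>
        </target>

        <target name="echo.instance">
          <echo message="${instance}"/>
        </target>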

    Read the article

  • How to use a dynamic smarty variable in foreach loop

    - by P Kumar
    Hi, can anyone tell me how to use dynamic variables in a Smarty foreach loop? I am trying to create a module in PrestaShop and I'm very close to getting it done. Here's my code: //file name index.php foreach($subCategories as $s) { $foo = intval($s['id_category']); $k = new Category($foo); $var1 = "subSubCategories.$foo"; $var1 = $k->getSubCategories(1); $smarty->assign(array('foo'.$foo => $var1)); } //file name: index.tpl {assign var=foo value=$foo$cat} //where $cat is a variable that counts the number of categories {if isset($foo) AND $foo} {foreach from=$foo item=subCategories name=homesubCategories} <p>{$subCategories.name}</p> {/foreach} {else} <p>{l s='test failed'}</p> {/if} I've exhausted all of my resources and knowledge and I'm feeling quite helpless at this moment, so please help me out.

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data daily and/or weekly, which we process and merge into their existing database. During business hours, load on the servers is pretty minimal, as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube. I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back-to-back overnight. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support overhead to basically "schedule" the ETLs such that they don't overlap, and to move clients/schedules around if they outgrow the resources on a particular server or allocated time-slot. As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3, 4, 5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling and sequential processing. One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows. Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if, instead of running DTExec, SQL, and SSAS on the same machine (with every machine running that configuration), we ran in sets of three machines, with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETLs we were able to process on a single machine independently. Another option we've considered is completely re-architecting our SSIS packages to have one "master" package for all clients that attempts to intelligently choose a server based on how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and it seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it). So in summary, are we missing an obvious solution for this, and does anyone know of any tools (for free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term.)
Ultimately VMWare's Distributed Resource Scheduler addresses this, as you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, and SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff

    Read the article

  • Smarty - foreach loop 4 times and create a new list

    - by Harold
    I want to achieve a list like this with Smarty: <ul> <li> <a>img1</a> <a>img2</a> <a>img3</a> <a>img4</a> </li> <li> <a>img5</a> <a>img6</a> <a>img7</a> <a>img8</a> </li> <li> <a>img9</a> <a>img10</a> <a>img11</a> <a>img12</a> </li> </ul> Using this sample code: <ul class="bullet"> {foreach from=$manufacturers item=manufacturer name=manufacturer_list} {if $smarty.foreach.manufacturer_list.index < 4} <li class="{if $smarty.foreach.manufacturer_list.last}last_item{elseif $smarty.foreach.manufacturer_list.first}first_item{else}item{/if}"> <a href="{$link->getmanufacturerLink($manufacturer.id_manufacturer, $manufacturer.link_rewrite)}" title="{l s='More about' mod='blockmanufacturer'}{$manufacturer.name}"> <img src="{$img_manu_dir}{$manufacturer.id_manufacturer}.jpg"><span>{$manufacturer.name}</span></a> </li> {/if} {/foreach} With the given array $manufacturers, it should loop inside a <li> at most 4 times, creating 4 <img> tags, and then, when it reaches the 4th index, start a new <li>. Thanks for the help!
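
    One hedged sketch of the grouping, using the loop's zero-based index to open a <li> on every 4th item and close it on the 4th (or last) item; the class logic and anchor attributes from the original are trimmed for brevity:

        <ul class="bullet">
        {foreach from=$manufacturers item=manufacturer name=manufacturer_list}
          {if $smarty.foreach.manufacturer_list.index % 4 == 0}<li>{/if}
          <a href="{$link->getmanufacturerLink($manufacturer.id_manufacturer, $manufacturer.link_rewrite)}">
            <img src="{$img_manu_dir}{$manufacturer.id_manufacturer}.jpg"><span>{$manufacturer.name}</span>
          </a>
          {if $smarty.foreach.manufacturer_list.index % 4 == 3 or $smarty.foreach.manufacturer_list.last}</li>{/if}
        {/foreach}
        </ul>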

    Read the article

  • Foreach/For loop alternative lambda function?

    - by mamu
    We use for or foreach to loop through collections and process each entry. Is there any alternative among all those new lambda functions for collections in C#? The traditional way of doing it: foreach(var v in vs) { Console.Write(v); } Is there anything like vs.foreach(v => console.write(v))?
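
    For what it's worth, List<T> already ships exactly that method, and a one-line extension method (a sketch; it must live in a static class) covers any IEnumerable<T>:

        vs.ToList().ForEach(v => Console.Write(v));   // built in on List<T>

        public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
        {
            foreach (var item in source) action(item);
        }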

    Read the article

  • Parallel shell loops

    - by brubelsabs
    Hi, I want to process many files, and since I have a bunch of cores here, I want to do it in parallel: for i in *.myfiles; do do_something $i `derived_params $i` other_params; done I know of a Makefile solution, but my commands need the arguments out of the shell globbing list. What I found is: > function pwait() { > while [ $(jobs -p | wc -l) -ge $1 ]; do > sleep 1 > done > } > To use it, all one has to do is put & after the jobs and a pwait call; the parameter gives the number of parallel processes: > for i in *; do > do_something $i & > pwait 10 > done But this doesn't work very well; e.g. I tried it with a for loop converting many files, but it gave me errors and left jobs undone. I can't believe that this isn't done yet, since the discussion on the zsh mailing list is so old by now. So do you know anything better?
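
    A hedged alternative sketch using xargs -P (GNU xargs) to do the throttling; the sh -c trampoline keeps the per-file derived_params call intact:

        printf '%s\0' *.myfiles |
          xargs -0 -n1 -P8 sh -c 'do_something "$1" $(derived_params "$1") other_params' _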

    Read the article

  • IndexOutofRangeException while using WriteLine in nested Parallel.For loops

    - by Umar Asif
    I am trying to write Kinect depth data to a text file using nested Parallel.For loops with the following code. However, it gives an IndexOutOfRangeException. The code works fine with simple for loops, but it hangs the UI, since the depth format is set to 640x480, causing the loops to write 307200 lines to the text file at 30fps. Therefore, I switched to the Parallel.For scheme. If I omit the WriteLine command from the nested loops, the code works fine, which indicates that the IndexOutOfRangeException arises at the WriteLine command. I do not know how to troubleshoot this. Please advise. Any better workarounds to avoid UI freezing? Thanks. using (DepthImageFrame depthImageframe = d.OpenDepthImageFrame()) { if (depthImageframe == null) return; depthImageframe.CopyPixelDataTo(depthPixelData); swDepth = new StreamWriter(@"E:\depthData.txt", false); int i = 0; Parallel.For(0, depthImageframe.Width, delegate(int x) { Parallel.For(0, depthImageframe.Height, delegate(int y) { p[i] = sensor.MapDepthToSkeletonPoint(depthImageframe.Format, x, y, depthPixelData[x + depthImageframe.Width * y]); swDepth.WriteLine(i + "," + p[k].X + "," + p[k].Y + "," + p[k].Z); i++; }); }); swDepth.Close(); } }
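
    Two likely culprits, for reference: StreamWriter is not thread-safe, and the shared i++ is not atomic (also, p[k] in the excerpt presumably means p[i]). A hedged sketch that derives a unique index from (x, y) and serializes the writes, assuming p is sized Width*Height:

        Parallel.For(0, depthImageframe.Width, x =>
        {
            for (int y = 0; y < depthImageframe.Height; y++)
            {
                int i = x * depthImageframe.Height + y;   // unique per (x, y), no shared counter
                p[i] = sensor.MapDepthToSkeletonPoint(depthImageframe.Format, x, y,
                           depthPixelData[x + depthImageframe.Width * y]);
                lock (swDepth)                            // one writer at a time
                    swDepth.WriteLine(i + "," + p[i].X + "," + p[i].Y + "," + p[i].Z);
            }
        });

    Running the whole block inside a background Task, and buffering the lines instead of writing inside the loop, is what actually keeps the UI responsive.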

    Read the article

  • MaxStartups and MaxSessions configurations parameter for ssh connections?

    - by Webby
    I am copying files from machineB and machineC onto machineA, where my shell script below runs. If a file is not there on machineB, then it should be there on machineC for sure, so I try copying each file from machineB first; if it is not there on machineB, I try copying the same file from machineC. I am copying the files in parallel using the GNU Parallel library and it is working fine. Currently I am copying 10 files in parallel. Below is my shell script - #!/bin/bash export PRIMARY=/test01/primary export SECONDARY=/test02/secondary readonly FILERS_LOCATION=(machineB machineC) export FILERS_LOCATION_1=${FILERS_LOCATION[0]} export FILERS_LOCATION_2=${FILERS_LOCATION[1]} PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers SECONDARY_PARTITION=(1643 1103 1372 1096 1369 1568) # this will have more file numbers export dir3=/testing/snapshot/20140103 find "$PRIMARY" -mindepth 1 -delete find "$SECONDARY" -mindepth 1 -delete do_Copy() { el=$1 PRIMSEC=$2 scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. } export -f do_Copy parallel --retries 10 -j 10 do_Copy {} $PRIMARY ::: "${PRIMARY_PARTITION[@]}" & parallel --retries 10 -j 10 do_Copy {} $SECONDARY ::: "${SECONDARY_PARTITION[@]}" & wait echo "All files copied." Problem Statement:- With the above script at some point I am getting this exception - ssh_exchange_identification: Connection closed by remote host ssh_exchange_identification: Connection closed by remote host ssh_exchange_identification: Connection closed by remote host And I guess the error is typically caused by too many ssh/scp connections starting at the same time. That leads me to believe /etc/ssh/sshd_config's MaxStartups and MaxSessions are set too low. But my question is: on which server are they too low? machineB and machineC, or machineA? And on which machines do I need to increase the numbers? On machineA this is what I can find - root@machineA:/home/david# grep MaxStartups /etc/ssh/sshd_config #MaxStartups 10:30:60 root@machineA:/home/david# grep MaxSessions /etc/ssh/sshd_config And on machineB and machineC this is what I can find - [root@machineB ~]$ grep MaxStartups /etc/ssh/sshd_config #MaxStartups 10 [root@machineB ~]$ grep MaxSessions /etc/ssh/sshd_config #MaxSessions 10
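
    For reference: MaxStartups limits concurrent unauthenticated connections on the machine being connected to, and MaxSessions limits multiplexed sessions per connection, so the values that matter here are the commented-out defaults on machineB and machineC (machineA is only the client). A hedged sketch for /etc/ssh/sshd_config on those two machines (numbers are illustrative; restart sshd afterwards):

        MaxStartups 30:60:100
        MaxSessions 30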

    Read the article

  • Bash Parallelization of CPU-intensive processes

    - by ehsanul
    tee forwards its stdin to every single file specified, while pee does the same, but for pipes. These programs send every single line of their stdin to each and every file/pipe specified. However, I was looking for a way to "load balance" the stdin to different pipes, so one line is sent to the first pipe, another line to the second, etc. It would also be nice if the stdout of the pipes are collected into one stream as well. The use case is simple parallelization of CPU intensive processes that work on a line-by-line basis. I was doing a sed on a 14GB file, and it could have run much faster if I could use multiple sed processes. The command was like this: pv infile | sed 's/something//' > outfile To parallelize, the best would be if GNU parallel would support this functionality like so (made up the --demux-stdin option): pv infile | parallel -u -j4 --demux-stdin "sed 's/something//'" > outfile However, there's no option like this and parallel always uses its stdin as arguments for the command it invokes, like xargs. So I tried this, but it's hopelessly slow, and it's clear why: pv infile | parallel -u -j4 "echo {} | sed 's/something//'" > outfile I just wanted to know if there's any other way to do this (short of coding it up myself). If there was a "load-balancing" tee (let's call it lee), I could do this: pv infile | lee >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) Not pretty, so I'd definitely prefer something like the made up parallel version, but this would work too.
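
    For reference, later releases of GNU parallel grew exactly this feature as --pipe (originally --spreadstdin): it chops stdin into blocks on line boundaries, hands each block to the next free job, and concatenates their stdout. A hedged sketch of what the made-up --demux-stdin example becomes:

        pv infile | parallel --pipe -j4 sed 's/something//' > outfile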

    Read the article

  • While using ConcurrentQueue, trying to dequeue while looping through in parallel

    - by James Black
    I am using the parallel data structures in my .NET 4 application and I have a ConcurrentQueue that gets added to while I am processing through it. I want to do something like: personqueue.AsParallel().WithDegreeOfParallelism(20).ForAll(i => ... ); as I make database calls to save the data, so I am limiting the number of concurrent threads. But I expect that the ForAll isn't going to dequeue, and I am concerned about just doing ForAll(i => { personqueue.TryDequeue(...); ... }); as there is no guarantee that I am popping off the correct one. So, how can I iterate through the collection and dequeue in a parallel fashion? Or would it be better to use PLINQ to do this processing in parallel?
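
    A hedged sketch of the usual pattern: skip PLINQ and start a fixed number of consumer tasks that each drain the queue with TryDequeue, which is safe to call concurrently (Person and Save are hypothetical):

        var workers = Enumerable.Range(0, 20)
            .Select(_ => Task.Factory.StartNew(() =>
            {
                Person person;
                while (personqueue.TryDequeue(out person))
                    Save(person);   // the database call
            }))
            .ToArray();
        Task.WaitAll(workers);

    If producers keep adding items while this runs, wrapping the queue in a BlockingCollection and consuming GetConsumingEnumerable() handles the hand-off without polling.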

    Read the article

  • Rewinding or resetting Parallel effect in Flex 3

    - by errata
    How can I 'rewind' or force the parallel effect to play from the very beginning once it already started to play? Code sample: <mx:Parallel id="parallelEffect" repeatCount="0"> <mx:Fade alphaTo="1" target="{someTarget}" startDelay="2000" /> <mx:Fade alphaTo="1" target="{someOtherTarget}" startDelay="4000" /> <mx:Fade alphaTo="1" target="{thirdTarget}" startDelay="6000" /> <mx:Fade alphaTo="1" target="{fourthTarget}" startDelay="8000" /> <mx:Fade alphaTo="1" target="{fifthTarget}" startDelay="10000" /> </mx:Parallel>
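
    A hedged sketch of one approach (mx.effects.Effect in Flex 3 exposes end() and play()): end() snaps a running effect to its end state, after which play() should start it again from the beginning, startDelays included:

        parallelEffect.end();
        parallelEffect.play();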

    Read the article

  • Partition Wise Joins

    - by jean-pierre.dijcks
    Some say they are the holy grail of parallel computing: PWJ is the basis for a shared nothing system and the only join method available on a shared nothing system (yes, this is oversimplified!). The magic in Oracle is of course that it is one of many ways to join data. And yes, this is the old flexibility vs. simplicity discussion all over again, so I won't go there... the point is that what you must do in a shared nothing system, you can do in Oracle with the same speed and methods. The Theory A partition wise join is a join between (for simplicity) two tables that are partitioned on the same column with the same partitioning scheme. In shared nothing this is effectively hard partitioning, locating data on a specific node / storage combo. In Oracle it is logical partitioning. If you now join the two tables on that partitioning column, you can break up the join into smaller joins exactly along the partitions in the data. Since the rows are partitioned (grouped) into the same buckets, all values required to do the join live in the equivalent bucket on either side. No need to talk to anyone else, no need to redistribute data to anyone else... in short, the optimal join method for parallel processing of two large data sets. PWJs in Oracle Since we do not hard partition the data across nodes in Oracle, we use the Partitioning option of the database to create the buckets, then set the Degree of Parallelism (or run Auto DOP - see here) and get our PWJs. The main questions always asked are: How many partitions should I create? What should my DOP be? In a shared nothing system the answer is of course: as many partitions as there are nodes, which will be your DOP. In Oracle we do want you to look at the workload and concurrency, and once you know that, to understand the following rules of thumb. Within Oracle we have more ways of joining data, so it is important to understand some of the PWJ ideas and what it means if you have an uneven distribution across processes. Assume we have a simple scenario where we partition the data on a hash key, resulting in 4 hash partitions (H1-H4). We have 2 parallel processes that have been tasked with reading these partitions (P1-P2). The work is evenly divided, assuming the partitions are the same size, and we can scan this in some time t1. Now assume that we have changed the system and have a 5th partition but still have our 2 workers P1 and P2. The time it takes is actually 50% more, assuming the 5th partition has the same size as the original H1-H4 partitions. In other words, to scan these 5 partitions, the time t2 is not just 1/5th more expensive, it is a lot more expensive, and some other join plans may now start to look exciting to the optimizer. Just to post the disclaimer: it is not as simple as I state it here, but you get the idea of how much more expensive this plan may now look... Based on this little example there are a few rules of thumb to follow to get partition wise joins. First, choose a DOP that is a power of two. So always choose something like 2, 4, 8, 16, 32 and so on... Second, choose a number of partitions that is greater than or equal to 2 × DOP. Third, make sure the number of partitions is divisible by 2 without orphans - also known as an even number... Fourth, choose a stable partition count strategy, which is typically hash; this can be a sub-partitioning strategy rather than the main strategy (range-hash is a popular one).
Fifth, make sure you do this on the join key between the two large tables you want to join (and this should be the obvious one...). Translating this into an example: DOP = 8 (determined based on concurrency or by using Auto DOP with a cap due to concurrency) says that the number of partitions >= 16. Number of hash (sub) partitions = 32, which gives each process four partitions to work on. This number is somewhat arbitrary and depends on your data and system. In this case my main reasoning is that if you get more room on the box you can easily move the DOP for the query to 16 without repartitioning... and of course it makes for no leftovers on the table... And yes, we recommend up-to-date statistics. And before you start complaining, do read this post on a cool way to do stats in 11.
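
    To make the rules concrete, a minimal DDL sketch (hypothetical tables, not from the post) of a pair that qualifies for a full partition-wise join under the rules above - same hash key, same partition count, DOP 8 (the statement-level PARALLEL hint assumes 11.2):

        CREATE TABLE customers ( cust_id NUMBER, name VARCHAR2(100) )
          PARTITION BY HASH (cust_id) PARTITIONS 32;

        CREATE TABLE sales ( cust_id NUMBER, amount NUMBER )
          PARTITION BY HASH (cust_id) PARTITIONS 32;

        SELECT /*+ PARALLEL(8) */ c.cust_id, SUM(s.amount)
        FROM sales s JOIN customers c ON s.cust_id = c.cust_id
        GROUP BY c.cust_id;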

    Read the article

  • How to best do foreach together with count in excel

    - by user3682637
    I have been trying to do some work in Excel, but I seem to be stuck on one point. In column "A" I have: a, b, c, d, e. In column "B" I have: done, started, completed. In columns "C" through "S" I have: some "X"s, but not in all fields. So my question is how do I do the following: foreach row where B is "done", count("X", $row)? I have tried pivot tables, COUNTIF, and SUMPRODUCT, but I can't seem to get it to work. Any ideas?
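
    Two formulas that should cover it (the row ranges are assumptions - adjust to the real sheet): a per-row count in a helper column such as T2, filled down, and then a single grand total:

        =IF($B2="done", COUNTIF($C2:$S2, "X"), "")

        =SUMPRODUCT(($B$2:$B$100="done") * ($C$2:$S$100="X"))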

    Read the article

  • How to setup matlab for parallel processing on Amazon EC2?

    - by JohnIdol
    I just set up an Extra Large Heavy Computation EC2 instance to throw at my genetic algorithms problem, hoping to speed things up. This instance has 8 Intel Xeon processors (around 2.4GHz each) and 7 GB of RAM. On my machine I have an Intel Core Duo, and MATLAB is able to work with my two cores just fine. On the EC2 instance, though, MATLAB is only capable of detecting 1 out of 8 processors. Obviously the difference is that I have my 2 cores on a single processor, while the EC2 instance has 8 distinct processors. My question is, how do I get MATLAB to work with those 8 processors? I found this paper, but it seems related to setting up MATLAB with multiple EC2 instances, which is not my problem. Any help appreciated!
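
    A sketch, assuming the Parallel Computing Toolbox is installed (an assumption - without it, MATLAB of that era won't spread a GA across cores on its own; the GA names are hypothetical):

        matlabpool('open', 'local', 8)    % start 8 local workers
        parfor i = 1:populationSize
            fitness(i) = evaluateIndividual(population(i));
        end
        matlabpool('close')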

    Read the article
