Search Results

Search found 6559 results on 263 pages for 'parallel foreach'.

Page 28/263 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • how Two elements - IMG - DIV when hover over IMG show/hide the DIV - added with hover hide/show on i

    - by Jan Fosgerau
    I'm very new to the wonder that is jQuery, and I just figured out how to make my img buttons show/hide with an opacity difference, like so:

        <script type="text/javascript">
        <![CDATA[
        $(".ExcommKnap").mouseover(function () { $(this).stop().fadeTo('fast', 0.5, function(){}) });
        $(".ExcommKnap").mouseout(function () { $(this).stop().fadeTo('fast', 1.0, function(){}) });
        ]]>
        </script>

    Which is good and all, but I also need the button, when hovered over, to show text just above it that is specific to that button. I made these elements, which are looped in a for-each:

        <div style="top:10px; width:755px;text-align:right; position:absolute; ">
          <div id="Page-{@id}" class="headlinebox">
            <xsl:value-of select="@nodeName"/>
          </div>
        </div>
        <a href="{umbraco.library:NiceUrl(@id)}">
          <img class="ExcommKnap" src="{$media/data[@alias='umbracoFile']}" />
        </a>

    I need the individual text to appear when its button is hovered over, hence the id="Page-{@id}" that is looped out along with it; I presume I need to reference it in the jQuery code, so that hovering over an img with class="ExcommKnap" makes the correct text visible. The div id="Page-{@id}" needs to be invisible on page load and only become visible while its button is being hovered over. Can anyone help?

    Read the article

  • Why don't xUnit frameworks allow tests to run in parallel?

    - by Xavier Nodet
    Do you know of any xUnit framework that allows tests to run in parallel, to make use of the multiple cores in today's machines? I don't... If none (or so few) of them do it, maybe there is a reason. Is it that tests are usually so quick that people simply don't feel the need to parallelize them? Is there something deeper that precludes distributing (at least some of) the tests over multiple threads? Thanks!
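
    To make the question concrete, here is a minimal C# sketch, not taken from any xUnit framework (the runner and test names are hypothetical), of the kind of fan-out being asked about, assuming the test delegates are independent and thread-safe:

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        class ParallelTestRunner
        {
            static void Main()
            {
                // Hypothetical, independent test methods with no shared mutable state.
                var tests = new Dictionary<string, Action>
                {
                    { "AdditionWorks", () => { var sum = 1 + 1; if (sum != 2) throw new Exception("expected 2, got " + sum); } },
                    { "ConcatWorks",   () => { var s = string.Concat("a", "b"); if (s != "ab") throw new Exception("expected ab, got " + s); } }
                };

                // Distribute the test delegates over the thread pool, one result line per test.
                Parallel.ForEach(tests, test =>
                {
                    try
                    {
                        test.Value();
                        Console.WriteLine("PASS " + test.Key);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("FAIL " + test.Key + ": " + ex.Message);
                    }
                });
            }
        }

    Whether a real framework can do this safely hinges on the assumption in the comment: no two tests may touch shared state such as static fields, files, or databases.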

    Read the article

  • Branching strategy for parallel development that won't be in the same release?

    - by Telastyn
    My team is working on a product, which for business reasons needs to be released on a regular schedule. An issue has arisen where we want to do development in parallel for the upcoming release, as well as the 'next' release. This is to become standard practice, so it's not as straightforward as cutting a feature branch for the new work. We'll continually have 2+ teams working on different releases of the same product. Is there an SCM best practice for this sort of arrangement?

    Read the article

  • Relationship between "Task Parallel Library" and "Task-based Asynchronous Pattern"?

    - by Sid
    In the context of C# and .NET 4/4.5 used for an application running on a web server, what is the relationship between the "Task Parallel Library" and the "Task-based Asynchronous Pattern"? I understand one is a library and the other is a pattern. But to dig deeper, is it like "the library is used by the pattern to enforce good practices"? I'm also not clear whether both are supported in .NET 4.0 (with the await and async keywords). Edit: It seems that await and async are only in .NET 4.5 ...
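
    As a rough illustration of how the two relate (a sketch under the assumption of .NET 4.5; the class name and URL are made up for the example), a TAP-style method is simply one that exposes its asynchronous operation as a TPL Task or Task<T>, which async/await then consumes:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class Downloader
        {
            // TAP convention: return Task<T> and use the "Async" suffix.
            public static async Task<int> GetPageLengthAsync(string url)
            {
                using (var client = new HttpClient())
                {
                    // GetStringAsync returns a TPL Task<string>; await unwraps it without blocking.
                    string body = await client.GetStringAsync(url);
                    return body.Length;
                }
            }

            static void Main()
            {
                // Blocking on .Result here only to keep the console demo short.
                Console.WriteLine(GetPageLengthAsync("http://example.com").Result);
            }
        }

    In this reading, the TPL supplies the Task types and the scheduler, while TAP is the shape and naming convention (Task-returning methods ending in Async) that async/await in .NET 4.5 is built around; .NET 4.0 has the TPL types but not the compiler keywords.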

    Read the article

  • STL algorithms and concurrent programming

    - by Andrew
    Hello everyone, Can any of the STL algorithms/container operations, like std::fill or std::transform, be executed in parallel if I enable OpenMP for my compiler? I am working with MSVC 2008 at the moment. Or maybe there are other ways to make them concurrent? Thanks.

    Read the article

  • Number of threads and thread numbers in Grand Central Dispatch

    - by raphgott
    I am using C and Grand Central Dispatch to parallelize some heavy computations. How can I get the number of threads used by GCD? Also, is it possible to know which thread a piece of code is currently running on? Basically, I'd like to use sprng (parallel random numbers) with multiple streams, and for that I need to know which stream id to use (and therefore which thread is being used).

    Read the article

  • The simplest concurrency pattern

    - by Ilya Kogan
    Please help me remember one of the simplest parallel programming techniques. How do I do the following in C#?

        Initial state: semaphore counter = 0

        Thread 1:
            // Block until the semaphore is signalled
            semaphore.Wait();    // wait until the semaphore counter is 1

        Thread 2:
            // Allow thread 1 to run:
            semaphore.Signal();  // increments the counter from 0 to 1

    It's not a mutex, because there is no critical section; or rather, you could say the critical section is infinite. So what is it?
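
    For reference, a minimal sketch of how this might look with the .NET 4 SemaphoreSlim type, where Release plays the role of Signal in the pseudo-code above:

        using System;
        using System.Threading;

        class SignalDemo
        {
            static void Main()
            {
                // Initial count 0: the first Wait() blocks until someone releases.
                var semaphore = new SemaphoreSlim(0, 1);

                var thread1 = new Thread(() =>
                {
                    Console.WriteLine("Thread 1: waiting to be signalled...");
                    semaphore.Wait();                    // blocks while the count is 0
                    Console.WriteLine("Thread 1: signalled, running.");
                });
                thread1.Start();

                Thread.Sleep(500);                       // stand-in for whatever thread 2 does first
                Console.WriteLine("Thread 2: signalling.");
                semaphore.Release();                     // count goes from 0 to 1, waking thread 1

                thread1.Join();
            }
        }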

    Read the article

  • Can .NET Task instances go out of scope during run?

    - by Henry Jackson
    If I have the following block of code in a method (using .NET 4 and the Task Parallel Library): var task = new Task(() => DoSomethingLongRunning()); task.Start(); and the method returns, will that task go out of scope and be garbage collected, or will it run to completion? I haven't noticed any issues with GCing, but want to make sure I'm not setting myself up for a race condition with the GC.
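
    Here is a small, self-contained console sketch (DoSomethingLongRunning is replaced by a Sleep, and the GC call is only there to force the issue) that one might use to observe whether such a task still runs to completion after the creating method returns:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class TaskScopeDemo
        {
            static void StartWork()
            {
                var task = new Task(() =>
                {
                    Thread.Sleep(2000);                  // stand-in for DoSomethingLongRunning()
                    Console.WriteLine("work finished");
                });
                task.ContinueWith(t => Console.WriteLine("continuation ran"));
                task.Start();
                // The local 'task' variable goes out of scope when this method returns.
            }

            static void Main()
            {
                StartWork();
                GC.Collect();                            // provoke a collection after the method has returned
                GC.WaitForPendingFinalizers();
                Thread.Sleep(3000);                      // keep the process alive long enough to see the output
            }
        }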

    Read the article

  • openmp program elapsed time not scaling with increased threads

    - by Griff
    I've got this OpenMP Fortran program doing an embarrassingly parallel problem, a do loop over 512^3 elements. See the output below. Why would there be such strange behavior in the elapsed time as a function of threads? I thought it would peak at a sweet spot and then slowly degrade. This clearly isn't happening. Perhaps I misunderstand something about OpenMP.

        Threads   omp_get_wtime
         1        103.76298500015400
         2         65.346454000100493
         4         45.923643999965861
         7         38.074195000110194
         8         36.968765000114217
         9         39.45981499995105
        10         40.753379000118002
        12         39.577559999888763
        14         37.909950000001118

    Read the article

  • openmp sections running sequentially

    - by chi42
    I have the following code:

        #pragma omp parallel sections private(x,y,cpsrcptr) firstprivate(srcptr) lastprivate(srcptr)
        {
            #pragma omp section
            {
                //stuff
            }
            #pragma omp section
            {
                //stuff
            }
        }

    According to the Zoom profiler, two threads are created, one thread executes both the sections, and the other thread simply blocks! Has anyone encountered anything like this before? (And yes, I do have a dual core machine.)

    Read the article

  • ParallelWork: Feature rich multithreaded fluent task execution library for WPF

    - by oazabir
    ParallelWork is a free, open source helper class that lets you run multiple pieces of work on parallel threads, get success, failure and progress updates on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after a certain time, and chain parallel work one after another. It's more convenient than using .NET's BackgroundWorker because you don't have to declare one component per piece of work, nor do you need to declare event handlers to receive notifications and carry additional data through private variables. You can safely pass objects produced on a different thread to the success callback. Moreover, you can wait for work to complete before you do a certain operation, and you can abort all parallel work while it is in flight. If you are building a highly responsive WPF UI where you have to carry out multiple jobs in parallel yet want full control over their completion and cancellation, then the ParallelWork library is the right solution for you.

    I am using the ParallelWork library in my PlantUmlEditor project, which is a free, open source UML editor built on WPF. You can see some realistic use of the ParallelWork library there. Moreover, the test project comes with 400 lines of Behavior Driven Development-flavored tests, which confirm it really does what it says it does. The source code of the library is part of the "Utilities" project in the PlantUmlEditor source code hosted at Google Code.

    The library comes in two flavors. One is the ParallelWork static class, which has a collection of static methods that you can call. The other is the Start class, a fluent wrapper over the ParallelWork class that makes for more readable and aesthetically pleasing code.

    ParallelWork allows you to start work immediately on a separate thread, or to queue work to start after some duration. You can start immediate work on a new thread using the following methods:

        void StartNow(Action doWork, Action onComplete)
        void StartNow(Action doWork, Action onComplete, Action<Exception> failed)

    For example,

        ParallelWork.StartNow(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        },
        () =>
        {
            workEndedAt = DateTime.Now;
        });

    Or you can use the fluent way, Start.Work:

        Start.Work(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        })
        .OnComplete(() =>
        {
            workCompletedAt = DateTime.Now;
        })
        .Run();

    Besides simple execution of work on a parallel thread, you can have the parallel thread produce some object and then pass it to the success callback by using these overloads:

        void StartNow<T>(Func<T> doWork, Action<T> onComplete)
        void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail)

    For example,

        ParallelWork.StartNow<Dictionary<string, string>>(
            () =>
            {
                test = new Dictionary<string, string>();
                test.Add("test", "test");
                return test;
            },
            (result) =>
            {
                Assert.True(result.ContainsKey("test"));
            });

    Or, the fluent way:

        Start<Dictionary<string, string>>.Work(() =>
        {
            test = new Dictionary<string, string>();
            test.Add("test", "test");
            return test;
        })
        .OnComplete((result) =>
        {
            Assert.True(result.ContainsKey("test"));
        })
        .Run();

    You can also start work after some delay using these methods:

        DispatcherTimer StartAfter(Action onComplete, TimeSpan duration)
        DispatcherTimer StartAfter(Action doWork, Action onComplete, TimeSpan duration)

    You can use this to perform a timed operation on the UI thread, as well as to perform some operation on a separate thread after some time.

        ParallelWork.StartAfter(
            () =>
            {
                workStartedAt = DateTime.Now;
                Thread.Sleep(howLongWorkTakes);
            },
            () =>
            {
                workCompletedAt = DateTime.Now;
            },
            waitDuration);

    Or, the fluent way:

        Start.Work(() =>
        {
            workStartedAt = DateTime.Now;
            Thread.Sleep(howLongWorkTakes);
        })
        .OnComplete(() =>
        {
            workCompletedAt = DateTime.Now;
        })
        .RunAfter(waitDuration);

    There are several overloads of these functions that take an exception callback for handling exceptions, or that report progress updates from the background thread while the work is in progress. For example, I use them in PlantUmlEditor to perform a background update of the application:

        // Check if there's a newer version of the app
        Start<bool>.Work(() =>
        {
            return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl);
        })
        .OnComplete((hasUpdate) =>
        {
            if (hasUpdate)
            {
                if (MessageBox.Show(Window.GetWindow(me),
                    "There's a newer version available. Do you want to download and install?",
                    "New version available",
                    MessageBoxButton.YesNo,
                    MessageBoxImage.Information) == MessageBoxResult.Yes)
                {
                    ParallelWork.StartNow(() =>
                    {
                        var tempPath = System.IO.Path.Combine(
                            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                            Settings.Default.SetupExeName);
                        UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath);
                    },
                    () => { },
                    (x) =>
                    {
                        MessageBox.Show(Window.GetWindow(me),
                            "Download failed. When you run next time, it will try downloading again.",
                            "Download failed",
                            MessageBoxButton.OK,
                            MessageBoxImage.Warning);
                    });
                }
            }
        })
        .OnException((x) =>
        {
            MessageBox.Show(Window.GetWindow(me), x.Message, "Download failed",
                MessageBoxButton.OK, MessageBoxImage.Exclamation);
        });

    The above code shows how to get exception callbacks on the UI thread so that you can take the necessary actions in the UI. It also shows how you can chain two parallel pieces of work to happen one after another.

    Sometimes you want to do some parallel work when the user does something in the UI. For example, you might want to save the file in an editor every 10 seconds while the user is typing. In that case, you need to make sure you don't start another piece of parallel work every 10 seconds while one is already queued; you should start new work only when there's no other background work going on. Here's how you can do it:

        private void ContentEditor_TextChanged(object sender, EventArgs e)
        {
            if (!ParallelWork.IsAnyWorkRunning())
            {
                ParallelWork.StartAfter(SaveAndRefreshDiagram, TimeSpan.FromSeconds(10));
            }
        }

    If you want to shut down your application and make sure no parallel work is still going on, you can call the StopAll() method:

        ParallelWork.StopAll();

    If you want to wait for the parallel work to complete, you can call WaitForAllWork(TimeSpan timeout). It will block the current thread until all the parallel work completes or the timeout period elapses.

        result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1));

    The result is true if all the parallel work completed. If it is false, the timeout period elapsed before all the parallel work completed.

    For details on how this library is built and how it works, please read the following CodeProject article: ParallelWork: Feature rich multithreaded fluent task execution library for WPF - http://www.codeproject.com/KB/WPF/parallelwork.aspx. If you like the article, please vote for me.

    Read the article

  • SQL SERVER – #TechEdIn – Presenting Tomorrow on Speed Up! – Parallel Processes and Unparalleled Performance at TechEd India 2012

    - by pinaldave
    Performance tuning is always a very hot topic when it comes to SQL Server. SQL Server performance tuning is a very challenging subject that requires expertise in both Database Administration and Database Development. I have always enjoyed talking about the subject of SQL Server performance tuning. However, in India it is actually the very first time someone is presenting on this interesting subject, so this time I had the biggest challenge in presenting this session. Frequently enough, we get these two kinds of questions: How do I turn off parallelism, as it is reducing performance? How do I turn on parallelism, as I want more performance? The reality is that not everyone knows what exactly is needed by their system. In this session, I have attempted to answer this very question. I've decided to provide a balanced view but stay away from theory, which leads us to say "It depends". The session will have a clear message about this towards its end.

    Deck Details
    Slides: 45+
    Demos: 7+
    Bonus Quiz: 5
    Images: 10+
    Session delivery time: 52 mins + 8 mins of Q&A

    I have presented this session a couple of times to my friends and so far have received good feedback. Oftentimes, when people hear that I am going to present 45 slides, they say it is too much to cover. However, when I am done with the session, the usual reaction is that I truly did justice to those slides.

    Action Item
    Here are a few action items for all of those who are going to attend this session:
    If you want to attend the session, just come early. There is a good chance that you may not get a seat, because right before me there is a session from SQL guru Vinod Kumar. He delivers a million concepts powerfully in just a little time.
    Quiz: I will be asking a few questions during the session, as well as before the session starts. If you give the correct answer, I will give you unique learning material. You will not want to miss this learning opportunity at any cost.

    Session Details
    Title: Speed Up! – Parallel Processes and Unparalleled Performance (Add to Calendar)
    Abstract: "More CPU, More Performance" – a very common understanding is that the use of multiple CPUs can improve the performance of a query. To get maximum performance out of any query, one has to master various aspects of parallel processes. In this deep-dive session, we will explore this complex subject with a very simple interactive demo. Attendees will walk away with a proper understanding of CX_PACKET wait types, MAXDOP, the parallelism threshold, and various other concepts.
    Date and Time: March 23, 2012, 12:15 to 13:15
    Location: Hotel Lalit Ashok - Kumara Krupa High Grounds, Bengaluru – 560001, Karnataka, India. Add to Calendar

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • PBS batch jobs - the qalter command

    - by Ryan Budney
    I've got a giant computation running on a Scientific Linux cluster. At present I have over 600 jobs parked in the queue, waiting for processor time, while a few are running. I'm trying to use the qalter command on some of the idle but scheduled jobs. I'd like to schedule them for a later time, so that other users can jump part of the queue, sort of as an act of politeness. Is this doable? For example, JOBNAME 292399 is currently idle, scheduled to be run whenever a spot in the queue opens up. But if I run qalter -a 10051000 292398 followed by qrerun 292398 I get qrerun: Request invalid for state of job 292398.euler. From the qalter documentation, I thought 10051000 refers to tomorrow (oct 5th, 10am) but perhaps I'm misunderstanding something? If I'm going about this the wrong way, please let me know. The main thing I'm looking for is a command that's easily scriptable, so that I can modify when my queued tasks get run. qalter seems good for those purposes if I can get it working. I'd rather avoid running qdel and re qsubbing the computations, as there's a bookkeeping issue on which tasks to restart (vs which ones not to). I want to avoid that kind of bookkeeping. From googling around I notice some qalter commands have rather different date formats, but the above appears to be correct, as far as I can tell from the man docs. Any help would be appreciated.

    Read the article

  • mpirun -np N, what if N is larger than my core number?

    - by Daniel
    Say I have a 4-core workstation. What would Linux (Ubuntu) do if I execute mpirun -np 9 XXX?

    Q1. Will all 9 processes run together immediately, or will they run 4 at a time?
    Q2. I suppose that using 9 is not good because of the leftover 1; will it confuse the computer? (I don't know whether it gets confused at all, or whether the "head" of the computer decides which of the 4 cores will be used, or whether one is picked randomly.) Who decides which core to use?
    Q3. If my CPU is decent, my RAM is okay and large enough, and my case is not very big, is it a good idea, in order to fully use my CPU and RAM, to run mpirun -np 8 XXX, or even mpirun -np 12 XXX?
    Q4. Who decides all of this efficiency optimization: Ubuntu, Linux, the motherboard, or the CPU?

    Your enlightenment would be really appreciated.

    Read the article

  • netcat as a multithread server

    - by etuardu
    Hello, I use netcat to run a simple server like this: while true; do nc -l -p 2468 -e ./my_exe; done This way, anyone is able to connect to my host on port 2468 and talk with "my_exe". Unfortunately, if someone else tries to connect during an open session, they get a "Connection refused" error, because netcat is no longer listening until the next iteration of the "while" loop. Is there a way to make netcat behave like a multithreaded server, i.e. always listening for incoming connections? If not, are there workarounds for this? Thank you all!

    Read the article

  • Optimum number of threads while multitasking

    - by Gun Deniz
    I know similar questions have been asked, but I think my case is a little bit different. Let's say I have a computer with 8 cores and infinite memory, running a Linux OS. I have calculation software called Gaussian that can take advantage of multithreading, so I set its thread count to 8 for a single calculation to get maximum speed. However, I really can't decide what to do when I need to run, for instance, 8 calculations simultaneously. In that case, should I set the thread count to 1 (8 threads total, spawned across 8 processes) or keep it at 8 (64 threads total, spawned across 8 processes) for each job? Does it really matter much? A related question: does the OS automatically assign each thread to a different core?

    Read the article

  • What kind of CPU/GPU integration is offered by APUs?

    - by clabacchio
    I'm truly fascinated by the idea of GPGPU and using the GPU for heavy processing. I'm seeing that APUs (Accelerated Processing Units, a CPU+GPU on the same chip) are also gaining considerable popularity. Do all APUs support GPGPU? Can the GPU be used for general processing? And is it seamless, or does it require special code (like CUDA) to have the hard work done by the GPU? I'm not interested in raw graphics performance, but rather in how much the GPU can accelerate "normal" CPU work.

    Read the article

  • Request Multiple Maya Floating Server Licenses for extra Satellite clients

    - by Rob
    Hello all: I am currently setting up a 'render farm' for Maya 2008 Unlimited. One Maya workstation license comes with the ability to render on eight satellite nodes. It works perfectly; the remote rendering works like a charm. However, we have additional boxes to set up as satellite rendering nodes, and we have extra Maya workstation licenses. Ideally, the workstation could take two licenses and thus render on 16 nodes, but I haven't been able to figure it out, or determine whether it is actually possible. It's a big project, where rendering the entire thing takes on the order of weeks, so the speedup would be worth it. Any thoughts?

    Read the article

  • MS-DOS application sending screen output to LPT printer

    - by gadget00
    We have an MS-DOS application (coded in FoxPro), and recently hit this glitch: for no reason, the application's screen menu starts printing on an LPT Panasonic KX-1150 printer. It's a never-ending printout of all the application's screens, as if the main output were being sent to the printer instead of the monitor! It creates an unnamed document with N/D pages and keeps printing forever. We have to turn the printer off and then kill the document in the spooler to stop it... The printer is installed with a Generic/Text driver, and this has happened to us on both Windows XP and Windows 7. What could this be? Thanks in advance.

    Read the article

  • Why does F@H not bind to more than one core on Windows?

    - by warren
    I have been contributing to Stanford's Folding@Home project for some time with most of the computers I own. I just installed the Windows client on a new machine running Windows 7, but see that the F@H process only binds to one CPU core. Is this due to it being run on Windows? (I have the 64-bit edition of Windows 7 installed.) On the Mac and under 64-bit Linux distros, it will run across all available CPU cores.

    Read the article

  • How to Profile R Code that Includes SNOW Cluster

    - by James
    Hi, I have a nested loop that I'm solving with foreach, doSNOW, and a SNOW socket cluster. How should I go about profiling the code to make sure I'm not doing something grossly inefficient? Also, is there any way to measure the data flowing between the master and the nodes in a SNOW cluster? Thanks, James

    Read the article
