Search Results

Search found 7697 results on 308 pages for 'font lock'.


  • MS SQL Specific Tables Hanging at Queries

    - by Jonn
    I have SQL Server 2008, and the weirdest thing keeps happening when I run a query against one table in a database. I run a simple select statement on the table. I know there are 62 rows in it, but the query gets stuck at row 48 and just sits at "querying...". I have already waited for hours and it never moves past that point. As far as I know, only two programs, one reporting service, and one other user connect to that particular table. Does anyone have any idea what could be causing this, and how I could trace the source of the lock on that table? As a side note, I noticed that the logs contain a notice that autogrow failed the day before I checked. Could this have something to do with it?

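    A good first step is to look for blocking sessions in SQL Server's dynamic management views. Below is a minimal C# sketch (it assumes permission to read the DMVs, i.e. VIEW SERVER STATE, and the connection string is a placeholder) that lists requests currently blocked by another session; blocking_session_id identifies the session holding the lock.

        using System;
        using System.Data.SqlClient;

        class BlockingCheck
        {
            static void Main()
            {
                const string query =
                    @"SELECT session_id, blocking_session_id, wait_type, wait_time
                        FROM sys.dm_exec_requests
                       WHERE blocking_session_id <> 0";

                using (var conn = new SqlConnection("<your connection string>"))
                using (var cmd = new SqlCommand(query, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // One row per blocked request, with the blocker's session id.
                            Console.WriteLine("session {0} blocked by {1} ({2}, {3} ms)",
                                reader["session_id"], reader["blocking_session_id"],
                                reader["wait_type"], reader["wait_time"]);
                        }
                    }
                }
            }
        }

    The failed autogrow is also worth following up separately: a data file that cannot grow can stall writers, which in turn can block readers.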

  • Is MSDN referencing a System.Thread, a worker thread, an I/O thread, or all three?

    - by w0051977
    Please see the warning below, taken from the StreamWriter class documentation (http://msdn.microsoft.com/en-us/library/system.io.streamwriter.aspx): "Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe." I understand that a w3wp process contains two thread pools, i.e. worker threads and I/O threads. The application could also create many threads of its own (System.Thread instances). Does the warning relate only to System.Thread instances, or does it relate to worker threads and I/O threads as well? I.e., since the instance members of the StreamWriter class are not thread safe, would there be problems if multiple worker threads access one instance? E.g., if two users on two different web clients attempt to write to the log file at the same time, could one lock out the other?

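    The warning applies to any thread that calls instance members concurrently, whichever pool it came from: two worker threads writing to one shared StreamWriter can interleave or corrupt output just as two System.Thread instances can. A minimal hedged sketch of guarding a shared writer (the class and path names are illustrative):

        using System.IO;

        static class SafeLog
        {
            static readonly object Gate = new object();
            static readonly StreamWriter Writer =
                new StreamWriter(@"C:\logs\app.log", append: true);

            public static void Write(string message)
            {
                lock (Gate)                  // serialize all instance-member calls
                {
                    Writer.WriteLine(message);
                    Writer.Flush();
                }
            }
        }

    Alternatively, TextWriter.Synchronized(writer) returns a wrapper that takes a lock around each call for you.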

  • Can an Oracle user get a list of its own running sessions without access to v$session?

    - by Nick Pierpoint
    I have an application that runs a process, and I only want one process to run at a time. Some options are:
    - Use an object lock to prevent subsequent processes from running. This would be fine, but I want the calling session to return immediately rather than wait for the running session to complete.
    - Use a custom Y/N flag to record whether a process is running. I would set the flag to "Y" at the start of the process and to "N" when it finishes or fails. Also fine, but it feels like I'm re-inventing the wheel and doesn't feel like the way to go. It also falls short if the running session is killed, as the flag stays at "Y".
    - Use dbms_application_info.set_module. This approach seems the most robust, but to know there's an existing running process I think I need to be able to query v$session, and I don't want this application to have such wide access.
    Any ideas?


  • How to address thread-safety of service data used for maintaining static local variables in C++?

    - by sharptooth
    Consider the following scenario. We have a C++ function with a static local variable:

        void function()
        {
            static int variable = obtain();
            //blahblahblah
        }

    The function needs to be called from multiple threads concurrently, so we add a critical section to avoid concurrent access to the static local:

        void functionThreadSafe()
        {
            CriticalSectionLockClass lock( criticalSection );
            static int variable = obtain();
            //blahblahblah
        }

    But will this be enough? I mean, there's some magic that makes the variable be initialized no more than once, so there's some service data maintained by the runtime that indicates whether each static local has already been initialized. Will the critical section in the above code protect that service data as well? Is any extra protection required for this scenario?


  • Find -type d with no subfolders

    - by titatom
    Good morning! This is a simple one, I believe, but I am still a noob :) I am trying to find all folders with a certain name. I am able to do this with the command:

        find /path/to/look/in/ -type d | grep .texturedata

    The output gives me lots of folders like this:

        /path/to/look/in/.texturedata/v037/animBMP

    But I would like it to stop at .texturedata:

        /path/to/look/in/.texturedata/

    I have hundreds of these paths and would like to lock them down by piping the output of grep into chmod 000. I was given a command with the argument -dpe once, but I have no idea what it does, and the Internet has not been able to help me determine its usage. Thank you very much for your help!


  • Location inheritInChildApplications kills debugger?

    - by chobo2
    Hi, I am wondering: is it normal that the debugger stops working when you add this to your web.config?

        <location path="." inheritInChildApplications="false">
        </location>

    When I add this to my site and try to run in debug mode, it won't hit any of my breakpoints, nor will it lock up Visual Studio 2008; I can have it running and still make edits to my C# code. If I take the line away, I get debug mode back and it locks up VS2008 again.


  • Color in Cygwin terminal

    - by ForbesLindesay
    I've installed Cygwin because I'm a bit fed up with the Windows terminal not being great. The only problem I'm having is the lack of colours. You can see the problem in the following two screenshots, which display the same command. All I want is something which has a nice font, resizes properly (including proper behaviour when maximised), and supports colours. Ideally I'd like tabs too. This seems like a silly reason to end up buying a Mac, so I'm hoping I can get all these things on Windows somehow.


  • Freezing a ListBoxItem while items are being added

    - by siz
    We have a ListBox with a number of items, inserted via an ObservableCollection. Some of these items can be edited right in the ListBox. However, if an item is added at an index lower than the edited item's index, the entire content of the ListBox moves down. What we'd like to do is the following: if an item is in edit mode, we'd like to freeze its position on the screen. It is fine if items are added to the collection and the UI around the item changes, but the position of the edited item should remain constant on the screen. The only thing I've been able to do so far is attach to the ScrollChanged event and use either the BringIntoView or ScrollIntoView methods to ensure that the item is always displayed somewhere in the UI, but I am unable to lock down its position. Has anyone done something like this and can help out?

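    One approach, sketched below under stated assumptions, is to compensate for each insertion above the edited item by scrolling by the same number of items. With the ListBox's default item-based scrolling (ScrollViewer.CanContentScroll = true), one vertical scroll unit corresponds to one item. The fields and the FindScrollViewer helper are illustrative, not a known API:

        using System.Collections.Specialized;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;

        // Inside the view's code-behind:
        int editedIndex = -1;            // index of the item currently in edit mode
        ScrollViewer scrollViewer;       // the ListBox's internal ScrollViewer

        void OnCollectionChanged(object sender, NotifyCollectionChangedEventArgs e)
        {
            if (e.Action != NotifyCollectionChangedAction.Add) return;
            if (editedIndex < 0 || e.NewStartingIndex > editedIndex) return;

            editedIndex += e.NewItems.Count;                      // track the edited item
            scrollViewer.ScrollToVerticalOffset(
                scrollViewer.VerticalOffset + e.NewItems.Count);  // shift the viewport with it
        }

        static ScrollViewer FindScrollViewer(DependencyObject root)
        {
            for (int i = 0; i < VisualTreeHelper.GetChildrenCount(root); i++)
            {
                var child = VisualTreeHelper.GetChild(root, i);
                var result = child as ScrollViewer ?? FindScrollViewer(child);
                if (result != null) return result;
            }
            return null;
        }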

  • Exception in C#

    - by user1803513
    I am facing a null pointer exception (NullReferenceException) in the code below. It happens very rarely, and I have tried to debug and replicate the issue, but with no luck. Can anybody tell me what could cause a null reference here?

        private static void MyTaskCompletedCallback(IAsyncResult res)
        {
            var worker = (AsyncErrorDelegate)((AsyncResult)res).AsyncDelegate;
            var async = (AsyncOperation)asyncResult.AsyncState;

            worker.EndInvoke(res);

            lock (IsAsyncOpOccuring)
            {
                IsBusy = false;
            }

            var completedArgs = new AsyncCompletedEventArgs(null, false, null);
            async.PostOperationCompleted(e => OnTaskCompleted((AsyncCompletedEventArgs)e), completedArgs);
        }

    The exception is reported at: var async = (AsyncOperation)asyncResult.AsyncState;

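    Two hedged observations rather than a diagnosis. First, the callback's parameter is res, but the failing line reads asyncResult; if asyncResult is a different field or captured variable, it may be null even when res is not. Second, AsyncState is exactly the state object that was passed to BeginInvoke, so it is null whenever the delegate was started without one. A sketch of the invocation side (DoWork is a hypothetical method matching AsyncErrorDelegate):

        using System.ComponentModel;

        AsyncOperation asyncOp = AsyncOperationManager.CreateOperation(null);
        AsyncErrorDelegate worker = DoWork;

        // The second argument becomes IAsyncResult.AsyncState in the callback.
        // If null is passed here, the cast yields null and any later member
        // access on it throws NullReferenceException.
        worker.BeginInvoke(MyTaskCompletedCallback, asyncOp);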

  • Linux virtualized screen resolution

    - by vladev
    Hopefully there is a positive answer to this question: I have a 15.4" laptop with native screen resolution of 1920x1200. You can imagine that everything is completely unreadable by default. If I increase the font size it becomes readable, but ugly. Is it possible to set the "real" resolution to 1920x1200 so it plays nice with the monitor, but set some "virtual" resolution of 1440x900 so that everything starts looking nice. Note: If I just change the resolution to 1440x900 everything becomes blurry, since this is not the monitor's default resolution. I know that having a small monitor with high resolution is not very optimal - not my choice. (Using nvidia GF8400M)


  • Best way of obfuscating / encrypting form data on the iPhone

    - by cannyboy
    I want to create an app which holds sensitive information (imagine it's bank account details, though it's not). The user enters this information on a form the first time the app starts up. I want this info to be saved, and available, any time the user uses the app (without having to enter a password). However, if the iPhone has a passcode lock on it and is stolen, I don't want the data to be easily accessible from the file system. What is the best way of encrypting or obfuscating the data? There is not a lot of data, just a dozen NSStrings from the UITextFields on the form. I'm aware there are encryption export restrictions on the iPhone for non-US developers (I am in the UK), so I would prefer to avoid jumping through any of Apple's app submission hoops to get it on the store.


  • JavaScript auto calculating

    - by Josh
    I have a page that automatically calculates a total when digits are entered into the fields or the plus or minus buttons are pressed. I need to add a second input after the total that automatically divides the total by 25. Here is the working code, with no JavaScript yet for the division part:

        <html>
        <head>
        <script type="text/javascript">
        function Calc(className) {
            var elements = document.getElementsByClassName(className);
            var total = 0;
            for (var i = 0; i < elements.length; ++i) {
                total += parseFloat(elements[i].value);
            }
            document.form0.total.value = total;
        }
        function addone(field) {
            field.value = Number(field.value) + 1;
            Calc('add');
        }
        function subtractone(field) {
            field.value = Number(field.value) - 1;
            Calc('add');
        }
        </script>
        </head>
        <body>
        <form name="form0" id="form0">
        1: <input type="text" name="box1" id="box1" class="add" value="0" onKeyUp="Calc('add')" onChange="updatesum()" onClick="this.focus();this.select();" />
        <input type="button" value=" + " onclick="addone(box1);">
        <input type="button" value=" - " onclick="subtractone(box1);">
        <br />
        2: <input type="text" name="box2" id="box2" class="add" value="0" onKeyUp="Calc('add')" onClick="this.focus();this.select();" />
        <input type="button" value=" + " onclick="addone(box2);">
        <input type="button" value=" - " onclick="subtractone(box2);">
        <br />
        3: <input type="text" name="box3" id="box3" class="add" value="0" onKeyUp="Calc('add')" onClick="this.focus();this.select();" />
        <input type="button" value=" + " onclick="addone(box3);">
        <input type="button" value=" - " onclick="subtractone(box3);">
        <br />
        <br />
        Total: <input readonly style="border:0px; font-size:14px; color:red;" id="total" name="total">
        <br />
        Total divided by 25: <input readonly style="border:0px; font-size:14px; color:red;" id="divided" name="divided">
        </form>
        </body>
        </html>

    I have the right details, but the formulas I have tried completely break other aspects of the code. I can't figure out how to make the auto-adding and auto-dividing work at the same time.


  • How can I determine new & previous cell value on SheetChange event in Excel?

    - by Falco Foxburr
    I have some special cells in my Excel workbooks which are managed by my Excel add-in. I want to prevent users from changing the content of those cells, but I also want to know what value the user wanted to enter. On the SheetChange event I can check what the user entered into one of my special cells, but how do I determine the PREVIOUS value of the cell, and how do I REVERT the change? Locking the cell is not a solution for me: a locked cell in Excel becomes read-only, so the user cannot even try to enter anything; Excel pops up a warning dialog instead. My problem is that I want to catch what the user entered into my cell, do something with that value, and then revert the cell content to the original value.

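    A common pattern for this, sketched here for a C# (VSTO-style) add-in, is to cache the cell's value on SheetSelectionChange and restore it on SheetChange; EnableEvents prevents the revert from re-firing SheetChange. The app field and HandleAttemptedValue are illustrative names, and the check that the target is one of the special cells is omitted:

        using Excel = Microsoft.Office.Interop.Excel;

        Excel.Application app;        // set when the add-in starts
        object previousValue;         // cell value before the edit began

        void HookEvents()
        {
            app.SheetSelectionChange += (sheet, target) =>
                previousValue = target.Value2;            // remember the pre-edit value

            app.SheetChange += (sheet, target) =>
            {
                object attempted = target.Value2;         // what the user tried to enter
                HandleAttemptedValue(attempted);          // do something with it

                app.EnableEvents = false;                 // don't re-enter SheetChange
                target.Value2 = previousValue;            // revert to the original value
                app.EnableEvents = true;
            };
        }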

  • Radio Button Validation

    - by Sirojan Gnanaretnam
    I am trying to validate the radio buttons using JavaScript, but I couldn't get it to work. Can anyone please help me fix this issue? I have attached my code below. Thanks.

        <form action="submitAd.php" method="POST" enctype="multipart/form-data" name="packages" onsubmit="return checkForm()">
            <div id="plans_pay">
                <input type="radio" name="group1" id="r1" value="Office" onchange="click_Pay_Office()" style="float:left;margin-top:20px;font-size:72px;">
                <label style="float:left; margin-top:20px;" for="pay_office">At Our Office</label>
                <img style="float:left;margin-bottom:10px;" src="images/Pay-at-office.png" />
            </div>
            <div id="plans_pay">
                <input style="float:left;margin-top:20px;font-size:72px;" type="radio" name="group1" id="r2" value="HNB" onchange="click_Pay_Hnb()">
                <label style="float:left; margin-top:20px;" for="pay_hnb">At Any HNB Branch</label>
                <img style="float:left;margin-bottom:10px;" src="images/HNB.png" />
            </div>
        </form>

    JavaScript:

        function checkForm() {
            if (document.packages.pso.checked == false &&
                document.packages.pso1.checked == false &&
                document.packages.ph.checked == false &&
                document.packages.ph2.checked == false &&
                document.packages.ph3.checked == false &&
                document.packages.pl.checked == false &&
                document.packages.p3.checked == false &&
                document.packages.p4.checked == false &&
                document.packages.p5.checked == false &&
                document.packages.p6.checked == false) {
                alert('Please Select At Least One Package');
                return false;
            }
            if (document.packages.pso.checked == false &&
                document.packages.pso1.checked == false &&
                document.packages.ph.checked == false &&
                document.packages.ph.checked == false &&
                document.packages.ph2.checked == false &&
                document.packages.ph3.checked == false &&
                document.packages.pl.checked == false &&
                document.packages.p3.checked == false &&
                document.packages.p4.checked == false &&
                document.packages.p5.checked == false &&
                document.packages.p6.checked == false) {
                alert('Please Select At Least One with the Advertise online option in premium package');
                return false;
            }
            if (document.getElementById('words').value == '') {
                alert("Please Enter the Texts");
                return false;
            }
            if (document.getElementById('r1').checked == false &&
                document.getElementById('r2').checked == false) {
                alert("Please Select a Payment Method");
                return false;
            }
        }


  • Break the limit of threading, segmentation fault

    - by user353573
    I use pthread_create to create a limited number of threads running concurrently. This compiled and ran successfully. However, after adding a function-pointer array to run the work functions, I get a segmentation fault. Where is this wrong? The output is:

        workserver number: 0
        Segmentation fault

    The code:

        void* workserver(void* arg)
        {
            int status;
            while (true) {
                printf("workserver number: %d\n", (int)arg);
                (*job_queue[(int)arg])();
                sleep(3);

                status = pthread_mutex_lock(&data.mutex);
                if (status != 0) printf("%d lock mutex", status);

                data.value = 1;

                status = pthread_cond_signal(&data.cond);
                if (status != 0) printf("%d signal condition", status);

                status = pthread_mutex_unlock(&data.mutex);
                if (status != 0) printf("%d unlock mutex", status);
            }
        }


  • C#, Manage concurrency in database access

    - by Goul
    Hi there, I wrote an application a while ago that is used by multiple users to handle trade creation. I haven't done development for some time now and can't remember how I managed the concurrency between the users, so I would like your advice on the design. The application was as follows:
    - One heavy client per user
    - A single database
    - Access to the database for each user to insert/update/delete trades
    - A grid in the application reflecting the trades table, updated each time someone changes a deal.
    My questions:
    1. Can you confirm that I shouldn't worry about the database connections themselves? Given that each client uses a singleton, I would expect one connection per client with no issue.
    2. How do I prevent concurrent accesses from conflicting? I guess I should lock when modifying the data, but I don't remember how.
    3. How can the grid be automatically updated whenever the database changes (by another user, for example)?
    Thank you in advance for your help!

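    For question 2, one widely used approach is optimistic concurrency: add a version column to the trades table and make every UPDATE conditional on the version the client last read. A hedged C# sketch (table and column names are illustrative):

        using System.Data;
        using System.Data.SqlClient;

        // Succeeds only if nobody changed the row since this client read it.
        using (var cmd = new SqlCommand(
            @"UPDATE Trades
                 SET Price = @price, RowVersion = RowVersion + 1
               WHERE TradeId = @id AND RowVersion = @version", connection))
        {
            cmd.Parameters.AddWithValue("@price", newPrice);
            cmd.Parameters.AddWithValue("@id", tradeId);
            cmd.Parameters.AddWithValue("@version", originalVersion);

            if (cmd.ExecuteNonQuery() == 0)   // 0 rows updated: someone else won the race
                throw new DBConcurrencyException("Trade was modified by another user.");
        }

    For question 3, clients typically either poll the trades table for changes or use a push mechanism such as SqlDependency; which fits depends on how fresh the grid needs to be.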

  • Queues And Wait Handles in C#

    - by Michael Covelli
    I've had the following code in my application for some years and have never seen an issue with it:

        while ((PendingOrders.Count > 0) || (WaitHandle.WaitAny(CommandEventArr) != 1))
        {
            lock (PendingOrders)
            {
                if (PendingOrders.Count > 0)
                {
                    fbo = PendingOrders.Dequeue();
                }
                else
                {
                    fbo = null;
                }
            }

            // Do some work if fbo != null
        }

    CommandEventArr is made up of the NewOrderEvent (an auto-reset event) and the ExitEvent (a manual-reset event). But I just realized today that it's not thread safe at all. If this thread gets interrupted right after the first (PendingOrders.Count > 0) check has returned false, and the other thread both enqueues an order and sets the NewOrderEvent before I get a chance to wait on it, the body of the while loop will never run. What's the usual pattern used with a Queue and an AutoResetEvent to fix this and do what I'm trying to do with the code above?

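    On .NET 4 and later, one way to remove the check-then-wait race entirely is BlockingCollection<T>, which makes "dequeue, or block until an item or shutdown arrives" a single atomic operation. A hedged sketch (FillableOrder stands in for the queue's element type):

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        var pendingOrders = new BlockingCollection<FillableOrder>();
        var exit = new CancellationTokenSource();   // replaces the ExitEvent

        try
        {
            // Blocks when empty and wakes per item; there is no separate
            // event whose signal can be missed.
            foreach (var fbo in pendingOrders.GetConsumingEnumerable(exit.Token))
            {
                // Do some work with fbo
            }
        }
        catch (OperationCanceledException)
        {
            // exit.Cancel() was called; shut down.
        }

    Producers simply call pendingOrders.Add(order); shutdown is exit.Cancel(), or CompleteAdding() if the queue should drain first.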

  • Serialization of a TChan String

    - by J Fritsch
    I have declared the following:

        type KEY  = (IPv4, Integer)
        type TPSQ = TVar (PSQ.PSQ KEY POSIXTime)
        type TMap = TVar (Map.Map KEY [String])

        data Qcfg = Qcfg { qthresh :: Int
                         , tdelay  :: Rational
                         , cwpsq   :: TPSQ
                         , cwmap   :: TMap
                         , cwchan  :: TChan String
                         } deriving (Show)

    and would like this to be serializable, in the sense that Qcfg can either be written to disk or sent over the network. When I compile this I get the error:

        No instances for (Show TMap, Show TPSQ, Show (TChan String))
          arising from the 'deriving' clause of a data type declaration
        Possible fix:
          add instance declarations for (Show TMap, Show TPSQ, Show (TChan String))
          or use a standalone 'deriving instance' declaration,
          so you can specify the instance context yourself

    When deriving the instance for (Show Qcfg), I am now not quite sure whether there is any chance of serializing my TChan at all, even though all the individual items in it are members of the Show class. For TMap and TPSQ, I wonder whether there are ways to show the values in the TVar directly (because they do not get changed, so there should be no need to lock them) without having to declare an instance that does a readTVar?


  • Insert a row and avoiding race condition (PHP/MySQL)

    - by justkevin
    I'm working on a multiplayer game which has a lobby-like area where players select "sectors" to enter. The lobby gateway is powered by PHP, while actual gameplay is handled by one or more Java servers. The datastore is MySQL. The happy path:
    1. A player chooses a sector and tells the lobby he'd like to enter.
    2. The lobby checks whether this is okay, including whether there are too many players in the sector (it compares the entry count in sector_assignments for that sector against the sector's max_players value).
    3. The player is added to the sector_assignments table, pairing him with the sector.
    4. The player client receives a passkey that will let him connect to the appropriate game server.
    The race condition: if two players request access to the same sector at close to the same time, I can envision a case where both are added because there was one space free when their checks started, and max_players gets exceeded. Is the best solution LOCK TABLE on sector_assignments? Is there another option?


  • Getting deadlocks in MySQL

    - by at
    We're getting deadlocks in MySQL, which is very frustrating. It isn't because of exceeding a lock timeout, as the deadlocks happen instantly when they occur. Here's the SQL that executes on two separate threads (with two separate connections from the connection pool) and produces a deadlock:

        UPDATE Sequences
           SET Counter = LAST_INSERT_ID(Counter + 1)
         WHERE Sequence IS NULL

    The Sequences table has two columns: Sequence and Counter. LAST_INSERT_ID allows us to retrieve the updated counter value, as per MySQL's recommendation. That works perfectly for us, but we get these deadlocks! Why are we getting them, and how can we avoid them? Thanks so much for any help with this.


  • monitor and kill runaway processes using 100% IO?

    - by bleomycin
    Hello everyone, I have a few processes that have to be run at high priority (chrt 98) that will occasionally decide to hard-lock and peg one core at 100% (not a huge deal), but more importantly they will use all the I/O on the system, so much that it's impossible to log into the machine via ssh to kill them, or to perform any task on the machine that isn't already loaded into RAM. If I happen to have something like htop already running, I am able to end the process fine. Is there any type of utility, or any way, to monitor for this type of runaway process and kill anything that uses 100% of system I/O for more than X amount of time? Thanks!


  • Parallelism in .NET – Part 3, Imperative Data Parallelism: Early Termination

    - by Reed
    Although simple data parallelism allows us to easily parallelize many of our iteration statements, there are cases that it does not handle well. In my previous discussion, I focused on data parallelism with no shared state, and where every element is being processed exactly the same. Unfortunately, there are many common cases where this does not happen. If we are dealing with a loop that requires early termination, extra care is required when parallelizing. Often, while processing in a loop, once a certain condition is met, it is no longer necessary to continue processing. This may be a matter of finding a specific element within the collection, or reaching some error case. The important distinction here is that it is often impossible to know, until runtime, what set of elements needs to be processed. In my initial discussion of data parallelism, I mentioned that this technique is a candidate when you can decompose the problem based on the data involved, and you wish to apply a single operation concurrently on all of the elements of a collection. This covers many of the potential cases, but sometimes, after processing some of the elements, we need to stop processing. As an example, let's go back to our previous Parallel.ForEach example with contacting a customer. However, this time, we'll change the requirements slightly. In this case, we'll add an extra condition: if the store is unable to email the customer, we will exit gracefully. The thinking here, of course, is that if the store is currently unable to email, the next time this operation runs, it will handle the same situation, so we can just skip our processing entirely. The original, serial case, with this extra condition, might look something like the following:

        foreach (var customer in customers)
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                // Exit gracefully if we fail to email, since this
                // entire process can be repeated later without issue.
                if (theStore.EmailCustomer(customer) == false)
                    break;

                customer.LastEmailContact = DateTime.Now;
            }
        }

    Here, we're processing our loop, but at any point, if we fail to send our email successfully, we just abandon this process, and assume that it will get handled correctly the next time our routine is run. If we try to parallelize this using Parallel.ForEach, as we did previously, we'll run into an error almost immediately: the break statement we're using is only valid when enclosed within an iteration statement, such as foreach. When we switch to Parallel.ForEach, we're no longer within an iteration statement; we're a delegate running in a method. This needs to be handled slightly differently when parallelized.
    Instead of using the break statement, we need to utilize a new class in the Task Parallel Library: ParallelLoopState. The ParallelLoopState class is intended to allow concurrently running loop bodies a way to interact with each other, and provides us with a way to break out of a loop. In order to use this, we will use a different overload of Parallel.ForEach which takes an IEnumerable<T> and an Action<T, ParallelLoopState> instead of an Action<T>. Using this, we can parallelize the above operation by doing:

        Parallel.ForEach(customers, (customer, parallelLoopState) =>
        {
            // Run some process that takes some time...
            DateTime lastContact = theStore.GetLastContact(customer);
            TimeSpan timeSinceContact = DateTime.Now - lastContact;

            // If it's been more than two weeks, send an email, and update...
            if (timeSinceContact.Days > 14)
            {
                // Exit gracefully if we fail to email, since this
                // entire process can be repeated later without issue.
                if (theStore.EmailCustomer(customer) == false)
                    parallelLoopState.Break();
                else
                    customer.LastEmailContact = DateTime.Now;
            }
        });

    There are a couple of important points here. First, we didn't actually instantiate the ParallelLoopState instance. It was provided directly to us via the Parallel class. All we needed to do was change our lambda expression to reflect that we want to use the loop state, and the Parallel class creates an instance for our use. We also needed to change our logic slightly when we call Break(). Since Break() doesn't stop the program flow within our block, we needed to add an else case to only set the property in customer when we succeeded. This same technique can be used to break out of a Parallel.For loop. That being said, there is a huge difference between using ParallelLoopState to cause early termination and using break in a standard iteration statement. When dealing with a loop serially, break will immediately terminate the processing within the closest enclosing loop statement. Calling ParallelLoopState.Break(), however, has a very different behavior. The issue is that, now, we're no longer processing one element at a time. If we break in one of our threads, there are other threads that will likely still be executing. This leads to an important observation about termination of parallel code: early termination in parallel routines is not immediate. Code will continue to run after you request a termination. This may seem problematic at first, but it is something you just need to keep in mind while designing your routine. ParallelLoopState.Break() should be thought of as a request. We are telling the runtime that no elements that were in the collection past the element we're currently processing need to be processed, and leaving it up to the runtime to decide how to handle this as gracefully as possible. Although this may seem problematic at first, it is a good thing. If the runtime tried to immediately stop processing, many of our elements would be partially processed. It would be like putting a return statement in a random location throughout our loop body, which could have horrific consequences to our code's maintainability. In order to understand and effectively write parallel routines, we, as developers, need a subtle but profound shift in our thinking. We can no longer think in terms of sequential processes, but rather need to think in terms of requests to the system that may be handled differently than we'd first expect.
    This is more natural to developers who have dealt with asynchronous models previously, but it is an important distinction when moving to concurrent programming models. As an example, I'll discuss the Break() method. ParallelLoopState.Break() functions in a way that may be unexpected at first. When you call Break() from a loop body, the runtime will continue to process all elements of the collection that were found prior to the element that was being processed when the Break() method was called. This is done to keep the behavior of the Break() method as close to the behavior of the break statement as possible. We can see the behavior in this simple code:

        var collection = Enumerable.Range(0, 20);
        var pResult = Parallel.ForEach(collection, (element, state) =>
        {
            if (element > 10)
            {
                Console.WriteLine("Breaking on {0}", element);
                state.Break();
            }
            Console.WriteLine(element);
        });

    If we run this, we get a result that may seem unexpected at first:

        0
        2
        1
        5
        6
        3
        4
        10
        Breaking on 11
        11
        Breaking on 12
        12
        9
        Breaking on 13
        13
        7
        8
        Breaking on 15
        15

    What is occurring here is that we loop until we find the first element where the element is greater than 10. In this case, this was found, the first time, when one of our threads reached element 11. It requested that the loop stop by calling Break() at this point. However, the loop continued processing until all of the elements less than 11 were completed, then terminated. This means that it will guarantee that elements 9, 7, and 8 are completed before it stops processing. You can see that each of our other running threads tried to break as well, but since Break() was called on the element with a value of 11, it decides which elements (0-10) must be processed. If this behavior is not desirable, there is another option. Instead of calling ParallelLoopState.Break(), you can call ParallelLoopState.Stop(). The Stop() method requests that the runtime terminate as soon as possible, without guaranteeing that any other elements are processed. Stop() will not stop the processing within an element, so elements already being processed will continue to be processed. It will prevent new elements, even ones found earlier in the collection, from being processed. Also, when Stop() is called, the ParallelLoopState's IsStopped property will return true. This lets longer running processes poll for this value, and return after performing any necessary cleanup. The basic rule of thumb for choosing between Break() and Stop() is the following: use ParallelLoopState.Stop() when possible, since it terminates more quickly. This is particularly useful in situations where you are searching for an element or a condition in the collection. Once you've found it, you do not need to do any other processing, so Stop() is more appropriate. Use ParallelLoopState.Break() if you need to more closely match the behavior of the C# break statement. Both methods behave differently than our C# break statement. Unfortunately, when parallelizing a routine, more thought and care needs to be put into every aspect of your routine than you may otherwise expect. This is due to my second observation: parallelizing a routine will almost always change its behavior. This sounds crazy at first, but it's a concept that's so simple it's easy to forget. We're purposely telling the system to process more than one thing at the same time, which means that the sequence in which things get processed is no longer deterministic.
It is easy to change the behavior of your routine in very subtle ways by introducing parallelism.  Often, the changes are not avoidable, even if they don’t have any adverse side effects.  This leads to my final observation for this post: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.

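    To make the Break()/Stop() contrast concrete, here is a small hedged sketch of the search scenario described above, using Stop(); Customer, Id, and targetId are illustrative:

        Customer found = null;
        Parallel.ForEach(customers, (customer, state) =>
        {
            if (state.IsStopped) return;      // another thread already found a match
            if (customer.Id == targetId)
            {
                found = customer;             // any single match is acceptable here
                state.Stop();                 // request termination as soon as possible
            }
        });

    Unlike Break(), no guarantee is made about elements earlier in the collection, which is exactly what a search wants.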

  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
    In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations. Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects. By introducing parallelism to our declarative model, we add some extra complexity. This, in turn, adds some extra requirements that must be addressed. In order to illustrate the main differences, and why they exist, let's begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ. LINQ to Objects is mainly built upon a single class: Enumerable. The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>. Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent-style interface. This is what allows us to write statements that chain together, and lead to the nice declarative programming model of LINQ:

        double min = collection
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    Other LINQ variants work in a similar fashion. For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database. PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable. When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel(). This method takes any IEnumerable<T>, and converts it into a ParallelQuery<T>, the core class for PLINQ. There is a similar ParallelQuery class for working with non-generic IEnumerable implementations. This brings us to our first subtle, but important, difference between PLINQ and LINQ: PLINQ always works upon specific types, which must be explicitly created. Typically, the type you'll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>. Instead of dealing with an interface, implemented by an unknown class, we're dealing with a specific class type. This works seamlessly from a usage standpoint: ParallelQuery<T> implements IEnumerable<T>, so you can always "switch back" to an IEnumerable<T>. The difference only arises at the beginning of our parallelization. When we're using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().
    There is an important consideration here: AsParallel() does not need to be called on your specific collection, but rather on any IEnumerable<T>. This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning. This can be useful if you have an operation which will not parallelize well or is not thread safe. For example, the following is perfectly valid, and similar to our previous examples:

        double min = collection
            .AsParallel()
            .Select(item => item.SomeOperation())
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    However, if SomeOperation() is not thread safe, we could just as easily do:

        double min = collection
            .Select(item => item.SomeOperation())
            .AsParallel()
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    In this case, we're using standard LINQ to Objects for the Select(...) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel. PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects. If SomeOperation() was thread-safe, but PerformComputation() was not thread-safe, we would need to handle this by using the AsEnumerable() method:

        double min = collection
            .AsParallel()
            .Select(item => item.SomeOperation())
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .AsEnumerable()
            .Min(item => item.PerformComputation());

    Here, we're converting our collection into a ParallelQuery<T>, doing our map operation (the Select(...) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially. This could also be written as two statements, which would allow us to use the language-integrated syntax for the first portion:

        var tempCollection = from item in collection.AsParallel()
                             let e = item.SomeOperation()
                             where (e.SomeProperty > 6 && e.SomeProperty < 24)
                             select e;

        double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation());

    This allows us to use the standard LINQ-style language-integrated query syntax, but control whether it's performed in parallel or serially by adding AsParallel() and AsEnumerable() appropriately. The second important difference between PLINQ and LINQ deals with order preservation. PLINQ, by default, does not preserve the order of the source collection. This is by design. In order to process a collection in parallel, the system needs to naturally deal with multiple elements at the same time. Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary. Therefore, by default, the system is allowed to completely change the order of your sequence during processing. If you are doing a standard query operation, this is usually not an issue. However, there are times when keeping a specific ordering in place is important. If this is required, you can explicitly request that the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method. This will cause our sequence ordering to be preserved. For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements.
    In LINQ to Objects, our code might look something like:

        // Using IEnumerable<SourceClass> collection
        IEnumerable<ResultClass> results = collection
            .Select(e => e.CreateResult())
            .Take(100);

    If we just converted this to a parallel query naively, like so:

        IEnumerable<ResultClass> results = collection
            .AsParallel()
            .Select(e => e.CreateResult())
            .Take(100);

    we could very easily get a very different, and non-reproducible, set of results, since the ordering of elements in the input collection is not preserved. To get the same results as our original query, we need to use:

        IEnumerable<ResultClass> results = collection
            .AsParallel()
            .AsOrdered()
            .Select(e => e.CreateResult())
            .Take(100);

    This requests that PLINQ process our sequence in a way that verifies that our resulting collection is ordered as if it were processed serially. This will cause our query to run slower, since there is overhead involved in maintaining the ordering. However, in this case, it is required, since the ordering is required for correctness. PLINQ is incredibly useful. It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we've used previously. There are some important differences in operation that must be considered, however; it is not a free pass to parallelize everything. When using PLINQ to parallelize your routines declaratively, the same guideline I mentioned before still applies: parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.

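    A small runnable sketch of the ordering difference described above; the unordered output varies from run to run, which is the point:

        using System;
        using System.Linq;

        var numbers = Enumerable.Range(0, 1000);

        var unordered = numbers.AsParallel()
                               .Select(n => n * 2)
                               .Take(5)
                               .ToList();     // an arbitrary five results

        var ordered = numbers.AsParallel()
                             .AsOrdered()
                             .Select(n => n * 2)
                             .Take(5)
                             .ToList();       // always the first five

        Console.WriteLine(string.Join(", ", unordered)); // e.g. 512, 514, 516, 518, 520
        Console.WriteLine(string.Join(", ", ordered));   // 0, 2, 4, 6, 8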

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
    Parallel LINQ and the Task Parallel Library contain many options for configuration. Although the default configuration options are often ideal, there are times when customizing the behavior is desirable. Both frameworks provide full configuration support. When working with data parallelism, there is one primary configuration option we often need to control: the number of threads we want the system to use when parallelizing our routine. By default, PLINQ and the TPL both use the ThreadPool to schedule tasks. Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal. However, there are times that the default behavior is not appropriate. For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system. Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches. In the Task Parallel Library, configuration is handled via the ParallelOptions class. All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument. We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property. For example, let's revisit one of the simple data parallel examples from Part 2:

        Parallel.For(0, pixelData.GetUpperBound(0), row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, we're looping through an image, and calling a method on each pixel in the image. If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system. This could be accomplished easily by doing:

        var options = new ParallelOptions();
        options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1);

        Parallel.For(0, pixelData.GetUpperBound(0), options, row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Now, we're restricting this routine to using no more than half the cores in our system. Note that I included a check to prevent a single-core system from supplying zero; without this check, we'd potentially cause an exception. I also did not hard-code a specific value for the MaxDegreeOfParallelism property. One of our goals when parallelizing a routine is allowing it to scale on better hardware. Specifying a hard-coded value would contradict that goal. Parallel LINQ also supports configuration, and in fact has quite a few more options for configuring the system.
    The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads. In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism. Let's revisit our declarative data parallelism sample from Part 6:

        double min = collection.AsParallel().Min(item => item.PerformComputation());

    Here, we're performing a computation on each element in the collection, and saving the minimum value of this operation. If we wanted to restrict this to a limited number of threads, we would add our new extension method:

        int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1);
        double min = collection
            .AsParallel()
            .WithDegreeOfParallelism(maxThreads)
            .Min(item => item.PerformComputation());

    This automatically restricts the PLINQ query to half of the threads on the system. PLINQ provides some additional configuration options. By default, PLINQ will occasionally revert to processing a query sequentially rather than in parallel. This occurs because many queries, if parallelized, typically actually cause an overall slowdown compared to a serial processing equivalent. By analyzing the "shape" of the query, PLINQ often decides to run a query serially instead of in parallel. This can occur for (taken from MSDN):
    - Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices.
    - Queries that contain a Take, TakeWhile, Skip, or SkipWhile operator, where indices in the source sequence are not in the original order.
    - Queries that contain Zip or SequenceEqual, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)).
    - Queries that contain Concat, unless it is applied to indexable data sources.
    - Queries that contain Reverse, unless applied to an indexable data source.
    If the specific query follows these rules, PLINQ will run the query on a single thread. However, none of these rules look at the specific work being done in the delegates, only at the "shape" of the query. There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly. In these cases, you can override the default behavior by using the WithExecutionMode extension method. This would be done like so:

        var reversed = collection
            .AsParallel()
            .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
            .Select(i => i.PerformComputation())
            .Reverse();

    Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>. We can force this to run in parallel by adding the WithExecutionMode extension method to the method chain. Finally, PLINQ has the ability to configure how results are returned. When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result. For example, the method above returns a new, reversed collection. In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread. This streaming introduces overhead. IEnumerable<T> isn't designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues.
    There are two extremes of how this could be accomplished, but both extremes have disadvantages. The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller. This would mean that the calling thread would have access to the data as soon as data is available, which is the benefit of this approach. However, it also means that every item introduces synchronization overhead, since each item needs to be merged individually. On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot. The advantage here is that the least amount of synchronization is added to the system, which means the query will, on the whole, run the fastest. However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned. The default behavior in PLINQ is actually between these two extremes. By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain. Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks. This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios. However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes. This can be done by using the WithMergeOptions extension method. For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they are available, with no buffering. This can be done by changing our above routine to:

        var reversed = collection
            .AsParallel()
            .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
            .WithMergeOptions(ParallelMergeOptions.NotBuffered)
            .Select(i => i.PerformComputation())
            .Reverse();

    On the other hand, if we are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results:

        var reversed = collection
            .AsParallel()
            .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
            .WithMergeOptions(ParallelMergeOptions.FullyBuffered)
            .Select(i => i.PerformComputation())
            .Reverse();

    Notice, also, that you can specify multiple configuration options in a parallel query. By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.

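    A brief hedged sketch of the scenario the post opens with: two independent parallel loops scheduled at the same time, each capped at half the cores so that together they do not over-subscribe the machine (WorkA and WorkB are illustrative placeholders):

        using System;
        using System.Threading.Tasks;

        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1)
        };

        var first = Task.Factory.StartNew(() =>
            Parallel.For(0, 1000, options, i => WorkA(i)));
        var second = Task.Factory.StartNew(() =>
            Parallel.For(0, 1000, options, i => WorkB(i)));

        Task.WaitAll(first, second);   // combined, the two loops use at most all cores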

  • OpenGL extension vs OpenGL core

    - by user209347
    I have a question: I'm writing a cross-platform engine in C++ with OpenGL, and I've figured out that Windows forces developers to access OpenGL features above version 1.1 through extensions. Now, on Linux, I know that I can call functions directly if the implementation's version supports them, through glext.h and the reported OpenGL version. The problem is: if the core on Linux doesn't support a feature, is it possible that there is an extension that supports the same functionality, in my case vertex buffer objects? I'm doing something like this. On Windows:

        #define glFunction functionpointer_to_the_extension

    On Linux: since glext.h already declares glFunction, I can write glFunction in client code and compile it on both Windows and Linux without changing a single line in the client code using the engine (my goal). Now, I saw a tutorial use only the extension on Linux, without checking the OpenGL implementation version. If some functionality is available in the core, is it also available as an extension (VBOs, e.g.)? Or is an extension something you never know is available? I want to write an engine that exploits everything the hardware offers, so I need to check (on Linux) for extensions as well as the core version for possible functionality.

