Search Results

Search found 3765 results on 151 pages for 'matthew lock'.


  • "FOR UPDATE" v/s "LOCK IN SHARE MODE" : Allow concurrent threads to read updated "state" value of locked row

    - by shadesco
    I have the following scenario: user X logs in to the application from location lc1 (call it Ulc1). User X (he has been hacked, some friend of his knows his login credentials, or he just logs in from a different browser on his machine, etc.; you get the point) logs in at the same time from location lc2 (call it Ulc2). I am using a main servlet which gets a connection from the database pool, sets autocommit to false, and executes a command that goes through the app layers. If all is successful, it sets autocommit back to true in a "finally" block and closes the connection; if an exception happens, it rolls back.

    In my database (MySQL/InnoDB) I have a "history" table with the columns: id (primary key) | username | date | topic | locked. The column "locked" has the value "false" by default and serves as a flag that marks whether a specific row is locked or not. Each row is specific to a user (as you can see from the username column).

    Back to the scenario: Ulc1 sends the command to update his history for date "D" and topic "T". Ulc2 sends the same command to update the history for the same date "D" and the same topic "T" at the exact same time. I want to implement a MySQL/InnoDB locking scheme that lets whichever thread arrives do the following check: is the "locked" column for this row true or not? If true, return a message to the user that he is already updating the same data from another location. If not locked, flag it as locked, do the update, then reset locked to false once finished.

    Which of these two MySQL locking techniques will actually allow the second arriving thread to read the "updated" value of the locked column and decide what action to take: should I use "FOR UPDATE" or "LOCK IN SHARE MODE"? This scenario explains what I want to accomplish: Ulc1's thread arrives first; the "locked" column is false, so it sets it to true and continues the update process. Ulc2's thread arrives while Ulc1's transaction is still in progress; even though the row is locked through InnoDB, it should not have to wait, but should in fact read the "new" value of the locked column, which is "true", so it does not have to wait until Ulc1's transaction commits to read the value of the "locked" column (by that time the value of the column will already have been reset to false anyway).

    I am not very experienced with the two types of locking mechanisms. What I understand so far is that LOCK IN SHARE MODE allows other transactions to read the locked row while FOR UPDATE does not even allow reading. But does that read get the updated value, or does the second arriving thread have to wait for the first thread to commit before reading the value? Any recommendation about which locking mechanism to use for this scenario is appreciated. Also, if there is a better way to check whether the row has been locked (other than using a true/false column flag), please let me know about it.
    Thank you. SOLUTION (JDBC pseudocode based on @Darhazer's answer), for a table [ id (primary key) | username | date | topic | locked ]:

    connection.setAutoCommit(false);
    // transaction 1: select the row for update, only if it is not already flagged
    PreparedStatement ps1 = connection.prepareStatement(
        "SELECT locked FROM tableName WHERE id = ? AND locked = false FOR UPDATE");
    ps1.executeQuery();
    // transaction 2: flag the row as locked
    PreparedStatement ps2 = connection.prepareStatement(
        "UPDATE tableName SET locked = true WHERE id = ?");
    ps2.executeUpdate();
    connection.setAutoCommit(true); // commit here, so other transactions/threads can see the new value

    connection.setAutoCommit(false);
    // transaction 3: do the actual update
    PreparedStatement ps3 = connection.prepareStatement(
        "UPDATE tableName SET aField = 'sthg' WHERE id = ? AND date = ? AND topic = ?");
    ps3.executeUpdate();
    // reset locked to false
    PreparedStatement ps4 = connection.prepareStatement(
        "UPDATE tableName SET locked = false WHERE id = ?");
    ps4.executeUpdate();
    connection.setAutoCommit(true); // commit

    Read the article

  • NSLock deadlock

    - by twinkle
    I have an instance variable in class Foo: @property (nonatomic, retain) NSLock *mLock; initialized as self.mLock = [NSLock new]; Foo also has:

    -(void)getLock {
        while (![self.mLock tryLock]) {
            NSLog(@"Trying to lock... sleep(1)");
            sleep(1);
        }
        NSLog(@">>>>>>> Acquiring LOCK");
        [self.mLock lock];
        NSLog(@">>>>>>> LOCK acquired");
    }

    From another method in the Foo class, I call [Foo getLock]. This immediately results in a deadlock. Log below:

    2010-03-18 07:06:01.660 test[9816:207] >>>>>>> Acquiring LOCK
    2010-03-18 07:06:01.665 test[9816:207] *** -[NSLock lock]: deadlock (<NSLock: 0x3c0f820> '(null)')
    2010-03-18 07:06:01.666 test[9816:207] *** Break on _NSLockError() to debug.

    Thanks!
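
    The deadlock in that log comes from taking the lock twice on the same thread: the tryLock loop already acquires mLock when it exits, and the following [self.mLock lock] then blocks forever because NSLock is not recursive (NSRecursiveLock, or simply acquiring once, avoids this). As a rough cross-language illustration of the same failure mode, here is a hedged Java sketch that uses a binary Semaphore as a stand-in for a non-reentrant lock; it is not the original Objective-C code:

    import java.util.concurrent.Semaphore;

    public class DoubleAcquireDemo {
        // A binary semaphore behaves like a non-reentrant lock, similar to NSLock.
        private static final Semaphore lock = new Semaphore(1);

        public static void main(String[] args) {
            // First acquisition succeeds, just like the successful tryLock above.
            boolean gotIt = lock.tryAcquire();
            System.out.println("tryAcquire succeeded: " + gotIt);

            // Acquiring a second time from the same thread would block forever,
            // because the single permit is already held by this very thread.
            // That is the same self-deadlock the NSLock log shows.
            // Left commented out so the demo terminates:
            // lock.acquireUninterruptibly();

            // The fix is to acquire exactly once: keep the tryLock result, or
            // drop the tryLock loop, or use a recursive lock.
            lock.release();
        }
    }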

    Read the article

  • How is thread synchronization implemented, at the assembly language level?

    - by Martin
    While I'm familiar with concurrent programming concepts such as mutexes and semaphores, I have never understood how they are implemented at the assembly language level. I imagine there being a set of memory "flags" saying:

    lock A is held by thread 1
    lock B is held by thread 3
    lock C is not held by any thread
    etc.

    But how is access to these flags synchronized between threads? Something like this naive example would only create a race condition:

    mov edx, [myThreadId]
    wait:
        cmp [lock], 0
        jne wait
        mov [lock], edx
        ; I wanted an exclusive lock but the above
        ; three instructions are not an atomic operation :(
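
    What makes real locks possible is that the processor provides atomic read-modify-write instructions (for example x86's LOCK-prefixed CMPXCHG or XCHG), so the "test the flag and claim it" step cannot be interleaved between threads. Higher-level runtimes expose the same primitive as compare-and-swap. A minimal sketch of a spinlock built on that primitive, written as a hedged Java analogue rather than assembly (AtomicBoolean.compareAndSet is typically compiled down to exactly such an instruction):

    import java.util.concurrent.atomic.AtomicBoolean;

    // A toy spinlock: compareAndSet is the high-level face of the atomic
    // instruction that makes the naive test-and-set sequence safe.
    public class SpinLock {
        private final AtomicBoolean held = new AtomicBoolean(false);

        public void lock() {
            // Atomically: if held == false, set it to true and return true.
            // Two threads can never both observe "false" and both win.
            while (!held.compareAndSet(false, true)) {
                Thread.onSpinWait(); // busy-wait; real locks park the thread instead
            }
        }

        public void unlock() {
            held.set(false);
        }

        public static void main(String[] args) throws InterruptedException {
            SpinLock lock = new SpinLock();
            int[] counter = {0};
            Runnable task = () -> {
                for (int i = 0; i < 100_000; i++) {
                    lock.lock();
                    try { counter[0]++; } finally { lock.unlock(); }
                }
            };
            Thread a = new Thread(task), b = new Thread(task);
            a.start(); b.start();
            a.join(); b.join();
            System.out.println(counter[0]); // always 200000, no race
        }
    }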

    Read the article

  • Is it safe to lock a static variable in a non-static class?

    - by Dario Solera
    I've got a class that manages a shared resource. Now, since access to the resource depends on many parameters, this class is instantiated and disposed several times during the normal execution of the program. The shared resource does not support concurrency, so some kind of locking is needed. The first thing that came into my mind is having a static instance in the class, and acquiring locks on it, like this:

    // This thing is static!
    static readonly object MyLock = new object();
    // This thing is NOT static!
    MyResource _resource = ...;

    public DoSomeWork() {
        lock(MyLock) {
            _resource.Access();
        }
    }

    Does that make sense, or would you use another approach?
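
    Under the assumptions in the question (one non-concurrent shared resource guarded by one static lock object), this pattern is sound: because the lock object is static, every instance of the class synchronizes on the same monitor, so only one thread can be inside the Access() call at a time no matter how many instances come and go. A minimal Java sketch of the same idea, where SharedResource and access() are hypothetical stand-ins for the resource in the question:

    public class Worker {
        // One lock object for the whole class, shared by every instance.
        private static final Object LOCK = new Object();

        // Instances are still created and disposed freely; only the lock is shared.
        private final SharedResource resource = new SharedResource();

        public void doSomeWork() {
            synchronized (LOCK) {        // every instance contends on the same monitor
                resource.access();       // so the resource is never touched concurrently
            }
        }

        // Hypothetical stand-in for the non-thread-safe shared resource.
        static class SharedResource {
            void access() { /* work with the resource here */ }
        }
    }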

    Read the article

  • Can I lock a record from a join SQL statement using ROWLOCK, UPDLOCK?

    - by Andrea.Ko
    I have a stored procedure to get the data I want:

    SELECT a.SONum, a.Seq1, a.SptNum, a.Qty1, a.SalUniPriP, a.PayNum, a.InvNum, a.BLNum, c.ETD, c.ShpNum, f.IssBan
    FROM OrdD a
    JOIN OrdH b ON a.SONum = b.SONum
    LEFT JOIN Invh c ON a.InvNum = c.InvNum
    LEFT JOIN cus d ON b.CusCod = d.CusCod
    LEFT JOIN BL e ON a.BLNum = e.BLNum
    LEFT JOIN PayMasH f ON f.PayNum = a.PayNum
    LEFT JOIN Shipment g ON g.ShpNum = c.ShpNum
    WHERE b.CusCod IN (SELECT CusCod FROM UsrInc WHERE UseID=@UserID and UseLev=@UserLvl)
    AND d.CusGrp = @CusGrp

    After I get those records into a cursor, I use ROWLOCK, UPDLOCK to lock all the related invoice numbers:

    SELECT InvNum FROM Invh WITH (ROWLOCK,UPDLOCK) WHERE InvNum =

    Can I apply the locking on the INVH table at the point where I select from several tables using the join? Any advice, please!

    Read the article

  • ReaderWriterLockSlim question.

    - by Kamarey
    There is a lot written about the ReaderWriterLockSlim class, which allows multiple readers and a single writer. All of it (at least what I have found) shows how to use it without much explanation of why and how it works. The standard code sample is:

    lock.EnterUpgradeableReadLock();
    try
    {
        if (/* test if write is required */)
        {
            lock.EnterWriteLock();
            try
            {
                // change the resource here
            }
            finally
            {
                lock.ExitWriteLock();
            }
        }
    }
    finally
    {
        lock.ExitUpgradeableReadLock();
    }

    The question is: if the upgradeable lock permits only a single thread to enter its section, why should I call the EnterWriteLock method within it? What will happen if I don't? Or what will happen if, instead of EnterUpgradeableReadLock, I call EnterWriteLock and write to the resource without using the upgradeable lock at all?
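
    For what it's worth: the upgradeable read lock only excludes other writers and other upgradeable holders, while plain readers can still be inside the lock, so EnterWriteLock is the call that actually waits for those readers to drain before the resource is modified. Skipping it would let the write race with concurrent readers; taking EnterWriteLock up front instead would serialize even the threads that turn out not to need a write. Java's ReentrantReadWriteLock has no upgradeable mode, but a hedged sketch of the equivalent check-then-write pattern looks like this (loadFromSlowStorage is a hypothetical stand-in):

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class CachedValue {
        private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        private String value;

        // Check under the read lock, release it, take the write lock, and
        // check again before writing.
        public String getOrLoad() {
            rw.readLock().lock();
            try {
                if (value != null) return value;   // fast path, many readers at once
            } finally {
                rw.readLock().unlock();
            }

            rw.writeLock().lock();                 // exclusive: waits for all readers to leave
            try {
                if (value == null) {               // re-check: another thread may have won
                    value = loadFromSlowStorage();
                }
                return value;
            } finally {
                rw.writeLock().unlock();
            }
        }

        private String loadFromSlowStorage() { return "expensive result"; }
    }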

    Read the article

  • Fixing "Lock wait timeout exceeded; try restarting transaction" for a 'stuck" Mysql table?

    - by Tom
    From a script I sent a query like this thousands of times to my local database:

    update some_table set some_column = some_value

    I forgot to add the WHERE part, so the same column was set to the same value for all the rows in the table, and this was done thousands of times; the column was indexed, so the corresponding index was probably updated many times too. I noticed something was wrong because it took too long, so I killed the script. I have even rebooted my computer since then, but something is stuck in the table, because simple queries take a very long time to run, and when I try dropping the relevant index it fails with this message:

    Lock wait timeout exceeded; try restarting transaction

    It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?

    Read the article

  • Windows Mobile 6.x: How to explicitly lock the app in one orientation?

    - by Stuart
    I'm trying to get an app onto the WinMo app store. As part of this, Microsoft's app store team has asked that I support landscape as well as portrait. What they've said is acceptable is: (1) If dynamic switching is implicitly allowed, the app will be tested just as if it supports both portrait and landscape, even if only a portrait or landscape resolution is checked. (2) You may explicitly lock the app in one orientation (which means portrait mode if the app does not handle landscape mode functions), provided the default OS orientation is preserved once the app exits. I'd love to do option 2 - but I can't find any way of doing it - and they won't provide me any other clues; they suggested I ask on the forums... so here I am on Stack Overflow - far better than the forums :) Anyone got any suggestions?

    Read the article

  • Any way to find out whether the current Windows session is in lock mode?

    - by David.Chu.ca
    I have a Windows application written in VS 2005. The application queries a SQL database on a timer every 2 minutes. If any data has changed, the window is refreshed with the new data. If the user walks away, Windows will automatically lock after a while. There is no sense in continuing to query every 2 minutes while the workstation is locked, so I would like to stop the query when the lock is on; that would reduce network traffic and also save resources such as memory and CPU. I am not sure whether there is any way to find out that the current Windows session is locked. Is there a Windows API for this purpose, if no .NET classes are available? My project is in .NET 2.0 and all users are on Windows XP.

    Read the article

  • Why doesn't a timed lock throw a timeout exception in C++0x?

    - by Vicente Botet Escriba
    C++0x allows locking a mutex until a given time is reached, returning a boolean that states whether the mutex has been locked or not:

    template <class Clock, class Duration>
    bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);

    In some contexts, I consider it an exceptional situation when the locking fails because of a timeout. In that case an exception would be more appropriate. To make the distinction, a function lock_until could be used that throws a timeout exception when the time is reached before the lock is acquired:

    template <class Clock, class Duration>
    void lock_until(const chrono::time_point<Clock, Duration>& abs_time);

    Do you think lock_until would be more adequate in some contexts? If yes, in which ones? If not, why would try_lock_until always be the better choice?
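
    Whichever way a library goes, the difference is only in how the timeout is reported: as a boolean return value or as an exception. If the exception style fits a given call site better, it can be layered on top of the boolean style with a thin wrapper. A hedged sketch of that wrapper in Java, with ReentrantLock.tryLock(timeout) playing the role of try_lock_until:

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;
    import java.util.concurrent.locks.ReentrantLock;

    public final class Locks {
        // Converts the "return a boolean" style into the "throw on timeout" style.
        public static void lockOrThrow(ReentrantLock lock, long timeout, TimeUnit unit)
                throws InterruptedException, TimeoutException {
            if (!lock.tryLock(timeout, unit)) {
                throw new TimeoutException("could not acquire lock within " + timeout + " " + unit);
            }
        }

        public static void main(String[] args) throws Exception {
            ReentrantLock lock = new ReentrantLock();
            lockOrThrow(lock, 100, TimeUnit.MILLISECONDS);  // acquires immediately here
            try {
                // critical section
            } finally {
                lock.unlock();
            }
        }
    }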

    Read the article

  • Will the lock() statement block all threads in the process/appdomain?

    - by MikeJ
    Maybe the question sounds silly, but I don't understand something about threads and locking and I would like to get a confirmation (here's why I ask). So, if I have 10 servers and 10 requests arrive at each server at the same time, that's 100 requests across the farm. Without locking, that's 100 requests to the database. If I do something like this:

    private static readonly object myLockHolder = new object();
    if (Cache[key] == null)
    {
        lock(myLockHolder)
        {
            if (Cache[key] == null)
            {
                Cache[key] = LengthyDatabaseCall();
            }
        }
    }

    How many database requests will I make? 10? 100? Or as many as I have threads?
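
    With that pattern the lock only has effect inside a single process (one AppDomain on one server), so the answer is roughly 10 database calls across the farm: on each server the second check inside the lock stops the other nine concurrent requests from repeating the call, but nothing coordinates between servers. A hedged Java sketch of the same per-process double-check, where lengthyDatabaseCall is a hypothetical stand-in:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class QueryCache {
        private static final Object LOCK = new Object();              // scope: this process only
        private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

        // Concurrent requests for the same key on one machine result in a
        // single database call; 10 separate servers still make up to 10 calls.
        public static String get(String key) {
            String value = CACHE.get(key);
            if (value == null) {
                synchronized (LOCK) {
                    value = CACHE.get(key);          // re-check inside the lock
                    if (value == null) {
                        value = lengthyDatabaseCall(key);
                        CACHE.put(key, value);
                    }
                }
            }
            return value;
        }

        private static String lengthyDatabaseCall(String key) { return "row for " + key; }
    }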

    Read the article

  • Can I lock rows in a cursor if the cursor only returns a single count(*) row?

    - by RenderIn
    I would like to restrict users from inserting more than 3 records with color = 'Red' in my FOO table. My intentions are to A) retrieve the current count so that I can determine whether another record is allowed, and B) prevent any other processes from inserting any Red records while this one is in progress, hence the FOR UPDATE OF. I'd like to do something like:

    cursor cur_cnt is
      select count(*) cnt from foo
      where foo.color = 'Red'
      for update of foo.id;

    Will this satisfy both my requirements, or will it fail to lock the rows counted by count(*) that had foo.color = 'Red'?

    Read the article

  • How can I lock the cursor to the inside of a window on Linux?

    - by ZorbaTHut
    I'm trying to put together a game for Linux which involves a lot of fast action and flinging around of the mouse cursor. If the user wants to play in windowed mode, I'd quite like to lock the cursor to the inside of the window to avoid accidentally changing programs in the heat of battle (obviously this will cancel itself if the user changes programs or hits escape for the pause menu.) On Windows, this can be accomplished easily with ClipCursor(). I can't find an equivalent on Linux. Is there one? I plan to do this in pure X code, but obviously if anyone knows of a way to do this in any Linux windowing library then I can just read the source code and figure out how to duplicate it in X.

    Read the article

  • Working around MySQL error "Deadlock found when trying to get lock; try restarting transaction"

    - by Anon Guy
    Hi all: I have a MySQL table with about 5,000,000 rows that are being constantly updated in small ways by parallel Perl processes connecting via DBI. The table has about 10 columns and several indexes. One fairly common operation gives rise to the following error sometimes: DBD::mysql::st execute failed: Deadlock found when trying to get lock; try restarting transaction at Db.pm line 276. The SQL statement that triggers the error is something like this: UPDATE file_table SET a_lock = 'process-1234' WHERE param1 = 'X' AND param2 = 'Y' AND param3 = 'Z' LIMIT 47 The error is triggered only sometimes. I'd estimate in 1% of calls or less. However, it never happened with a small table and has become more common as the database has grown. Note that I am using the a_lock field in file_table to ensure that the four near-identical processes I am running do not try and work on the same row. The limit is designed to break their work into small chunks. I haven't done much tuning on MySQL or DBD::mysql. MySQL is a standard Solaris deployment, and the database connection is set up as follows: my $dsn = "DBI:mysql:database=" . $DbConfig::database . ";host=${DbConfig::hostname};port=${DbConfig::port}"; my $dbh = DBI->connect($dsn, $DbConfig::username, $DbConfig::password, { RaiseError => 1, AutoCommit => 1 }) or die $DBI::errstr; I have seen online that several other people have reported similar errors and that this may be a genuine deadlock situation. I have two questions: What exactly about my situation is causing the error above? Is there a simple way to work around it or lessen its frequency? For example, how exactly do I go about "restarting transaction at Db.pm line 276"? Thanks in advance.
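
    "Restarting the transaction" in that error message just means catching the deadlock error (MySQL error 1213, SQLSTATE 40001) and re-issuing the statement; with AutoCommit on, each UPDATE is its own transaction, so a bounded retry loop around the execute call is usually enough. The question is Perl/DBI, but here is a hedged JDBC sketch of the same retry idea (table and column names taken from the question, everything else hypothetical):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class DeadlockRetry {
        // Retries the UPDATE when MySQL reports "Deadlock found when trying to get lock".
        static int updateWithRetry(Connection con, String processId, int maxAttempts)
                throws SQLException {
            String sql = "UPDATE file_table SET a_lock = ? " +
                         "WHERE param1 = ? AND param2 = ? AND param3 = ? LIMIT 47";
            for (int attempt = 1; ; attempt++) {
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setString(1, processId);
                    ps.setString(2, "X");
                    ps.setString(3, "Y");
                    ps.setString(4, "Z");
                    return ps.executeUpdate();          // success: done
                } catch (SQLException e) {
                    boolean deadlock = "40001".equals(e.getSQLState()) || e.getErrorCode() == 1213;
                    if (!deadlock || attempt >= maxAttempts) {
                        throw e;                        // not a deadlock, or out of retries
                    }
                    try {
                        Thread.sleep(50L * attempt);    // brief backoff before retrying
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw e;
                    }
                }
            }
        }
    }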

    Read the article

  • How to find out where a thread lock happened?

    - by SchlaWiener
    One of our company's Windows Forms applications had a strange problem for several months. The app worked very reliably for most of our customers, but on some PCs (mostly ones with a wireless LAN connection) the app sometimes just stopped responding (you click on the UI and Windows asks you to wait or kill the app). I wasn't able to track down the problem for a long time, but now I have figured out what happened. The app had this line of code:

    // don't blame me for this. Wasn't my code :D
    Control.CheckForIllegalCrossThreadCalls = false

    and used some background threads to modify the controls. Now I have found a way to reproduce the application-stops-responding bug on my dev machine and tracked it down to a line where I actually used Invoke() to run a task on the main thread:

    Me.Invoke(MyDelegate, arg1, arg2)

    Obviously there was a thread lock somewhere. After removing the Control.CheckForIllegalCrossThreadCalls = false statement and refactoring the whole program to use Invoke() whenever a control is modified from a background thread, the problem is (hopefully) gone. However, I am wondering if there is a way to find such bugs without debugging every line of code (even if I break into the debugger after the app stops responding, I can't tell what happened last, because the IDE doesn't jump to the Invoke() statement). In other words: if my app hangs, how can I figure out which line of code was executed last? Maybe even on the customer's PC. I know VS2010 offers some backwards-debugging features; maybe that would be a solution, but currently I am using VS2008.
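
    One common approach in the .NET world is to take a memory dump of the hung process and inspect the thread stacks and lock owners offline (for example with WinDbg and the SOS extension), rather than single-stepping in the IDE. As a general illustration of the idea of asking the runtime "who is blocked where", here is a hedged Java sketch using ThreadMXBean, which reports deadlocked threads together with the stack frame each one is stuck in; it is an analogue, not the .NET tooling itself:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class DeadlockDump {
        // Prints which threads are deadlocked and what they are blocked on,
        // i.e. "which line was executing when the hang started".
        public static void dumpDeadlocks() {
            ThreadMXBean bean = ManagementFactory.getThreadMXBean();
            long[] ids = bean.findDeadlockedThreads();
            if (ids == null) {
                System.out.println("no deadlocked threads");
                return;
            }
            for (ThreadInfo info : bean.getThreadInfo(ids, true, true)) {
                System.out.println(info.getThreadName() + " blocked on " + info.getLockName());
                for (StackTraceElement frame : info.getStackTrace()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }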

    Read the article

  • C#, Can I check on a lock without trying to acquire it?

    - by Biff MaGriff
    Hello, I have a lock in my C# web app that prevents users from running the update script once it has started. I was thinking I would put a notification in my master page to let the user know that the data isn't all there yet. Currently I do my locking like so:

    protected void butRefreshData_Click(object sender, EventArgs e)
    {
        Thread t = new Thread(new ParameterizedThreadStart(UpdateDatabase));
        t.Start(this);
        //sleep for a bit to ensure that javascript has a chance to get rendered
        Thread.Sleep(100);
    }

    public static void UpdateDatabase(object con)
    {
        if (Monitor.TryEnter(myLock))
        {
            Updater.RepopulateDatabase();
            Monitor.Exit(myLock);
        }
        else
        {
            Common.RegisterStartupScript(con, AlreadyLockedJavaScript);
        }
    }

    And I do not want to do:

    if (Monitor.TryEnter(myLock))
        Monitor.Exit(myLock);
    else
        //show processing label

    as I imagine there is a slight possibility that it might display the notification when the update isn't actually running. Is there an alternative I can use? Edit: Hi everyone, thanks a lot for your suggestions! Unfortunately I couldn't quite get them to work... However, I combined the ideas from two answers and came up with my own solution.
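
    One option that avoids probing the Monitor at all is to keep a separate status flag that the update thread sets and clears; an atomic compare-and-set on that flag can also replace TryEnter as the mutual-exclusion check, so the "is it running?" read never risks briefly grabbing the real lock. A hedged Java sketch of that shape, where repopulateDatabase and showAlreadyRunningNotice are hypothetical stand-ins for the calls in the question:

    import java.util.concurrent.atomic.AtomicBoolean;

    public class DatabaseUpdater {
        // One flag does both jobs: compareAndSet makes "start the update" exclusive,
        // and get() lets any page or thread peek at the status without touching a lock.
        private static final AtomicBoolean updateRunning = new AtomicBoolean(false);

        public static void requestUpdate() {
            if (updateRunning.compareAndSet(false, true)) {
                new Thread(() -> {
                    try {
                        repopulateDatabase();           // the long-running update
                    } finally {
                        updateRunning.set(false);       // always clear the flag
                    }
                }).start();
            } else {
                showAlreadyRunningNotice();             // another request got there first
            }
        }

        // Cheap read for the master page / status banner.
        public static boolean isUpdateRunning() {
            return updateRunning.get();
        }

        private static void repopulateDatabase() { /* long-running work */ }
        private static void showAlreadyRunningNotice() { System.out.println("update already in progress"); }
    }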

    Read the article

  • How should I lock the table in this VB6 / Access application?

    - by Brian Hooper
    I'm working on a VB6 application using an Access database. The application writes messages to a log table from time to time. Several instances of the application may be running simultaneously, and to distinguish them they each have their own run number. The run number is deduced from the log table thus...

    Set record_set = New ADODB.Recordset
    query_string = "SELECT MAX(RUN_NUMBER) + 1 AS NEW_RUN_NUMBER FROM ERROR_LOG"
    record_set.CursorLocation = adUseClient
    record_set.Open query_string, database_connection, adOpenStatic, , adCmdText
    record_set.MoveLast
    If IsNull(record_set.Fields("NEW_RUN_NUMBER")) Then
        run_number = 0
    Else
        run_number = record_set.Fields("NEW_RUN_NUMBER")
    End If
    command_string = "INSERT INTO ERROR_LOG (RUN_NUMBER, SEVERITY, MESSAGE) " & _
                     " VALUES (" & Str$(run_number) & ", " & _
                     " " & Str$(SEVERITY_INFORMATION) & ", " & _
                     " 'Run Started'); "
    database_connection.Execute command_string

    Obviously there is a small gap between the calculation of the run number and the appearance of the new row in the database, and to prevent another instance getting access between the two operations I'd like to lock the table; something along the lines of:

    SET TRANSACTION READ WRITE RESERVING ERROR_LOG FOR PROTECTED WRITE;

    How should I go about doing this? Would locking the recordset do any good (the row in the record set doesn't match any particular row in the database)?

    Read the article

  • Microbenchmark showing process-switching faster than thread-switching; what's wrong?

    - by Yang
    I have two simple microbenchmarks trying to measure thread- and process-switching overheads, but the process-switching overhead comes out lower than the thread-switching overhead (roughly 2.1-2.4 us/switch vs. 4-5 us/switch, per the comments in the code), which is not what I expected. What am I doing wrong? The code is living here, and r1667 is pasted below:

    https://assorted.svn.sourceforge.net/svnroot/assorted/sandbox/trunk/src/c/process_switch_bench.c

    // on zs, ~2.1-2.4us/switch
    #include <stdlib.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <semaphore.h>
    #include <unistd.h>
    #include <sys/wait.h>
    #include <sys/types.h>
    #include <sys/time.h>
    #include <pthread.h>

    uint32_t COUNTER;
    pthread_mutex_t LOCK;
    pthread_mutex_t START;
    sem_t *s0, *s1, *s2;

    void * threads ( void * unused ) {
        // Wait till we may fire away
        sem_wait(s2);
        for (;;) {
            pthread_mutex_lock(&LOCK);
            pthread_mutex_unlock(&LOCK);
            COUNTER++;
            sem_post(s0);
            sem_wait(s1);
        }
        return 0;
    }

    int64_t timeInMS () {
        struct timeval t;
        gettimeofday(&t, NULL);
        return ( (int64_t)t.tv_sec * 1000 + (int64_t)t.tv_usec / 1000 );
    }

    int main ( int argc, char ** argv ) {
        int64_t start;
        pthread_t t1;
        pthread_mutex_init(&LOCK, NULL);
        COUNTER = 0;
        s0 = sem_open("/s0", O_CREAT, 0022, 0); if (s0 == 0) { perror("sem_open"); exit(1); }
        s1 = sem_open("/s1", O_CREAT, 0022, 0); if (s1 == 0) { perror("sem_open"); exit(1); }
        s2 = sem_open("/s2", O_CREAT, 0022, 0); if (s2 == 0) { perror("sem_open"); exit(1); }
        int x, y, z;
        sem_getvalue(s0, &x); sem_getvalue(s1, &y); sem_getvalue(s2, &z);
        printf("%d %d %d\n", x, y, z);
        pid_t pid = fork();
        if (pid) {
            pthread_create(&t1, NULL, threads, NULL);
            pthread_detach(t1);
            // Get start time and fire away
            start = timeInMS();
            sem_post(s2);
            sem_post(s2);
            // Wait for about a second
            sleep(1);
            // Stop thread
            pthread_mutex_lock(&LOCK);
            // Find out how much time has really passed. sleep won't guarantee me that
            // I sleep exactly one second, I might sleep longer since even after being
            // woken up, it can take some time before I gain back CPU time. Further
            // some more time might have passed before I obtained the lock!
            int64_t time = timeInMS() - start;
            // Correct the number of thread switches accordingly
            COUNTER = (uint32_t)(((uint64_t)COUNTER * 2 * 1000) / time);
            printf("Number of process switches in about one second was %u\n", COUNTER);
            printf("roughly %f microseconds per switch\n", 1000000.0 / COUNTER);
            // clean up
            kill(pid, 9);
            wait(0);
            sem_close(s0); sem_close(s1);
            sem_unlink("/s0"); sem_unlink("/s1"); sem_unlink("/s2");
        } else {
            if (1) { sem_t *t = s0; s0 = s1; s1 = t; }
            threads(0); // never return
        }
        return 0;
    }

    https://assorted.svn.sourceforge.net/svnroot/assorted/sandbox/trunk/src/c/thread_switch_bench.c

    // From <http://stackoverflow.com/questions/304752/how-to-estimate-the-thread-context-switching-overhead>
    // on zs, ~4-5us/switch; tried making COUNTER updated only by one thread, but no difference
    #include <stdlib.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <pthread.h>
    #include <unistd.h>
    #include <sys/time.h>

    uint32_t COUNTER;
    pthread_mutex_t LOCK;
    pthread_mutex_t START;
    pthread_cond_t CONDITION;

    void * threads ( void * unused ) {
        // Wait till we may fire away
        pthread_mutex_lock(&START);
        pthread_mutex_unlock(&START);
        int first=1;
        pthread_mutex_lock(&LOCK);
        // If I'm not the first thread, the other thread is already waiting on
        // the condition, thus I have to wake it up first, otherwise we'll deadlock
        if (COUNTER > 0) { pthread_cond_signal(&CONDITION); first=0; }
        for (;;) {
            if (first) COUNTER++;
            pthread_cond_wait(&CONDITION, &LOCK);
            // Always wake up the other thread before processing. The other
            // thread will not be able to do anything as long as I don't go
            // back to sleep first.
            pthread_cond_signal(&CONDITION);
        }
        pthread_mutex_unlock(&LOCK);
        return 0;
    }

    int64_t timeInMS () {
        struct timeval t;
        gettimeofday(&t, NULL);
        return ( (int64_t)t.tv_sec * 1000 + (int64_t)t.tv_usec / 1000 );
    }

    int main ( int argc, char ** argv ) {
        int64_t start;
        pthread_t t1;
        pthread_t t2;
        pthread_mutex_init(&LOCK, NULL);
        pthread_mutex_init(&START, NULL);
        pthread_cond_init(&CONDITION, NULL);
        pthread_mutex_lock(&START);
        COUNTER = 0;
        pthread_create(&t1, NULL, threads, NULL);
        pthread_create(&t2, NULL, threads, NULL);
        pthread_detach(t1);
        pthread_detach(t2);
        // Get start time and fire away
        start = timeInMS();
        pthread_mutex_unlock(&START);
        // Wait for about a second
        sleep(1);
        // Stop both threads
        pthread_mutex_lock(&LOCK);
        // Find out how much time has really passed. sleep won't guarantee me that
        // I sleep exactly one second, I might sleep longer since even after being
        // woken up, it can take some time before I gain back CPU time. Further
        // some more time might have passed before I obtained the lock!
        int64_t time = timeInMS() - start;
        // Correct the number of thread switches accordingly
        COUNTER = (uint32_t)(((uint64_t)COUNTER * 2 * 1000) / time);
        printf("Number of thread switches in about one second was %u\n", COUNTER);
        printf("roughly %f microseconds per switch\n", 1000000.0 / COUNTER);
        return 0;
    }

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel. Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria. This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately. The Task Parallel Library has tools to assist in this synchronization.

    The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data. Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset:

    double min = double.MaxValue;
    foreach(var item in collection)
    {
        double value = item.PerformComputation();
        min = System.Math.Min(min, value);
    }

    This seems like a good candidate for parallelization, but there is a problem here. If we just wrap this into a call to Parallel.ForEach, we'll introduce a critical race condition, and get the wrong answer. Let's look at what happens here:

    // Buggy code! Do not use!
    double min = double.MaxValue;
    Parallel.ForEach(collection, item =>
    {
        double value = item.PerformComputation();
        min = System.Math.Min(min, value);
    });

    This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously. Two threads may perform the check at the same time, and set the wrong value for min. Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run. If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively. If element 1 happens to set the variable first, then element 2 sets the min variable, we'll detect a min value of 2 instead of 1. This can lead to wrong answers.

    Unfortunately, fixing this, with the Parallel.ForEach call we're using, would require adding locking. We would need to rewrite this like:

    // Safe, but slow
    double min = double.MaxValue;
    // Make a "lock" object
    object syncObject = new object();
    Parallel.ForEach(collection, item =>
    {
        double value = item.PerformComputation();
        lock(syncObject)
            min = System.Math.Min(min, value);
    });

    This will potentially add a huge amount of overhead to our calculation. Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation. The problem is the lock statement – any time you use lock(object), you're almost assuring reduced performance in a parallel situation.

    This leads to two observations I'll make: when parallelizing a routine, try to avoid locks. That being said: always add any and all required synchronization to avoid race conditions. These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible. Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time.

    However, this isn't the only way to design this routine to implement this algorithm. Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE). Processing Element is the standard term for a hardware element which can process and execute instructions. This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core. The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger "chunks" which get processed on different threads via the ThreadPool. This helps reduce the threading overhead and helps the overall speed. In general, the Parallel class will only use one thread per PE in the system.

    Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design. We can parallelize our algorithm more effectively by approaching it differently. Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform it in a given order. We knew this to be true already – otherwise, we wouldn't have been able to parallelize this routine in the first place. With this in mind, we can treat each thread's work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, "merge" together the results.

    This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>. The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal). The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item. Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results.

    To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one. The most basic version of this function is declared as:

    public static ParallelLoopResult ForEach<TSource, TLocal>(
        IEnumerable<TSource> source,
        Func<TLocal> localInit,
        Func<TSource, ParallelLoopState, TLocal, TLocal> body,
        Action<TLocal> localFinally
    )

    The first delegate (the localInit argument) is defined as Func<TLocal>. This delegate initializes our local state. It should return some object we can use to track the results of a single thread's operations.

    The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate. This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal). It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed.

    The third delegate (the localFinally argument) is defined as Action<TLocal>. This delegate is passed our local state after it's been processed by all of the elements this thread will handle. This is where you can merge your final results together. This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you'll only have to synchronize once per thread, which is an ideal situation.

    Now that I've explained how this works, let's look at the code:

    // Safe, and fast!
    double min = double.MaxValue;
    // Make a "lock" object
    object syncObject = new object();
    Parallel.ForEach(
        collection,
        // First, we provide a local state initialization delegate.
        () => double.MaxValue,
        // Next, we supply the body, which takes the original item, loop state,
        // and local state, and returns a new local state
        (item, loopState, localState) =>
        {
            double value = item.PerformComputation();
            return System.Math.Min(localState, value);
        },
        // Finally, we provide an Action<TLocal>, to "merge" results together
        localState =>
        {
            // This requires locking, but it's only once per used thread
            lock(syncObject)
                min = System.Math.Min(min, localState);
        }
    );

    Although this is a bit more complicated than the previous version, it is now both thread-safe and has minimal locking. The same approach can be used with Parallel.For, although now it's Parallel.For<TLocal>. When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results.

    Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation. The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add.

    By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization. Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way to avoid excessive synchronization.
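
    The same "thread-local partial result, merged once per thread" shape exists in other runtimes too. As a hedged illustration outside .NET, a Java parallel stream reduction keeps per-worker partial minimums internally and combines them at the end, so no per-element lock is needed (Item and performComputation are hypothetical stand-ins for the article's element type):

    import java.util.List;
    import java.util.OptionalDouble;

    public class ParallelMin {

        // Hypothetical stand-in for the collection element and its PerformComputation().
        static class Item {
            final double seed;
            Item(double seed) { this.seed = seed; }
            double performComputation() { return Math.sqrt(seed) * 42.0; }
        }

        // Same shape as Parallel.ForEach<TSource,TLocal>: each worker thread keeps
        // its own running minimum and the partial results are merged once at the
        // end, so there is no per-element lock.
        static double minOf(List<Item> items) {
            OptionalDouble min = items.parallelStream()
                    .mapToDouble(Item::performComputation)
                    .min();
            return min.orElse(Double.MAX_VALUE);
        }

        public static void main(String[] args) {
            List<Item> items = List.of(new Item(9), new Item(4), new Item(25));
            System.out.println(minOf(items)); // 84.0, i.e. the minimum of sqrt(x) * 42
        }
    }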

    Read the article

  • SQL SERVER – CTE can be Updated

    - by Pinal Dave
    Today I received a fantastic email from Matthew Spieth, a SQL Server expert from Ohio. He recently had a great conversation with his colleagues in the office and wanted to make sure that everybody who reads this blog knows about this little feature, which is commonly confused. We will start our story with Matthew's own statement: "Users often confuse CTEs with temp tables, but technically they are different: CTEs are like views, and they can be updated just like views." Very true statement from Matthew. I totally agree with what he is saying. Just like him, I have often come across a situation where developers think a CTE is like a temp table. When you update a temp table, the change remains in the scope of the temp table and does not propagate to the table from which the temp table was built. However, this is not the case with a CTE: when you update a CTE, it updates the underlying table, just like a view does. Here is the working example built by Matthew to illustrate this behavior. Check the value in the base table first:

    USE AdventureWorks2012;
    -- Check the value in the base table
    SELECT Color
    FROM [Production].[Product]
    WHERE ProductNumber = 'CA-6738';

    Now let us build a CTE with the same data:

    ;WITH CTEUpd(ProductID, Name, ProductNumber, Color)
    AS (SELECT ProductID, Name, ProductNumber, Color
        FROM [Production].[Product]
        WHERE ProductNumber = 'CA-6738')

    Now let us update the CTE with the following code:

    -- Update CTE
    UPDATE CTEUpd
    SET Color = 'Rainbow';

    Now let us check the base table on which the CTE was built:

    -- Check - The value in the base table is updated
    SELECT Color
    FROM [Production].[Product]
    WHERE ProductNumber = 'CA-6738';

    That's it! You can update a CTE and it will update the base table. Here is the script which you should execute all together:

    USE AdventureWorks2012;
    -- Check the value in the base table
    SELECT Color
    FROM [Production].[Product]
    WHERE ProductNumber = 'CA-6738';
    -- Build CTE
    ;WITH CTEUpd(ProductID, Name, ProductNumber, Color)
    AS (SELECT ProductID, Name, ProductNumber, Color
        FROM [Production].[Product]
        WHERE ProductNumber = 'CA-6738')
    -- Update CTE
    UPDATE CTEUpd
    SET Color = 'Rainbow';
    -- Check - The value in the base table is updated
    SELECT Color
    FROM [Production].[Product]
    WHERE ProductNumber = 'CA-6738';

    If you are aware of such a scenario, do let me know and I will post it on my blog with due credit to you. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • The People Who Support Linux

    Linux.com: "The Linux Foundation's individual members help to support the work of Linux creator Linus Torvalds and other important activities that advance Linux, while getting a variety of other fun and valuable benefits. The series begins with Matthew Fernandez, a senior application developer based in Sydney, Australia. Matthew has been using Linux since 2001 and just recently became a Linux Foundation member."

    Read the article

  • How can I back up my ubuntu system?

    - by Eloff
    I'm sure there's a lot of questions on here similar to this, and I've been reading them, but I still feel this warrants a new question. I want nightly, incremental backups (full disk images would waste a lot of space - unless compressed somehow.) Preferably rotating or deleting old backups when running out of space or after a fixed number of backups. I want to be able to quickly and painlessly restore my system from these backups. This is my first time running ubuntu as my main development machine and I know from my experience with it as a server and in virtual machines that I regularly manage to make it unbootable or damage it to the point of being unable to rescue it. So how would you recommend I do this? There are so many options out there I really don't know where to start. There seems to be a vocal school of thought that it's sufficient to backup your home directory and the list of installed packages from the package manager. I've already installed lots of things from source, or outside of the package manager (development tools, ides, compilers, graphics drivers, etc.) So at the very least, if I do not back up the operating system itself I need to grab all config files, all program binaries, all created but required files, etc. I'd rather backup too much than too little - an ubuntu install is tiny anyway. Also this drastically reduces the restore time, which would cost me more in my time than the extra storage space. I tried using Deja Dup to backup the root partition, excluding some things like /mnt /media /dev /proc etc. Although many websites assured me you can backup a running linux system this way - that seems to be false as it complained that it could not backup the following files: /boot/System.map-3.0.0-17-generic /boot/System.map-3.2.0-22-generic /boot/vmcoreinfo-3.0.0-17-generic /boot/vmlinuz-3.0.0-17-generic /boot/vmlinuz-3.2.0-22-generic /etc/.pwd.lock /etc/NetworkManager/system-connections/LAN Connection /etc/apparmor.d/cache/lightdm-guest-session /etc/apparmor.d/cache/sbin.dhclient /etc/apparmor.d/cache/usr.bin.evince /etc/apparmor.d/cache/usr.lib.telepathy /etc/apparmor.d/cache/usr.sbin.cupsd /etc/apparmor.d/cache/usr.sbin.tcpdump /etc/apt/trustdb.gpg /etc/at.deny /etc/ati/inst_path_default /etc/ati/inst_path_override /etc/chatscripts /etc/cups/ssl /etc/cups/subscriptions.conf /etc/cups/subscriptions.conf.O /etc/default/cacerts /etc/fuse.conf /etc/group- /etc/gshadow /etc/gshadow- /etc/mtab.fuselock /etc/passwd- /etc/ppp/chap-secrets /etc/ppp/pap-secrets /etc/ppp/peers /etc/security/opasswd /etc/shadow /etc/shadow- /etc/ssl/private /etc/sudoers /etc/sudoers.d/README /etc/ufw/after.rules /etc/ufw/after6.rules /etc/ufw/before.rules /etc/ufw/before6.rules /lib/ufw/user.rules /lib/ufw/user6.rules /lost+found /root /run/crond.reboot /run/cups/certs /run/lightdm /run/lock/whoopsie/lock /run/udisks /var/backups/group.bak /var/backups/gshadow.bak /var/backups/passwd.bak /var/backups/shadow.bak /var/cache/apt/archives/lock /var/cache/cups/job.cache /var/cache/cups/job.cache.O /var/cache/cups/ppds.dat /var/cache/debconf/passwords.dat /var/cache/ldconfig /var/cache/lightdm/dmrc /var/crash/_usr_lib_x86_64-linux-gnu_colord_colord.102.crash /var/lib/apt/lists/lock /var/lib/dpkg/lock /var/lib/dpkg/triggers/Lock /var/lib/lightdm /var/lib/mlocate/mlocate.db /var/lib/polkit-1 /var/lib/sudo /var/lib/urandom/random-seed /var/lib/ureadahead/pack /var/lib/ureadahead/run.pack /var/log/btmp /var/log/installer/casper.log /var/log/installer/debug /var/log/installer/partman 
/var/log/installer/syslog /var/log/installer/version /var/log/lightdm/lightdm.log /var/log/lightdm/x-0-greeter.log /var/log/lightdm/x-0.log /var/log/speech-dispatcher /var/log/upstart/alsa-restore.log /var/log/upstart/alsa-restore.log.1.gz /var/log/upstart/console-setup.log /var/log/upstart/console-setup.log.1.gz /var/log/upstart/container-detect.log /var/log/upstart/container-detect.log.1.gz /var/log/upstart/hybrid-gfx.log /var/log/upstart/hybrid-gfx.log.1.gz /var/log/upstart/modemmanager.log /var/log/upstart/modemmanager.log.1.gz /var/log/upstart/module-init-tools.log /var/log/upstart/module-init-tools.log.1.gz /var/log/upstart/procps-static-network-up.log /var/log/upstart/procps-static-network-up.log.1.gz /var/log/upstart/procps-virtual-filesystems.log /var/log/upstart/procps-virtual-filesystems.log.1.gz /var/log/upstart/rsyslog.log /var/log/upstart/rsyslog.log.1.gz /var/log/upstart/ureadahead.log /var/log/upstart/ureadahead.log.1.gz /var/spool/anacron/cron.daily /var/spool/anacron/cron.monthly /var/spool/anacron/cron.weekly /var/spool/cron/atjobs /var/spool/cron/atspool /var/spool/cron/crontabs /var/spool/cups

    Read the article

  • How to run conky from the terminal?

    - by Esmail0022
    http://www.unixmen.com/configure-con...t-howto-conky/ In this link there are 11 steps to get conky. I did all of them, but the terminal shows this message:

    The program 'conky' can be found in the following packages:
     * conky-cli
     * conky-std
    Try: sudo apt-get install

    and I tried typing this but saw this message:

    ismail@ismail-ASUS:~$ sudo apt-get install conky
    [sudo] password for ismail:
    E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
    E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?

    Can you help me?

    Read the article

  • Is it possible to make the Numpad 5 button work as the Numpad Down button while Num Lock is on?

    - by Eddy
    I hope someone can help me out with this; I use a laptop and I like to play games a lot that are arrow key dependent. So after a while of using my laptop, I accidentally broke the Down Arrow key. So I decided to try and find out how to make the Numpad5 button work as the Numpad Down button while Numlock is on. I was trying to use AutoHotkey, but so far I've had no luck. Can anyone help me out with this? So far the scripts I used that didn't work were Numpad5::NumpadDown or Numpad5::Down. Am I doing something wrong here?

    Read the article
