Search Results

Search found 3089 results on 124 pages for 'lock in'.


  • Persistent SQL Table lock from C#

    - by Chris
    I'm trying to create a persistent SQL (SQL Server 2005) lock at the table level. I'm not updating or querying the specified table, but I need to prevent a third-party application from updating the locked table as a means of preventing transactions from being posted (the table I wish to lock is the key in their transaction that interferes with my processing). In my experience the table is only locked for the time a specific transaction is taking place. Any ideas? The third-party developer has logged this feature as an enhancement, but since they are in the middle of rolling out a major release I can expect to wait at least 6 months for it. I know that this isn't a great solution, since their software will fall over, but it is of a critical enough nature that we're willing to live with the consequences.
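
    A minimal C# sketch of one way to do this: open a transaction, take an exclusive table lock with the TABLOCKX and HOLDLOCK hints, and simply keep the transaction open for as long as the block is needed. The connection string and dbo.TheirTable are placeholders, not names from the question.

        using System;
        using System.Data.SqlClient;

        class TableLockHolder
        {
            static void Main()
            {
                using (var conn = new SqlConnection(
                    "Data Source=.;Initial Catalog=TheirDb;Integrated Security=SSPI"))
                {
                    conn.Open();
                    using (SqlTransaction tx = conn.BeginTransaction())
                    {
                        using (var cmd = new SqlCommand(
                            "SELECT COUNT(*) FROM dbo.TheirTable WITH (TABLOCKX, HOLDLOCK);",
                            conn, tx))
                        {
                            cmd.ExecuteScalar();     // acquires an exclusive table lock
                        }

                        Console.WriteLine("Table locked. Press Enter to release...");
                        Console.ReadLine();          // third-party updates block while we wait here

                        tx.Rollback();               // releases the lock
                    }
                }
            }
        }

    The lock is held until the transaction commits, rolls back, or the connection closes, so the blocking process must stay alive for the whole window.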

    Read the article

  • ASP.NET lock thread method

    - by Peter
    Hello, I'm developing an ASP.NET Web Forms application in C#. I have a method which creates a new Order for a customer. It looks similar to this:

        private string CreateOrder(string userName) {
            // Fetch current order
            Order order = FetchOrder(userName);
            if (order.OrderId == 0) {
                // Has no order yet, create a new one
                order.OrderNumber = Utility.GenerateOrderNumber();
                order.Save();
            }
            return order;
        }

    The problem is that one customer issuing two requests (threads) can cause this method to run twice concurrently, which can create two orders. How can I properly lock this method, so it can only be executed by one thread at a time per customer? I tried:

        Mutex mutex = null;

        private string CreateOrder(string userName) {
            if (mutex == null) {
                mutex = new Mutex(true, userName);
            }
            mutex.WaitOne();
            // Code from above
            mutex.ReleaseMutex();
            mutex = null;
            return order;
        }

    This works, but on some occasions it hangs on WaitOne and I don't know why. Is there an error, or should I use another way to lock? Thanks
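
    For comparison, a sketch of one way to get a per-customer lock without the shared mutable field (the shared field plus the null reset is what makes the version above race). It assumes Order, FetchOrder and Utility are the asker's existing members, that OrderNumber is the string the method is meant to return, and that the "order-" name prefix is arbitrary:

        private string CreateOrder(string userName)
        {
            // One named mutex per customer, created per call and never stored in a field.
            using (Mutex mutex = new Mutex(false, "order-" + userName))
            {
                mutex.WaitOne();
                try
                {
                    Order order = FetchOrder(userName);
                    if (order.OrderId == 0)
                    {
                        order.OrderNumber = Utility.GenerateOrderNumber();
                        order.Save();
                    }
                    return order.OrderNumber;
                }
                finally
                {
                    mutex.ReleaseMutex();
                }
            }
        }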

    Read the article

  • ORACLE: PMON Release Lock

    - by Liu Maclean
    PMON is one of Oracle's key background processes: when a server process dies, PMON cleans up the dead process, releases its enqueue locks and cleans up its latches. The demonstration below traces what PMON actually does when it recovers a dead process and releases its locks. The environment:

        SQL> select * from v$version;

        BANNER
        --------------------------------------------------------------------------------
        Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
        PL/SQL Release 11.2.0.3.0 - Production
        CORE    11.2.0.3.0      Production
        TNS for Linux: Version 11.2.0.3.0 - Production
        NLSRTL Version 11.2.0.3.0 - Production

        SQL> select * from global_name;

        GLOBAL_NAME
        --------------------------------------------------------------------------------
        www.oracledatabase12g.com

        SQL> select pid, program from v$process;

               PID PROGRAM
        ---------- ------------------------------------------------
                 1 PSEUDO
                 2 [email protected] (PMON)
               ...
                11 [email protected] (LMON)
                12 [email protected] (LMD0)
               ...
                18 [email protected] (LGWR)
               ...
                20 [email protected] (SMON)
               ...

        SQL> drop table maclean;
        Table dropped.

        SQL> create table maclean(t1 int);
        Table created.

        SQL> insert into maclean values(1);
        1 row created.

        SQL> commit;
        Commit complete.

    Note the process IDs that matter here: PID=2 PMON, PID=11 LMON, PID=12 LMD0, PID=18 LGWR, PID=20 SMON. Next, two sessions are made to collide on "enq: TX - row lock contention", and the blocking session is then killed, so that we can watch PMON recover the dead process and release its TX lock.

    Process A:

        SQL> select addr,spid,pid from v$process
              where addr = (select paddr from v$session
                            where sid = (select distinct sid from v$mystat));

        ADDR             SPID                            PID
        ---------------- ------------------------ ----------
        00000000BD516B80 17880                            46

        SQL> select distinct sid from v$mystat;

               SID
        ----------
                22

        SQL> update maclean set t1=t1+1;
        1 row updated.

    Process B:

        SQL> select addr,spid,pid from v$process
              where addr = (select paddr from v$session
                            where sid = (select distinct sid from v$mystat));

        ADDR             SPID                            PID
        ---------------- ------------------------ ----------
        00000000BD515AD0 17908                            45

        SQL> update maclean set t1=t1+1;
        -- hangs: Process B now waits on "enq: TX - row lock contention"

    From a third session (Process C), check the locks held by Process A and enable KST tracing so PMON's work will show up in its trace:

        SQL> set linesize 200 pagesize 1400
        SQL> select * from v$lock where sid=22;

        ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
        ---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
        00000000BDCD7618 00000000BDCD7670         22 AE        100          0          4          0         48          2
        00007F63268A9E28 00007F63268A9E88         22 TM      77902          0          3          0         32          2
        00000000B9BB4950 00000000B9BB49C8         22 TX     458765        892          6          0         32          1

    Process A holds three enqueue locks: AE, TM and TX.

        SQL> alter system switch logfile;
        System altered.
        SQL> alter system checkpoint;
        System altered.
        SQL> alter system flush buffer_cache;
        System altered.
        SQL> alter system set "_trace_events"='10000-10999:255:2,20,33';
        System altered.

        SQL> ! kill -9 17880

    Killing 17880 kills Process A. After that, Process B's update is freed; dump the errorstack and KST trace for PMON and for Process B:

        SQL> oradebug setorapid 2;
        Oracle pid: 2, Unix process pid: 17533, image: [email protected] (PMON)
        SQL> oradebug dump errorstack 4;
        Statement processed.
        SQL> oradebug tracefile_name
        /s01/orabase/diag/rdbms/vprod/VPROD1/trace/VPROD1_pmon_17533.trc

        SQL> oradebug setorapid 45;
        Oracle pid: 45, Unix process pid: 17908, image: [email protected] (TNS V1-V3)
        SQL> oradebug dump errorstack 4;
        Statement processed.
        SQL> oradebug tracefile_name
        /s01/orabase/diag/rdbms/vprod/VPROD1/trace/VPROD1_ora_17908.trc

    PMON's KST trace shows the cleanup:

        2012-05-18 10:37:34.557225 :8001ECE8:db_trace:ktur.c@5692:ktugru(): [10444:2:1] next rollback uba: 0x00000000.0000.00
        2012-05-18 10:37:34.557382 :8001ECE9:db_trace:ksl2.c@16009:ksl_update_post_stats(): [10005:2:1] KSL POST SENT postee=18 num=4 loc='ksa2.h LINE:285 ID:ksasnd' id1=0 id2=0 name=   type=0
        2012-05-18 10:37:34.557514 :8001ECEA:db_trace:ksq.c@8540:ksqrcli(): [10704:2:1] ksqrcl: release TX-0007000d-0000037c mode=X
        2012-05-18 10:37:34.558819 :8001ECF0:db_trace:ksl2.c@16009:ksl_update_post_stats(): [10005:2:1] KSL POST SENT postee=45 num=5 loc='kji.h LINE:3418 ID:kjata: wake up enqueue owner' id1=0 id2=0 name=   type=0
        2012-05-18 10:37:34.559047 :8001ECF8:db_trace:ksl2.c@16009:ksl_update_post_stats(): [10005:2:1] KSL POST SENT postee=12 num=6 loc='kjm.h LINE:1224 ID:kjmpost: post lmd' id1=0 id2=0 name=   type=0
        2012-05-18 10:37:34.559271 :8001ECFC:db_trace:ksq.c@8826:ksqrcli(): [10704:2:1] ksqrcl: SUCCESS
        2012-05-18 10:37:34.559291 :8001ECFD:db_trace:ktu.c@8652:ktudnx(): [10813:2:1] ktudnx: dec cnt xid:7.13.892 nax:0 nbx:0
        2012-05-18 10:37:34.559301 :8001ECFE:db_trace:ktur.c@3198:ktuabt(): [10444:2:1] ABORT TRANSACTION - xid: 0x0007.00d.0000037c
        2012-05-18 10:37:34.559327 :8001ECFF:db_trace:ksq.c@8540:ksqrcli(): [10704:2:1] ksqrcl: release TM-0001304e-00000000 mode=SX
        2012-05-18 10:37:34.559365 :8001ED00:db_trace:ksq.c@8826:ksqrcli(): [10704:2:1] ksqrcl: SUCCESS
        2012-05-18 10:37:34.559908 :8001ED01:db_trace:ksq.c@8540:ksqrcli(): [10704:2:1] ksqrcl: release AE-00000064-00000000 mode=S
        2012-05-18 10:37:34.559982 :8001ED02:db_trace:ksq.c@8826:ksqrcli(): [10704:2:1] ksqrcl: SUCCESS
        2012-05-18 10:37:34.560217 :8001ED03:db_trace:ksfd.c@15379:ksfdfods(): [10298:2:1] ksfdfods:fob=0xbab87b48 aiopend=0
        2012-05-18 10:37:34.560336 :GSIPC:kjcs.c@4876:kjcsombdi(): GSIPC:SOD: 0xbc79e0c8 action 3 state 0 chunk (nil) regq 0xbc79e108 batq 0xbc79e118
        2012-05-18 10:37:34.560357 :GSIPC:kjcs.c@5293:kjcsombdi(): GSIPC:SOD: exit cleanup for 0xbc79e0c8 rc: 1, loc: 0x303
        2012-05-18 10:37:34.560375 :8001ED04:db_trace:kss.c@1414:kssdch(): [10809:2:1] kssdch(0xbd516b80 = process, 3) 1 0 exit
        2012-05-18 10:37:34.560939 :8001ED06:db_trace:kmm.c@10578:kmmlrl(): [10257:2:1] KMMLRL: Entering: flg(0x0) rflg(0x4)
        2012-05-18 10:37:34.561091 :8001ED07:db_trace:kmm.c@10472:kmmlrl_process_events(): [10257:2:1] KMMLRL: Events: succ(3) wait(0) fail(0)
        2012-05-18 10:37:34.561100 :8001ED08:db_trace:kmm.c@11279:kmmlrl(): [10257:2:1] KMMLRL: Reg/update: flg(0x0) rflg(0x4)
        2012-05-18 10:37:34.563325 :8001ED0B:db_trace:kmm.c@12511:kmmlrl(): [10257:2:1] KMMLRL: Update: ret(0)
        2012-05-18 10:37:34.563335 :8001ED0C:db_trace:kmm.c@12768:kmmlrl(): [10257:2:1] KMMLRL: Exiting: flg(0x0) rflg(0x4)
        2012-05-18 10:37:34.563354 :8001ED0D:db_trace:ksl2.c@2598:kslwtbctx(): [10005:2:1] KSL WAIT BEG [pmon timer] 300/0x12c 0/0x0 0/0x0 wait_id=78 seq_num=79 snap_id=1

    Reading the trace from the top: PMON releases the TX lock still held by the dead Process A (ksqrcl: release TX-0007000d-0000037c mode=X) and posts Process B, the waiter that can now acquire that TX lock (KSL POST SENT postee=45 ... 'kjata: wake up enqueue owner'). Process B's own trace confirms that it received PMON's post and then continues through GES latch waits:

        ksl2.c@14563:ksliwat(): [10005:45:151] KSL POST RCVD poster=2 num=5 loc='kji.h LINE:3418 ID:kjata: wake up enqueue owner' id1=0 id2=0 name=   type=0 fac#=3 posted=0x3 may_be_posted=1
        kslwtbctx(): [10005:45:151] KSL WAIT BEG [latch: ges resource hash list] 3162668560/0xbc827e10 91/0x5b 0/0x0 wait_id=14 seq_num=15 snap_id=1
        kslwtectx(): [10005:45:151] KSL WAIT END [latch: ges resource hash list] 3162668560/0xbc827e10 91/0x5b 0/0x0 wait_id=14 seq_num=15 snap_id=1

    Because this is RAC, PMON also posts LMD (the lock manager daemon) so that the GES layer gets updated (KSL POST SENT postee=12 ... 'kjmpost: post lmd'), and the TX release completes with ksqrcl: SUCCESS. PMON then aborts Process A's transaction (ktudnx: dec cnt xid:7.13.892, ABORT TRANSACTION xid: 0x0007.00d.0000037c), releases Process A's TM lock on table MACLEAN (ksqrcl: release TM-0001304e-00000000 mode=SX ... SUCCESS) and its AE lock, the "prevent dropping an edition in use" enqueue (ksqrcl: release AE-00000064-00000000 mode=S ... SUCCESS). Next comes the cleanup of Process A itself: the GSIPC structures are torn down (kjcsombdi) and kssdch(0xbd516b80 = process, 3) deletes the children of the process state object; 0xbd516b80 is Process A's paddr (KSS: delete children of state obj). PMON also runs kmmlrl() to update the instance goodness for the dropped session (KMMLRL: Entering, Events: succ(3) wait(0) fail(0), Reg/update, Update: ret(0), Exiting). With the cleanup finished, PMON returns to its 3-second "pmon timer" wait (KSL WAIT BEG [pmon timer] 300/0x12c ... wait_id=78 seq_num=79 snap_id=1).

    Read the article

  • Well-tested C/C++ lock-free queue?

    - by uj
    I am looking for a well-tested, publicly available C/C++ implementation of a lock-free queue. I need at least multiple-producer/single-consumer functionality. Multiple consumers would be even better, if such an implementation exists. I'm targeting VC's _Interlocked... intrinsics, though anything which is straightforward to port would be fine. Could anyone give any pointers?
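
    Not a full queue, but a minimal sketch of the multiple-producer / single-consumer idea using portable C++11 std::atomic rather than the VC intrinsics: producers push nodes with a CAS loop, and the single consumer detaches the whole list at once with exchange(). Node ownership and reclamation are left to the caller; this is illustrative only.

        #include <atomic>
        #include <cstdio>

        struct Node {
            int value;
            Node* next;
        };

        std::atomic<Node*> head{nullptr};

        void push(Node* n) {                       // any number of producer threads
            n->next = head.load(std::memory_order_relaxed);
            while (!head.compare_exchange_weak(n->next, n,
                                               std::memory_order_release,
                                               std::memory_order_relaxed)) {
                // on failure n->next was refreshed with the current head; retry
            }
        }

        Node* take_all() {                         // exactly one consumer thread
            // Detach the whole list atomically; items come out newest-first,
            // so reverse the list here if FIFO order is required.
            return head.exchange(nullptr, std::memory_order_acquire);
        }

        int main() {
            Node a{1, nullptr}, b{2, nullptr};
            push(&a);
            push(&b);
            for (Node* n = take_all(); n != nullptr; n = n->next)
                std::printf("%d\n", n->value);     // prints 2 then 1 (LIFO)
        }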

    Read the article

  • Lock free multiple readers single writer

    - by dummzeuch
    I have got an in-memory data structure that is read by multiple threads and written by only one thread. Currently I am using a critical section to make this access thread-safe. Unfortunately this has the effect of blocking readers even though only another reader is accessing it. There are two options to remedy this:

    1. use TMultiReadExclusiveWriteSynchronizer
    2. do away with any blocking by using a lock-free approach

    For 2. I have got the following so far (any code that doesn't matter has been left out):

        type
          TDataManager = class
          private
            FAccessCount: integer;
            FData: TDataClass;
          public
            procedure Read(out _Some: integer; out _Data: double);
            procedure Write(_Some: integer; _Data: double);
          end;

        procedure TDataManager.Read(out _Some: integer; out _Data: double);
        var
          Data: TDataClass;
        begin
          InterlockedIncrement(FAccessCount);
          try
            // make sure we get both values from the same TDataClass instance
            Data := FData;
            // read the actual data
            _Some := Data.Some;
            _Data := Data.Data;
          finally
            InterlockedDecrement(FAccessCount);
          end;
        end;

        procedure TDataManager.Write(_Some: integer; _Data: double);
        var
          NewData: TDataClass;
          OldData: TDataClass;
          ReaderCount: integer;
        begin
          NewData := TDataClass.Create(_Some, _Data);
          InterlockedIncrement(FAccessCount);
          OldData := TDataClass(InterlockedExchange(integer(FData), integer(NewData)));
          // now FData points to the new instance but there might still be
          // readers that got the old one before we exchanged it.
          ReaderCount := InterlockedDecrement(FAccessCount);
          if ReaderCount = 0 then
            // no active readers, so we can safely free the old instance
            FreeAndNil(OldData)
          else begin
            /// here is the problem
          end;
        end;

    Unfortunately there is the small problem of getting rid of the OldData instance after it has been replaced. If no other thread is currently within the Read method (ReaderCount = 0), it can safely be disposed and that's it. But what can I do if that's not the case? I could just store it until the next call and dispose of it there, but Windows scheduling could in theory let a reader thread sleep while it is within the Read method and still holds a reference to OldData. If you see any other problem with the above code, please tell me about it. This is to be run on computers with multiple cores and the above methods are to be called very frequently. In case this matters: I am using Delphi 2007 with the built-in memory manager. I am aware that the memory manager probably enforces some lock anyway when creating a new class, but I want to ignore that for the moment. Edit: it may not have been clear from the above: for the full lifetime of the TDataManager object there is only one thread that writes to the data, not several that might compete for write access. So this is a special case of MREW.

    Read the article

  • Why lock statements don't scale

    - by Alex.Davies
    We are going to have to stop using lock statements one day. Just like we had to stop using goto statements. The problem is similar: they're pretty easy to follow in small programs, but code with locks isn't composable. That means that small pieces of program that work in isolation can't necessarily be put together and work together. Of course actors scale fine :)

    Why lock statements don't scale as software gets bigger: deadlocks. You have a program with lots of threads picking up lots of locks. You already know that if two of your threads both try to pick up a lock that the other already has, they will deadlock. Your program will come to a grinding halt, and there will be fire and brimstone. "Easy!" you say, "Just make sure all the threads pick up the locks in the same order." Yes, that works. But you've broken composability. Now, to add a new lock to your code, you have to consider all the other locks already in your code and check that they are taken in the right order. Algorithm buffs will have noticed this approach means it takes quadratic time to write a program. That's bad.

    Why lock statements don't scale as hardware gets bigger: memory bus contention. There's another headache, one that most programmers don't usually need to think about, but that is going to bite us in a big way in a few years. Locking needs exclusive use of the entire system's memory bus while taking out the lock. That's not too bad for a single or dual-core system, but already for quad-core systems it's a pretty large overhead. Have a look at this blog about the .NET 4 ThreadPool for some numbers and a weird analogy (see the author's comment). Not too bad yet, but I'm scared my 1000-core machine of the future is going to go slower than my machine today!

    I don't know the answer to this problem yet. Maybe some kind of per-core work queue system with hierarchical work stealing. Definitely hardware support. But what I do know is that using locks specifically prevents any solution to this. We should be abstracting our code away from the details of locks as soon as possible, so we can swap in whatever solution arrives when it does. NAct uses locks at the moment. But my advice is that you code using actors (which do scale well as software gets bigger). And when there's a better way of implementing actors that'll scale well as hardware gets bigger, only NAct needs to work out how to use it, and your program will go fast on its own.
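
    For concreteness, a minimal C# sketch of the "same order" workaround the post criticises: every multi-lock operation sorts its locks by a stable id before taking them. The Account type, its Id and Locker fields, and Transfer are invented for the illustration.

        class Account
        {
            public readonly int Id;
            public readonly object Locker = new object();
            public decimal Balance;
            public Account(int id) { Id = id; }
        }

        static class Transfers
        {
            public static void Transfer(Account from, Account to, decimal amount)
            {
                // Always lock the lower Id first, so two concurrent transfers
                // between the same pair of accounts can never deadlock.
                Account first  = from.Id < to.Id ? from : to;
                Account second = from.Id < to.Id ? to : from;

                lock (first.Locker)
                lock (second.Locker)
                {
                    from.Balance -= amount;
                    to.Balance   += amount;
                }
            }
        }

    The composability complaint is exactly that this ordering rule has to be known and respected by every piece of code that might ever take more than one of these locks.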

    Read the article

  • (Google AppEngine) Memcache Lock Entry

    - by Friedrich
    Hi, I need locking in memcache. Since all operations are atomic, that should be an easy task. My idea is to use a basic spin-lock mechanism, so every object that needs locking in memcache gets a lock object, which will be polled for access:

        // pseudo code
        // try to get a lock
        int lock;
        do {
            lock = Memcache.increment("lock", 1);
        } while (lock != 1);
        // ok we got the lock
        // do something here
        // and finally unlock
        Memcache.put("lock", 0);

    How does such a solution perform? Do you have a better idea of how to lock a memcache object? Best regards, Friedrich Schick
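
    A sketch of an alternative, assuming App Engine's low-level Java MemcacheService: an atomic "add if absent" either creates the lock key (lock acquired) or fails (someone else holds it), and an expiration guards against a crashed holder. The key name, expiry and value are illustrative, and since memcache entries can be evicted at any time this is advisory locking only.

        import com.google.appengine.api.memcache.Expiration;
        import com.google.appengine.api.memcache.MemcacheService;
        import com.google.appengine.api.memcache.MemcacheServiceFactory;

        public class MemcacheLock {
            private static final MemcacheService cache =
                    MemcacheServiceFactory.getMemcacheService();

            public static boolean tryLock(String lockKey) {
                // Returns true only if the key did not exist yet, i.e. we got the lock.
                return cache.put(lockKey, "locked",
                        Expiration.byDeltaSeconds(30),
                        MemcacheService.SetPolicy.ADD_ONLY_IF_NOT_PRESENT);
            }

            public static void unlock(String lockKey) {
                cache.delete(lockKey);
            }
        }

    Callers would retry tryLock with a short sleep instead of spinning on increment, and the increment-based version above never resets cleanly if a holder dies.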

    Read the article

  • Python Locking Implementation (with threading module)

    - by Matty
    This is probably a rudimentary question, but I'm new to threaded programming in Python and am not entirely sure what the correct practice is. Should I be creating a single lock object (either globally or being passed around) and using that everywhere that I need to do locking? Or should I be creating multiple lock instances in each of the classes where I will be employing them? Take these two rudimentary code samples; which direction is best to go? The main difference is that a single lock instance is used in both class A and class B in the second, while multiple instances are used in the first.

    Sample 1

        class A():
            def __init__(self, theList):
                self.theList = theList
                self.lock = threading.Lock()

            def poll(self):
                while True:
                    # do some stuff that eventually needs to work with theList
                    self.lock.acquire()
                    try:
                        self.theList.append(something)
                    finally:
                        self.lock.release()

        class B(threading.Thread):
            def __init__(self, theList):
                self.theList = theList
                self.lock = threading.Lock()
                self.start()

            def run(self):
                while True:
                    # do some stuff that eventually needs to work with theList
                    self.lock.acquire()
                    try:
                        self.theList.remove(something)
                    finally:
                        self.lock.release()

        if __name__ == "__main__":
            aList = []
            for x in range(10):
                B(aList)
            A(aList).poll()

    Sample 2

        class A():
            def __init__(self, theList, lock):
                self.theList = theList
                self.lock = lock

            def poll(self):
                while True:
                    # do some stuff that eventually needs to work with theList
                    self.lock.acquire()
                    try:
                        self.theList.append(something)
                    finally:
                        self.lock.release()

        class B(threading.Thread):
            def __init__(self, theList, lock):
                self.theList = theList
                self.lock = lock
                self.start()

            def run(self):
                while True:
                    # do some stuff that eventually needs to work with theList
                    self.lock.acquire()
                    try:
                        self.theList.remove(something)
                    finally:
                        self.lock.release()

        if __name__ == "__main__":
            lock = threading.Lock()
            aList = []
            for x in range(10):
                B(aList, lock)
            A(aList, lock).poll()
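
    For what it's worth, a minimal sketch of the shared-lock direction (the point being that a lock only protects data when every thread that touches that data uses the same lock object), written with the with statement so acquire/release can't be forgotten. The helper names and item handling are invented for the illustration.

        import threading

        shared_lock = threading.Lock()
        shared_list = []

        def append_item(item):
            # every function that touches shared_list uses the same lock object
            with shared_lock:
                shared_list.append(item)

        def remove_item(item):
            with shared_lock:
                if item in shared_list:
                    shared_list.remove(item)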

    Read the article

  • Java synchronized method lock on object, or method?

    - by wuntee
    If I have 2 synchronized methods in the same class, but each accessing different variables, can 2 threads access those 2 methods at the same time? Does the lock occur on the object, or does it get as specific as the variables inside the synchronized method? Example: class x{ private int a; private int b; public synchronized void addA(){ a++; } public synchronized void addB(){ b++; } } Can 2 threads access the same instance of class x performing x.addA() and x.addB() at the same time?
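
    A sketch of the answer in code: synchronized instance methods all lock the same monitor (the instance itself), so addA() and addB() cannot run at the same time on one object even though they touch different fields. Giving each field its own lock object restores the independence; the lockA/lockB names are just for illustration.

        class X {
            private int a;
            private int b;
            private final Object lockA = new Object();
            private final Object lockB = new Object();

            public void addA() {
                synchronized (lockA) {
                    a++;
                }
            }

            public void addB() {
                synchronized (lockB) {
                    b++;
                }
            }
        }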

    Read the article

  • How can I lock the screen on LXDE?

    - by maniat1k
    Like GNOME's Control + Alt + L: how can I do that in LXDE? What do I have to install to do this? Thanks.

    Searching for a solution on my own: if I press Alt+F2 and type xscreensaver-command -lock, that works as a small solution. I tried to write a small script, but it's not working. This is what I did:

        vi lock.sh

        #!/bin/bash
        xscreensaver-command -lock
        exit 0

        chmod +x lock.sh

    But this doesn't work. Ideas?
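
    One possible route, assuming a stock LXDE setup where keybindings are handled by Openbox and the config lives at ~/.config/openbox/lxde-rc.xml (the file name and Openbox version may differ by distribution): add this inside the <keyboard> section and run "openbox --reconfigure".

        <!-- bind Ctrl+Alt+L to the xscreensaver lock -->
        <keybind key="C-A-l">
          <action name="Execute">
            <command>xscreensaver-command -lock</command>
          </action>
        </keybind>

    The xscreensaver daemon has to be running for the -lock command to do anything, which may also be why the standalone script appears to do nothing.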

    Read the article

  • /var/lib/dpkg/lock.....help!

    - by Pycnopodia
    I had to reinstall the entire OS a little while ago and I have been trying to reinstall all of the programs I had before, but now I've hit a bit of a problem. I was trying to install Dropbox from Synaptic, but it cannot finish the process, and as a result I cannot update anything anymore. The error that comes out is:

        E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
        E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?

    I have tried:

        sudo apt-get install -f
        sudo apt-get -f install
        sudo rm /var/lib/dpkg/lock
        sudo apt-get -f update
        sudo dpkg --clear-selections
        sudo dpkg --configure -a

    But nothing seems to work. Is there a way to solve this? Thanks
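
    A sketch of the usual diagnosis: the lock is normally held because another package manager (Synaptic, apt-get, Update Manager, unattended-upgrades) is still running, so check that first and only clear the lock files if nothing owns them.

        ps aux | grep -E 'apt|dpkg|synaptic'      # is another installer still running?
        sudo fuser -v /var/lib/dpkg/lock          # which process holds the lock, if any

        # If nothing is using them, clear the stale locks and finish the interrupted install:
        sudo rm /var/lib/dpkg/lock /var/lib/apt/lists/lock /var/cache/apt/archives/lock
        sudo dpkg --configure -a
        sudo apt-get install -f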

    Read the article

  • Figuring out the resource a lock in SQL Server 2000 affects

    - by Michael Lang
    I am adding a simple web-interface to show data from a commercial off the shelf (COTS) application. This COTS issues locks on any record the user is actively looking at (whether they intend to edit and update it or not). I have found sp_lock and the Microsoft sp_lock2 scripts and can see the locks, so that's all well and good. However, I cannot figure out how I can tell if a specific record I am about to update has been affected by one of these locks. If I submit the update request and there is in fact a lock, the web-interface will wait indefinitely until the user closes the window in the COTS. How can I either: a) determine before issuing an update that the record has been locked OR b) issue an update that will immediately return with a LOCKED status rather than indefinitely waiting on the COTS user to close their window on that record?
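
    One option for (b), sketched for SQL Server 2000 with placeholder table and column names: SET LOCK_TIMEOUT makes a blocked statement fail immediately with error 1222 ("lock request time out period exceeded") instead of queueing behind the COTS user's lock, and the web tier can trap that error and report "record locked".

        SET LOCK_TIMEOUT 0;            -- milliseconds; 0 = give up immediately

        DECLARE @RecordId int, @NewValue varchar(50);
        SELECT  @RecordId = 42, @NewValue = 'updated from web';   -- placeholder values

        UPDATE dbo.SomeCotsTable       -- table and column names are placeholders
        SET    SomeColumn = @NewValue
        WHERE  RecordId   = @RecordId;
        -- If the COTS session holds a lock on that row, this returns error 1222
        -- at once rather than waiting indefinitely.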

    Read the article

  • Lock-Free Data Structures in C++ Compare and Swap Routine

    - by slf
    In this paper, Lock-Free Data Structures (pdf), the following "Compare and Swap" fundamental is shown:

        template <class T>
        bool CAS(T* addr, T exp, T val) {
            if (*addr == exp) {
                *addr = val;
                return true;
            }
            return false;
        }

    It then says: "The entire procedure is atomic." But how is that so? Is it not possible that some other actor could change the value at addr between the if and the assignment? In which case, assuming all code is using this CAS fundamental, it would be found the next time something "expected" it to be a certain way and it wasn't. However, that doesn't change the fact that it could happen, in which case, is it still atomic? What about the other actor returning true, even when its changes were overwritten by this actor? If that can't possibly happen, then why? I want to believe the author, so what am I missing here? I am thinking it must be obvious. My apologies in advance if this seems trivial.
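
    For reference, the code in the paper is pseudocode for an operation the hardware performs in one indivisible step (LOCK CMPXCHG on x86); no other thread can run between the comparison and the store. A small C++11 sketch of the real primitive:

        #include <atomic>
        #include <cassert>

        int main() {
            std::atomic<int> value{5};

            int expected = 5;
            bool swapped = value.compare_exchange_strong(expected, 7);
            assert(swapped && value.load() == 7);   // succeeded: 5 was still there

            expected = 5;
            swapped = value.compare_exchange_strong(expected, 9);
            assert(!swapped && expected == 7);      // failed: 'expected' now holds the
                                                    // value some other actor would have left
            return 0;
        }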

    Read the article

  • Inside the Concurrent Collections: ConcurrentDictionary

    - by Simon Cooper
    Using locks to implement a thread-safe collection is rather like using a sledgehammer - unsubtle, easy to understand, and tends to make any other tool redundant. Unlike the previous two collections I looked at, ConcurrentStack and ConcurrentQueue, ConcurrentDictionary uses locks quite heavily. However, it is careful to wield locks only where necessary to ensure that concurrency is maximised. This will, by necessity, be a higher-level look than my other posts in this series, as there is quite a lot of code and logic in ConcurrentDictionary. Therefore, I do recommend that you have ConcurrentDictionary open in a decompiler to have a look at all the details that I skip over. The problem with locks There are several things to bear in mind when using locks, as encapsulated by the lock keyword in C# and the System.Threading.Monitor class in .NET (if you're unsure as to what lock does in C#, I briefly covered it in my first post in the series): Locks block threads The most obvious problem is that threads waiting on a lock can't do any work at all. No preparatory work, no 'optimistic' work like in ConcurrentQueue and ConcurrentStack, nothing. It sits there, waiting to be unblocked. This is bad if you're trying to maximise concurrency. Locks are slow Whereas most of the methods on the Interlocked class can be compiled down to a single CPU instruction, ensuring atomicity at the hardware level, taking out a lock requires some heavy lifting by the CLR and the operating system. There's quite a bit of work required to take out a lock, block other threads, and wake them up again. If locks are used heavily, this impacts performance. Deadlocks When using locks there's always the possibility of a deadlock - two threads, each holding a lock, each trying to acquire the other's lock. Fortunately, this can be avoided with careful programming and structured lock-taking, as we'll see. So, it's important to minimise where locks are used to maximise the concurrency and performance of the collection. Implementation As you might expect, ConcurrentDictionary is similar in basic implementation to the non-concurrent Dictionary, which I studied in a previous post. I'll be using some concepts introduced there, so I recommend you have a quick read of it. So, if you were implementing a thread-safe dictionary, what would you do? The naive implementation is to simply have a single lock around all methods accessing the dictionary. This would work, but doesn't allow much concurrency. Fortunately, the bucketing used by Dictionary allows a simple but effective improvement to this - one lock per bucket. This allows different threads modifying different buckets to do so in parallel. Any thread making changes to the contents of a bucket takes the lock for that bucket, ensuring those changes are thread-safe. The method that maps each bucket to a lock is the GetBucketAndLockNo method: private void GetBucketAndLockNo( int hashcode, out int bucketNo, out int lockNo, int bucketCount) { // the bucket number is the hashcode (without the initial sign bit) // modulo the number of buckets bucketNo = (hashcode & 0x7fffffff) % bucketCount; // and the lock number is the bucket number modulo the number of locks lockNo = bucketNo % m_locks.Length; } However, this does require some changes to how the buckets are implemented. The 'implicit' linked list within a single backing array used by the non-concurrent Dictionary adds a dependency between separate buckets, as every bucket uses the same backing array. 
    Instead, ConcurrentDictionary uses a strict linked list on each bucket: This ensures that each bucket is entirely separate from all other buckets; adding or removing an item from a bucket is independent of any changes to other buckets. Modifying the dictionary All the operations on the dictionary follow the same basic pattern: void AlterBucket(TKey key, ...) { int bucketNo, lockNo; 1: GetBucketAndLockNo( key.GetHashCode(), out bucketNo, out lockNo, m_buckets.Length); 2: lock (m_locks[lockNo]) { 3: Node headNode = m_buckets[bucketNo]; 4: Mutate the node linked list as appropriate } } For example, when adding another entry to the dictionary, you would iterate through the linked list to check whether the key exists already, and add the new entry as the head node. When removing items, you would find the entry to remove (if it exists), and remove the node from the linked list. Adding, updating, and removing items all follow this pattern. Performance issues There is a problem we have to address at this point. If the number of buckets in the dictionary is fixed in the constructor, then the performance will degrade from O(1) to O(n) when a large number of items are added to the dictionary. As more and more items get added to the linked lists in each bucket, the lookup operations will spend most of their time traversing a linear linked list. To fix this, the buckets array has to be resized once the number of items in each bucket has gone over a certain limit. (In ConcurrentDictionary this limit is when the size of the largest bucket is greater than the number of buckets for each lock. This check is done at the end of the TryAddInternal method.) Resizing the bucket array and re-hashing everything affects every bucket in the collection. Therefore, this operation needs to take out every lock in the collection. Taking out multiple locks at once inevitably summons the spectre of the deadlock; two threads each hold a lock, and each tries to acquire the other's lock. How can we eliminate this? Simple - ensure that threads never try to 'swap' locks in this fashion. When taking out multiple locks, always take them out in the same order, and always take out all the locks you need before starting to release them. In ConcurrentDictionary, this is controlled by the AcquireLocks, AcquireAllLocks and ReleaseLocks methods. Locks are always taken out and released in the order they are in the m_locks array, and locks are all released right at the end of the method in a finally block. At this point, it's worth pointing out that the locks array is never re-assigned, even when the buckets array is increased in size. The number of locks is fixed in the constructor by the concurrencyLevel parameter. This simplifies programming the locks; you don't have to check if the locks array has changed or been re-assigned before taking out a lock object. And you can be sure that when a thread takes out a lock, another thread isn't going to re-assign the lock array. This would create a new series of lock objects, thus allowing another thread to ignore the existing locks (and any threads controlling them), breaking thread-safety. Consequences of growing the array Just because we're using locks doesn't mean that race conditions aren't a problem. We can see this by looking at the GrowTable method. 
The operation of this method can be boiled down to: private void GrowTable(Node[] buckets) { try { 1: Acquire first lock in the locks array // this causes any other thread trying to take out // all the locks to block because the first lock in the array // is always the one taken out first // check if another thread has already resized the buckets array // while we were waiting to acquire the first lock 2: if (buckets != m_buckets) return; 3: Calculate the new size of the backing array 4: Node[] array = new array[size]; 5: Acquire all the remaining locks 6: Re-hash the contents of the existing buckets into array 7: m_buckets = array; } finally { 8: Release all locks } } As you can see, there's already a check for a race condition at step 2, for the case when the GrowTable method is called twice in quick succession on two separate threads. One will successfully resize the buckets array (blocking the second in the meantime), when the second thread is unblocked it'll see that the array has already been resized & exit without doing anything. There is another case we need to consider; looking back at the AlterBucket method above, consider the following situation: Thread 1 calls AlterBucket; step 1 is executed to get the bucket and lock numbers. Thread 2 calls GrowTable and executes steps 1-5; thread 1 is blocked when it tries to take out the lock in step 2. Thread 2 re-hashes everything, re-assigns the buckets array, and releases all the locks (steps 6-8). Thread 1 is unblocked and continues executing, but the calculated bucket and lock numbers are no longer valid. Between calculating the correct bucket and lock number and taking out the lock, another thread has changed where everything is. Not exactly thread-safe. Well, a similar problem was solved in ConcurrentStack and ConcurrentQueue by storing a local copy of the state, doing the necessary calculations, then checking if that state is still valid. We can use a similar idea here: void AlterBucket(TKey key, ...) { while (true) { Node[] buckets = m_buckets; int bucketNo, lockNo; GetBucketAndLockNo( key.GetHashCode(), out bucketNo, out lockNo, buckets.Length); lock (m_locks[lockNo]) { // if the state has changed, go back to the start if (buckets != m_buckets) continue; Node headNode = m_buckets[bucketNo]; Mutate the node linked list as appropriate } break; } } TryGetValue and GetEnumerator And so, finally, we get onto TryGetValue and GetEnumerator. I've left these to the end because, well, they don't actually use any locks. How can this be? Whenever you change a bucket, you need to take out the corresponding lock, yes? Indeed you do. However, it is important to note that TryGetValue and GetEnumerator don't actually change anything. Just as immutable objects are, by definition, thread-safe, read-only operations don't need to take out a lock because they don't change anything. All lockless methods can happily iterate through the buckets and linked lists without worrying about locking anything. However, this does put restrictions on how the other methods operate. Because there could be another thread in the middle of reading the dictionary at any time (even if a lock is taken out), the dictionary has to be in a valid state at all times. Every change to state has to be made visible to other threads in a single atomic operation (all relevant variables are marked volatile to help with this). 
    This restriction ensures that whatever the reading threads are doing, they never read the dictionary in an invalid state (e.g. items that should be in the collection temporarily removed from the linked list, or reading a node that has had its key & value removed before the node itself has been removed from the linked list). Fortunately, all the operations needed to change the dictionary can be done in that way. Bucket resizes are made visible when the new array is assigned back to the m_buckets variable. Any additions or modifications to a node are done by creating a new node, then splicing it into the existing list using a single variable assignment. Node removals are simply done by re-assigning the node's m_next pointer. Because the dictionary can be changed by another thread during execution of the lockless methods, the GetEnumerator method is liable to return dirty reads - changes made to the dictionary after GetEnumerator was called, but before the enumeration got to that point in the dictionary. It's worth listing at this point which methods are lockless, and which take out all the locks in the dictionary to ensure they get a consistent view of the dictionary: Lockless: TryGetValue GetEnumerator The indexer getter ContainsKey Takes out every lock (lockfull?): Count IsEmpty Keys Values CopyTo ToArray Concurrent principles That covers the overall implementation of ConcurrentDictionary. I haven't even begun to scratch the surface of this sophisticated collection. That I leave to you. However, we've looked at enough to be able to extract some useful principles for concurrent programming: Partitioning When using locks, the work is partitioned into independent chunks, each with its own lock. Each partition can then be modified concurrently to other partitions. Ordered lock-taking When a method does need to control the entire collection, locks are taken and released in a fixed order to prevent deadlocks. Lockless reads Read operations that don't care about dirty reads don't take out any lock; the rest of the collection is implemented so that any reading thread always has a consistent view of the collection. That leads us to the final collection in this little series - ConcurrentBag. Lacking a non-concurrent analogy, it is quite different to any other collection in the class libraries. Prepare your thinking hats!
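
    A toy C# illustration of the first two principles above, lock striping and ordered acquisition of every lock; this is only the shape of the idea, not the real ConcurrentDictionary code, and the StripedCounter type is invented for the example.

        using System;
        using System.Collections.Generic;
        using System.Threading;

        class StripedCounter
        {
            private readonly object[] locks;
            private readonly Dictionary<string, int>[] buckets;

            public StripedCounter(int concurrencyLevel)
            {
                locks = new object[concurrencyLevel];
                buckets = new Dictionary<string, int>[concurrencyLevel];
                for (int i = 0; i < concurrencyLevel; i++)
                {
                    locks[i] = new object();
                    buckets[i] = new Dictionary<string, int>();
                }
            }

            public void Increment(string key)
            {
                // Partitioning: only the stripe owning this key is blocked.
                int stripe = (key.GetHashCode() & 0x7fffffff) % locks.Length;
                lock (locks[stripe])
                {
                    int current;
                    buckets[stripe].TryGetValue(key, out current);
                    buckets[stripe][key] = current + 1;
                }
            }

            public int Total()
            {
                // Ordered lock-taking: take every lock in index order, release in finally.
                int acquired = 0;
                try
                {
                    for (; acquired < locks.Length; acquired++)
                        Monitor.Enter(locks[acquired]);

                    int total = 0;
                    foreach (var bucket in buckets)
                        foreach (var count in bucket.Values)
                            total += count;
                    return total;
                }
                finally
                {
                    for (int i = 0; i < acquired; i++)
                        Monitor.Exit(locks[i]);
                }
            }
        }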

    Read the article

  • I had a power outage. Now MySQL's lock file won't go away. What do you suggest?

    - by jasonspiro
    I do freelance IT consulting for various clients, both in Toronto, Canada, and worldwide. A client recently experienced a power failure. Now they've been having various problems with a Slackware 12.0.0 machine which also acts as a DNS server. One problem is that they can't log into phpMyAdmin. I tried stopping and restarting MySQL. But even when MySQL is stopped, the lock file stays around.

        jasonspiro@cybertron:~$ sudo /etc/init.d/mysql stop
        Shutting down MySQL. SUCCESS!
        jasonspiro@cybertron:~$ sudo /etc/init.d/mysql stop
        ERROR! MySQL manager or server PID file could not be found!
        jasonspiro@cybertron:~$ sudo /etc/init.d/mysql status
        ERROR! MySQL is not running, but lock exists
        jasonspiro@cybertron:~$ ls -l /var/lock/subsys/mysql
        -rw-r--r-- 1 root root 0 2012-07-05 16:18 /var/lock/subsys/mysql

    Why is MySQL's lock file hanging around despite the fact that MySQL isn't running? Can I simply stop MySQL, delete the lock file, and start MySQL again? Are there any other steps that I should take next, or nothing?
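
    A sketch of the usual recovery steps, assuming mysqld really is not running (the subsys lock is only a marker file the init script failed to remove when power was lost):

        pgrep -l mysqld                      # confirm no mysqld/mysqld_safe is running
        sudo rm /var/lock/subsys/mysql       # remove the stale marker
        sudo /etc/init.d/mysql start
        # afterwards it is worth checking the tables for crash damage, e.g.:
        # mysqlcheck --all-databases -u root -p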

    Read the article

  • How can I lock my Mac when I walk away?

    - by schnapple
    This has got to be an easy, trivial question but as a new Mac user, how can I lock my Mac when I walk away? On Windows this is dead simple - Win+L. Or hit Ctrl-Alt-Del and select "Lock this Computer" The best thing I've found for the Mac is to rig the screensaver to require password on wake, set a hot corner to fire off the screen saver, and do that as I leave. Which feels really "Windows 3.1" to me. Is there a Win+L-style method to quickly lock my Mac when I walk away?

    Read the article

  • MySQL way to make a lock in a PHP page

    - by Cris
    Hello, I have the following MySQL table:

        myTable:
            id       int auto_increment
            voucher  int not null
            id_user  int null

    I've populated the voucher field with values from 1 to 100000, so I have 100000 records. When a user clicks a button on a PHP page, I need to allocate a record to that user, so I do something like:

        update myTable set id_user=XXX
        where voucher=(SELECT * FROM (SELECT MIN(voucher) FROM myTable WHERE id_user is null) v);

    The problem is that I don't use locks, and I should, because if two users click at the same moment I risk assigning the same voucher to two different people (two updates on the same record, so I lose one user). I think there must be a correct way to do this; can you help me please? Thanks! cris
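
    One sketch of a lock-free-ish variant: let a single UPDATE both pick and claim the row, so the statement's own row locking prevents two clicks from taking the same voucher. MySQL allows ORDER BY and LIMIT on a single-table UPDATE; 123 stands in for the current user's id.

        UPDATE myTable
        SET    id_user = 123
        WHERE  id_user IS NULL
        ORDER  BY voucher
        LIMIT  1;

        -- then read back which voucher this user received
        SELECT voucher FROM myTable WHERE id_user = 123 LIMIT 1;

    This assumes each user claims at most one voucher, so the follow-up SELECT by id_user is unambiguous.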

    Read the article

  • C# Web Service gets stuck waiting for lock, does not return

    - by blue
    We have a C# (2.0) application which talks to our server (in Java) via web services. Lately we have started seeing the following behavior on (ONLY) one of our lab machines (XP): once in a while (every few days), one of the web service requests will just get stuck; it will not return or time out. Following is the stack trace where it seems to be stuck. We have no clue what is going on here. Any pointer would be of great help.

        ESP      EIP
        05eceeec 7c90eb94 [GCFrame: 05eceeec]
        05ecefbc 7c90eb94 [HelperMethodFrame_1OBJ: 05ecefbc] System.Threading.Monitor.Enter(System.Object)
        05ecf014 7a5b0034 System.Net.ConnectionGroup.Disassociate(System.Net.Connection)
        05ecf040 7a5aeaa7 System.Net.Connection.PrepareCloseConnectionSocket(System.Net.ConnectionReturnResult ByRef)
        05ecf0a4 7a5ac0e1 System.Net.Connection.ReadStartNextRequest(System.Net.WebRequest, System.Net.ConnectionReturnResult ByRef)
        05ecf0e8 7a5b1119 System.Net.ConnectStream.CallDone(System.Net.ConnectionReturnResult)
        05ecf0fc 7a5b3b5a System.Net.ConnectStream.ReadChunkedSync(Byte[], Int32, Int32)
        05ecf114 7a5b2b90 System.Net.ConnectStream.ReadWithoutValidation(Byte[], Int32, Int32, Boolean)
        05ecf160 7a5b29cc System.Net.ConnectStream.Read(Byte[], Int32, Int32)
        05ecf1a0 79473cab System.IO.StreamReader.ReadBuffer(Char[], Int32, Int32, Boolean ByRef)
        05ecf1c4 79473bd6 System.IO.StreamReader.Read(Char[], Int32, Int32)
        05ecf1e8 69c29119 System.Xml.XmlTextReaderImpl.ReadData()
        05ecf1f8 69c2ad70 System.Xml.XmlTextReaderImpl.ParseDocumentContent()
        05ecf20c 69c292d7 System.Xml.XmlTextReaderImpl.Read()
        05ecf21c 69c2929d System.Xml.XmlTextReader.Read()
        05ecf220 6991b3e7 System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(System.Web.Services.Protocols.SoapClientMessage, System.Net.WebResponse, System.IO.Stream, Boolean)
        05ecf268 69919ed1 System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(System.String, System.Object[])

    Read the article

  • Lock-free memory reclamation with 64-bit pointers

    - by JDonner
    Herlihy and Shavit's book (The Art of Multiprocessor Programming) solves memory reclamation using Java's AtomicStampedReference<T>. To write one in C++ for x86_64, I imagine it requires at least a 12-byte swap operation: 8 bytes for a 64-bit pointer and 4 for the int stamp. Is there x86 hardware support for this, and if not, any pointers on how to do wait-free memory reclamation without it?
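
    A sketch of one answer: x86_64 has a 16-byte CAS (CMPXCHG16B), and a pointer-plus-stamp pair can be expressed portably as a trivially copyable struct inside std::atomic. Whether the compiler emits the instruction lock-free depends on flags (gcc/clang typically want -mcx16 and may otherwise fall back to a lock via libatomic), so is_lock_free() should be checked; the Node type and stamp width are illustrative.

        #include <atomic>
        #include <cstdint>
        #include <cstdio>

        struct Node { int payload; };

        struct StampedRef {
            Node*         ptr;
            std::uint64_t stamp;   // version counter, bumped on every swap (anti-ABA)
        };

        std::atomic<StampedRef> top{ StampedRef{ nullptr, 0 } };

        bool cas_top(StampedRef expected, Node* new_ptr) {
            StampedRef desired{ new_ptr, expected.stamp + 1 };
            // Atomically: if top still equals 'expected' (pointer AND stamp), install 'desired'.
            return top.compare_exchange_strong(expected, desired);
        }

        int main() {
            std::printf("16-byte CAS is lock-free here: %s\n",
                        top.is_lock_free() ? "yes" : "no");
        }

    If double-width CAS is unavailable, common fallbacks are packing a small counter into unused pointer bits, or avoiding the stamp altogether with hazard pointers or epoch-based reclamation.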

    Read the article
