Search Results

Search found 908 results on 37 pages for 'optimistic concurrency'.

Page 16/37 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • How can CopyOnWriteArrayList be thread-safe?

    - by Shooshpanchick
    I've taken a look at OpenJDK's source for CopyOnWriteArrayList and it seems that all write operations are protected by the same lock while read operations are not protected at all. As I understand it, under the JMM all accesses to a variable (both reads and writes) should be protected by the same lock, or reordering effects may occur. For example, the set(int, E) method contains these lines (under lock): /* 1 */ int len = elements.length; /* 2 */ Object[] newElements = Arrays.copyOf(elements, len); /* 3 */ newElements[index] = element; /* 4 */ setArray(newElements); The get(int) method, on the other hand, only does return get(getArray(), index);. In my understanding of the JMM, this means that get may observe the array in an inconsistent state if statements 1-4 are reordered like 1-2(new)-4-2(copyOf)-3. Do I understand the JMM incorrectly, or is there another explanation for why CopyOnWriteArrayList is thread-safe?
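
    The usual answer is the volatile backing array: in the OpenJDK source the array field is declared volatile, so setArray() is a volatile write and getArray() is a volatile read, which forbids the reordering described above. Below is a minimal copy-on-write sketch (not the JDK implementation; class and method names are invented) showing how publishing a fully built copy through a volatile write keeps lock-free readers safe.

        import java.util.Arrays;

        // Minimal copy-on-write container, for illustration only.
        // The volatile field is the key: the assignment in add() is a volatile
        // write that happens-before any later volatile read in get(), so a
        // reader can never observe the half-built copy from step 3.
        class CopyOnWriteBox<E> {
            private volatile Object[] elements = new Object[0];

            public synchronized void add(E e) {        // writers serialize on the lock
                Object[] copy = Arrays.copyOf(elements, elements.length + 1);
                copy[copy.length - 1] = e;             // mutate the private copy only
                elements = copy;                       // volatile write publishes it
            }

            @SuppressWarnings("unchecked")
            public E get(int index) {
                return (E) elements[index];            // volatile read, no lock needed
            }
        }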

  • Physical Cores vs Virtual Cores in Parallelism

    - by Code Curiosity
    When it comes to virtualization, I have been deliberating on the relationship between physical cores and virtual cores, especially in how it affects applications employing parallelism. For example, in a VM scenario, if there are fewer physical cores than virtual cores (if that's possible), what effect or limits does that place on the application's parallel processing? I'm asking because in my environment it's not disclosed what the physical architecture is. Is there still much advantage to parallelizing if the application lives on a dual-core VM hosted on a single-core physical machine?
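
    From inside the JVM you can only see what the guest reports, which is worth checking before deciding how far to parallelize. A small sketch (the pool sizing is just the usual CPU-bound default, not a rule):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class CoreProbe {
            public static void main(String[] args) {
                // Reports the cores the guest OS exposes to the JVM, not the
                // physical cores underneath; a dual-core VM on a single-core
                // host still reports 2, and the two vCPUs simply time-slice.
                int visibleCores = Runtime.getRuntime().availableProcessors();
                System.out.println("Cores visible to the JVM: " + visibleCores);

                // Sizing a pool to the visible count is the common default for
                // CPU-bound work; oversubscribed vCPUs add scheduling overhead
                // rather than real concurrency.
                ExecutorService pool = Executors.newFixedThreadPool(visibleCores);
                pool.shutdown();
            }
        }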

  • Java Download Concurrent Data

    - by xger86x
    Hi, I'm developing an app which downloads map tiles around different places in a city. To do this, I have one thread for each place, in which I select the tiles and create a thread to download each one. The question is how to avoid creating a thread for a tile that is already in the thread pool. I shouldn't just check whether the file exists, since it is possible that the thread for that tile already exists (another place already needs that tile) but the file has not been created yet. Any ideas? Thanks
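
    One common way to deduplicate in-flight work is to key the pending downloads in a concurrent map so only the first request for a tile submits a task. A sketch under assumed names (TileFetcher, downloadTile and the byte[] payload are placeholders, not from the question):

        import java.util.concurrent.*;

        // Deduplicates in-flight tile downloads by URL: two places asking for
        // the same tile share one Future instead of spawning two tasks.
        class TileFetcher {
            private final ExecutorService pool = Executors.newFixedThreadPool(8);
            private final ConcurrentMap<String, Future<byte[]>> inFlight = new ConcurrentHashMap<>();

            public Future<byte[]> fetch(String tileUrl) {
                // computeIfAbsent runs the mapping function at most once per key,
                // so the submit happens only for the first requester.
                return inFlight.computeIfAbsent(tileUrl,
                        url -> pool.submit(() -> downloadTile(url)));
            }

            private byte[] downloadTile(String url) {
                // ... perform the HTTP request and write the tile file here ...
                return new byte[0];
            }
        }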

  • Do the 'up to date' guarantees provided by final fields in Java's memory model extend to indirect references?

    - by mattbh
    The Java language spec defines semantics of final fields in section 17.5: The usage model for final fields is a simple one. Set the final fields for an object in that object's constructor. Do not write a reference to the object being constructed in a place where another thread can see it before the object's constructor is finished. If this is followed, then when the object is seen by another thread, that thread will always see the correctly constructed version of that object's final fields. It will also see versions of any object or array referenced by those final fields that are at least as up-to-date as the final fields are. My question is - does the 'up-to-date' guarantee extend to the contents of nested arrays, and nested objects? An example scenario: Thread A constructs a HashMap of ArrayLists, then assigns the HashMap to final field 'myFinal' in an instance of class 'MyClass' Thread B sees a (non-synchronized) reference to the MyClass instance and reads 'myFinal', and accesses and reads the contents of one of the ArrayLists In this scenario, are the members of the ArrayList as seen by Thread B guaranteed to be at least as up to date as they were when MyClass's constructor completed?
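
    A sketch of the scenario (class and field names invented) may make the guarantee easier to pin down: the final-field "freeze" covers everything reachable through the final reference as it was when the constructor finished, but says nothing about writes made afterwards.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Thread A constructs this and then publishes the reference without
        // synchronization; Thread B reads myFinal and walks into the lists.
        class MyClass {
            final Map<String, List<String>> myFinal;

            MyClass() {
                Map<String, List<String>> m = new HashMap<>();
                List<String> list = new ArrayList<>();
                list.add("added-before-constructor-finished");
                m.put("key", list);
                myFinal = m;   // freeze point: the map, its lists and their
                               // elements as of this moment are guaranteed
                               // visible to any thread that later sees the
                               // MyClass reference
            }
        }

        // An unsynchronized reader is guaranteed to find
        // "added-before-constructor-finished" in the list; elements added to
        // the list after construction carry no such guarantee and need their
        // own synchronization.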

  • What is the effect of final variable declaration in methods?

    - by Finbarr
    Classic example of a simple server:

        class ThreadPerTaskSocketServer {
            public static void main(String[] args) throws IOException {
                ServerSocket socket = new ServerSocket(80);
                while (true) {
                    final Socket connection = socket.accept();
                    Runnable task = new Runnable() {
                        public void run() {
                            handleRequest(connection);
                        }
                    };
                    new Thread(task).start();
                }
            }
        }

    Why should the Socket be declared as final? Is it because the new Thread that handles the request could refer back to the socket variable in the method and cause some sort of ConcurrentModificationException?
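
    The usual explanation is language-level, not concurrency-level: an anonymous inner class captures a copy of the local variable's value, and before Java 8's "effectively final" rule the compiler required the local to be declared final so that copy could never diverge from the original. A small sketch (names invented) of the capture rule:

        // The anonymous Runnable receives a copy of connectionName's value at
        // the point where it is created, which is why the compiler insists the
        // local be final (or, since Java 8, effectively final). No
        // ConcurrentModificationException is involved.
        public class CaptureDemo {
            public static void main(String[] args) {
                final String connectionName = "socket-42";   // copied into the Runnable

                Runnable task = new Runnable() {
                    @Override
                    public void run() {
                        // Uses the captured copy; in the server example the loop
                        // can already be accepting the next Socket without
                        // affecting this task.
                        System.out.println("handling " + connectionName);
                    }
                };

                // connectionName = "other"; // would not compile: captured locals
                //                           // cannot be reassigned
                new Thread(task).start();
            }
        }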

  • Will this make the object thread-safe?

    - by sharptooth
    I have a native Visual C++ COM object and I need to make it completely thread-safe to be able to legally mark it as "free-threaded" in the system registry. Specifically, I need to make sure that no more than one thread ever accesses any member variable of the object simultaneously. The catch is I'm almost sure that no sane consumer of my COM object will ever try to use the object from more than one thread simultaneously. So I want the solution to be as simple as possible as long as it meets the requirement above. Here's what I came up with: I add a mutex or critical section as a member variable of the object. Every COM-exposed method acquires the mutex/section at the beginning and releases it before returning control. I understand that this solution doesn't provide fine-grained access and might slow execution down, but since I suppose simultaneous access will not really occur, I don't care about that. Will this solution suffice? Is there a simpler solution?

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?

  • Is this technically thread safe despite being mutable?

    - by Finbarr
    Yes, the private member variable bar should be final, right? But actually, in this instance, simply reading the value of an int is an atomic operation. So is this technically thread safe?

        class foo {
            private int bar;

            public foo(int bar) {
                this.bar = bar;
            }

            public int getBar() {
                return bar;
            }
        }

        // assume an infinite number of threads repeatedly calling getBar on the same instance of foo.
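
    Atomicity and visibility are separate guarantees: an int read can never be torn, but without final, volatile, or some other happens-before on publication, a reader handed the reference unsynchronized is allowed to see the default 0. A sketch of the two variants that do carry a visibility guarantee:

        // Reading bar is atomic in all three versions (no torn int values);
        // only these two give other threads a guarantee of seeing the value
        // written in the constructor.
        class FooFinal {
            private final int bar;                  // safe publication via final-field semantics

            FooFinal(int bar) { this.bar = bar; }

            public int getBar() { return bar; }
        }

        class FooVolatile {
            private volatile int bar;               // every read sees the latest write

            FooVolatile(int bar) { this.bar = bar; }

            public int getBar() { return bar; }
        }

        // With the original plain field, a thread that receives the foo
        // reference without synchronization may legally observe bar == 0.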

  • ArrayBlockingQueue exceeds given capacity

    - by Wojciech Reszelewski
    I've written a program solving the bounded producer & consumer problem. When constructing the ArrayBlockingQueue I defined a capacity of 100. I'm using the take and put methods inside threads, and I've noticed that sometimes I see "Element added" 102 times without any takes between them. Why does this happen? Producer run method:

        public void run() {
            Object e = new Object();
            while (true) {
                try {
                    queue.put(e);
                } catch (InterruptedException w) {
                    System.out.println("Oj, nie wyszlo, nie bij");
                }
                System.out.println("Element added");
            }
        }

    Consumer run method:

        public void run() {
            while (true) {
                try {
                    queue.take();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println("Element removed");
            }
        }

    Part of uniq -c on the output file:

        102 Element removed
        102 Element added
        102 Element removed
        102 Element added
        102 Element removed
        102 Element added
        102 Element removed
        102 Element added
        102 Element removed
        102 Element added
        102 Element removed
        102 Element added
          2 Element removed
          2 Element added
        102 Element removed
        102 Element added
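
    One likely explanation (a reading of the code above, not a JDK guarantee): the println runs outside the queue's internal lock, so the log line for a put can be delayed past several takes and their log lines, and uniq then groups them into runs of 102. The queue itself never holds more than 100 elements, which a sketch like this can confirm by sampling size() while the producer and consumer run:

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.atomic.AtomicInteger;

        // Samples queue.size() while a producer and consumer hammer the queue;
        // the maximum observed never exceeds the capacity, regardless of how
        // the interleaved log lines read.
        public class BoundCheck {
            public static void main(String[] args) throws InterruptedException {
                ArrayBlockingQueue<Object> queue = new ArrayBlockingQueue<>(100);
                AtomicInteger maxObserved = new AtomicInteger();

                Thread producer = new Thread(() -> {
                    try {
                        while (true) queue.put(new Object());
                    } catch (InterruptedException stop) {
                        // interrupted: exit
                    }
                });
                Thread consumer = new Thread(() -> {
                    try {
                        while (true) queue.take();
                    } catch (InterruptedException stop) {
                        // interrupted: exit
                    }
                });
                Thread monitor = new Thread(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        maxObserved.accumulateAndGet(queue.size(), Math::max);
                    }
                });

                producer.start(); consumer.start(); monitor.start();
                Thread.sleep(2000);
                producer.interrupt(); consumer.interrupt(); monitor.interrupt();
                producer.join(); consumer.join(); monitor.join();

                System.out.println("Max size observed: " + maxObserved.get()); // <= 100
            }
        }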

  • Find messages from a certain key to a certain key while being able to remove stale keys

    - by Alfred
    My problem: let's say I add messages to some sort of data structure:

        1. "dude"
        2. "where"
        3. "is"
        4. "my"
        5. "car"

    Asking for messages from index [4,5] should return: "my", "car". Next, let's assume that after a while I would like to purge old messages because they aren't useful anymore and I want to save memory. Let's say at time x messages [1-3] became stale. I assume that it would be most efficient to just do the deletion once every x seconds. After that, my data structure should contain:

        4. "my"
        5. "car"

    My solution? I was thinking of using a ConcurrentSkipListSet or ConcurrentSkipListMap, and of deleting the old messages from inside a newSingleThreadScheduledExecutor. I would like to know how you would implement this (efficiently and thread-safely), or whether there is a library you would use?
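
    ConcurrentSkipListMap covers both requirements directly: subMap gives the range query and headMap(...).clear() drops everything below a watermark, which can run on the single-threaded scheduler. A sketch under assumed names (MessageLog and the LongSupplier watermark are inventions for illustration):

        import java.util.Collection;
        import java.util.concurrent.ConcurrentNavigableMap;
        import java.util.concurrent.ConcurrentSkipListMap;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;
        import java.util.function.LongSupplier;

        // Messages keyed by a monotonically increasing index, with range reads
        // and periodic purging of everything below a stale-index watermark.
        class MessageLog {
            private final ConcurrentNavigableMap<Long, String> messages = new ConcurrentSkipListMap<>();
            private final ScheduledExecutorService purger = Executors.newSingleThreadScheduledExecutor();

            void add(long index, String message) {
                messages.put(index, message);
            }

            // e.g. range(4, 5) -> ["my", "car"]
            Collection<String> range(long fromInclusive, long toInclusive) {
                return messages.subMap(fromInclusive, true, toInclusive, true).values();
            }

            // Every periodSeconds, drop all entries whose key is below the
            // caller-supplied watermark; the headMap view is safe to clear
            // while readers and writers are active.
            void schedulePurge(LongSupplier staleWatermark, long periodSeconds) {
                purger.scheduleAtFixedRate(
                        () -> messages.headMap(staleWatermark.getAsLong()).clear(),
                        periodSeconds, periodSeconds, TimeUnit.SECONDS);
            }
        }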

  • Why wait should always be in synchronized block

    - by diy
    Hi gents, we all know that in order to invoke Object.wait(), the call must be placed in a synchronized block; otherwise, an IllegalMonitorStateException is thrown. But what's the reason for this restriction? I know that wait() releases the monitor, but why do we need to explicitly acquire the monitor by making a particular block synchronized and then release it by calling wait()? What is the potential damage if it were possible to invoke wait() outside a synchronized block while retaining its semantics of suspending the caller thread? Thanks in advance
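
    The standard answer is the lost-wakeup race: the check of the condition and the call to wait() have to be atomic with respect to the notify(), and the monitor is what makes them atomic. A sketch with invented names:

        // Classic pattern the restriction protects. If wait() could be called
        // without holding the lock, a consumer could check the condition, be
        // preempted, miss the producer's notify, and then sleep forever.
        class Mailbox {
            private final Object lock = new Object();
            private String message;                  // guarded by lock

            String take() throws InterruptedException {
                synchronized (lock) {
                    while (message == null) {
                        lock.wait();                 // atomically releases the lock and suspends;
                                                     // reacquires it before returning
                    }
                    String m = message;
                    message = null;
                    return m;
                }
            }

            void put(String m) {
                synchronized (lock) {
                    message = m;
                    lock.notifyAll();                // waiters wake and re-check the condition
                }
            }
        }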

  • re: adding more threads to forkjoinpool

    - by paintcan
    Word up, y'all. I recently experimented successfully with Scala futures and now have future { my shiznit } all over the place. I'm pleased as punch with the gains I'm seeing from the parallelism and whatnot, but I'm only seeing 4 worker threads and I want to see more. I've been looking all over for how I can crank the number of threads up to 11, but no luck. Help me out, doods.
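
    In plain Java terms, the 4 workers are the fork/join default of one per visible core, and the knob is the pool's parallelism. A sketch of the underlying idea in Java (in Scala you would normally configure the ExecutionContext backing your futures rather than build a pool by hand; the exact configuration depends on the Scala version, so treat this only as an illustration of the mechanism):

        import java.util.concurrent.ForkJoinPool;

        public class ElevenWorkers {
            public static void main(String[] args) throws Exception {
                // Fork/join pools default to one worker per core the JVM can
                // see, which is why a 4-core box shows 4 worker threads.
                System.out.println("Default parallelism: "
                        + ForkJoinPool.getCommonPoolParallelism());

                // Asking for 11 workers explicitly.
                ForkJoinPool pool = new ForkJoinPool(11);
                pool.submit(() -> System.out.println(
                        "running in " + Thread.currentThread().getName())).get();
                pool.shutdown();
            }
        }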

  • Spring.Net how does WebApplicationContext.GetObject handle concurrent requests?

    - by Alfamale
    Apologies if I have missed something obvious here but having gone through the documentation, forums and googled for a number of hours, I just can't find a definitive answer to the following questions: How does the WebApplicationContext.GetObject() method handle concurrent requests? Are the requests serialized or executed in parallel? Is there any performance data available to demonstrate how it behaves under load? Thanks in advance for your help, Andrew

  • FileInputStream and FileOutputStream to the same file: Is a read() guaranteed to see all write()s that "happened before"?

    - by user946850
    I am using a file as a cache for big data. One thread writes to it sequentially, another thread reads it sequentially. Can I be sure that all data that has been written (by write()) in one thread can be read() from another thread, assuming a proper "happens-before" relationship in terms of the Java memory model? Is this behavior documented? EDIT: In my JDK, FileOutputStream does not override flush(), and OutputStream.flush() is empty. That's why I'm wondering... EDIT^2: The streams in question are owned exclusively by a class that I have full control of. Each stream is guaranteed to be accessed by only one thread. My tests show that it works as expected, but I'm still wondering if this is guaranteed and documented. See also this related discussion: http://chat.stackoverflow.com/rooms/17598/discussion-between-hussain-al-mutawa-and-user946850
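
    For concreteness, here is a sketch of the setup the question assumes (names invented). The AtomicLong supplies the happens-before edge between the two threads; whether bytes written to the FileOutputStream before that edge are then guaranteed to be visible through a separate FileInputStream is exactly the behavior being asked about, so the sketch only illustrates the coordination, not the answer.

        import java.io.*;
        import java.util.concurrent.atomic.AtomicLong;

        public class FileHandoff {
            // Number of bytes the writer has published as safe to read.
            private static final AtomicLong published = new AtomicLong(0);

            public static void main(String[] args) throws Exception {
                File cache = File.createTempFile("cache", ".bin");
                byte[] payload = "hello".getBytes();

                Thread writer = new Thread(() -> {
                    try (FileOutputStream out = new FileOutputStream(cache)) {
                        out.write(payload);
                        out.flush();                     // a no-op for FileOutputStream, kept for clarity
                        published.set(payload.length);   // happens-before the reader's get()
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });

                Thread reader = new Thread(() -> {
                    try (FileInputStream in = new FileInputStream(cache)) {
                        while (published.get() < payload.length) {
                            Thread.yield();              // wait for the publish
                        }
                        byte[] buf = new byte[payload.length];
                        int n = in.read(buf);
                        System.out.println("read " + n + " bytes: " + new String(buf, 0, Math.max(n, 0)));
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                });

                writer.start(); reader.start();
                writer.join(); reader.join();
            }
        }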

  • "FOR UPDATE" v/s "LOCK IN SHARE MODE" : Allow concurrent threads to read updated "state" value of locked row

    - by shadesco
    I have the following scenario:

    User X logs in to the application from location lc1: call it Ulc1.
    User X (he has been hacked, a friend knows his login credentials, or he just logs in from a different browser on his machine; you get the point) logs in at the same time from location lc2: call it Ulc2.

    I am using a main servlet which gets a connection from the database pool, sets autocommit to false, and executes a command that goes through the app layers. If all is successful, it sets autocommit back to true in a "finally" statement and closes the connection; if an exception happens, it calls rollback().

    In my database (MySQL/InnoDB) I have a "history" table with the columns: id (primary key) | username | date | topic | locked. The column "locked" has the default value "false" and serves as a flag that marks whether a specific row is locked. Each row is specific to a user (as you can see from the username column).

    So back to the scenario:

    Ulc1 sends the command to update his history in the db for date "D" and topic "T".
    Ulc2 sends the same command to update the history for the same date "D" and the same topic "T" at the exact same time.

    I want to implement a MySQL/InnoDB locking scheme that lets whichever thread arrives do the following check: is the "locked" column for this row true or not? If true, return a message to the user that he is already updating the same data from another location. If not locked, flag the row as locked, update it, then reset locked to false once finished.

    Which of these two MySQL locking techniques will actually allow the second arriving thread to read the "updated" value of the locked column and decide what action to take? Should I use "FOR UPDATE" or "LOCK IN SHARE MODE"?

    This scenario explains what I want to accomplish:

    The Ulc1 thread arrives first: the "locked" column is false, so it sets it to true and continues the updating process.
    The Ulc2 thread arrives while Ulc1's transaction is still in progress. Even though the row is locked through InnoDB, it shouldn't have to wait: it should read the "new" value of the locked column, which is "true", rather than waiting for Ulc1's transaction to commit before reading it (by that time the value of the column will already have been reset to false).

    I am not very experienced with these two locking mechanisms. What I understand so far is that LOCK IN SHARE MODE allows other transactions to read the locked row, while FOR UPDATE doesn't even allow reading. But does that read see the updated value, or does the second arriving thread have to wait for the first thread to commit before reading the value? Any recommendations about which locking mechanism to use for this scenario are appreciated. Also, if there's a better way to check whether the row has been locked (other than using a true/false column flag), please let me know about it.
    Thank you.

    SOLUTION (JDBC pseudocode example based on @Darhazer's answer). Table: [ id (primary key) | username | date | topic | locked ]

        connection.setautocommit(false);
        // transaction-1
        PreparedStatement ps1 = "Select locked from tableName for update where id="key" and locked=false);
        ps1.executeQuery();
        // transaction 2
        PreparedStatement ps2 = "Update tableName set locked=true where id="key";
        ps2.executeUpdate();
        connection.setautocommit(true); // here we allow other transactions threads to see the new value

        connection.setautocommit(false);
        // transaction 3
        PreparedStatement ps3 = "Update tableName set aField="Sthg" where id="key" And date="D" and topic="T";
        ps3.executeUpdate();
        // reset locked to false
        PreparedStatement ps4 = "Update tableName set locked=false where id="key";
        ps4.executeUpdate();
        // commit
        connection.setautocommit(true);

  • Best way to reuse a Runnable

    - by Gandalf
    I have a class that implements Runnable and am currently using an Executor as my thread pool to run tasks (indexing documents into Lucene): executor.execute(new LuceneDocIndexer(doc, writer)); My issue is that my Runnable class creates many Lucene Field objects and I would rather reuse them than create new ones on every call. What's the best way to reuse these objects (Field objects are not thread-safe so I cannot simply make them static)? Should I create my own ThreadFactory? I notice that after a while the program starts to degrade drastically and the only thing I can think of is GC overhead. I am currently trying to profile the project to be sure this is even an issue, but for now let's just assume it is.
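
    One common way to reuse non-thread-safe objects across tasks in a fixed pool is a ThreadLocal, so each worker thread keeps its own instances alive between runs. A sketch with placeholder names (ReusableFields stands in for whatever holds the Lucene Field objects; no Lucene API calls are shown):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        class IndexingService {
            // Placeholder for the per-thread bundle of pre-built Field objects.
            static final class ReusableFields {
                void fill(String docText) { /* update the existing fields in place */ }
            }

            // Lazily creates one ReusableFields per worker thread and reuses it
            // for every task that thread runs.
            private static final ThreadLocal<ReusableFields> FIELDS =
                    ThreadLocal.withInitial(ReusableFields::new);

            private final ExecutorService executor = Executors.newFixedThreadPool(4);

            void index(String docText) {
                executor.execute(() -> {
                    ReusableFields fields = FIELDS.get();   // no per-task allocation
                    fields.fill(docText);
                    // ... hand the populated fields to the IndexWriter here ...
                });
            }
        }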

  • Multithreaded Java cache for objects that are heavy to create?

    - by krosenvold
    I need to cache some objects with fairly heavy creation times, and I need exactly-once creation semantics. It should be possible to create objects for different CacheKeys concurrently. I think I need something that (under the hood) does something like this: ConcurrentHashMap<CacheKey, Future<HeavyObject>> Are there any existing open-source implementations of this that I can reuse?
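
    This is essentially the "memoizer" pattern built from FutureTask plus putIfAbsent: exactly-once creation per key, with creation for different keys running concurrently. A minimal sketch (the generic names are placeholders for CacheKey/HeavyObject):

        import java.util.concurrent.*;
        import java.util.function.Function;

        class HeavyObjectCache<K, V> {
            private final ConcurrentMap<K, Future<V>> cache = new ConcurrentHashMap<>();
            private final Function<K, V> factory;

            HeavyObjectCache(Function<K, V> factory) {
                this.factory = factory;
            }

            V get(K key) throws InterruptedException, ExecutionException {
                Future<V> f = cache.get(key);
                if (f == null) {
                    FutureTask<V> task = new FutureTask<>(() -> factory.apply(key));
                    f = cache.putIfAbsent(key, task);   // only one thread wins the race
                    if (f == null) {
                        f = task;
                        task.run();                      // the winner performs the heavy creation
                    }
                }
                return f.get();                          // everyone else blocks until it is ready
            }
        }

    Libraries cover this too; Guava's LoadingCache, for example, is built around the same exactly-once loading idea, so rolling your own is rarely necessary.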

  • How do I ensure data consistency in this concurrent situation?

    - by MalcomTucker
    The problem is this: I have multiple competing threads (100+) that need to access one database table. Each thread will pass a String name: where that name exists in the table, the database should return the id for that row; where the name doesn't already exist, the name should be inserted and the new id returned. There can only ever be one instance of name in the database, i.e. name must be unique. How do I ensure that thread one doesn't insert name1 at the same time as thread two also tries to insert name1? In other words, how do I guarantee the uniqueness of name in a concurrent environment? This also needs to be as efficient as possible - this has the potential to be a serious bottleneck. I am using MySQL and Java. Thanks
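
    A common approach is to let the database arbitrate the race with a UNIQUE index: attempt the insert, and if it fails with a duplicate-key error, select the row the other thread just created. A JDBC sketch with an assumed schema (CREATE TABLE names (id BIGINT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255) NOT NULL UNIQUE)); the table and column names are inventions, and it assumes the driver maps duplicate-key failures to SQLIntegrityConstraintViolationException, as recent MySQL Connector/J versions do.

        import java.sql.*;

        class NameDao {
            private final Connection conn;

            NameDao(Connection conn) {
                this.conn = conn;
            }

            long idForName(String name) throws SQLException {
                // Try the insert first; the UNIQUE index rejects concurrent duplicates.
                try (PreparedStatement insert = conn.prepareStatement(
                        "INSERT INTO names (name) VALUES (?)", Statement.RETURN_GENERATED_KEYS)) {
                    insert.setString(1, name);
                    insert.executeUpdate();
                    try (ResultSet keys = insert.getGeneratedKeys()) {
                        keys.next();
                        return keys.getLong(1);
                    }
                } catch (SQLIntegrityConstraintViolationException duplicate) {
                    // Another thread inserted the same name first; read its id.
                    try (PreparedStatement select = conn.prepareStatement(
                            "SELECT id FROM names WHERE name = ?")) {
                        select.setString(1, name);
                        try (ResultSet rs = select.executeQuery()) {
                            rs.next();
                            return rs.getLong(1);
                        }
                    }
                }
            }
        }

    The correctness comes from the UNIQUE constraint, not from the Java code, so it holds no matter how many application threads (or application servers) are competing.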

  • Putting a thread to sleep until event X occurs

    - by tipu
    I'm writing to many files in a threaded app and I'm creating one handler per file. I have a HandlerFactory class that manages the distribution of these handlers. What I'd like to happen is this:

    thread A requests and gets foo.txt's file handle from the HandlerFactory class
    thread B requests foo.txt's file handle
    the handler class recognizes that this file handle has been checked out
    the handler class puts thread B to sleep
    thread A closes the file handle using a wrapper method from HandlerFactory
    HandlerFactory notifies the sleeping threads
    thread B wakes and successfully gets foo.txt's file handle

    This is what I have so far:

        def get_handler(self, file_path, type):
            self.lock.acquire()
            if file_path not in self.handlers:
                self.handlers[file_path] = open(file_path, type)
            elif not self.handlers[file_path].closed:
                time.sleep(1)
            self.lock.release()
            return self.handlers[file_path][type]

    I believe this covers the sleeping and handler retrieval successfully, but I am unsure how to wake up all threads, or even better wake up a specific thread.

  • Synchronize write to two collections

    - by glaz666
    I need to put a value into two maps if it is not there yet. The key-value pair (if set) should always be in both collections (that is, the put should happen in the two maps atomically). I have tried to implement this as follows:

        private final ConcurrentMap<String, Object> map1 = new ConcurrentHashMap<String, Object>();
        private final ConcurrentMap<String, Object> map2 = new ConcurrentHashMap<String, Object>();

        public Object putIfAbsent(String key) {
            Object retval = map1.get(key);
            if (retval == null) {
                synchronized (map1) {
                    retval = map1.get(key);
                    if (retval == null) {
                        Object value = new Object(); // or get it somewhere
                        synchronized (map2) {
                            map1.put(key, value);
                            map2.put(key, new Object());
                        }
                        retval = value;
                    }
                }
            }
            return retval;
        }

        public void doSomething(String key) {
            Object obj1 = map1.get(key);
            Object obj2 = map2.get(key);
            // do smth
        }

    Will that work fine in all cases? Thanks
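
    The locking above serializes the writers, but doSomething() reads the two maps without any lock, so it can still observe the pair half-published (map1 filled, map2 not yet). One simple alternative, sketched below with placeholder names, is to keep both values behind a single key so a reader gets both or neither:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        class PairStore {
            // Placeholder for whatever the two maps were holding.
            static final class Pair {
                final Object first;
                final Object second;

                Pair(Object first, Object second) {
                    this.first = first;
                    this.second = second;
                }
            }

            private final ConcurrentMap<String, Pair> pairs = new ConcurrentHashMap<>();

            Pair putIfAbsent(String key) {
                // computeIfAbsent creates the pair at most once per key, atomically.
                return pairs.computeIfAbsent(key, k -> new Pair(new Object(), new Object()));
            }

            void doSomething(String key) {
                Pair p = pairs.get(key);   // either both objects or null, never half a pair
                // ... use p.first and p.second ...
            }
        }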

  • What is the absolute fastest way to implement a concurrent queue with ONLY one consumer and one producer?

    - by JohnPristine
    java.util.concurrent.ConcurrentLinkedQueue comes to mind, but is it really optimal for this two-thread scenario? I am looking for the minimum latency possible on both sides (producer and consumer). If the queue is empty you can immediately return null, AND if the queue is full you can immediately discard the entry you are offering. Does ConcurrentLinkedQueue use super fast and light locks (AtomicBoolean)? Has anyone benchmarked ConcurrentLinkedQueue, or does anyone know about the ultimate fastest way of doing this? Additional details: I imagine the queue should be a fair one, meaning neither side should make the other wait any longer than it needs to (by front-running it).
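
    With exactly one producer and one consumer, a bounded single-producer/single-consumer ring buffer avoids CAS retries entirely: each index has a single writer, so plain volatile publishes are enough. A minimal sketch (hardened implementations of the same idea exist in libraries such as JCTools, which is the safer choice for production):

        import java.util.concurrent.atomic.AtomicLong;
        import java.util.concurrent.atomic.AtomicReferenceArray;

        // Only safe when exactly one thread calls offer() and exactly one
        // thread calls poll().
        final class SpscRingBuffer<E> {
            private final AtomicReferenceArray<E> buffer;
            private final int capacity;
            private final AtomicLong head = new AtomicLong();   // next slot to read  (consumer-owned)
            private final AtomicLong tail = new AtomicLong();   // next slot to write (producer-owned)

            SpscRingBuffer(int capacity) {
                this.capacity = capacity;
                this.buffer = new AtomicReferenceArray<>(capacity);
            }

            // Producer side: returns false immediately when full.
            boolean offer(E e) {
                long t = tail.get();
                if (t - head.get() == capacity) {
                    return false;                               // full, caller discards the entry
                }
                buffer.set((int) (t % capacity), e);            // publish the element
                tail.set(t + 1);                                // then publish the new tail
                return true;
            }

            // Consumer side: returns null immediately when empty.
            E poll() {
                long h = head.get();
                if (h == tail.get()) {
                    return null;                                // empty
                }
                int idx = (int) (h % capacity);
                E e = buffer.get(idx);
                buffer.set(idx, null);                          // let the slot be collected
                head.set(h + 1);                                // hand the slot back to the producer
                return e;
            }
        }

    As for ConcurrentLinkedQueue itself: it is unbounded and lock-free (CAS-based rather than lock-based), so it cannot report "full", which already makes it an awkward fit for the discard-when-full requirement.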
