Search Results

Search found 813 results on 33 pages for 'concurrency'.

Page 12 of 33

  • Is there an implementation of rapid concurrent syntactic sugar in Scala? e.g. map-reduce

    - by TiansHUo
    Passing messages around with actors is great, but I would like to have even easier code. Example (pseudo-code):

        val splicedList: List[List[Int]] = biglist.spliceIntoParts(100)
        val sum: Int = ActorPool.numberOfActors(5).getAllResults(splicedList, foldLeft(_ + _))

    Here spliceIntoParts turns one big list into 100 small lists; the numberOfActors part creates a pool which uses 5 actors and hands out a new job whenever a job is finished; and getAllResults applies a method to each sublist, all with message passing in the background. There could also be a getFirstResult, which returns the first result calculated and stops all other threads (as when cracking a password).
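
    For comparison, a rough Java analogue of what is being asked for, written against plain java.util.concurrent; spliceIntoParts is the asker's hypothetical name, and the helper below is invented for illustration:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class ParallelSum {
            // Split one big list into chunks of the given size (illustrative helper).
            static List<List<Integer>> spliceIntoParts(List<Integer> big, int chunk) {
                List<List<Integer>> parts = new ArrayList<>();
                for (int i = 0; i < big.size(); i += chunk)
                    parts.add(big.subList(i, Math.min(i + chunk, big.size())));
                return parts;
            }

            public static void main(String[] args) throws Exception {
                List<Integer> biglist = new ArrayList<>();
                for (int i = 1; i <= 10000; i++) biglist.add(i);

                ExecutorService pool = Executors.newFixedThreadPool(5); // the "5 actors"
                List<Callable<Integer>> jobs = new ArrayList<>();
                for (List<Integer> part : spliceIntoParts(biglist, 100))
                    jobs.add(() -> part.stream().mapToInt(Integer::intValue).sum());

                int sum = 0;
                for (Future<Integer> f : pool.invokeAll(jobs)) sum += f.get();
                pool.shutdown();
                System.out.println(sum); // 50005000
            }
        }

    The "getFirstResult" variant maps onto ExecutorService.invokeAny, which returns the first successfully computed result and cancels the rest.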

    Read the article

  • volatile keyword seems to be useless?

    - by Finbarr
        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.atomic.AtomicInteger;

        public class Main implements Runnable {
            private final CountDownLatch cdl1 = new CountDownLatch(NUM_THREADS);
            private volatile int bar = 0;
            private AtomicInteger count = new AtomicInteger(0);
            private static final int NUM_THREADS = 25;

            public static void main(String[] args) {
                Main main = new Main();
                for (int i = 0; i < NUM_THREADS; i++)
                    new Thread(main).start();
            }

            public void run() {
                int i = count.incrementAndGet();
                cdl1.countDown();
                try {
                    cdl1.await();
                } catch (InterruptedException e1) {
                    e1.printStackTrace();
                }
                bar = i;
                if (bar != i)
                    System.out.println("Bar not equal to i");
                else
                    System.out.println("Bar equal to i");
            }
        }

    Each thread enters the run method and acquires a unique, thread-confined int variable i by getting a value from the AtomicInteger called count. Each thread then awaits the CountDownLatch called cdl1 (when the last thread reaches the latch, all threads are released). When the latch is released, each thread attempts to assign its confined i value to the shared volatile int called bar. I would expect every thread except one to print "Bar not equal to i", but every thread prints "Bar equal to i". What does volatile actually do, if not this?
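
    For context: volatile guarantees visibility of writes across threads, not mutual exclusion, and in the code above each thread typically re-reads bar before any other thread has had a chance to overwrite it, so the race rarely manifests. What volatile does buy is shown in this minimal sketch (a hypothetical example, not the asker's code); without volatile, the reader may spin forever on a stale cached value:

        public class VolatileFlag {
            // Without volatile, the reader thread may never observe the update.
            private static volatile boolean running = true;

            public static void main(String[] args) throws InterruptedException {
                Thread reader = new Thread(() -> {
                    while (running) { /* spin until the write below becomes visible */ }
                    System.out.println("Saw running == false, exiting");
                });
                reader.start();
                Thread.sleep(100);
                running = false; // volatile write: guaranteed visible to the reader
                reader.join();
            }
        }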

    Read the article

  • Concurrency issues with scheduling app

    - by Sazug
    Our application needs a simple scheduling mechanism - we can schedule only one visit per room for the same time interval (but one visit can use one or more rooms). Using SQL Server 2005, a sample procedure could look like this:

        CREATE PROCEDURE CreateVisit @start datetime, @end datetime, @roomID int
        AS
        BEGIN
            DECLARE @isFreeRoom INT
            BEGIN TRANSACTION
            SELECT @isFreeRoom = COUNT(*)
            FROM visits V
            INNER JOIN visits_rooms VR ON VR.VisitID = V.ID
            WHERE @start = start AND @end = [end] AND VR.RoomID = @roomID
            IF (@isFreeRoom = 0)
            BEGIN
                INSERT INTO visits (start, [end]) VALUES (@start, @end)
                INSERT INTO visits_rooms (visitID, roomID) VALUES (SCOPE_IDENTITY(), @roomID)
            END
            COMMIT TRANSACTION
        END

    In order not to have the same room scheduled for two visits at the same time, how should we handle this in the procedure? Should we use the SERIALIZABLE transaction isolation level, or table hints (locks)? Which one is better?

    Read the article

  • How does this pthread actually work?

    - by user289013
    I am working on a compiler project for SMP and want to code with pthreads; I have also heard about other parallel technologies such as Open MPI. To start with: how is a thread assigned to a core when a pthread is created? Is there any way to pin threads to specific cores with pthreads?

    Read the article

  • Java - when to use notify or notifyAll?

    - by mdma
    Why does java.lang.Object have two notify methods - notify and notifyAll? It seems that notifyAll does at least everything notify does, so why not just use notifyAll all the time? If notifyAll is used instead of notify, is the program still correct, and vice versa? What influences the choice between these two methods?
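
    The classic illustration is a bounded buffer where producers and consumers wait on the same monitor: notify can wake a thread of the wrong kind (a second producer instead of a consumer, say), while notifyAll lets every waiter re-check its own condition. A minimal sketch (hypothetical example):

        public class BoundedBuffer<T> {
            private final Object[] items;
            private int head, tail, size;

            public BoundedBuffer(int capacity) { items = new Object[capacity]; }

            public synchronized void put(T item) throws InterruptedException {
                while (size == items.length) wait();   // buffer full: wait in a loop
                items[tail] = item;
                tail = (tail + 1) % items.length;
                size++;
                // notifyAll rather than notify: a lone notify could wake another
                // producer (also blocked because the buffer was full) instead of
                // a consumer, and the wakeup would be lost.
                notifyAll();
            }

            @SuppressWarnings("unchecked")
            public synchronized T take() throws InterruptedException {
                while (size == 0) wait();              // buffer empty: wait in a loop
                T item = (T) items[head];
                items[head] = null;
                head = (head + 1) % items.length;
                size--;
                notifyAll();
                return item;
            }
        }

    When every waiter waits for the same condition and one wakeup enables exactly one of them, plain notify is safe and cheaper; when in doubt, notifyAll is the conservative default.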

    Read the article

  • Can you dynamically resize a java.util.concurrent.ThreadPoolExecutor while it still has tasks waiting?

    - by Edward Shtern
    I'm working with a java.util.concurrent.ThreadPoolExecutor to process a number of items in parallel. Although the threading itself works fine, at times we've run into other resource constraints due to actions happening in the threads, which made us want to dial down the number of threads in the pool. I'd like to know if there's a way to dial down the number of threads while the threads are actually working. I know that you can call setMaximumPoolSize() and/or setCorePoolSize(), but these only resize the pool once threads become idle, and they don't become idle while there are tasks waiting in the queue.
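
    For reference, a minimal sketch of the resizing calls in question. Running threads are never stopped mid-task; whether excess workers retire as soon as they finish their current task or only once the queue drains has varied across JDK versions, and the asker's observation matches the older behavior:

        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.ThreadPoolExecutor;
        import java.util.concurrent.TimeUnit;

        public class ResizeDemo {
            public static void main(String[] args) throws InterruptedException {
                ThreadPoolExecutor pool = new ThreadPoolExecutor(
                        8, 8, 30, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
                for (int i = 0; i < 100; i++)
                    pool.execute(() -> sleepQuietly(500)); // long-running work

                // Dial the pool down while tasks are still queued and running.
                pool.setCorePoolSize(2);
                pool.setMaximumPoolSize(2);

                Thread.sleep(2000);
                System.out.println("pool size now: " + pool.getPoolSize());
                pool.shutdownNow(); // discard the remaining queued work for the demo
            }

            static void sleepQuietly(long ms) {
                try { Thread.sleep(ms); } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }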

    Read the article

  • Creating futures using Apple's GCD

    - by jer
    I'm working on a library which implements the actor model on top of Grand Central Dispatch (specifically the C-level API, libdispatch). A brief overview of my system:

    * Communication happens between actors using messages
    * Multicast communication only (one actor to many actors)
    * Senders and receivers are decoupled from one another using a blackboard which messages are pushed to
    * Messages are sent on the default queue asynchronously, using dispatch_group_async(), once a message gets pushed onto the blackboard

    I'm trying to implement futures in the library right now, so I've created a new type which holds some information: a group of its own, and the value being 'returned'. However, I have a problem: dispatch_block_t is of type void (^)(void), so it doesn't return anything. So my idea, in my future_new() function, of setting up another group which could be used to execute a block returning a result that I store in the "value" member of my future_t structure isn't going to work. The rest of the futures implementation is very clear, except that it all depends on being able to get the value into the future back from the actor acting on the message. When using the library, it would greatly reduce its usefulness if I had to ask users (and myself) to be aware of when futures were going to be used by other parts of the system; that just isn't practical. I'm wondering if anyone can think of a way around this?
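
    This is essentially the problem java.util.concurrent solves with FutureTask: wrap the result-producing computation in a void-returning task that stores its result in the future itself and then signals waiters. A Java sketch of the pattern, which should translate to storing the value in the future_t and leaving the future's private group (all names here hypothetical):

        import java.util.concurrent.Callable;
        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        // A future built from a void-returning task: the task stores its result
        // in the future and signals completion instead of returning a value.
        class SimpleFuture<V> {
            private final CountDownLatch done = new CountDownLatch(1);
            private volatile V value;

            // Wrap a result-producing computation as a void-returning Runnable,
            // analogous to wrapping work in a dispatch_block_t.
            Runnable asTask(Callable<V> work) {
                return () -> {
                    try {
                        value = work.call();
                    } catch (Exception e) {
                        value = null; // real code would capture the exception too
                    } finally {
                        done.countDown(); // like leaving the future's private group
                    }
                };
            }

            V get() throws InterruptedException {
                done.await(); // like dispatch_group_wait on the future's group
                return value;
            }
        }

        public class FutureDemo {
            public static void main(String[] args) throws InterruptedException {
                ExecutorService queue = Executors.newSingleThreadExecutor();
                SimpleFuture<Integer> f = new SimpleFuture<>();
                queue.execute(f.asTask(() -> 6 * 7)); // submit the void-returning block
                System.out.println(f.get());          // prints 42
                queue.shutdown();
            }
        }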

    Read the article

  • For single-producer, single-consumer should I use a BlockingCollection or a ConcurrentQueue?

    - by Jonathan Allen
    For single-producer, single-consumer, should I use a BlockingCollection or a ConcurrentQueue? Concerns:

    * My goal is to pull up to 100 items at a time and send them as a batch to the next step.
    * If I use a ConcurrentQueue, I have to manually put it to sleep when there is no work to be done; otherwise I waste CPU cycles on spinning.
    * If I use a BlockingCollection and I only have 99 work items, it could block indefinitely until the 100th item arrives.

    http://msdn.microsoft.com/en-us/library/system.collections.concurrent.aspx
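
    For comparison, Java's BlockingQueue resolves the same tension by blocking for the first item only and then draining whatever else is immediately available, up to the batch limit. A sketch of that idea (in Java rather than .NET):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class BatchConsumer {
            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<String> queue = new LinkedBlockingQueue<>();
                new Thread(() -> {
                    for (int i = 0; i < 99; i++) queue.add("item-" + i); // producer
                }).start();

                List<String> batch = new ArrayList<>(100);
                batch.add(queue.take());  // block (no spinning) until the first item
                queue.drainTo(batch, 99); // then grab up to 99 more without waiting
                System.out.println("batch of " + batch.size()); // never waits for #100
            }
        }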

    Read the article

  • Correct way to generate order numbers in SQL Server

    - by Anton Gogolev
    This question certainly applies to a much broader scope, but here it is. I have a basic ecommerce app where users can, naturally enough, place orders. Said orders need to have a unique number, which I'm trying to generate right now. Each order is vendor-specific. Basically, I have an OrderNumberInfo (VendorID, OrderNumber) table. Now whenever a customer places an order I need to increment OrderNumber for a particular Vendor and return that value. Naturally, I don't want other processes to interfere with me, so I need to exclusively lock this row somehow:

        BEGIN TRANSACTION
        DECLARE @n int
        SELECT @n = OrderNumber
        FROM OrderNumberInfo
        WHERE VendorID = @vendorID
        UPDATE OrderNumberInfo
        SET OrderNumber = @n + 1
        WHERE OrderNumber = @n AND VendorID = @vendorID
        COMMIT TRANSACTION

    Now, I've read about SELECT ... WITH (UPDLOCK, ROWLOCK), pessimistic locking, etc., but just cannot fit all this into a coherent picture:

    * How do these hints play with SQL Server 2008's snapshot isolation?
    * Do they perform row-level, page-level or even table-level locks?
    * How does this tolerate multiple users trying to generate numbers for a single Vendor?
    * What isolation levels are appropriate here?

    And generally, what is the way to do such things?

    Read the article

  • Why do condition variables sometimes erroneously wake up?

    - by aspo
    I've known for eons that the way you use a condition variable is:

        lock
        while not task_done:
            wait on condition variable
        unlock

    because sometimes condition variables will wake spuriously. But I've never understood why that's the case. In the past I've read that it's expensive to make a condition variable that doesn't have that behavior, but nothing more than that. So... why do you need to worry about being falsely woken up when waiting on a condition variable?
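
    The same loop discipline applies to Java monitors, whose documentation likewise permits spurious wakeups; a minimal sketch of the canonical pattern:

        public class TaskGate {
            private boolean taskDone = false;

            public synchronized void awaitTask() throws InterruptedException {
                // Loop, don't 'if': the thread may wake without any notify
                // (a spurious wakeup), or another thread may have consumed
                // the condition between the notify and our reacquiring the lock.
                while (!taskDone) {
                    wait();
                }
                // The condition is guaranteed true here, regardless of why we woke.
            }

            public synchronized void markDone() {
                taskDone = true;
                notifyAll();
            }
        }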

    Read the article

  • Can shared memory be read and validated without mutexes?

    - by Bribles
    On Linux I'm using shmget and shmat to set up a shared memory segment that one process will write to and one or more processes will read from. The data that is being shared is a few megabytes in size and, when updated, is completely rewritten; it's never partially updated. I have my shared memory segment laid out as follows:

        -------------------------
        | t0 | actual data | t1 |
        -------------------------

    where t0 and t1 are copies of the time when the writer began its update (with enough precision that successive updates are guaranteed to have differing times). The writer first writes to t1, then copies in the data, then writes to t0. The reader, on the other hand, reads t0, then the data, then t1. If the reader gets the same value for t0 and t1 then it considers the data consistent and valid; if not, it tries again. Does this procedure ensure that if the reader thinks the data is valid then it actually is? Do I need to worry about out-of-order execution (OOE)? If so, would the reader using memcpy to get the entire shared memory segment overcome the OOE issues on the reader side? (This assumes that memcpy performs its copy linearly, ascending through the address space. Is that assumption valid?)
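
    This t0/t1 scheme is essentially a seqlock, and the usual caveat is that without memory barriers around the timestamp accesses, both the compiler and the CPU may reorder them relative to the data copy, so a reader can validate a torn read. As an analogue of the validated-read pattern with the barriers handled for you, here is a Java sketch using StampedLock's optimistic read (an illustration of the pattern, not the shared-memory case itself):

        import java.util.concurrent.locks.StampedLock;

        public class SnapshotBox {
            private final StampedLock lock = new StampedLock();
            private double x, y; // the "actual data", rewritten wholesale

            public void write(double nx, double ny) {
                long stamp = lock.writeLock();   // like writing t1 before the data
                try {
                    x = nx;
                    y = ny;
                } finally {
                    lock.unlockWrite(stamp);     // like writing t0 after the data
                }
            }

            public double[] read() {
                for (;;) {
                    long stamp = lock.tryOptimisticRead(); // read "t0"
                    double cx = x, cy = y;                 // copy the data
                    if (lock.validate(stamp))              // compare against "t1"
                        return new double[] { cx, cy };    // consistent snapshot
                    // validation failed: a writer intervened, so try again
                }
            }
        }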

    Read the article

  • ArrayBlockingQueue - How to "interrupt" a thread that is waiting on the take() method

    - by bernhard
    I use an ArrayBlockingQueue in my code. Clients will wait until an element becomes available:

        myBlockingQueue.take();

    How can I "shut down" my service when no elements are present in the queue and take() is waiting indefinitely for an element to become available? The method throws an InterruptedException. My question is: how can I provoke an InterruptedException so that take() will quit? (I also thought about notify(), but it seems that doesn't help here.) I know I could insert a special "EOF/QUIT" marker element, but is that really the only solution? UPDATE (regarding the comment that points to another question with two solutions, one being a "poison pill" object and the second Thread.interrupt()): the class calling myBlockingQueue.take() does NOT extend Thread but rather implements Runnable. It seems a Runnable does not provide an .interrupt() method? How could I interrupt the Runnable? Million thanks, Bernhard
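
    On the Runnable point: interruption targets the Thread executing the Runnable, not the Runnable itself, so keeping a reference to that thread (or to the Future returned by an ExecutorService) is enough. A minimal sketch:

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class Consumer implements Runnable {
            private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

            @Override
            public void run() {
                try {
                    while (true) {
                        String item = queue.take(); // blocks until an element arrives
                        System.out.println("got " + item);
                    }
                } catch (InterruptedException e) {
                    // take() threw because our thread was interrupted: clean up
                    System.out.println("interrupted, shutting down");
                }
            }

            public static void main(String[] args) throws InterruptedException {
                Thread worker = new Thread(new Consumer()); // the Runnable's thread
                worker.start();
                Thread.sleep(1000);
                worker.interrupt(); // makes the blocked take() throw, ending the loop
                worker.join();
            }
        }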

    Read the article

  • Loading an XML configuration file BEFORE the Flex application loads

    - by Shahar
    Hi, we are using an XML file as an external configuration file for several parameters in our application (including default values for UI components and property values of some service-layer objects). The idea is to be able to load the XML configuration file before the Flex application initializes any of its components. This is crucial because XML loading is processed asynchronously in Flex, which can potentially cause race conditions in the application. For example: the configuration file holds the endpoint URL of a web service used to obtain data from the server. The URL resides in the XML because we want to allow our users to alter the endpoint URL according to their environment. Because the endpoint URL is retrieved only after the XML has been completely loaded, some of the application's components might invoke operations on this web service before it is initialized with the correct endpoint. The trivial solution would have been to suspend the initialization of the application until the complete event is dispatched by the loader, but it appears that this solution is far from trivial. I haven't found a single approach that allows me to load the XML before any other object in the application. Can anyone advise or comment on this matter? Regards, Shahar

    Read the article

  • Core Data: Deleting causes 'NSObjectInaccessibleException' from NSOperation with a reference to a deleted object

    - by Bryan Irace
    My application has NSOperation subclasses that fetch and operate on managed objects. My application also periodically purges rows from the database, which can result in the following race condition:

    * A background operation fetches a bunch of objects (from a thread-specific context). It will iterate over these objects and do something with their properties.
    * A bunch of rows are deleted in the main managed object context.
    * The background operation accesses a property on an object that was deleted from the main context. This results in an NSObjectInaccessibleException, reason: 'CoreData could not fulfill a fault'.

    Ideally, the objects fetched by the NSOperation could still be operated on even if one is deleted in the main context. The best ways I can think of to achieve this are either to:

    * Call [request setReturnsObjectsAsFaults:NO] to ensure that Core Data won't try to fulfill a fault for an object that no longer exists in the main context. The problem here is that I may need to access the object's relationships, which (to my understanding) will still be faulted.
    * Iterate through the managed objects up front and copy the properties I will need into separate non-managed objects. The problem here is that (I think) I would need to synchronize/lock this part, in case an object is deleted in the main context before I can finish copying.

    Am I missing something obvious? It doesn't seem like what I'm trying to accomplish is too out of the ordinary. Thanks for your help.

    Read the article

  • How to write a spinlock without using CAS

    - by Martin
    Following on from a discussion which got going in the comments of this question. How would one go about writing a Spinlock without CAS operations? As the other question states: The memory ordering model is such that writes will be atomic (if two concurrent threads write a memory location at the same time, the result will be one or the other). The platform will not support atomic compare-and-set operations.
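
    Under exactly those constraints (atomic, ordered loads and stores, but no CAS), the textbook answers are Peterson's algorithm for two threads and Lamport's bakery algorithm for N threads. A sketch of Peterson's lock in Java, where volatile supplies the atomic, ordered loads and stores the question assumes the platform provides:

        // Peterson's mutual-exclusion lock for exactly two threads (ids 0 and 1).
        // Requires only atomic, ordered reads and writes: no compare-and-set.
        public class PetersonLock {
            private volatile boolean flag0, flag1; // "thread i wants to enter"
            private volatile int victim;           // who yields on a tie

            public void lock(int id) {             // id must be 0 or 1
                if (id == 0) flag0 = true; else flag1 = true;
                victim = id;                       // politely go last on contention
                // spin while the other thread wants in and we are the victim
                while ((id == 0 ? flag1 : flag0) && victim == id) { /* spin */ }
            }

            public void unlock(int id) {
                if (id == 0) flag0 = false; else flag1 = false;
            }
        }

    The bakery algorithm generalizes the same idea to N threads with an array of ticket numbers, again using only reads and writes.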

    Read the article

  • How to force programs out of swap file when a resources-intensive batch finishes?

    - by sharptooth
    We use employees' desktops for CPU-intensive simulation during the night. The desktops run Windows, usually Windows XP. Employees don't log off; they just lock the desktops, switch off their monitors and go. Every employee has a configuration file which he can edit to specify when he is most likely to be out of the office. When that time comes, a background program grabs data for simulation from the server, spawns worker processes, watches them, gets results and sends them to the server. When the time specified by the employee elapses, simulation stops so that normal desktop usage is not interfered with. The problem is that simulation consumes a lot of memory, so when the worker processes run they force other programs into the swap file. So when the employee returns, all the programs he left open are sluggish and slow until he opens them one by one so that they are swapped back in. Is there a way the program can force other programs out of the swap file when it stops simulation so that they again run smoothly?

    Read the article

  • How to maintain a pool of names?

    - by Jacques René Mesrine
    I need to maintain a list of userids (proxy accounts) which will be dished out to multithreaded clients. Basically, the clients will use the userids to perform actions; for this question it is not important what those actions are. When a client gets hold of a userid, it is not available to other clients until the action is completed. I'm trying to think of a concurrent data structure to maintain this pool of userids. Any ideas? Would a ConcurrentQueue do the job? Clients would dequeue a userid and add the userid back when they are finished with it.
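
    A blocking queue is a natural fit for exactly this dequeue/return cycle: take() checks a userid out, blocking when none are free, and adding it back checks it in. A minimal Java sketch (class and method names invented for illustration):

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class UserIdPool {
            private final BlockingQueue<String> free = new LinkedBlockingQueue<>();

            public UserIdPool(Iterable<String> userIds) {
                for (String id : userIds) free.add(id);
            }

            // Blocks until a userid is available; the id is then unavailable to
            // other clients until it is checked back in.
            public String checkOut() throws InterruptedException {
                return free.take();
            }

            // Return the userid to the pool once the action is complete.
            public void checkIn(String userId) {
                free.add(userId);
            }
        }

    With a plain non-blocking concurrent queue, a client that polls an empty pool must decide what to do on its own; the blocking variant simply parks the caller until some other client returns an id.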

    Read the article

  • Actors in Scala.net

    - by weijiajun
    I have recently completed some study of Erlang, and was intrigued by Scala for its feature set and the ease of interoperating with Java (and possibly .NET) applications. I am finally studying actors and was wondering whether there is an actor mechanism that currently works on .NET. I have looked at the libraries that come down with sbaz and found that there is a scala.Concurrent but no scala.actors.Actor. I tried to use scala.Concurrent.Channel but was unable to use ! to send messages. I was just wondering if this is something that is currently available, and if so, how you go about setting it up.

    Read the article

  • Using shared_ptr to implement RCU (read-copy-update)?

    - by yongsun
    I'm very interested in user-space RCU (read-copy-update), and am trying to simulate one via tr1::shared_ptr. Here is the code; while I'm really a newbie in concurrent programming, would some experts help me review it? The basic idea is: a reader calls get_reading_copy() to gain a pointer to the current protected data (let's say it's generation one, or G1). A writer calls get_updating_copy() to gain a copy of G1 (let's say G2), and only one writer is allowed to enter the critical section. After the update is done, the writer calls update() to do a swap and make m_data_ptr point to the G2 data. The ongoing readers and the writer now hold shared_ptrs to G1, and either a reader or the writer will eventually deallocate the G1 data. Any new readers get the pointer to G2, and a new writer gets a copy of G2 (let's say G3). It's possible that G1 is not released yet, so multiple generations of data may co-exist.

        template <typename T>
        class rcu_protected
        {
        public:
            typedef T type;
            typedef std::tr1::shared_ptr<type> rcu_pointer;

            rcu_protected() : m_data_ptr (new type()) {}

            rcu_pointer get_reading_copy ()
            {
                spin_until_eq (m_is_swapping, 0);
                return m_data_ptr;
            }

            rcu_pointer get_updating_copy ()
            {
                spin_until_eq (m_is_swapping, 0);
                while (!CAS (m_is_writing, 0, 1))
                { /* do sleep for back-off when exceeding maximum retry times */ }

                rcu_pointer new_data_ptr(new type(*m_data_ptr));

                // as spin_until_eq does not have memory barrier protection,
                // we need to place a read barrier to protect the loading of
                // new_data_ptr from being re-ordered before its construction
                _ReadBarrier();

                return new_data_ptr;
            }

            void update (rcu_pointer new_data_ptr)
            {
                while (!CAS (m_is_swapping, 0, 1)) {}

                m_data_ptr.swap (new_data_ptr);

                // as spin_until_eq does not have memory barrier protection,
                // we need to place a write barrier to protect the assignments of
                // m_is_writing/m_is_swapping from being re-ordered before the swapping
                _WriteBarrier();

                m_is_writing = 0;
                m_is_swapping = 0;
            }

        private:
            volatile long m_is_writing;
            volatile long m_is_swapping;
            rcu_pointer m_data_ptr;
        };
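
    As a point of comparison, in a garbage-collected language the same copy-on-update scheme collapses to an atomic reference swap, since the collector plays the role of the grace period (old generations stay alive as long as any reader still holds them). A Java sketch of that analogue, assuming the protected value is treated as immutable:

        import java.util.concurrent.atomic.AtomicReference;
        import java.util.function.UnaryOperator;

        // Copy-on-update over an atomic reference: readers never block, and old
        // generations stay reachable until the last reader drops its reference;
        // the GC provides the RCU "grace period" for free.
        public class RcuProtected<T> {
            private final AtomicReference<T> current;

            public RcuProtected(T initial) {
                current = new AtomicReference<>(initial);
            }

            public T getReadingCopy() {
                return current.get(); // a consistent, immutable generation
            }

            // copyAndModify must produce a fresh copy; updates retry on contention.
            public void update(UnaryOperator<T> copyAndModify) {
                T oldGen, newGen;
                do {
                    oldGen = current.get();
                    newGen = copyAndModify.apply(oldGen); // copy G1, produce G2
                } while (!current.compareAndSet(oldGen, newGen));
            }
        }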

    Read the article

  • When does the call() method get called in a Java Executor using Callable objects?

    - by MalcomTucker
    This is some sample code from an example. What I need to know is: when does call() get invoked on the Callable? What triggers it?

        import java.util.HashSet;
        import java.util.Set;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class CallableExample {

            public static class WordLengthCallable implements Callable<Integer> {
                private String word;

                public WordLengthCallable(String word) {
                    this.word = word;
                }

                public Integer call() {
                    return Integer.valueOf(word.length());
                }
            }

            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(3);
                Set<Future<Integer>> set = new HashSet<Future<Integer>>();
                for (String word : args) {
                    Callable<Integer> callable = new WordLengthCallable(word);
                    Future<Integer> future = pool.submit(callable); // DOES THIS CALL call()?
                    set.add(future);
                }
                int sum = 0;
                for (Future<Integer> future : set) {
                    sum += future.get(); // OR DOES THIS CALL call()?
                }
                System.out.printf("The sum of lengths is %s%n", sum);
                System.exit(sum);
            }
        }
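
    For what it's worth, neither line invokes call() directly: submit() wraps the Callable in a FutureTask and hands it to a worker thread, which invokes call() as soon as it picks the task up, while get() only blocks until that stored result exists. The wrapping can be seen in miniature with FutureTask alone (a sketch of the idea, not the executor's internals):

        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.FutureTask;

        public class WhoCallsCall {
            public static void main(String[] args)
                    throws ExecutionException, InterruptedException {
                Callable<Integer> callable = () -> {
                    System.out.println("call() running on " + Thread.currentThread().getName());
                    return 42;
                };

                // submit() effectively does this: wrap the Callable in a FutureTask...
                FutureTask<Integer> task = new FutureTask<>(callable);

                // ...and hand it to a worker thread, whose run() invokes call().
                new Thread(task, "worker-thread").start();

                // get() does not invoke call(); it waits for the stored result.
                System.out.println("result = " + task.get());
            }
        }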

    Read the article

  • What is the JVM scheduling algorithm?

    - by IHawk
    Hello! I am really curious about how the JVM works with threads. In my searches on the internet I found some material about the RTSJ (Real-Time Specification for Java), but I don't know if that's the right direction for my answer. I also found this topic in Sun's forums, http://forums.sun.com/thread.jspa?forumID=513&threadID=472453, but it isn't satisfactory. Can someone give me some directions, material, articles or suggestions about the JVM scheduling algorithm? I am also looking for information about the default configuration of Java threads in the scheduler, like 'how long does each thread get' in the case of time-slicing, and that sort of thing. I would appreciate any help! Thank you!

    Read the article
