Search Results

Search found 882 results on 36 pages for 'concurrency coordination'.

Page 13/36 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • How to force programs out of the swap file when a resource-intensive batch finishes?

    - by sharptooth
    We use employees' desktops for CPU-intensive simulation during the night. Desktops run Windows - usually Windows XP. Employees don't log off, they just lock the desktops, switch off their monitors and go. Every employee has a configuration file which he can edit to specify when he is most likely out of the office. When that time comes, a background program grabs data for simulation from the server, spawns worker processes, watches them, gets results and sends them to the server. When the time specified by the employee elapses, the simulation stops so that normal desktop usage is not interfered with. The problem is that the simulation consumes a lot of memory, so while the worker processes run they force other programs into the swap file. So when the employee comes in, all the programs he left open are sluggish and slow until he uses them one by one and they are paged back in. Is there a way the program can force other programs back out of the swap file when it stops the simulation, so that they run smoothly again?

    Read the article

  • For single-producer, single-consumer should I use a BlockingCollection or a ConcurrentQueue?

    - by Jonathan Allen
    For single-producer, single-consumer should I use a BlockingCollection or a ConcurrentQueue? Concerns:
    * My goal is to pull up to 100 items at a time and send them as a batch to the next step.
    * If I use a ConcurrentQueue, I have to manually put it to sleep when there is no work to be done; otherwise I waste CPU cycles on spinning.
    * If I use a BlockingCollection and I only have 99 work items, it could block indefinitely until the 100th item arrives.
    http://msdn.microsoft.com/en-us/library/system.collections.concurrent.aspx
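
    The batching concern can be sketched with Java's java.util.concurrent types (BlockingCollection plays a similar role in .NET): block for the first item, then drain whatever else has already arrived, so a 99-item batch is sent instead of waiting forever for the 100th. Names and sizes below are illustrative only.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;

        public class BatchingConsumer {
            private static final int MAX_BATCH = 100;

            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<String> queue = new LinkedBlockingQueue<>();

                // Producer: hands items to the consumer without any spinning.
                Thread producer = new Thread(() -> {
                    for (int i = 0; i < 250; i++) {
                        queue.offer("item-" + i);
                    }
                });
                producer.start();

                // Consumer: blocks for the first item, then drains whatever else
                // has arrived (up to MAX_BATCH - 1 more) and sends a partial batch
                // rather than waiting indefinitely for item number 100.
                for (int batches = 0; batches < 3; batches++) {
                    List<String> batch = new ArrayList<>(MAX_BATCH);
                    String first = queue.poll(1, TimeUnit.SECONDS); // sleeps instead of spinning
                    if (first == null) {
                        continue; // nothing arrived within the timeout
                    }
                    batch.add(first);
                    queue.drainTo(batch, MAX_BATCH - 1);
                    System.out.println("sending batch of " + batch.size());
                }
                producer.join();
            }
        }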

    Read the article

  • Django: How can I protect against concurrent modification of database entries

    - by Ber
    Is there a way to protect against concurrent modifications of the same database entry by two or more users? It would be acceptable to show an error message to the user performing the second commit/save operation, but data should not be silently overwritten. I think locking the entry is not an option, as a user might use the "Back" button or simply close his browser, leaving the lock in place forever.
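
    One common answer, independent of Django, is optimistic concurrency: store a version number with each row and make the save conditional on it. A minimal JDBC sketch of the idea follows; table and column names are hypothetical, and Django would need the equivalent condition in its update.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class OptimisticUpdate {
            // Attempts to save an entry that was read at 'expectedVersion'.
            // Returns false if somebody else modified the row in the meantime,
            // so the caller can show an error instead of silently overwriting.
            public static boolean saveEntry(Connection conn, long id, String body,
                                            int expectedVersion) throws SQLException {
                String sql = "UPDATE entry SET body = ?, version = version + 1 "
                           + "WHERE id = ? AND version = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, body);
                    ps.setLong(2, id);
                    ps.setInt(3, expectedVersion);
                    return ps.executeUpdate() == 1; // 0 rows => concurrent modification
                }
            }
        }

    A false return value maps naturally onto the "show an error to the second saver" behaviour described above.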

    Read the article

  • What is the JVM scheduling algorithm?

    - by IHawk
    Hello! I am really curious about how the JVM works with threads! In my searches on the internet, I found some material about RTSJ, but I don't know if that's the right direction for my answers. I also found this topic in Sun's forums, http://forums.sun.com/thread.jspa?forumID=513&threadID=472453, but it isn't satisfactory. Can someone give me some directions, material, articles or suggestions about the JVM scheduling algorithm? I am also looking for information about the default configuration of Java threads in the scheduler, such as how much time each thread gets in the case of time-slicing, and so on. I would appreciate any help! Thank you!

    Read the article

  • While using ConcurrentQueue, trying to dequeue while looping through in parallel

    - by James Black
    I am using the parallel data structures in my .NET 4 application and I have a ConcurrentQueue that gets added to while I am processing through it. I want to do something like: personqueue.AsParallel().WithDegreeOfParallelism(20).ForAll(i => ... ); as I make database calls to save the data, so I am limiting the number of concurrent threads. But I expect that the ForAll isn't going to dequeue, and I am concerned about just doing ForAll(i => { personqueue.TryDequeue(...); ... }); as there is no guarantee that I am popping off the correct one. So, how can I iterate through the collection and dequeue in a parallel fashion? Or would it be better to use PLINQ to do this processing in parallel?
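
    The question is about .NET, but the safe-parallel-dequeue idea can be sketched on the JVM: a fixed pool of workers each repeatedly removes items with poll(), which hands any given item to exactly one worker, so nothing is processed twice. Names below are illustrative.

        import java.util.concurrent.ConcurrentLinkedQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class ParallelDrain {
            public static void main(String[] args) throws InterruptedException {
                ConcurrentLinkedQueue<String> personQueue = new ConcurrentLinkedQueue<>();
                for (int i = 0; i < 1000; i++) {
                    personQueue.add("person-" + i);
                }

                // At most 20 workers touch the database at once; each one removes
                // items with poll(), which atomically hands any given item to
                // exactly one worker.
                ExecutorService pool = Executors.newFixedThreadPool(20);
                for (int w = 0; w < 20; w++) {
                    pool.execute(() -> {
                        String person;
                        while ((person = personQueue.poll()) != null) {
                            saveToDatabase(person);
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.MINUTES);
            }

            private static void saveToDatabase(String person) {
                // placeholder for the real database call
                System.out.println("saved " + person);
            }
        }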

    Read the article

  • Using SMO to call Database.ExecuteNonQuery() concurrently?

    - by JimDaniel
    I have been banging my head against the wall trying to figure out how I can run update scripts concurrently against multiple databases in a single SQL Server instance using SMO. Our environments have an ever-increasing number of databases which need updating, and iterating through them one at a time is becoming a problem (too slow). From what I understand, SMO does not support concurrent operations, and my tests have borne that out. There seems to be shared state at the Server object level, for things like DataReader context, which keeps throwing exceptions such as "reader is already open." I apologize for not having the exact exceptions I am getting; I will try to get them and update this post. I am no expert on SMO and am just feeling my way through, to be honest. I'm not really sure I am approaching it the right way, but it's something that has to be done, or our productivity will slow to a crawl. So how would you guys do something like this? Am I using the wrong technology with SMO? All I want to do is execute SQL scripts against the databases in a single SQL Server instance in parallel. Thanks for any help you can give, Daniel
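
    SMO is .NET, but the underlying point - give each worker its own connection object rather than sharing one Server instance across threads - can be sketched with plain JDBC; the database names, the script, and the connection string below are made up for illustration.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class ParallelScriptRunner {
            public static void main(String[] args) throws Exception {
                List<String> databases = List.of("SalesDb", "HrDb", "OpsDb"); // hypothetical names
                String script = "UPDATE Settings SET Value = '2' WHERE Name = 'SchemaVersion'";

                ExecutorService pool = Executors.newFixedThreadPool(4);
                for (String db : databases) {
                    pool.execute(() -> {
                        // Each worker opens its own connection; nothing is shared
                        // between threads, so there is no "reader is already open"
                        // style contention on a single server object.
                        String url = "jdbc:sqlserver://localhost;databaseName=" + db
                                   + ";integratedSecurity=true";
                        try (Connection conn = DriverManager.getConnection(url);
                             Statement stmt = conn.createStatement()) {
                            stmt.executeUpdate(script);
                        } catch (Exception e) {
                            System.err.println(db + " failed: " + e.getMessage());
                        }
                    });
                }
                pool.shutdown();
            }
        }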

    Read the article

  • When does the call() method get called in a Java Executor using Callable objects?

    - by MalcomTucker
    This is some sample code from an example. What I need to know is when call() gets called on the Callable. What triggers it?
        public class CallableExample {
            public static class WordLengthCallable implements Callable {
                private String word;
                public WordLengthCallable(String word) {
                    this.word = word;
                }
                public Integer call() {
                    return Integer.valueOf(word.length());
                }
            }
            public static void main(String args[]) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(3);
                Set<Future<Integer>> set = new HashSet<Future<Integer>>();
                for (String word: args) {
                    Callable<Integer> callable = new WordLengthCallable(word);
                    Future<Integer> future = pool.submit(callable); //**DOES THIS CALL call()?**
                    set.add(future);
                }
                int sum = 0;
                for (Future<Integer> future : set) {
                    sum += future.get(); //**OR DOES THIS CALL call()?**
                }
                System.out.printf("The sum of lengths is %s%n", sum);
                System.exit(sum);
            }
        }
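
    A minimal Java sketch that makes the timing observable: submit() only schedules the task, a worker thread of the pool invokes call() as soon as it is free, and get() merely blocks until the result exists.

        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class WhenIsCallCalled {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(1);

                Callable<Integer> task = () -> {
                    System.out.println("call() running on " + Thread.currentThread().getName());
                    return 42;
                };

                // submit() only hands the task to the pool; a worker thread invokes
                // call() as soon as one is free, which may be before get() is ever called.
                Future<Integer> future = pool.submit(task);
                System.out.println("submitted from " + Thread.currentThread().getName());

                Thread.sleep(500); // by now call() has almost certainly already run

                // get() does not trigger call(); it only blocks until the result is ready.
                System.out.println("result = " + future.get());
                pool.shutdown();
            }
        }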

    Read the article

  • Strange problem with simple multithreading program in Java

    - by Elizabeth
    Hello, I am just starting to play with multithreaded programming. I would like my program to alternately show the characters '-' and '+', but it doesn't. My task is to use the synchronized keyword. So far I have:
        class FunnyStringGenerator{
            private char c;
            public FunnyStringGenerator(){
                c = '-';
            }
            public synchronized char next(){
                if(c == '-'){
                    c = '+';
                }
                else{
                    c = '-';
                }
                return c;
            }
        }
        class ThreadToGenerateStr implements Runnable{
            FunnyStringGenerator gen;
            public ThreadToGenerateStr(FunnyStringGenerator fsg){
                gen = fsg;
            }
            @Override
            public void run() {
                for(int i = 0; i < 10; i++){
                    System.out.print(gen.next());
                }
            }
        }
        public class Main{
            public static void main(String[] args) throws IOException {
                FunnyStringGenerator FSG = new FunnyStringGenerator();
                ExecutorService exec = Executors.newCachedThreadPool();
                for(int i = 0; i < 20; i++){
                    exec.execute(new ThreadToGenerateStr(FSG));
                }
            }
        }
    EDIT: I am also testing Thread.sleep in the run method instead of the for loop.
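
    One likely reason the output is not strictly alternating: next() toggles the character under the lock, but the print happens outside it, so two threads can print their characters out of order. A minimal sketch that keeps the toggle and the print under the same lock (class names here are made up):

        class AlternatingPrinter {
            private char c = '-';

            // Toggling and printing happen under the same lock, so the output
            // is guaranteed to alternate no matter how many threads call this.
            public synchronized void printNext() {
                c = (c == '-') ? '+' : '-';
                System.out.print(c);
            }
        }

        public class AlternatingDemo {
            public static void main(String[] args) throws InterruptedException {
                AlternatingPrinter printer = new AlternatingPrinter();
                Thread[] threads = new Thread[20];
                for (int i = 0; i < threads.length; i++) {
                    threads[i] = new Thread(() -> {
                        for (int j = 0; j < 10; j++) {
                            printer.printNext();
                        }
                    });
                    threads[i].start();
                }
                for (Thread t : threads) {
                    t.join();
                }
                System.out.println();
            }
        }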

    Read the article

  • Using shared_ptr to implement RCU (read-copy-update)?

    - by yongsun
    I'm very interested in user-space RCU (read-copy-update), and am trying to simulate one via tr1::shared_ptr. Here is the code; I'm really a newbie in concurrent programming, so would some experts help me review it? The basic idea is: a reader calls get_reading_copy() to gain a pointer to the current protected data (let's say it's generation one, or G1). A writer calls get_updating_copy() to gain a copy of G1 (let's say it's G2), and only one writer is allowed to enter the critical section. After the update is done, the writer calls update() to do a swap, making m_data_ptr point to the G2 data. The ongoing readers and the writer now hold shared_ptrs to G1, and either a reader or the writer will eventually deallocate the G1 data. Any new reader would get the pointer to G2, and a new writer would get a copy of G2 (let's say G3). It's possible that G1 is not released yet, so multiple generations of data may co-exist.
        template <typename T>
        class rcu_protected {
        public:
            typedef T type;
            typedef std::tr1::shared_ptr<type> rcu_pointer;

            rcu_protected() : m_data_ptr (new type()) {}

            rcu_pointer get_reading_copy () {
                spin_until_eq (m_is_swapping, 0);
                return m_data_ptr;
            }

            rcu_pointer get_updating_copy () {
                spin_until_eq (m_is_swapping, 0);
                while (!CAS (m_is_writing, 0, 1))
                {/* do sleep for back-off when exceeding maximum retry times */}
                rcu_pointer new_data_ptr(new type(*m_data_ptr));
                // as spin_until_eq does not have memory barrier protection,
                // we need to place a read barrier to protect the loading of
                // new_data_ptr from being re-ordered before its construction
                _ReadBarrier();
                return new_data_ptr;
            }

            void update (rcu_pointer new_data_ptr) {
                while (!CAS (m_is_swapping, 0, 1)) {}
                m_data_ptr.swap (new_data_ptr);
                // as spin_until_eq does not have memory barrier protection,
                // we need to place a write barrier to protect the assignments of
                // m_is_writing/m_is_swapping from being re-ordered before the swapping
                _WriteBarrier();
                m_is_writing = 0;
                m_is_swapping = 0;
            }

        private:
            volatile long m_is_writing;
            volatile long m_is_swapping;
            rcu_pointer m_data_ptr;
        };
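
    The same publish-by-swap idea can be sketched in Java with an AtomicReference, where the garbage collector plays the role the shared_ptr reference count plays above in reclaiming old generations. This is a minimal, illustrative analogue, not a review of the C++ code itself.

        import java.util.Collections;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.concurrent.atomic.AtomicReference;

        public class RcuLikeMap<K, V> {
            // Readers only ever see fully-built, immutable snapshots.
            private final AtomicReference<Map<K, V>> current =
                    new AtomicReference<>(Collections.emptyMap());

            // Readers: no locks and no hand-managed barriers; the snapshot they
            // obtain stays valid (and reachable) for as long as they hold it.
            public Map<K, V> readingCopy() {
                return current.get();
            }

            // Writer: copy the current generation, update the copy, then publish
            // it atomically. Retries if another writer published in the meantime.
            public void put(K key, V value) {
                while (true) {
                    Map<K, V> oldGen = current.get();
                    Map<K, V> newGen = new HashMap<>(oldGen);
                    newGen.put(key, value);
                    if (current.compareAndSet(oldGen, Collections.unmodifiableMap(newGen))) {
                        return;
                    }
                }
            }
        }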

    Read the article

  • Postgres: Post-statement (or insert) asynchronous, non-blocking processing.

    - by Hassan Syed
    I'm wondering whether it is possible, after a collection of rows is inserted, to initiate an operation that executes asynchronously, is non-blocking, and doesn't need to inform the originator of the request of the result. I am working with large amounts of events and I can guarantee that the post-insert logic will not fail -- I just want to have a single insert thread in my event sources, and I want this thread to keep flying without blocking and without being responsible for any post-delivery book-keeping. I can tell you that I would potentially have 100 of these jobs executing concurrently, and each job might operate on 5 tables with anywhere between 200-1000 inserts on each of those tables. A hint in the right direction should be enough.
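
    One client-side way to get the fire-and-forget behaviour is to hand the post-insert work to a dedicated background worker so the insert thread never blocks; inside the database, PostgreSQL's LISTEN/NOTIFY or a queue table processed by a separate job are the usual equivalents. A minimal Java sketch of the client-side option; the batch ids and the post-insert logic are hypothetical.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class PostInsertWorker {
            // Batch identifiers are hypothetical; anything that tells the worker
            // what was just inserted would do.
            private final BlockingQueue<Long> completedBatches = new LinkedBlockingQueue<>();

            public PostInsertWorker() {
                Thread worker = new Thread(() -> {
                    while (true) {
                        try {
                            long batchId = completedBatches.take();
                            runPostInsertLogic(batchId); // on its own connection, its own time
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                });
                worker.setDaemon(true);
                worker.start();
            }

            // Called by the insert thread after its inserts commit; returns
            // immediately and never hears about the outcome.
            public void fireAndForget(long batchId) {
                completedBatches.add(batchId);
            }

            private void runPostInsertLogic(long batchId) {
                System.out.println("post-processing batch " + batchId);
            }
        }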

    Read the article

  • iPhone: One Object, One Thread

    - by GingerBreadMane
    On the iPhone, I would like to do some operations on an image in a separate thread. Rather than dealing with semaphores, locking, etc., I'd like to use the 'One Object, One Thread' method of safely writing this concurrent operation. I'm not sure what the correct way is to copy my object into a new thread so that the object is not accessed in the main thread. Do I use the 'copy' method? If so, do I do this before the thread or inside the thread? ... -(void)someMethod{ UIImage *myImage; [NSThread detachNewThreadSelector:@selector(getRotatedImage:) toTarget:self withObject:myImage]; } -(void)getRotatedImage:(UIImage *)image{ ... ... UIImage *copiedImage = [image copy]; ... ... }

    Read the article

  • How to maintain a pool of names?

    - by Jacques René Mesrine
    I need to maintain a list of userids (proxy accounts) which will be dished out to multithreaded clients. Basically the clients will use the userids to perform actions; but for this question, it is not important what these actions are. When a client gets hold of a userid, it is not available to other clients until the action is completed. I'm trying to think of a concurrent data structure to maintain this pool of userids. Any ideas? Would a ConcurrentQueue do the job? Clients would dequeue a userid, and add the userid back when they are finished with it.
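
    A blocking queue used as a check-out/check-in pool is one straightforward fit: take() blocks a client until a userid is free, and returning it makes it available again. A minimal Java sketch of the idea (the .NET BlockingCollection offers the same blocking take):

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class UserIdPool {
            private final BlockingQueue<String> pool = new LinkedBlockingQueue<>();

            public UserIdPool(Iterable<String> userIds) {
                for (String id : userIds) {
                    pool.add(id);
                }
            }

            // Blocks until a userid is free, so no two clients hold the same one.
            public String checkOut() throws InterruptedException {
                return pool.take();
            }

            // Returns the userid to the pool once the client's action is finished.
            public void checkIn(String userId) {
                pool.add(userId);
            }
        }

    Each client thread would call checkOut() before performing its actions and checkIn() in a finally block so a failed action still returns the userid.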

    Read the article

  • Data access strategy for a site like SO - sorted SQL queries and simultaneous updates that affect th

    - by Kaleb Brasee
    I'm working on a Grails web app that would be similar in access patterns to StackOverflow or MyLifeIsAverage - users can vote on entries, and their votes are used to sort a list of entries based on the number of votes. Votes can be placed while the sorted select queries are being performed. Since the selects would lock a large portion of the table, it seems that normal transaction locking would cause updates to take forever (given enough traffic). Has anyone worked on an app with a data access pattern such as this, and if so, did you find a way to allow these updates and selects to happen more or less concurrently? Does anyone know how sites like SO approach this? My thought was to make the sorted selects dirty reads, since it is acceptable if they're not completely up to date all of the time. This is my only idea for possibly improving performance of these selects and updates, but I thought someone might know a better way.
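
    The dirty-read idea from the question can be expressed at the JDBC level by lowering the isolation of the read-only connection; whether that trade-off is acceptable is a judgment call, and the connection string and schema below are hypothetical (READ UNCOMMITTED behaves as described on SQL Server).

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class DirtyReadQuery {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost;databaseName=VoteDb;integratedSecurity=true")) {
                    // READ UNCOMMITTED lets the select ignore row locks held by
                    // in-flight vote updates, at the cost of possibly stale counts.
                    conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
                    try (Statement stmt = conn.createStatement();
                         ResultSet rs = stmt.executeQuery(
                                 "SELECT TOP 50 id, title, votes FROM entry ORDER BY votes DESC")) {
                        while (rs.next()) {
                            System.out.println(rs.getString("title") + " -> " + rs.getInt("votes"));
                        }
                    }
                }
            }
        }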

    Read the article

  • Linq ChangeConflictException occurs when submitting DataContext changes

    - by Alex
    System.Data.Linq.ChangeConflictException: 2 of X updates failed. at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode) at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode) at PROJECT.Controllers.HomeController.ClickProc(Int32 id, String code, String n) This is what I get very often. This action is done thousands of times a day, and I get this exception about once every 5 seconds. From what I understand, it happens when something changes in the database in the period between creating the DataContext and updating it. Am I right? How can I fix it? Update: I just debugged the error and found the following: Table name: dbo.Stats, current value: 9852039, original value: 9852038, database value: 9852039. The Stats table is updated constantly. So how can I still make LINQ save the changes? With "classical" SQL Server access through SqlDataCommand I never had problems like that.

    Read the article

  • Actors in Scala.net

    - by weijiajun
    I have recently completed some study of Erlang, and was intrigued by Scala for its feature set and the ease of interoperating with Java (and possibly .NET) applications. I am finally studying actors and was wondering if there is an actor mechanism that currently works in .NET. I have looked at the libraries that come down with sbaz and have found that there is a scala.Concurrent but no scala.actors.Actor. I tried to use scala.Concurrent.Channel but was unable to use the ! to send messages. I was just wondering if this is something that is currently available and, if so, how do you go about setting it up.

    Read the article

  • Preventing concurrent data modification in a web-based app

    - by Manoj
    Hi, I am building a web app in Silverlight which allows users to view and edit a database. In order to prevent multiple users from editing the same data, I was thinking of implementing a lock and key mechanism, so that other users are made to wait when one particular user is editing the data. Is there any way in which we can have a variable (a flag specifying whether a user is editing the data) on the server which can be shared across multiple clients? Is there a better way to manage this type of concurrent data access issue?
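
    One server-side option is a lease-based lock registry: a shared map records who is editing which record, and an expiry keeps an abandoned lock (a closed browser) from living forever. A minimal Java sketch of the idea; the class names and the two-minute lease are arbitrary.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class EditLockRegistry {
            private static final long LEASE_MILLIS = 2 * 60 * 1000; // lock expires after 2 minutes

            private static final class Lease {
                final String userId;
                final long acquiredAt;
                Lease(String userId, long acquiredAt) {
                    this.userId = userId;
                    this.acquiredAt = acquiredAt;
                }
            }

            private final ConcurrentMap<Long, Lease> locks = new ConcurrentHashMap<>();

            // Returns true if 'userId' now holds the edit lock for the record.
            // An expired lease (user closed the browser) is silently taken over.
            public boolean tryAcquire(long recordId, String userId) {
                long now = System.currentTimeMillis();
                Lease result = locks.compute(recordId, (id, old) -> {
                    if (old == null || now - old.acquiredAt > LEASE_MILLIS
                            || old.userId.equals(userId)) {
                        return new Lease(userId, now); // free, expired, or re-entered: take it
                    }
                    return old;                        // somebody else holds a live lease
                });
                return result.userId.equals(userId);
            }

            // Releases the lock only if the caller actually owns it.
            public void release(long recordId, String userId) {
                locks.computeIfPresent(recordId,
                        (id, lease) -> lease.userId.equals(userId) ? null : lease);
            }
        }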

    Read the article

  • How to parallelize this Groovy code?

    - by lucas
    I'm trying to write a reusable component in Groovy to easily shoot off emails from some of our Java applications. I would like to pass it a List, where Email is just a POJO(POGO?) with some email info. I'd like it to be multithreaded, at least running all the email logic in a second thread, or make one thread per email. I am really foggy on multithreading in Java so that probably doesn't help! I've attempted a few different ways, but here is what I have right now: void sendEmails(List<Email> emails) { def threads = [] def sendEm = emails.each{ email -> def th = new Thread({ Random rand = new Random() def wait = (long)(rand.nextDouble() * 1000) println "in closure" this.sleep wait sendEmail(email) }) println "putting thread in list" threads << th } threads.each { it.run() } threads.each { it.join() } } I was hoping the sleep would randomly slow some threads down so the console output wouldn't be sequential. Instead, I see this: putting thread in list putting thread in list putting thread in list putting thread in list putting thread in list putting thread in list putting thread in list putting thread in list putting thread in list putting thread in list in closure sending email1 in closure sending email2 in closure sending email3 in closure sending email4 in closure sending email5 in closure sending email6 in closure sending email7 in closure sending email8 in closure sending email9 in closure sending email10 sendEmail basically does what you'd expect, including the println statement, and the client that calls this follows, void doSomething() { Mailman emailer = MailmanFactory.getExchangeEmailer() def to = ["one","two"] def from = "noreply" def li = [] def email (1..10).each { email = new Email(to,null,from,"email"+it,"hello") li << email } emailer.sendEmails li }
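
    For what it's worth, Thread.run() executes the closure on the calling thread; only start() hands it to a new thread, which would explain strictly sequential output like the above. A minimal Java sketch of one way to push the sends onto a thread pool instead; the Email class and sendEmail method are hypothetical stand-ins.

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class ParallelMailer {
            // Hypothetical stand-in for the real Email POGO.
            public static class Email {
                final String subject;
                public Email(String subject) { this.subject = subject; }
            }

            public void sendEmails(List<Email> emails) throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(5);
                for (Email email : emails) {
                    // execute() only queues the task; the pool's worker threads
                    // actually run it, so the sends overlap instead of running
                    // one after another on the calling thread.
                    pool.execute(() -> sendEmail(email));
                }
                pool.shutdown();                            // accept no new tasks
                pool.awaitTermination(1, TimeUnit.MINUTES); // wait for queued sends, like join()
            }

            private void sendEmail(Email email) {
                System.out.println(Thread.currentThread().getName() + " sending " + email.subject);
            }
        }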

    Read the article

  • AtomicInteger lazySet and set

    - by Yan Cheng CHEOK
    May I know what the difference is between the lazySet and set methods of AtomicInteger? The javadoc doesn't say much about lazySet: "Eventually sets to the given value." It seems that the AtomicInteger will not immediately be set to the desired value, but that the update will be scheduled to happen some time later. But what is the practical use of this method? Any example?
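
    A minimal sketch of the usual distinction: lazySet() is an ordered store without the full fence of set(), so it is a cheaper write when other threads only need to see the value eventually, for example a single-writer statistics counter. The class and method names below are illustrative.

        import java.util.concurrent.atomic.AtomicInteger;

        public class LazySetExample {
            // Written by exactly one worker thread, read by a monitoring thread
            // that only needs an approximately up-to-date value.
            private final AtomicInteger processed = new AtomicInteger();

            public void workerLoop(int items) {
                for (int i = 1; i <= items; i++) {
                    // ... do the real work for item i ...

                    // lazySet avoids the full memory fence of set(): the store cannot
                    // be reordered with earlier writes, but it may become visible to
                    // other threads slightly later. Fine for a statistics counter.
                    processed.lazySet(i);
                }
                // When the final value must be visible immediately (e.g. another
                // thread is waiting on it), use set() or getAndSet() instead.
                processed.set(items);
            }

            public int approximateProgress() {
                return processed.get();
            }
        }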

    Read the article

  • What Use are Threads Outside of Parallel Problems on MultiCore Systems?

    - by Robert S. Barnes
    Threads make the design, implementation and debugging of a program significantly more difficult. Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single core system. I can understand threading something like an MPEG2 decoder that's going to run on a multicore cpu ( which I've done ), but what can justify the significant development costs threading entails when you're talking about a single core system or even a multicore system if your task doesn't gain significant performance from a parallel implementation? Or more succinctly, what kinds of non-performance related problems justify threading? Edit Well I just ran across one instance that's not CPU limited but threads make a big difference: TCP, HTTP and the Multi-Threading Sweet Spot Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high latency network connection. Non-blocking I/O would use significantly less local CPU resources, but would be much more difficult to design and implement.
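
    A minimal Java sketch of the bandwidth case mentioned in the edit: several threads each blocked on network reads can keep a high-latency link busier than a single thread could, at almost no CPU cost. The URLs are placeholders.

        import java.io.InputStream;
        import java.net.URL;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class ParallelFetch {
            public static void main(String[] args) throws Exception {
                // Hypothetical URLs; the point is that each request spends most of
                // its time waiting on the network, not on the CPU.
                List<String> urls = List.of(
                        "http://example.com/a", "http://example.com/b", "http://example.com/c");

                ExecutorService pool = Executors.newFixedThreadPool(urls.size());
                for (String u : urls) {
                    pool.execute(() -> {
                        try (InputStream in = new URL(u).openStream()) {
                            byte[] buf = new byte[8192];
                            long total = 0;
                            int n;
                            while ((n = in.read(buf)) != -1) {
                                total += n; // each blocked read only stalls this one thread
                            }
                            System.out.println(u + ": " + total + " bytes");
                        } catch (Exception e) {
                            System.err.println(u + ": " + e.getMessage());
                        }
                    });
                }
                pool.shutdown();
            }
        }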

    Read the article
