Search Results

Search found 3641 results on 146 pages for 'threads'.

Page 35/146 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • How to get a stable, snappy UI using threads?

    - by Thomas Ahle
    I was recently watching this video on Google Chrome with great interest. It explains that Google Chrome uses one thread for IO, one for opening files and one for inter-module communication. I think I may be able to use something similar for my own - currently quite messy - application. Are there any good articles on best practices or patterns for dividing tasks across threads like this?
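    One common pattern (sketched here in Java, with hypothetical names such as WorkerThreads and fetch) is to give each concern its own single-threaded executor, so the UI thread only ever posts work and receives results, and never blocks itself:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.function.Consumer;

        // One dedicated worker per concern keeps the UI thread free to stay responsive.
        public class WorkerThreads {
            private final ExecutorService ioPool = Executors.newSingleThreadExecutor();
            private final ExecutorService filePool = Executors.newSingleThreadExecutor();
            private final ExecutorService messagePool = Executors.newSingleThreadExecutor();

            // Called from the UI thread; returns immediately.
            public void loadUrl(String url, Consumer<String> onDone) {
                ioPool.submit(() -> {
                    String body = fetch(url);   // long-running network I/O off the UI thread
                    onDone.accept(body);        // in a real app, marshal this back onto the UI thread
                });
            }

            private String fetch(String url) { return "..."; } // placeholder for the real I/O

            public void shutdown() {
                ioPool.shutdown();
                filePool.shutdown();
                messagePool.shutdown();
            }
        }

    The same shape extends to the file and inter-module executors: each queue serializes its own kind of work, which keeps the UI snappy without fine-grained locking.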

    Read the article

  • Issues with time slicing

    - by user12331
    I was trying to see the effect of time slicing, and how it can consume a significant amount of time. Specifically, I was dividing a fixed amount of work across a number of threads and observing the effect. I have a two-core processor, so two threads can run in parallel. Suppose a piece of work w is done by 2 threads, and the same work is then done by t threads, each thread doing w/t of the work. How much of a role does time slicing play? Since time slicing is itself a time-consuming process, I expected that the t-thread version would take longer than the two-thread version. Any suggestions?
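    One way to observe the effect directly is to keep the total work fixed and time it as the thread count grows. The Java sketch below is illustrative only; busyWork and the iteration counts are stand-ins for the real workload:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class SliceTest {

            // Stand-in for the real work w; just burns CPU.
            static void busyWork(long iterations) {
                double x = 0;
                for (long i = 0; i < iterations; i++) x += Math.sqrt(i);
                if (x == -1) System.out.println(x); // keep the JIT from discarding the loop
            }

            public static void main(String[] args) throws Exception {
                final long totalWork = 200_000_000L;
                for (int t = 1; t <= 64; t *= 2) {
                    final int threads = t;
                    ExecutorService pool = Executors.newFixedThreadPool(threads);
                    List<Callable<Void>> tasks = new ArrayList<>();
                    for (int i = 0; i < threads; i++) {
                        tasks.add(() -> { busyWork(totalWork / threads); return null; });
                    }
                    long start = System.nanoTime();
                    pool.invokeAll(tasks); // same total work, divided among 'threads' workers
                    System.out.println(threads + " threads: "
                            + (System.nanoTime() - start) / 1_000_000 + " ms");
                    pool.shutdown();
                }
            }
        }

    On a two-core machine the interesting comparison is t = 2 against larger t: any extra time beyond the two-thread case is scheduling and context-switch overhead rather than useful parallelism.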

    Read the article

  • C++: static function member shared between threads, can block all?

    - by mhambra
    Hi all, I have a class with a static function defined to work with a C-style API: extern "C" { static void callback(foo bar) { } } // static, defined in the header. Three objects (each in a separate pthread) are instantiated from this class; each of them runs its own loop (started in the class constructor), which can receive the callback. The pointer to the function is passed as: x = init_function(h, queue_id, &callback, NULL); while(1) { loop_function(x); } So every thread holds the same pointer, &callback. The callback function can block for minutes. Each thread object, except the one stuck in the blocking callback, can still be handed the callback again. My worry: if the callback function exists only once, would every thread that invokes it also block? That would be an undesirable bug, so it is interesting to ask: can anything in C++ actually behave this way, perhaps because of the extern { } block or some pointer usage?

    Read the article

  • Scaling a ruby script by launching multiple processes instead of using threads.

    - by Zombies
    I want to increase the throughput of a script which does network I/O (a scraper). Instead of making it multithreaded in Ruby (I use the default 1.9.1 interpreter), I want to launch multiple processes. Is there a system for doing this where I can track when one process finishes and re-launch it, so that I always have X of them running at any time? Also, some will run with different command-line args. I was thinking of writing a bash script, but that sounds like a potentially bad idea if there is already an existing way to do something like this on Linux.

    Read the article

  • How do I read and write to a file using threads in Java?

    - by WarmWaffles
    I'm writing an application where I need to read blocks from a single file, each block roughly 512 bytes, and I also need to write blocks simultaneously. One idea I had was a BlockReader that implements Runnable, a BlockWriter that implements Runnable, and a BlockManager that manages both the reader and the writer. The problem I am seeing with most examples I have found is locking problems and potential deadlock situations. Any ideas on how to implement this?
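    One common way to sidestep the locking and deadlock issues is to let a small number of owner threads touch the file and have everything else talk to them through a BlockingQueue. A rough Java sketch (the BlockManager name and the assumption of full 512-byte blocks are just for illustration):

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class BlockManager {
            static final int BLOCK_SIZE = 512;

            // Blocks read from the file, waiting to be processed.
            private final BlockingQueue<byte[]> readQueue = new ArrayBlockingQueue<>(64);
            // Blocks produced elsewhere, waiting to be written.
            private final BlockingQueue<byte[]> writeQueue = new ArrayBlockingQueue<>(64);

            public byte[] nextBlock() throws InterruptedException { return readQueue.take(); }
            public void submitBlock(byte[] block) throws InterruptedException { writeQueue.put(block); }

            // The reader and writer each own their stream, so no shared locks are needed.
            Runnable reader(DataInputStream in) {
                return () -> {
                    try {
                        byte[] buf = new byte[BLOCK_SIZE];
                        while (in.read(buf) == BLOCK_SIZE) {   // assumes the file is a whole number of blocks
                            readQueue.put(buf.clone());
                        }
                    } catch (Exception e) { e.printStackTrace(); }
                };
            }

            Runnable writer(DataOutputStream out) {
                return () -> {
                    try {
                        while (true) {
                            out.write(writeQueue.take());
                        }
                    } catch (Exception e) { e.printStackTrace(); }
                };
            }
        }

    If the reads and writes must hit the same file, a single owner thread using a FileChannel with position-based reads and writes removes the shared-lock problem entirely.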

    Read the article

  • Reading from a very large table using multiple threads (Java) and writing them to a single file

    - by user2534926
    I am currently facing a situation where I have a table with almost 80 million rows, and I have to take a dump of that table and store it in a CSV file. Currently I am using a not-so-professional approach (a Perl script with the DBI interface, printing the values to stdout and redirecting them to a CSV file). Now I am planning to use a Java threading approach. Can you suggest a way forward? Thanks in advance.
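    A common shape for this in Java is a handful of JDBC reader threads, each pulling a disjoint id range, all feeding one writer thread that owns the CSV file. The sketch below is only illustrative - the table and column names, the id ranges and the connection string are made up:

        import java.io.PrintWriter;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class TableDumper {
            private static final String DONE = "\u0000EOF"; // sentinel telling the writer a reader finished

            public static void main(String[] args) throws Exception {
                final BlockingQueue<String> lines = new LinkedBlockingQueue<>(10_000);
                final int readers = 4;
                final long rowsPerReader = 20_000_000L; // assumed split of ~80M rows by id

                for (int i = 0; i < readers; i++) {
                    final long lo = i * rowsPerReader, hi = lo + rowsPerReader;
                    new Thread(() -> {
                        try (Connection c = DriverManager.getConnection("jdbc:...", "user", "pass");
                             PreparedStatement ps = c.prepareStatement(
                                     "SELECT id, payload FROM big_table WHERE id >= ? AND id < ?")) {
                            ps.setFetchSize(10_000); // stream rows instead of buffering them all
                            ps.setLong(1, lo);
                            ps.setLong(2, hi);
                            try (ResultSet rs = ps.executeQuery()) {
                                while (rs.next()) {
                                    lines.put(rs.getLong(1) + "," + rs.getString(2));
                                }
                            }
                            lines.put(DONE);
                        } catch (Exception e) { throw new RuntimeException(e); }
                    }, "reader-" + i).start();
                }

                // A single writer owns the file, so no locking is needed around the output.
                try (PrintWriter out = new PrintWriter("dump.csv")) {
                    int finished = 0;
                    while (finished < readers) {
                        String line = lines.take();
                        if (DONE.equals(line)) finished++; else out.println(line);
                    }
                }
            }
        }

    With one writer owning the file the readers never contend on I/O, and the bounded queue keeps memory roughly flat even for 80 million rows.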

    Read the article

  • Large number of simultaneous long-running operations in Qt

    - by Hostile Fork
    I have some long-running operations that number in the hundreds. At the moment they are each on their own thread. My main goal in using threads is not to speed these operations up. The more important thing in this case is that they appear to run simultaneously. I'm aware of cooperative multitasking and fibers. However, I'm trying to avoid anything that would require touching the code in the operations, e.g. peppering them with things like yieldToScheduler(). I also don't want to prescribe that these routines be stylized to be coded to emit queues of bite-sized task items...I want to treat them as black boxes. For the moment I can live with these downsides: Maximum # of threads tend to be O(1000) Cost per thread is O(1MB) To address the bad cache performance due to context-switches, I did have the idea of a timer which would juggle the priorities such that only idealThreadCount() threads were ever at Normal priority, with all the rest set to Idle. This would let me widen the timeslices, which would mean fewer context switches and still be okay for my purposes. Question #1: Is that a good idea at all? One certain downside is it won't work on Linux (docs say no QThread::setPriority() there). Question #2: Any other ideas or approaches? Is QtConcurrent thinking about this scenario? (Some related reading: how-many-threads-does-it-take-to-make-them-a-bad-choice, many-threads-or-as-few-threads-as-possible, maximum-number-of-threads-per-process-in-linux)

    Read the article

  • Make Python Socket Server More Efficient

    - by BenMills
    I have very little experience working with sockets and multithreaded programming so to learn more I decided to see if I could hack together a little python socket server to power a chat room. I ended up getting it working pretty well but then I noticed my server's CPU usage spiked up over 100% when I had it running in the background. Here is my code in full: http://gist.github.com/332132 I know this is a pretty open ended question so besides just helping with my code are there any good articles I could read that could help me learn more about this? My full code: import select import socket import sys import threading from daemon import Daemon class Server: def __init__(self): self.host = '' self.port = 9998 self.backlog = 5 self.size = 1024 self.server = None self.threads = [] self.send_count = 0 def open_socket(self): try: self.server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) self.server.bind((self.host,self.port)) self.server.listen(5) print "Server Started..." except socket.error, (value,message): if self.server: self.server.close() print "Could not open socket: " + message sys.exit(1) def remove_thread(self, t): t.join() def send_to_children(self, msg): self.send_count = 0 for t in self.threads: t.send_msg(msg) print 'Sent to '+str(self.send_count)+" of "+str(len(self.threads)) def run(self): self.open_socket() input = [self.server,sys.stdin] running = 1 while running: inputready,outputready,exceptready = select.select(input,[],[]) for s in inputready: if s == self.server: # handle the server socket c = Client(self.server.accept(), self) c.start() self.threads.append(c) print "Num of clients: "+str(len(self.threads)) self.server.close() for c in self.threads: c.join() class Client(threading.Thread): def __init__(self,(client,address), server): threading.Thread.__init__(self) self.client = client self.address = address self.size = 1024 self.server = server self.running = True def send_msg(self, msg): if self.running: self.client.send(msg) self.server.send_count += 1 def run(self): while self.running: data = self.client.recv(self.size) if data: print data self.server.send_to_children(data) else: self.running = False self.server.threads.remove(self) self.client.close() """ Run Server """ class DaemonServer(Daemon): def run(self): s = Server() s.run() if __name__ == "__main__": d = DaemonServer('/var/servers/fserver.pid') if len(sys.argv) == 2: if 'start' == sys.argv[1]: d.start() elif 'stop' == sys.argv[1]: d.stop() elif 'restart' == sys.argv[1]: d.restart() else: print "Unknown command" sys.exit(2) sys.exit(0) else: print "usage: %s start|stop|restart" % sys.argv[0] sys.exit(2)

    Read the article

  • Can't get any speedup from parallelizing Quicksort using Pthreads

    - by Murat Ayfer
    I'm using Pthreads to create a new thread for each partition after the list is split into the right and left halves (less than and greater than the pivot). I do this recursively until I reach the maximum number of allowed threads. When I use printfs to follow what goes on in the program, I clearly see that each thread is doing its delegated work in parallel. However, using a single process is always the fastest. As soon as I try to use more threads, the time it takes to finish almost doubles, and keeps increasing with the number of threads. I am allowed to use up to 16 processors on the server I am running it on. The algorithm goes like this: split the array into right and left halves by comparing the elements to the pivot; start a new thread for the right and for the left, and wait until the threads join back. If there are more available threads, they can create more recursively. Each thread waits for its children to join. Everything makes sense to me, and sorting works perfectly well, but more threads make it slow down immensely. I tried setting a minimum number of elements per partition for a thread to be started (e.g. 50000). I also tried an approach where, when a thread is done, it allows another thread to be started, which leads to hundreds of threads starting and finishing throughout; I think the overhead was way too much, so I got rid of that, and if a thread was done executing, no new thread was created. That gave a little more speedup, but it is still a lot slower than a single process. The code I used is below. http://pastebin.com/UaGsjcq2 Does anybody have any clue as to what I could be doing wrong?
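    The usual remedies are to stop spawning below a size threshold (sorting small partitions sequentially) and to reuse a fixed set of workers instead of creating and joining threads at every level of the recursion. This is not the asker's pthreads code, but the same idea sketched with Java's fork/join framework shows the shape:

        import java.util.Arrays;
        import java.util.Random;
        import java.util.concurrent.ForkJoinPool;
        import java.util.concurrent.RecursiveAction;

        public class ParallelQuicksort extends RecursiveAction {
            private static final int CUTOFF = 50_000; // below this, thread overhead outweighs the parallelism
            private final int[] a;
            private final int lo, hi; // sorts a[lo..hi] inclusive

            ParallelQuicksort(int[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

            @Override
            protected void compute() {
                if (hi - lo < CUTOFF) {
                    Arrays.sort(a, lo, hi + 1); // sequential below the cutoff
                    return;
                }
                int p = partition(a, lo, hi);
                // Fork both halves; the pool reuses a fixed set of worker threads.
                invokeAll(new ParallelQuicksort(a, lo, p - 1), new ParallelQuicksort(a, p + 1, hi));
            }

            private static int partition(int[] a, int lo, int hi) {
                int pivot = a[hi], i = lo;
                for (int j = lo; j < hi; j++) {
                    if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
                }
                int t = a[i]; a[i] = a[hi]; a[hi] = t;
                return i;
            }

            public static void main(String[] args) {
                int[] data = new Random(42).ints(10_000_000).toArray();
                new ForkJoinPool().invoke(new ParallelQuicksort(data, 0, data.length - 1));
                System.out.println("first=" + data[0] + " last=" + data[data.length - 1]);
            }
        }

    The cutoff value is machine-dependent; the point is that thread creation, joining and cache disruption dominate once partitions get small, which matches the slowdown described above.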

    Read the article

  • thread reaches end but isn't removed

    - by pstanton
    I create a bunch of threads to do some processing: new Thread("upd-" + id){ @Override public void run(){ try{ doSomething(); } catch (Throwable e){ LOG.error("error", e); } finally{ LOG.debug("thread death"); } } }.start(); I know I should be using a thread pool, but I need to understand the following problem before I change it. I'm using Eclipse's debugger and looking at the threads in the debug pane, which lists active threads. Many of them complete as you would expect and are removed from the debug pane; however, some seem to stay in the list of active threads even though the log shows the "thread death" entry for them. When I attempt to debug these threads, they either do not pause for debugging or show an error dialog: "A timeout occurred while retrieving stack frames for thread: upd-...". There is some synchronization going on within the doSomething() call, but I'm fairly sure it's OK, and since the "thread death" log is being written I'm assuming these threads aren't deadlocked in that method. I don't do any Thread.join()s; I do call a third-party API, but I doubt they do either. Can anyone think of another reason these threads are lingering? Thanks. EDIT: I created this test to check the garbage collection theory: Thread thread = new Thread("!!!!!!!!!!!!!!!!") { @Override public void run() { System.out.println("running"); ThreadUs.sleepQuiet(5000); System.out.println("finished"); // <-- thread removed from list here } }; thread.start(); ThreadUs.sleepQuiet(10000); System.out.println(thread.isAlive()); // <-- thread already removed from list but hasn't been GC'd ThreadUs.sleepQuiet(10000); This proves that it has nothing to do with garbage collection, as Eclipse removes the thread from the thread list as soon as it completes and isn't waiting for the object to be de-referenced/GC'd.

    Read the article

  • Thread management advice - Is TPL a good idea?

    - by Ian
    I'm hoping to get some advice on thread management and, hopefully, the Task Parallel Library, because I'm not sure I've been going down the correct route. It's probably best if I give an outline of what I'm trying to do. Given a Problem, I need to generate a Solution using a heuristic-based algorithm. I start off by calculating a base solution; I don't think this operation can be parallelised, so we don't need to worry about it. Once the initial solution has been generated, I want to trigger n threads which attempt to find a better solution. These threads need to do a couple of things: They need to be initialized with a different 'optimization metric'. In other words, they are attempting to optimize different things, with a precedence level set within code. This means they all run slightly different calculation engines. I'm not sure if I can do this with the TPL. If one of the threads finds a better solution than the currently best known solution (which needs to be shared across all threads), then it needs to update the best solution and force a number of other threads to restart (again, this depends on the precedence levels of the optimization metrics). I may also wish to combine certain calculations across threads (e.g. keep a union of probabilities for a certain approach to the problem), though this is probably more optional. The whole system obviously needs to be thread safe, and I want it to run as fast as possible. I tried an implementation that involved managing my own threads and shutting them down etc., but it started getting quite complicated, and I'm now wondering if the TPL might be better. Can anyone offer any general guidance? Thanks...

    Read the article

  • Concurrent Programming: Should I write a sequential program first, then add thread safety?

    - by evthim
    I'm working on a project where we have to create a number of threads (the actual number will be input by the testers (TAs)). I'm having trouble not only with the programming but also with the design; I can't wrap my head around all of the threads that will be invoked and where I might cause errors. The project is due soon, so I don't want to waste time on this if it'll actually set me back, but I was wondering: should I write the program as if only one thread will be running, with everything sequential, and then later go back and try to add the thread-safety parts of the code? Would that take twice the original amount of time? Project description (note: I'm going to be as vague as possible so I don't violate any honor codes, sorry): the program should accept n objectA threads, m objectB threads, and r objectC threads. objectB threads interact with code in objectA; objectA threads interact with code in objectB and objectC; objectB and objectC don't interact directly, but do so indirectly through objectA - e.g. objectB needs something from objectA, and objectA gets the result for that something by calling objectC. My confusion stems mostly from the fact that all of these interactions will be done by m+n threads, and there are various restrictions throughout the description, like: objectB can request something from objectA, and objectA has to wait for objectC to finish that something before returning it to objectB; also, each objectA thread can only work on one instruction from objectB at a time, etc. I just want to know: if I write the code so that there is only 1 objectA, 1 objectB and 1 objectC, can I go back and easily modify it so that those 1's can be changed to m, n and r? Sorry again if my description is a little confusing.

    Read the article

  • Get the first and last posts in a thread

    - by Grampa
    I am trying to code a forum website and I want to display a list of threads. Each thread should be accompanied by info about the first post (the "head" of the thread) as well as the last. My current database structure is the following: threads table: id - int, PK, not NULL, auto-increment name - varchar(255) posts table: id - int, PK, not NULL, auto-increment thread_id - FK for threads The tables have other fields as well, but they are not relevant for the query. I am interested in querying threads and somehow JOINing with posts so that I obtain both the first and last post for each thread in a single query (with no subqueries). So far I am able to do it using multiple queries, and I have defined the first post as being: SELECT * FROM threads t LEFT JOIN posts p ON t.id = p.thread_id ORDER BY p.id LIMIT 0, 1 The last post is pretty much the same except for ORDER BY id DESC. Now, I could select multiple threads with their first or last posts, by doing: SELECT * FROM threads t LEFT JOIN posts p ON t.id = p.thread_id ORDER BY p.id GROUP BY t.id But of course I can't get both at once, since I would need to sort both ASC and DESC at the same time. What is the solution here? Is it even possible to use a single query? Is there any way I could change the structure of my tables to facilitate this? If this is not doable, then what tips could you give me to improve the query performance in this particular situation?

    Read the article

  • Can one thread open a socket and other thread close it?

    - by Pkp
    I have some kernel threads in the Linux kernel, inside my KLM. I have a server thread that listens on the channel; once it sees there is an incoming connection, it creates an accept socket, accepts the connection and spawns a child thread. It also passes the accepted socket to the child kernel thread as the (void *) argument. The code is working fine. I have a design question: suppose the threads, main and children, now have to be terminated; what would be the best way to close the accepted sockets? I can see two ways. 1] The main thread waits for all the child threads to exit; each of the child threads closes its accepted socket while exiting, and the last child thread signals the main thread so it can exit. Here, even though the main thread was the one that created the accepted socket, the child threads close that socket, and they do this before the main thread exits. Is this acceptable? Any problems you foresee here? 2] The main thread closes all the accepted sockets it created before it exits. But there is a possibility (corner case) that the main thread hits an exception and has to close early; if it closes the accepted sockets before exiting, the child threads still using those sockets will be in danger. Hence I am using the first approach. Let me know what you think.

    Read the article

  • Thread scheduling Round Robin / scheduling dispatch

    - by MRP
    #include <pthread.h> #include <stdio.h> #include <stdlib.h> #include <semaphore.h> #define NUM_THREADS 4 #define COUNT_LIMIT 13 int done = 0; int count = 0; int quantum = 2; int thread_ids[4] = {0,1,2,3}; int thread_runtime[4] = {0,5,4,7}; pthread_mutex_t count_mutex; pthread_cond_t count_threshold_cv; void * inc_count(void * arg); static sem_t count_sem; int quit = 0; ///////// Inc_Count//////////////// void *inc_count(void *t) { long my_id = (long)t; int i; sem_wait(&count_sem); /////////////CRIT SECTION////////////////////////////////// printf("run_thread = %d\n",my_id); printf("%d \n",thread_runtime[my_id]); for( i=0; i < thread_runtime[my_id];i++) { printf("runtime= %d\n",thread_runtime[my_id]); pthread_mutex_lock(&count_mutex); count++; if (count == COUNT_LIMIT) { pthread_cond_signal(&count_threshold_cv); printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count); } printf("inc_count(): thread %ld, count = %d, unlocking mutex\n",my_id, count); pthread_mutex_unlock(&count_mutex); sleep(1) ; }//End For sem_post(&count_sem); // Next Thread Enters Crit Section pthread_exit(NULL); } /////////// Count_Watch //////////////// void *watch_count(void *t) { long my_id = (long)t; printf("Starting watch_count(): thread %ld\n", my_id); pthread_mutex_lock(&count_mutex); if (count<COUNT_LIMIT) { pthread_cond_wait(&count_threshold_cv, &count_mutex); printf("watch_count(): thread %ld Condition signal received.\n", my_id); printf("watch_count(): thread %ld count now = %d.\n", my_id, count); } pthread_mutex_unlock(&count_mutex); pthread_exit(NULL); } ////////////////// Main //////////////// int main (int argc, char *argv[]) { int i; long t1=0, t2=1, t3=2, t4=3; pthread_t threads[4]; pthread_attr_t attr; sem_init(&count_sem, 0, 1); /* Initialize mutex and condition variable objects */ pthread_mutex_init(&count_mutex, NULL); pthread_cond_init (&count_threshold_cv, NULL); /* For portability, explicitly create threads in a joinable state */ pthread_attr_init(&attr); pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE); pthread_create(&threads[0], &attr, watch_count, (void *)t1); pthread_create(&threads[1], &attr, inc_count, (void *)t2); pthread_create(&threads[2], &attr, inc_count, (void *)t3); pthread_create(&threads[3], &attr, inc_count, (void *)t4); /* Wait for all threads to complete */ for (i=0; i<NUM_THREADS; i++) { pthread_join(threads[i], NULL); } printf ("Main(): Waited on %d threads. Done.\n", NUM_THREADS); /* Clean up and exit */ pthread_attr_destroy(&attr); pthread_mutex_destroy(&count_mutex); pthread_cond_destroy(&count_threshold_cv); pthread_exit(NULL); } I am trying to learn thread scheduling, there is a lot of technical coding that I don't know. I do know in theory how it should work, but having trouble getting started in code... I know, at least I think, this program is not real time and its not meant to be. Some how I need to create a scheduler dispatch to control the threads in the order they should run... RR FCFS SJF ect. Right now I don't have a dispatcher. What I do have is semaphores/ mutex to control the threads. This code does run FCFS... and I have been trying to use semaphores to create a RR.. but having a lot of trouble. I believe it would be easier to create a dispatcher but I dont know how. I need help, I am not looking for answers just direction.. some sample code will help to understand a bit more. Thank you.

    Read the article

  • gdb : multithreading

    - by Arpit
    Hi, I have a program which uses two threads, and I have put a breakpoint in both of them. While running the program under gdb, I want to switch between the threads and make them run (thread T1 is active and running while thread T2 is held at its breakpoint; I want to stop T1 and run T2). Is there any way I can schedule the threads like this in gdb? Thanks, Arpit

    Read the article

  • Fast inter-process (inter-thread) communication (IPC) on a large multi-CPU system

    - by IPC
    What would be the fastest portable bi-directional mechanism for inter-process communication where threads from one application need to communicate with multiple threads in another application on the same computer, and the communicating threads can be on different physical CPUs? I assume it would involve shared memory, a circular buffer and shared synchronization mechanisms. But shared mutexes are very expensive for synchronizing threads that are running on different physical CPUs (and there is a limited number of them, too).

    Read the article

  • PHP/Java bridge problem

    - by Jack
    I am using tomcat 6 on windows. Here is the code I am testing. import java.io.ByteArrayOutputStream; import java.io.Closeable; import java.io.StringReader; import javax.script.Invocable; import javax.script.ScriptEngine; import javax.script.ScriptEngineManager; /** * Create and run THREAD_COUNT PHP threads, concurrently accessing a * shared resource. * * Create 5 script engines, passing each a shared resource allocated * from Java. Each script engine has to implement Runnable. * * Java accesses the Runnable script engine using * scriptEngine.getInterface() and calls thread.start() to invoke each * PHP Runnable implementations concurrently. */ class PhpThreads { public static final String runnable = new String("<?php\n" + "function run() {\n" + " $out = java_context()->getAttribute('sharedResource', 100);\n" + " $nr = (string)java_context()->getAttribute('nr', 100);\n" + " echo \"started thread: $nr\n\";\n" + " for($i=0; $i<100; $i++) {\n" + " $out->write(ord($nr));\n" + " java('java.lang.Thread')->sleep(1);\n" + " }\n" + "}\n" + "?>\n"); static final int THREAD_COUNT = 5; public static void main(String[] args) throws Exception { ScriptEngineManager manager = new ScriptEngineManager(); Thread threads[] = new Thread[THREAD_COUNT]; ScriptEngine engines[] = new ScriptEngine[THREAD_COUNT]; ByteArrayOutputStream sharedResource = new ByteArrayOutputStream(); StringReader runnableReader = new StringReader(runnable); // create THREAD_COUNT PHP threads for (int i=0; i<THREAD_COUNT; i++) { engines[i] = manager.getEngineByName("php-invocable"); if (engines[i] == null) throw new NullPointerException ("php script engine not found"); engines[i].put("nr", new Integer(i+1)); engines[i].put("sharedResource", sharedResource); engines[i].eval(runnableReader); runnableReader.reset(); // cast the whole script to Runnable; note also getInterface(specificClosure, type) Runnable r = (Runnable) ((Invocable)engines[i]).getInterface(Runnable.class); threads[i] = new Thread(r); } // run the THREAD_COUNT PHP threads for (int i=0; i<THREAD_COUNT; i++) { threads[i].start(); } // wait for the THREAD_COUNT PHP threads to finish for (int i=0; i<THREAD_COUNT; i++) { threads[i].join(); ((Closeable)engines[i]).close(); } // print the output generated by the THREAD_COUNT concurrent threads String result = sharedResource.toString(); System.out.println(result); // Check result Object res=manager.getEngineByName("php").eval( "<?php " + "exit((int)('10011002100310041005'!=" + "@system(\"echo -n "+result+"|sed 's/./&\\\n/g'|sort|uniq -c|tr -d ' \\\n'\")));" + "?>"); System.exit(((Number)res).intValue()); } } I have added all the libraries. 
When I run the file I get the following error - run: Exception in thread "main" javax.script.ScriptException: java.io.IOException: Cannot run program "php-cgi": CreateProcess error=2, The system cannot find the file specified at php.java.script.InvocablePhpScriptEngine.eval(InvocablePhpScriptEngine.java:209) at php.java.script.SimplePhpScriptEngine.eval(SimplePhpScriptEngine.java:178) at javax.script.AbstractScriptEngine.eval(AbstractScriptEngine.java:232) at PhpThreads.main(NewClass.java:53) Caused by: java.io.IOException: Cannot run program "php-cgi": CreateProcess error=2, The system cannot find the file specified at java.lang.ProcessBuilder.start(ProcessBuilder.java:459) at java.lang.Runtime.exec(Runtime.java:593) at php.java.bridge.Util$Process.start(Util.java:1064) at php.java.bridge.Util$ProcessWithErrorHandler.start(Util.java:1166) at php.java.bridge.Util$ProcessWithErrorHandler.start(Util.java:1217) at php.java.script.CGIRunner.doRun(CGIRunner.java:126) at php.java.script.HttpProxy.doRun(HttpProxy.java:63) at php.java.script.CGIRunner.run(CGIRunner.java:111) at php.java.bridge.ThreadPool$Delegate.run(ThreadPool.java:60) Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified at java.lang.ProcessImpl.create(Native Method) at java.lang.ProcessImpl.<init>(ProcessImpl.java:81) at java.lang.ProcessImpl.start(ProcessImpl.java:30) at java.lang.ProcessBuilder.start(ProcessBuilder.java:452) ... 8 more What am I missing?

    Read the article

  • C#: System.Collections.Concurrent.ConcurrentQueue vs. Queue

    - by James Michael Hare
    I love new toys, so of course when .NET 4.0 came out I felt like the proverbial kid in the candy store! Now, some people get all excited about the IDE and its new features or about changes to WPF and Silverlight, and yes, those are all very fine and grand. But me, I get all excited about things that tend to affect my life on the backside of development. That's why when I heard there were going to be concurrent container implementations in the latest version of .NET, I was salivating like Pavlov's dog at the dinner bell.

    They seem so simple, really, that one could easily overlook them. Essentially they are implementations of containers (many that mirror the generic collections, others are new) that have been optimized with very efficient, limited, or no locking but are still completely thread safe -- and I just had to see what kind of an improvement that would translate into.

    Since part of my job as a solutions architect here where I work is to help design, develop, and maintain the systems that process tons of requests each second, the thought of extremely efficient thread-safe containers was extremely appealing. Of course, they also rolled out a whole parallel development framework which I won't get into in this post but will cover bits and pieces of as time goes by.

    This time, I was mainly curious as to how well these new concurrent containers would perform compared to areas in our code where we manually synchronize them using lock or some other mechanism. So I set about to run a processing test with a series of producers and consumers that would be processing either a traditional System.Collections.Generic.Queue or a System.Collections.Concurrent.ConcurrentQueue.

    Now, I wanted to keep the code as common as possible to make sure that the only variance was the container, so I created a test Producer and a test Consumer. The test Producer takes an Action<string> delegate which is responsible for taking a string and placing it on whichever queue we're testing in a thread-safe manner:

     1: internal class Producer
     2: {
     3:     public int Iterations { get; set; }
     4:     public Action<string> ProduceDelegate { get; set; }
     5: 
     6:     public void Produce()
     7:     {
     8:         for (int i = 0; i < Iterations; i++)
     9:         {
    10:             ProduceDelegate("Hello");
    11:         }
    12:     }
    13: }

    Then likewise, I created a consumer that took a Func<string> that would read from whichever queue we're testing and return either the string if data exists or null if not. Then, if the item doesn't exist, it will do a 10 ms wait before testing again. Once all the producers are done and join the main thread, a flag will be set in each of the consumers to tell them that once the queue is empty they can shut down, since no other data is coming:

     1: internal class Consumer
     2: {
     3:     public Func<string> ConsumeDelegate { get; set; }
     4:     public bool HaltWhenEmpty { get; set; }
     5: 
     6:     public void Consume()
     7:     {
     8:         bool processing = true;
     9: 
    10:         while (processing)
    11:         {
    12:             string result = ConsumeDelegate();
    13: 
    14:             if (result == null)
    15:             {
    16:                 if (HaltWhenEmpty)
    17:                 {
    18:                     processing = false;
    19:                 }
    20:                 else
    21:                 {
    22:                     Thread.Sleep(TimeSpan.FromMilliseconds(10));
    23:                 }
    24:             }
    25:             else
    26:             {
    27:                 DoWork(); // do something non-trivial so consumers lag behind a bit
    28:             }
    29:         }
    30:     }
    31: }

    Okay, now that we've done that, we can launch threads of varying numbers using lambdas for each different method of production/consumption. First let's look at the lambdas for a typical System.Collections.Generic.Queue with locking:

     1: // lambda for putting to typical Queue with locking...
     2: var productionDelegate = s =>
     3: {
     4:     lock (_mutex)
     5:     {
     6:         _mutexQueue.Enqueue(s);
     7:     }
     8: };
     9: 
    10: // and lambda for typical getting from Queue with locking...
    11: var consumptionDelegate = () =>
    12: {
    13:     lock (_mutex)
    14:     {
    15:         if (_mutexQueue.Count > 0)
    16:         {
    17:             return _mutexQueue.Dequeue();
    18:         }
    19:     }
    20:     return null;
    21: };

    Nothing new or interesting here, just typical locks on an internal object instance. Now let's look at using a ConcurrentQueue from the System.Collections.Concurrent library:

     1: // lambda for putting to a ConcurrentQueue, notice it needs no locking!
     2: var productionDelegate = s =>
     3: {
     4:     _concurrentQueue.Enqueue(s);
     5: };
     6: 
     7: // lambda for getting from a ConcurrentQueue, once again, no locking required.
     8: var consumptionDelegate = () =>
     9: {
    10:     string s;
    11:     return _concurrentQueue.TryDequeue(out s) ? s : null;
    12: };

    So I pass each of these lambdas and the number of producer and consumer threads to launch, and take a look at the timing results. Basically I'm timing from the time all threads start and begin producing/consuming to the time that all threads rejoin. I won't bore you with the test code; basically it just creates the producers and consumers, launches them in their own threads, then waits for them all to rejoin. The following are the timings from the start of all threads to the Join() on all threads completing. The producers create 10,000,000 items evenly between themselves, and when all producers are done they trigger the consumers to stop once the queue is empty.

    These are the results in milliseconds from the ordinary Queue with locking:

     1: Consumers   Producers     1       2       3     Time (ms)
     2: ----------  ----------  ------  ------  ------  ---------
     3:          1           1    4284    5153    4226    4554.33
     4:         10          10    4044    3831    5010    4295.00
     5:        100         100    5497    5378    5612    5495.67
     6:       1000        1000   24234   25409   27160   25601.00

    And the following are the results in milliseconds from the ConcurrentQueue with no locking necessary:

     1: Consumers   Producers     1       2       3     Time (ms)
     2: ----------  ----------  ------  ------  ------  ---------
     3:          1           1    3647    3643    3718    3669.33
     4:         10          10    2311    2136    2142    2196.33
     5:        100         100    2480    2416    2190    2362.00
     6:       1000        1000    7289    6897    7061    7082.33

    Note that even though 2000 threads is obviously quite extreme, the concurrent queue actually scales really well, whereas the traditional queue with simple locking scales much more poorly. I love the new concurrent collections; they look so much simpler without littering your code with the locking logic, and they perform much better. All in all, a great new toy to add to your arsenal of multi-threaded processing!

    Read the article

  • Thread vs ThreadPool - .Net 2.0

    - by NLV
    Hello, I'm not able to understand the difference between Thread and ThreadPool. Say I have to manipulate 50,000 records using threads. With raw threads I need to predefine either the number of threads or the number of records per thread, and either of them has to be constant. With a thread pool we theoretically don't need to set either of them. But in practice we still need to assign the number of records per thread, because the number of threads may grow extremely large if the number of input records is huge. Any insights on this?
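    One way to see the practical difference (sketched in Java rather than .NET 2.0, since the idea is the same): with raw threads you must decide the partitioning up front, whereas with a pool you submit one small task per record, or per batch, and only the pool size is fixed:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class RecordProcessing {
            public static void main(String[] args) throws InterruptedException {
                int recordCount = 50_000;

                // Pool approach: concurrency is capped by the pool size, not by how the data was chunked.
                ExecutorService pool = Executors.newFixedThreadPool(8);
                for (int i = 0; i < recordCount; i++) {
                    final int record = i;
                    pool.submit(() -> process(record)); // 50,000 cheap tasks, 8 threads doing the work
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
            }

            static void process(int record) {
                // placeholder for the real per-record work
            }
        }

    The pool never creates more worker threads than its configured size no matter how many records are submitted; what grows is the queue of pending tasks, which is why very large inputs are often fed in batches or through a bounded queue.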

    Read the article

  • Java threadpool functionality

    - by cpf
    Hi Stack Overflow, I need to make a program with a limited number of threads (currently using newFixedThreadPool), but I have the problem that all threads get created from the start, filling up memory at an alarming rate. I wish to prevent this; threads should only be created shortly before they are executed. For example: I call the program and instruct it to use 2 threads in the pool. The program should create and launch the first 2 threads immediately (obviously), create the next 2 to wait for the previous 2, and at that point wait until one or both of the first 2 have finished executing. I thought about extending Executor or FixedThreadPool or the like, but I have no clue how to start there and doubt it is the best solution. The easiest option would be to have my main thread sleep at intervals, which is not really good either... Thanks in advance!
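    One way to get that behaviour is to keep the fixed pool but make submission block (here via a Semaphore) so that only a few tasks exist beyond the ones currently running. The names below are illustrative; it is a sketch rather than a drop-in solution:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Semaphore;
        import java.util.concurrent.TimeUnit;

        public class ThrottledPool {
            private final ExecutorService pool;
            private final Semaphore slots;

            // poolSize workers, and at most poolSize + spare tasks alive at any moment.
            public ThrottledPool(int poolSize, int spare) {
                this.pool = Executors.newFixedThreadPool(poolSize);
                this.slots = new Semaphore(poolSize + spare);
            }

            // Blocks the caller until there is room, so work is created just-in-time.
            public void submit(Runnable task) throws InterruptedException {
                slots.acquire();
                pool.submit(() -> {
                    try {
                        task.run();
                    } finally {
                        slots.release();
                    }
                });
            }

            public void shutdown() throws InterruptedException {
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
            }

            public static void main(String[] args) throws InterruptedException {
                ThrottledPool p = new ThrottledPool(2, 2);
                for (int i = 0; i < 1000; i++) {
                    final int n = i;
                    p.submit(() -> System.out.println("task " + n + " on " + Thread.currentThread().getName()));
                }
                p.shutdown();
            }
        }

    With a pool of 2 and 2 spare permits, at most four submitted tasks exist at any moment; everything else is created only when a slot frees up.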

    Read the article

  • How do I use SDL_Threads properly?

    - by Anoymonous
    I am new to threads, SDL, and how graphics work in general. I've been looking through all of LazyFoo's SDL tutorials, and they have helped me greatly. But in his tutorials about multithreading, he commented that you should never call video functions from separate threads, or it might cause problems. I am curious how it should be done, as I still have only a vague understanding of graphics and threads. As one of my projects is a shoot'em up, I was wondering if I should create one thread that displays all the graphics, one thread that receives all the player input for the ship, and another thread for the enemy AI. If this is NOT how it should be done (I think it's wrong), does anyone have any advice on how graphics should be implemented alongside user input and enemy AI with threads? LazyFoo's tutorials are here: http://lazyfoo.net/SDL_tutorials/

    Read the article

  • Can you dynamically resize a java.util.concurrent.ThreadPoolExecutor while it still has tasks waitin

    - by Edward Shtern
    I'm working with a java.util.concurrent.ThreadPoolExecutor to process a number of items in parallel. Although the threading itself works fine, at times we've run into other resource constraints due to actions happening in the threads, which made us want to dial down the number of threads in the pool. I'd like to know if there's a way to dial down the number of threads while the threads are actually working. I know that you can call setMaximumPoolSize() and/or setCorePoolSize(), but these only resize the pool once threads become idle, and they don't become idle until there are no tasks waiting in the queue.
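    For what it's worth, a small sketch of the runtime-resizing calls; as noted above, surplus workers are only reclaimed as they go idle, so treat this as the mechanism rather than a way to pause threads mid-task:

        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.ThreadPoolExecutor;
        import java.util.concurrent.TimeUnit;

        public class ResizablePool {
            public static void main(String[] args) {
                ThreadPoolExecutor pool = new ThreadPoolExecutor(
                        8, 8, 30, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
                pool.allowCoreThreadTimeOut(true); // lets surplus threads die once they finally go idle

                // ... submit work here ...

                // Later, when the external resource is saturated, shrink the pool.
                pool.setCorePoolSize(2);
                pool.setMaximumPoolSize(2);
            }
        }

    Pausing work that is already running needs cooperation from the tasks themselves, for example checking a shared flag or acquiring a semaphore between items.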

    Read the article

  • Avoiding multiple thread spawns in pthreads

    - by madman
    Hi Stack Overflow, I have an application that is parallelized using pthreads. The application has an iterative routine, and a thread spawn within the routine (pthread_create and pthread_join) parallelizes the computation-intensive section of it. When I use an instrumenting tool like PIN to collect statistics, the tool reports statistics for several threads (number of threads x number of iterations). I believe this is because a new set of threads is spawned each time the routine is called. How can I ensure that I create the threads only once and that all successive calls reuse the threads that were created first? When I do the same with OpenMP and then collect the statistics, I see that the threads are created only once. Is that because of the OpenMP runtime? Thanks.

    Read the article
