Search Results

Search found 1638 results on 66 pages for 'multithreading'.

Page 39/66

  • Is this BlockingQueue susceptible to deadlock?

    - by unforgiven3
    I've been using this code as a queue that blocks on Dequeue() until an element is enqueued. I've used this code for a few years now in several projects, all with no issues... until now. I'm seeing a deadlock in some code I'm writing now, and in investigating the problem, my 'eye of suspicion' has settled on this BlockingQueue<T>. I can't prove it, so I figured I'd ask some people smarter than me to review it for potential issues. Can you guys see anything that might cause a deadlock in this code? public class BlockingQueue<T> { private readonly Queue<T> _queue; private readonly ManualResetEvent _event; /// <summary> /// Constructor /// </summary> public BlockingQueue() { _queue = new Queue<T>(); _event = new ManualResetEvent(false); } /// <summary> /// Read-only property to get the size of the queue /// </summary> public int Size { get { int count; lock (_queue) { count = _queue.Count; } return count; } } /// <summary> /// Enqueues element on the queue /// </summary> /// <param name="element">Element to enqueue</param> public void Enqueue(T element) { lock (_queue) { _queue.Enqueue(element); _event.Set(); } } /// <summary> /// Dequeues an element from the queue /// </summary> /// <returns>Dequeued element</returns> public T Dequeue() { T element; while (true) { if (Size == 0) { _event.Reset(); _event.WaitOne(); } lock (_queue) { if (_queue.Count == 0) continue; element = _queue.Dequeue(); break; } } return element; } /// <summary> /// Clears the queue /// </summary> public void Clear() { lock (_queue) { _queue.Clear(); } } }
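
    For comparison, one common way to get the same blocking behaviour without the Reset()/WaitOne() window is to wait and pulse on the lock object itself. The sketch below is illustrative only (it is not the poster's class) and assumes the plain .NET 2.0 Monitor API:

        using System.Collections.Generic;
        using System.Threading;

        public class MonitorBlockingQueue<T>
        {
            private readonly Queue<T> _queue = new Queue<T>();

            public void Enqueue(T element)
            {
                lock (_queue)
                {
                    _queue.Enqueue(element);
                    // Wake one waiting consumer while still holding the lock, so the
                    // signal cannot slip in between a count check and the wait.
                    Monitor.Pulse(_queue);
                }
            }

            public T Dequeue()
            {
                lock (_queue)
                {
                    // Re-check the condition in a loop to guard against spurious wake-ups.
                    while (_queue.Count == 0)
                        Monitor.Wait(_queue);   // releases the lock while waiting
                    return _queue.Dequeue();
                }
            }
        }

    Because the wait happens while the lock is held, there is no point at which an Enqueue can Set() and then be Reset() before the consumer starts waiting, which is the window the ManualResetEvent version appears to leave open between its Size check and WaitOne().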

    Read the article

  • Which async call to use for a DB connection while keeping the GUI responsive?

    - by Jade
    Hi, my application connects to MSSQL, but sometimes it takes a while and the GUI freezes. I would like to make the connection on another thread; I guess BeginInvoke would be the best way (I know about BackgroundWorker, but I would like to learn this approach). I have studied the MSDN page, but I did not understand which way is best to use. It also says that you should only use a callback when the thread that called the async method does not need to know the result... I don't understand that, since I believe I can set some variable from the other thread to "pass" the result anyway. I just need the GUI not to freeze while the connection is being established. Thank you for your advice.
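
    One hedged sketch of the asynchronous-delegate pattern being asked about, assuming a .NET Framework WinForms app; the connection string, statusLabel and connectButton_Click are placeholders, not names from the question:

        using System;
        using System.Data.SqlClient;
        using System.Windows.Forms;

        public partial class MainForm : Form
        {
            private delegate SqlConnection ConnectDelegate();

            private void connectButton_Click(object sender, EventArgs e)
            {
                ConnectDelegate connect = OpenConnection;
                // Runs OpenConnection on a thread-pool thread; the UI thread returns immediately.
                connect.BeginInvoke(OnConnected, connect);
            }

            private SqlConnection OpenConnection()
            {
                var conn = new SqlConnection("Server=...;Database=...;Integrated Security=true"); // placeholder
                conn.Open();   // the slow part now happens off the UI thread
                return conn;
            }

            private void OnConnected(IAsyncResult ar)
            {
                var connect = (ConnectDelegate)ar.AsyncState;
                SqlConnection conn = connect.EndInvoke(ar);   // always call EndInvoke; it rethrows any connect exception

                // Still on a thread-pool thread here, so marshal the UI update back to the UI thread.
                BeginInvoke((MethodInvoker)delegate
                {
                    statusLabel.Text = "Connected to " + conn.Database;   // statusLabel is a placeholder control
                });
            }
        }

    Wrapping the EndInvoke call in a try/catch is a sensible addition, since a failed Open() surfaces there rather than in the click handler.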

    Read the article

  • How does one implement a truly asynchronous Java thread?

    - by Ritesh M Nayak
    I have a function that needs to perform two operations, one which finishes fast and one which takes a long time to run. I want to be able to delegate the long-running operation to a thread; I don't care when the thread finishes, but the thread does need to complete. I implemented this as shown below, but my second operation never gets done, as the function exits after the start() call. How can I ensure that the function returns but the second-operation thread still finishes its execution and is not dependent on the parent thread? public void someFunction(String data) { smallOperation(); SecondOperation a = new SecondOperation(); Thread th = new Thread(a); th.start(); } class SecondOperation implements Runnable { public void run() { // doSomething long running } }

    Read the article

  • Can I avoid a threaded UDP socket in Python dropping data?

    - by 666craig
    First off, I'm new to Python and learning on the job, so be gentle! I'm trying to write a threaded Python app for Windows that reads data from a UDP socket (thread-1), writes it to file (thread-2), and displays the live data (thread-3) to a widget (gtk.Image using a gtk.gdk.pixbuf). I'm using queues for communicating data between threads. My problem is that if I start only threads 1 and 3 (so skip the file writing for now), it seems that I lose some data after the first few samples. After this drop it looks fine. Even by letting thread 1 complete before running thread 3, this apparent drop is still there. Apologies for the length of code snippet (I've removed the thread that writes to file), but I felt removing code would just prompt questions. Hope someone can shed some light :-)

        import socket
        import threading
        import Queue
        import numpy
        import gtk
        gtk.gdk.threads_init()
        import gtk.glade
        import pygtk

        class readFromUDPSocket(threading.Thread):

            def __init__(self, socketUDP, readDataQueue, packetSize, numScans):
                threading.Thread.__init__(self)
                self.socketUDP = socketUDP
                self.readDataQueue = readDataQueue
                self.packetSize = packetSize
                self.numScans = numScans

            def run(self):
                for scan in range(1, self.numScans + 1):
                    buffer = self.socketUDP.recv(self.packetSize)
                    self.readDataQueue.put(buffer)
                self.socketUDP.close()
                print 'myServer finished!'

        class displayWithGTK(threading.Thread):

            def __init__(self, displayDataQueue, image, viewArea):
                threading.Thread.__init__(self)
                self.displayDataQueue = displayDataQueue
                self.image = image
                self.viewWidth = viewArea[0]
                self.viewHeight = viewArea[1]
                self.displayData = numpy.zeros((self.viewHeight, self.viewWidth, 3), dtype=numpy.uint16)

            def run(self):
                scan = 0
                try:
                    while True:
                        if not scan % self.viewWidth: scan = 0
                        buffer = self.displayDataQueue.get(timeout=0.1)
                        self.displayData[:, scan, 0] = numpy.fromstring(buffer, dtype=numpy.uint16)
                        self.displayData[:, scan, 1] = numpy.fromstring(buffer, dtype=numpy.uint16)
                        self.displayData[:, scan, 2] = numpy.fromstring(buffer, dtype=numpy.uint16)
                        gtk.gdk.threads_enter()
                        self.myPixbuf = gtk.gdk.pixbuf_new_from_data(self.displayData.tostring(), gtk.gdk.COLORSPACE_RGB,
                                                                     False, 8, self.viewWidth, self.viewHeight, self.viewWidth * 3)
                        self.image.set_from_pixbuf(self.myPixbuf)
                        self.image.show()
                        gtk.gdk.threads_leave()
                        scan += 1
                except Queue.Empty:
                    print 'myDisplay finished!'
                    pass

        def quitGUI(obj):
            print 'Currently active threads: %s' % threading.enumerate()
            gtk.main_quit()

        if __name__ == '__main__':
            # Create socket (IPv4 protocol, datagram (UDP)) and bind to address
            socketUDP = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            host = '192.168.1.5'
            port = 1024
            socketUDP.bind((host, port))
            # Data parameters
            samplesPerScan = 256
            packetsPerSecond = 1200
            packetSize = 512
            duration = 1  # For now, set a fixed duration to log data
            numScans = int(packetsPerSecond * duration)
            # Create array to store data
            data = numpy.zeros((samplesPerScan, numScans), dtype=numpy.uint16)
            # Create queue for displaying from
            readDataQueue = Queue.Queue(numScans)
            # Build GUI from Glade XML file
            builder = gtk.Builder()
            builder.add_from_file('GroundVue.glade')
            window = builder.get_object('mainwindow')
            window.connect('destroy', quitGUI)
            view = builder.get_object('viewport')
            image = gtk.Image()
            view.add(image)
            viewArea = (1200, samplesPerScan)
            # Instantiate & start threads
            myServer = readFromUDPSocket(socketUDP, readDataQueue, packetSize, numScans)
            myDisplay = displayWithGTK(readDataQueue, image, viewArea)
            myServer.start()
            myDisplay.start()
            gtk.gdk.threads_enter()
            gtk.main()
            gtk.gdk.threads_leave()
            print 'gtk.main finished!'

    Read the article

  • How do I create a thread-safe write-once read-many value in Java?

    - by Software Monkey
    This is a problem I encounter frequently in working with more complex systems and which I have never figured out a good way to solve. It usually involves variations on the theme of a shared object whose construction and initialization are necessarily two distinct steps. This is generally because of architectural requirements, similar to applets, so answers that suggest I consolidate construction and initialization are not useful. By way of example, let's say I have a class that is structured to fit into an application framework like so: public class MyClass { private /*ideally-final*/ SomeObject someObject; MyClass() { someObject=null; } public void startup() { someObject=new SomeObject(...arguments from environment which are not available until startup is called...); } public void shutdown() { someObject=null; // this is not necessary, I am just expressing the intended scope of someObject explicitly } } I can't make someObject final since it can't be set until startup() is invoked. But I would really like it to reflect its write-once semantics and be able to directly access it from multiple threads, preferably avoiding synchronization. With the idea being to express and enforce a degree of final-ness, I conjecture that I could create a generic container, like so: public class WoRmObject<T> { private T object; WoRmObject() { object=null; } public WoRmObject set(T val) { object=val; return this; } public T get() { return object; } } and then in MyClass, above, do: private final WoRmObject<SomeObject> someObject; MyClass() { someObject=new WoRmObject<SomeObject>(); } public void startup() { someObject.set(new SomeObject(...arguments from environment which are not available until startup is called...)); } Which raises some questions for me: Is there a better way, or an existing Java class (would have to be available in Java 4)? Is this thread-safe, provided that no other thread accesses someObject.get() until after its set() has been called? The other threads will only invoke methods on MyClass between startup() and shutdown() - the framework guarantees this. Given the completely unsynchronized WoRmObject container, is it ever possible under either JMM to see a value of object which is neither null nor a reference to a SomeObject? In other words, does the JMM always guarantee that no thread can observe the memory of an object to be whatever values happened to be on the heap when the object was allocated?

    Read the article

  • How to handle all unhandled exceptions when using Task Parallel Library?

    - by Buu Nguyen
    I'm using the TPL (Task Parallel Library) in .NET 4.0. I want to be able to centralize the handling logic for all unhandled exceptions by using the Thread.GetDomain().UnhandledException event. However, in my application the event is never fired for threads started with TPL code, e.g. Task.Factory.StartNew(...). The event is indeed fired if I use something like new Thread(threadStart).Start(). This MSDN article suggests using Task.Wait() to catch the AggregateException when working with the TPL, but that is not what I want, because it is not a "centralized" enough mechanism. Does anyone else experience the same problem, or is it just me? Do you have any solution for this?
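
    For reference, a sketch of the two hooks the TPL itself offers in .NET 4 (neither is the AppDomain event mentioned above; Log is a placeholder for whatever the central handler would be):

        using System;
        using System.Threading.Tasks;

        static class TaskExceptionHooks
        {
            static void Main()
            {
                // 1. Process-wide hook: raised when a faulted Task is garbage-collected
                //    without anyone having observed its exception (so it fires late, on finalization).
                TaskScheduler.UnobservedTaskException += (sender, e) =>
                {
                    Log(e.Exception);      // e.Exception is an AggregateException
                    e.SetObserved();       // stops .NET 4 from tearing the process down
                };

                // 2. Per-task hook: a continuation that only runs if the antecedent faulted.
                Task.Factory.StartNew(() => { throw new InvalidOperationException("boom"); })
                    .ContinueWith(t => Log(t.Exception), TaskContinuationOptions.OnlyOnFaulted);

                Console.ReadLine();
            }

            static void Log(AggregateException ex)   // placeholder for the poster's central handler
            {
                foreach (var inner in ex.Flatten().InnerExceptions)
                    Console.Error.WriteLine(inner);
            }
        }

    The caveat is that UnobservedTaskException only fires when the faulted task is finalized, so it works as a logging safety net rather than a synchronous handler.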

    Read the article

  • WinForm-style Invoke() in unmanaged C++

    - by Matt Green
    I've been playing with a DataBus-type design for a hobby project, and I ran into an issue. Back-end components need to notify the UI that something has happened. My implementation of the bus delivers the messages synchronously with respect to the sender. In other words, when you call Send(), the method blocks until all the handlers have been called. (This allows callers to use stack memory management for event objects.) However, consider the case where an event handler updates the GUI in response to an event. If the handler is called, and the message sender lives on another thread, then the handler cannot update the GUI due to Win32's GUI elements having thread affinity. More dynamic platforms such as .NET allow you to handle this by calling a special Invoke() method to move the method call (and the arguments) to the UI thread. I'm guessing they use the .NET parking window or the like for these sorts of things. A morbid curiosity was born: can we do this in C++, even if we limit the scope of the problem? Can we make it nicer than existing solutions? I know Qt does something similar with the moveToThread() function. By nicer, I'll mention that I'm specifically trying to avoid code of the following form: if(! this->IsUIThread()) { Invoke(MainWindowPresenter::OnTracksAdded, e); return; } being at the top of every UI method. This dance was common in WinForms when dealing with this issue. I think this sort of concern should be isolated from the domain-specific code, with a wrapper object made to deal with it. My implementation consists of:
    - DeferredFunction - a functor that stores the target method in a FastDelegate and deep-copies the single event argument. This is the object that is sent across thread boundaries.
    - UIEventHandler - responsible for dispatching a single event from the bus. When the Execute() method is called, it checks the thread ID. If it does not match the UI thread ID (set at construction time), a DeferredFunction is allocated on the heap with the instance, method, and event argument. A pointer to it is sent to the UI thread via PostThreadMessage(). Finally, a hook function for the thread's message pump is used to call the DeferredFunction and de-allocate it. Alternatively, I can use a message loop filter, since my UI framework (WTL) supports them.
    Ultimately, is this a good idea? The whole message hooking thing makes me leery. The intent is certainly noble, but are there any pitfalls I should know about? Or is there an easier way to do this?
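
    For readers less familiar with the WinForms idiom being referenced, the "dance" the poster wants to hide looks roughly like this in C# (illustrative only; OnTracksAdded, its EventArgs argument and trackList are placeholders):

        using System;
        using System.Windows.Forms;

        public partial class MainWindow : Form
        {
            // Called from a background thread by the bus.
            public void OnTracksAdded(EventArgs e)   // placeholder handler and argument type
            {
                if (InvokeRequired)
                {
                    // Not on the UI thread: repost the same call (and its argument)
                    // to the UI thread's message loop, then bail out.
                    BeginInvoke(new Action<EventArgs>(OnTracksAdded), e);
                    return;
                }

                // Safe to touch controls from here on.
                trackList.Items.Add(e.ToString());   // trackList is a placeholder control
            }
        }

    The DeferredFunction/PostThreadMessage design described above plays the role that Control.BeginInvoke and the message loop play here.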

    Read the article

  • Proper way to have an endless worker thread?

    - by Neil N
    I have an object that requires a lot of initialization (1-2 seconds on a beefy machine), though once it is initialized it only takes about 20 milliseconds to do a typical "job". To prevent it from being re-initialized every time an app wants to use it (which could be 50 times a second, or not at all for minutes, in typical usage), I decided to give it a job queue and have it run on its own thread, checking to see if there is any work for it in the queue. However, I'm not entirely sure how to make a thread that runs indefinitely with or without work. Here's what I have so far; any critique is welcome: private void DoWork() { while (true) { if (JobQue.Count > 0) { // do work on JobQue.Pop() } else { System.Threading.Thread.Sleep(50); } } } Afterthought: I was thinking I may need to kill this thread gracefully instead of letting it run forever, so I think I will add a Job type that tells the thread to end. Any thoughts on how to end a thread like this are also appreciated.
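
    A hedged sketch of one common shape for such a worker: block on the queue instead of polling with Sleep(50), and use a null "poison pill" job to end the thread gracefully. JobWorker and the Action-based jobs are placeholders for the poster's Job/JobQue types, and the parameterless Action delegate assumes .NET 3.5:

        using System;
        using System.Collections.Generic;
        using System.Threading;

        public class JobWorker
        {
            private readonly Queue<Action> _jobs = new Queue<Action>();
            private readonly Thread _thread;

            public JobWorker()
            {
                _thread = new Thread(DoWork);
                _thread.IsBackground = true;
                _thread.Start();
            }

            public void Enqueue(Action job)
            {
                lock (_jobs)
                {
                    _jobs.Enqueue(job);
                    Monitor.Pulse(_jobs);    // wake the worker instead of letting it poll
                }
            }

            // A null job acts as the "poison pill" that ends the loop.
            public void Stop()
            {
                Enqueue(null);
                _thread.Join();
            }

            private void DoWork()
            {
                while (true)
                {
                    Action job;
                    lock (_jobs)
                    {
                        while (_jobs.Count == 0)
                            Monitor.Wait(_jobs);   // sleeps without burning CPU; no Sleep(50) needed
                        job = _jobs.Dequeue();
                    }
                    if (job == null)
                        return;                    // graceful shutdown
                    job();
                }
            }
        }

    Any jobs queued before Stop() is called are still processed before the sentinel ends the thread.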

    Read the article

  • Boost::Thread or fork()

    - by osmano807
    I'm testing boost::thread on a system. It turns out I need something that acts like fork(), because one thread modifies the other's variables, even member variables of a class. Should I do the project using fork(), or is there some alternative that still uses boost::thread? Basically, I run this program on Linux and maybe FreeBSD.

    Read the article

  • Reading ResultSet from multiple threads

    - by superdario
    Hello, in the database I have a definition table that the application reads once upon starting. This definition table rarely changes, so it makes sense to read it once and restart the application every time it changes. However, after the table is read (put into a ResultSet), it will be read by multiple handlers running in their own threads. How do you suggest accomplishing this? My idea was to populate a CachedRowSet and then create a copy of this set (through the createCopy() method) for each handler every time a new request comes in. Do you think this is wise? Does it offer good performance? Thanks.

    Read the article

  • multi-thread in MS Access, async processing

    - by LanguaFlash
    I know that title sounds crazy but here is my situation. After a certain user event I need to update a couple tables that are "unrelated" to what the user is currently doing. Currently this takes a couple seconds to execute and causes the user a certain amount of frustration. Is there a way to perform my update in a second process or in a manner that doesn't "freeze" the UI of my app while it is processing? Thanks

    Read the article

  • Python: How to close a UDP socket while it is waiting for data in recv?

    - by alexroat
    Hello, let's consider this code in Python:

        import socket
        import threading
        import sys
        import select

        class UDPServer:
            def __init__(self):
                self.s = None
                self.t = None

            def start(self, port=8888):
                if not self.s:
                    self.s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                    self.s.bind(("", port))
                    self.t = threading.Thread(target=self.run)
                    self.t.start()

            def stop(self):
                if self.s:
                    self.s.close()
                    self.t.join()
                    self.t = None

            def run(self):
                while True:
                    try:
                        # receive data
                        data, addr = self.s.recvfrom(1024)
                        self.onPacket(addr, data)
                    except:
                        break
                self.s = None

            def onPacket(self, addr, data):
                print addr, data

        us = UDPServer()
        while True:
            sys.stdout.write("UDP server> ")
            cmd = sys.stdin.readline()
            if cmd == "start\n":
                print "starting server..."
                us.start(8888)
                print "done"
            elif cmd == "stop\n":
                print "stopping server..."
                us.stop()
                print "done"
            elif cmd == "quit\n":
                print "Quitting ..."
                us.stop()
                break
        print "bye bye"

    It runs an interactive shell with which I can start and stop a UDP server. The server is implemented through a class which launches a thread containing an infinite recv/onPacket loop inside a try/except block, which should detect the error and then exit the loop. What I expect is that when I type "stop" in the shell, the socket is closed and an exception is raised by the recvfrom function because of the invalidation of the file descriptor. Instead, it seems that recvfrom still blocks the thread waiting for data even after the close call. Why this strange behavior? I've always used this pattern to implement a UDP server in C++ and Java, and it has always worked. I've also tried select, passing a list containing the socket as the xread argument, in order to get a file-descriptor event from select instead of from recvfrom, but select seems to be insensitive to the close too. I need a single piece of code which maintains the same behavior on Linux and Windows with Python 2.5 - 2.6. Thanks.

    Read the article

  • Catching the redirected address from NSURLConnection

    - by Vic
    I'm working on software that follows an HTTP redirect which is dynamically calculated by the server depending on a parameter. I don't want to show the primary server in Mobile Safari, but rather the redirected address only. The following code works: request = [NSMutableURLRequest requestWithURL:originalUrl cachePolicy:NSURLRequestReloadIgnoringCacheData timeoutInterval:10]; [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error]; // Extract the redirected URL target = [response URL]; The problem is that the server requires several seconds to answer. sendSynchronousRequest blocks the app completely for that time, which is messy; I can't even display the "Busy" animation. Does anyone know how I can retrieve the redirected address asynchronously, without Safari appearing in the meantime with the redirecting server's URL, or how I can display some sort of "Be patient" animation during the sendSynchronousRequest? What disadvantages would running sendSynchronousRequest on another thread have?

    Read the article

  • Capturing stdout from an imported module in wxpython and sending it to a textctrl, without blocking the GUI

    - by splafe
    There are a lot of very similar questions to this, but I can't find one that applies specifically to what I'm trying to do. I have a simulation (written in SimPy) that I'm writing a GUI for; the main output of the simulation is text, sent to the console from 'print' statements. Now, I thought the simplest way would be to create a separate module GUI.py and import my simulation program into it:

        import osi_model

    I want all the print statements to be captured by the GUI and appear inside a TextCtrl, of which there are countless examples on here, along these lines:

        class MyFrame(wx.Frame):
            def __init__(self, *args, **kwds):
                # <general frame initialisation stuff..>
                redir = RedirectText(self.txtCtrl_1)
                sys.stdout = redir

        class RedirectText:
            def __init__(self, aWxTextCtrl):
                self.out = aWxTextCtrl

            def write(self, string):
                self.out.WriteText(string)

    I am also starting my simulation from a 'Go' button:

        def go_btn_click(self, event):
            print 'GO'
            self.RT = threading.Thread(target=osi_model.RunThis())
            self.RT.start()

    This all works fine, and the output from the simulation module is captured by the TextCtrl, except the GUI locks up and becomes unresponsive - I still need it to be accessible (at the very minimum to have a 'Stop' button). I'm not sure if this is a botched attempt at creating a new thread that I've done here, but I assume a new thread will be needed at some stage in this process. People suggest using wx.CallAfter, but I'm not sure how to go about this considering the imported module doesn't know about wx, and I also can't realistically go through the entire simulation architecture and change all the print statements to wx.CallAfter; any attempt to capture the shell from inside the imported simulation program leads to the program crashing. Does anybody have any ideas about how I can best achieve this? All I really need is for all console text to be captured by a TextCtrl while the GUI remains responsive, with all text coming solely from an imported module. (Also, a secondary question regarding the Stop button: is it bad form to just kill the simulation thread?) Thanks, Duncan

    Read the article

  • Wait until the user presses Enter in a textbox on another form, and return the value

    - by ekapek
    Hello, I am new to C# and I'm trying to do something like this:

        myList = list of 1000+ string values;
        1. StartNewThreads(50);            // 50 is the number of new threads
        2. DoSth1(next value from myList);
        3. DoSth2();
        4. var value = {
               ShowNewImageForm();         // show only if no other ImageForm is displayed; if another is shown - wait
               WaitUntilUserPressEnterInTextBox();
               ReturnValueFromTextbox();
           }
        5. DoSth3();
        6. StartNewThread();

    For now I have: foreach (String s in myList) { DoSth1(s); DoSth2(); DoSth3(); } And now I'm looking for ideas for points 1, 3 and 6. Can you suggest how to resolve this? How do I start 50 threads? How do I get the value from a textbox on another form when the user presses Enter?
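
    A hedged sketch of point 4 only, assuming WinForms: serialize the prompt so a single form is shown at a time, and block the calling worker thread (never the UI thread) until Enter is pressed. ImagePrompt, AskUser and the bare Form/TextBox are placeholders for the poster's ImageForm:

        using System;
        using System.Threading;
        using System.Windows.Forms;

        public static class ImagePrompt
        {
            // Only one prompt window at a time; other worker threads wait here.
            private static readonly object _oneFormAtATime = new object();

            // mainForm is any control created on the UI thread (used to marshal onto it).
            public static string AskUser(Control mainForm)
            {
                lock (_oneFormAtATime)
                {
                    string result = null;
                    var done = new ManualResetEvent(false);

                    mainForm.BeginInvoke((MethodInvoker)delegate
                    {
                        var form = new Form();                       // placeholder for the poster's ImageForm
                        var box = new TextBox { Dock = DockStyle.Top };
                        form.Controls.Add(box);
                        box.KeyDown += (s, e) =>
                        {
                            if (e.KeyCode == Keys.Enter)
                            {
                                result = box.Text;
                                form.Close();
                            }
                        };
                        form.FormClosed += (s, e) => done.Set();     // unblock the worker thread
                        form.Show();                                 // closing without Enter returns null
                    });

                    done.WaitOne();   // the worker thread blocks here; the UI thread stays free
                    return result;
                }
            }
        }

    AskUser must be called from the worker threads, not from the UI thread (otherwise WaitOne would block the message loop). Point 1 can then be a plain loop that starts new Thread(...) instances over slices of myList, with each worker calling ImagePrompt.AskUser when it reaches step 4.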

    Read the article

  • How can two threads access a common array of buffers with minimal blocking? (C#)

    - by Jelly Amma
    Hello, I'm working on an image-processing application where I have two threads on top of my main thread: (1) CameraThread, which captures images from the webcam and writes them into a buffer, and (2) ImageProcessingThread, which takes the latest image from that buffer for filtering. The reason this is multithreaded is that speed is critical: I need CameraThread to keep grabbing pictures and making the latest capture ready for ImageProcessingThread to pick up while it's still processing the previous image. My problem is finding a fast and thread-safe way to access that common buffer, and I've figured that, ideally, it should be a triple buffer (image[3]) so that if ImageProcessingThread is slow, CameraThread can keep writing to the two other images, and vice versa. What sort of locking mechanism would be the most appropriate for this to be thread-safe? I looked at the lock statement, but it seems like it would make a thread block, waiting for the other one to finish, and that would be against the point of triple buffering. Thanks in advance for any idea or advice. J.
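
    One low-blocking shape for this, offered as a sketch rather than a literal three-element array: a single "latest frame" slot swapped with Interlocked.Exchange, with the producer recycling whichever frame it displaced. With one spare buffer on each side this gives the same effect as the classic triple-buffer scheme:

        using System.Threading;

        public class LatestFrameBuffer<T> where T : class
        {
            private T _latest;   // most recent frame not yet consumed (null if none)

            // Called by CameraThread: publish a new frame and get back whichever
            // frame it replaced (if any), so its buffer can be reused without allocation.
            public T Publish(T frame)
            {
                return Interlocked.Exchange(ref _latest, frame);
            }

            // Called by ImageProcessingThread: take the newest frame, or null if
            // nothing new arrived since the last call. Never blocks.
            public T TakeLatest()
            {
                return Interlocked.Exchange(ref _latest, null);
            }
        }

    A plain lock would also work here and is cheap when uncontended; the important property is that neither thread ever waits for the other's long operation, only for the pointer swap.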

    Read the article

  • Ruby: would using Fibers increase my DB insert throughput?

    - by Zombies
    Currently I am using Ruby 1.9.1 and the 'ruby-mysql' gem, which, unlike the 'mysql' gem, is written in Ruby only. This is pretty slow, actually, as it seems to insert at a rate of almost 1 per second (SLOOOOOWWWWWW). And I have a lot of inserts to make too; it's pretty much what this script ultimately does. I am using just 1 connection (since I am using just one thread). I am hoping to speed things up by creating a fiber that will: create a new DB connection, insert 1-3 records, and close the DB connection. I would imagine launching 20-50 of these would greatly increase DB throughput. Am I correct to go down this route? I feel that this is the best option, as opposed to refactoring all of my DB code :(

    Read the article

  • How to improve multi-threaded access to Cache (custom implementation)

    - by Andy
    I have a custom Cache implementation which allows caching of TCacheable<TKey> descendants using the LRU (Least Recently Used) cache-replacement algorithm. Every time an element is accessed, it is bubbled up to the top of the LRU queue using the following synchronized function: // a single instance is created to handle all TCacheable<T> elements public class Cache { private object syncQueue = new object(); private void topQueue(TCacheable<T> el) { lock (syncQueue) if (newest != el) { if (el.elder != null) el.elder.newer = el.newer; if (el.newer != null) el.newer.elder = el.elder; if (oldest == el) oldest = el.newer; if (oldest == null) oldest = el; if (newest != null) newest.newer = el; el.newer = null; el.elder = newest; newest = el; } } } The bottleneck in this function is the lock() operator, which limits cache access to just one thread at a time. Question: is it possible to get rid of lock(syncQueue) in this function while still preserving the queue integrity?
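
    The lock cannot simply be removed, because the neighbour pointers have to change as an atomic group, but its frequency can be reduced. A hedged sketch of one common approach, assuming .NET 4's ConcurrentQueue is available: record touches lock-free on the hot path and apply them in batches, accepting a slightly stale LRU order (the class and names below are illustrative, not part of the poster's code):

        using System;
        using System.Collections.Concurrent;

        // Batches "recently used" notifications so the LRU relink (and its lock)
        // runs at drain time instead of on every cache hit.
        public class LruTouchBatcher<TItem>
        {
            private readonly ConcurrentQueue<TItem> _touched = new ConcurrentQueue<TItem>();
            private readonly Action<TItem> _moveToFront;   // e.g. the existing topQueue

            public LruTouchBatcher(Action<TItem> moveToFront)
            {
                _moveToFront = moveToFront;
            }

            // Hot path: lock-free, called on every cache hit.
            public void Touch(TItem item)
            {
                _touched.Enqueue(item);
            }

            // Cold path: called occasionally (on insert/evict, or from a timer).
            public void Drain()
            {
                TItem item;
                while (_touched.TryDequeue(out item))
                    _moveToFront(item);
            }
        }

    Drain() still ends up in the existing topQueue (and its lock), but the cache hit itself no longer takes the lock, so readers stop contending with each other; the price is that the LRU order lags slightly behind reality.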

    Read the article

  • stop thread that does not get interrupted

    - by prmatta
    I have a thread that sits and reads objects off of an ObjectInputStream: public void run() { try { ois = new ObjectInputStream(clientSocket.getInputStream()); Object o; while ((o = ois.readObject()) != null) { //do something with object } } catch (Exception ex) { //Log exception } } readObject does not throw InterruptedException and as far as I can tell, no exception is thrown when this thread is interrupted. How do I stop this thread?

    Read the article

  • c++11 atomic ordering: extended total order memory_order_seq_cst for locks

    - by itaj
    There's this note in C++11 29.3 p3: [ Note: Although it is not explicitly required that S include locks, it can always be extended to an order that does include lock and unlock operations, since the ordering between those is already included in the "happens before" ordering. - end note ] What does it mean by "always"? I can understand that any particular implementation can be designed to support such an extended S. But for some general implementation that wasn't designed for it, I don't see how S can always be extended in that way. I sent this question to comp.std.c++ but got no answers there. http://groups.google.com/group/comp.std.c++/browse_frm/thread/5242fa70d0594d1b#

    Read the article

  • What is the optimal number of threads for performing IO operations in java?

    - by marc
    In Goetz's "Java Concurrency in Practice", in a footnote on page 101, he writes "For computational problems like this that do not I/O and access no shared data, Ncpu or Ncpu+1 threads yield optimal throughput; more threads do not help, and may in fact degrade performance..." My question is, when performing I/O operations such as file writing, file reading, file deleting, etc, are there guidelines for the number of threads to use to achieve maximum performance? I understand this will be just a guide number, since disk speeds and a host of other factors play into this. Still, I'm wondering: can 20 threads write 1000 separate files to disk faster than 4 threads can on a 4-cpu machine?

    Read the article

  • How to write a Silverlight threading function in another file or project

    - by Piyush
    I am using a three-tier architecture. I have two projects, SilverlightUI and UIController. SilverlightUI contains only UI pages and controls, while UIController contains all the WCF service proxies. Now I have created threads to update my controls dynamically and to do processing in parallel. As per the requirement, I want to define all the thread functionality in the UIController project. What should I do? Currently what I am doing is: private void Button_Click(object sender, RoutedEventArgs e) { StartThreads(); } private void StartThreads() { Thread _thread1 = new Thread(DoThread1); _thread1.Start(); } public static void DoThread1() { _data1.Dispatcher.BeginInvoke(delegate() { _data1.Text = _count1.ToString(); }); System.Threading.Thread.Sleep(1000); } I want to write the DoThread1() method in the UIController project and call that function from Button_Click() here.
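
    A sketch of one way to split this (CounterWorker and its loop are placeholders, not the actual requirement): the UIController project exposes a plain worker that reports through a callback and never touches controls, while the Silverlight page decides how to marshal that callback onto the UI thread with its Dispatcher. _data1 is assumed to be the TextBlock from the question:

        // --- UIController project: no reference to any UI types ---
        using System;
        using System.Threading;

        namespace UIController
        {
            public class CounterWorker
            {
                // The UI layer supplies 'report'; the worker thread never touches controls directly.
                public void Start(Action<string> report)
                {
                    var worker = new Thread(() =>
                    {
                        for (int count = 1; count <= 100; count++)   // placeholder work loop
                        {
                            report(count.ToString());
                            Thread.Sleep(1000);
                        }
                    });
                    worker.Start();
                }
            }
        }

        // --- SilverlightUI project: the code-behind owns the Dispatcher marshalling ---
        using System.Windows;
        using System.Windows.Controls;

        public partial class MainPage : UserControl
        {
            private void Button_Click(object sender, RoutedEventArgs e)
            {
                var worker = new UIController.CounterWorker();
                // Marshal each report back to the UI thread before touching _data1.
                worker.Start(text => Dispatcher.BeginInvoke(() => _data1.Text = text));
            }
        }

    Keeping the Dispatcher call in the UI project is the point of the split: UIController stays free of any Silverlight UI references and can be unit-tested on its own.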

    Read the article

  • Can getAttribute() method of Tomcat ServletContext implementation be called without synchronization?

    - by oo_olo_oo
    I would like to read some parameters during servlet initialization (in the init() method) and store them among the servlet context attributes (using getServletContext().setAttribute()). I would like to read these parameters later, during request processing (using getServletContext().getAttribute()), so multiple threads could do this simultaneously. My question is whether such an approach is safe. Can I be sure that multi-threaded calls to getAttribute() don't mess up any internal state of the servlet context? Please take into account that I'm not going to call setAttribute() anywhere besides the initialization, so only calls to getAttribute() will be made from multiple threads. But depending on the internal implementation, this could also be dangerous. So, any information about Tomcat's implementation would be appreciated.

    Read the article
