Search Results

Search found 1638 results on 66 pages for 'multithreading'.

Page 39/66 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • Issue accessing class variable from thread.

    - by James
    Hello, the code below is meant to take an ArrayList of Product objects as input, spawn a thread for each product (adding the product to the ArrayList 'products'), check product image (product.ImageURL) availability, remove the products without images (removing them from the ArrayList 'products'), and return an ArrayList of only the products whose images are available.

        package com.catgen.thread;

        import java.util.ArrayList;
        import java.util.Iterator;
        import java.util.List;

        import com.catgen.Product;
        import com.catgen.Utils;

        public class ProductFilterThread extends Thread {
            private Product product;
            private List<Product> products = new ArrayList<Product>();

            public ProductFilterThread() {
            }

            public ProductFilterThread(Product product) {
                this.product = product;
            }

            public synchronized void addProduct(Product product) {
                System.out.println("Before add: " + getProducts().size());
                getProducts().add(product);
                System.out.println("After add: " + getProducts().size());
            }

            public synchronized void removeProduct(Product product) {
                System.out.println("Before rem: " + getProducts().size());
                getProducts().remove(product);
                System.out.println("After rem: " + getProducts().size());
            }

            public synchronized List<Product> getProducts() {
                return this.products;
            }

            public synchronized void setProducts(List<Product> products) {
                this.products = products;
            }

            public void run() {
                boolean imageExists = Utils.fileExists(this.product.ImageURL);
                if (!imageExists) {
                    System.out.println(this.product.ImageURL);
                    removeProduct(this.product);
                }
            }

            public List<Product> getProductsWithImageOnly(List<Product> products) {
                ProductFilterThread pft = null;
                try {
                    List<ProductFilterThread> threads = new ArrayList<ProductFilterThread>();
                    for (Product product : products) {
                        pft = new ProductFilterThread(product);
                        addProduct(product);
                        pft.start();
                        threads.add(pft);
                    }
                    Iterator<ProductFilterThread> threadsIter = threads.iterator();
                    while (threadsIter.hasNext()) {
                        ProductFilterThread thread = threadsIter.next();
                        thread.join();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
                System.out.println("Total returned products = " + getProducts().size());
                return getProducts();
            }
        }

    Calling statement:

        displayProducts = new ProductFilterThread().getProductsWithImageOnly(displayProducts);

    Here, when addProduct(product) is called from within getProductsWithImageOnly(), getProducts() returns the list of products, but that's not the case (no products are returned) when removeProduct() is called by a thread, so the products without images are never removed. As a result, all the products are returned by the module whether or not they have images. What can be the problem here? Thanks in advance. James.
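
    One thing worth checking (a reading of the code as posted, not a verdict): each ProductFilterThread created in the loop has its own empty products list, so the removeProduct() call inside run() operates on that worker's list, not on the list the calling instance populated with addProduct(). Below is a minimal Java sketch of one way to share a single list between the workers; Product and Utils.fileExists are assumed to behave as described in the question.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        public class ProductImageFilter {

            // Every worker gets a reference to the same shared list.
            static class Worker extends Thread {
                private final Product product;
                private final List<Product> shared;

                Worker(Product product, List<Product> shared) {
                    this.product = product;
                    this.shared = shared;
                }

                @Override
                public void run() {
                    if (!Utils.fileExists(product.ImageURL)) {
                        shared.remove(product);   // removes from the shared list, not a private copy
                    }
                }
            }

            public static List<Product> withImagesOnly(List<Product> input) throws InterruptedException {
                // Synchronized wrapper so concurrent remove() calls are safe.
                List<Product> shared = Collections.synchronizedList(new ArrayList<Product>(input));
                List<Worker> workers = new ArrayList<Worker>();
                for (Product p : input) {
                    Worker w = new Worker(p, shared);
                    w.start();
                    workers.add(w);
                }
                for (Worker w : workers) {
                    w.join();
                }
                return new ArrayList<Product>(shared);
            }
        }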

    Read the article

  • Is this BlockingQueue susceptible to deadlock?

    - by unforgiven3
    I've been using this code as a queue that blocks on Dequeue() until an element is enqueued. I've used this code for a few years now in several projects, all with no issues... until now. I'm seeing a deadlock in some code I'm writing now, and in investigating the problem, my 'eye of suspicion' has settled on this BlockingQueue<T>. I can't prove it, so I figured I'd ask some people smarter than me to review it for potential issues. Can you guys see anything that might cause a deadlock in this code?

        public class BlockingQueue<T>
        {
            private readonly Queue<T> _queue;
            private readonly ManualResetEvent _event;

            /// <summary>
            /// Constructor
            /// </summary>
            public BlockingQueue()
            {
                _queue = new Queue<T>();
                _event = new ManualResetEvent(false);
            }

            /// <summary>
            /// Read-only property to get the size of the queue
            /// </summary>
            public int Size
            {
                get
                {
                    int count;
                    lock (_queue)
                    {
                        count = _queue.Count;
                    }
                    return count;
                }
            }

            /// <summary>
            /// Enqueues element on the queue
            /// </summary>
            /// <param name="element">Element to enqueue</param>
            public void Enqueue(T element)
            {
                lock (_queue)
                {
                    _queue.Enqueue(element);
                    _event.Set();
                }
            }

            /// <summary>
            /// Dequeues an element from the queue
            /// </summary>
            /// <returns>Dequeued element</returns>
            public T Dequeue()
            {
                T element;
                while (true)
                {
                    if (Size == 0)
                    {
                        _event.Reset();
                        _event.WaitOne();
                    }
                    lock (_queue)
                    {
                        if (_queue.Count == 0) continue;
                        element = _queue.Dequeue();
                        break;
                    }
                }
                return element;
            }

            /// <summary>
            /// Clears the queue
            /// </summary>
            public void Clear()
            {
                lock (_queue)
                {
                    _queue.Clear();
                }
            }
        }
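
    One window worth checking (an observation, not a verdict): if Enqueue() runs between Dequeue()'s Size check and the _event.Reset() call, the Reset erases that Set and WaitOne() then blocks even though the queue holds an element; nothing wakes the consumer until the next Enqueue(). For comparison, a monitor-based queue, sketched here in Java (the language most of the other snippets on this page use), avoids a separate event entirely because the emptiness check and the wait happen under the same lock:

        import java.util.ArrayDeque;
        import java.util.Queue;

        // Minimal monitor-based blocking queue: the empty-check and the wait happen
        // while holding the same lock, so a concurrent put() cannot be missed.
        public class SimpleBlockingQueue<T> {
            private final Queue<T> queue = new ArrayDeque<T>();

            public synchronized void put(T element) {
                queue.add(element);
                notify();                      // wake one waiting taker
            }

            public synchronized T take() throws InterruptedException {
                while (queue.isEmpty()) {      // loop guards against spurious wakeups
                    wait();                    // releases the lock while waiting
                }
                return queue.remove();
            }

            public synchronized int size() {
                return queue.size();
            }
        }

    The same shape is available in .NET through Monitor.Wait and Monitor.Pulse on the lock object, which removes the separate reset-then-wait step.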

    Read the article

  • multi-thread in MS Access, async processing

    - by LanguaFlash
    I know that title sounds crazy but here is my situation. After a certain user event I need to update a couple tables that are "unrelated" to what the user is currently doing. Currently this takes a couple seconds to execute and causes the user a certain amount of frustration. Is there a way to perform my update in a second process or in a manner that doesn't "freeze" the UI of my app while it is processing? Thanks

    Read the article

  • How can two threads access a common array of buffers with minimal blocking? (C#)

    - by Jelly Amma
    Hello, I'm working on an image processing application where I have two threads on top of my main thread: 1 - CameraThread, which captures images from the webcam and writes them into a buffer; 2 - ImageProcessingThread, which takes the latest image from that buffer for filtering. The reason this is multithreaded is that speed is critical: I need CameraThread to keep grabbing pictures and making the latest capture ready for ImageProcessingThread to pick up while it's still processing the previous image. My problem is finding a fast and thread-safe way to access that common buffer, and I've figured that, ideally, it should be a triple buffer (image[3]) so that if ImageProcessingThread is slow, CameraThread can keep writing to the two other images, and vice versa. What sort of locking mechanism would be the most appropriate for this to be thread-safe? I looked at the lock statement, but it seems it would make one thread block waiting for the other to finish, which would defeat the point of triple buffering. Thanks in advance for any idea or advice. J.
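
    One low-contention option, sketched in Java purely to illustrate the idea (the equivalent swap exists in .NET as Interlocked.Exchange): have the camera thread publish each finished frame through an atomic reference and let the processing thread swap the newest one out, so neither side ever waits on a lock and stale frames are simply overwritten.

        import java.util.concurrent.atomic.AtomicReference;

        // Latest-value mailbox: the camera thread always publishes the newest frame,
        // the processing thread swaps it out; neither side ever waits on a lock.
        public class LatestFrameBox<T> {
            private final AtomicReference<T> latest = new AtomicReference<T>();

            // Producer: overwrite whatever is there; stale unprocessed frames are dropped.
            public void publish(T frame) {
                latest.set(frame);
            }

            // Consumer: take the newest frame, or null if nothing new has arrived yet.
            public T takeLatest() {
                return latest.getAndSet(null);
            }
        }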

    Read the article

  • Can I avoid a threaded UDP socket in Python dropping data?

    - by 666craig
    First off, I'm new to Python and learning on the job, so be gentle! I'm trying to write a threaded Python app for Windows that reads data from a UDP socket (thread-1), writes it to file (thread-2), and displays the live data (thread-3) to a widget (gtk.Image using a gtk.gdk.pixbuf). I'm using queues for communicating data between threads. My problem is that if I start only threads 1 and 3 (so skip the file writing for now), it seems that I lose some data after the first few samples. After this drop it looks fine. Even by letting thread 1 complete before running thread 3, this apparent drop is still there. Apologies for the length of the code snippet (I've removed the thread that writes to file), but I felt removing code would just prompt questions. Hope someone can shed some light :-)

        import socket
        import threading
        import Queue
        import numpy
        import gtk
        gtk.gdk.threads_init()
        import gtk.glade
        import pygtk

        class readFromUDPSocket(threading.Thread):

            def __init__(self, socketUDP, readDataQueue, packetSize, numScans):
                threading.Thread.__init__(self)
                self.socketUDP = socketUDP
                self.readDataQueue = readDataQueue
                self.packetSize = packetSize
                self.numScans = numScans

            def run(self):
                for scan in range(1, self.numScans + 1):
                    buffer = self.socketUDP.recv(self.packetSize)
                    self.readDataQueue.put(buffer)
                self.socketUDP.close()
                print 'myServer finished!'

        class displayWithGTK(threading.Thread):

            def __init__(self, displayDataQueue, image, viewArea):
                threading.Thread.__init__(self)
                self.displayDataQueue = displayDataQueue
                self.image = image
                self.viewWidth = viewArea[0]
                self.viewHeight = viewArea[1]
                self.displayData = numpy.zeros((self.viewHeight, self.viewWidth, 3), dtype=numpy.uint16)

            def run(self):
                scan = 0
                try:
                    while True:
                        if not scan % self.viewWidth: scan = 0
                        buffer = self.displayDataQueue.get(timeout=0.1)
                        self.displayData[:, scan, 0] = numpy.fromstring(buffer, dtype=numpy.uint16)
                        self.displayData[:, scan, 1] = numpy.fromstring(buffer, dtype=numpy.uint16)
                        self.displayData[:, scan, 2] = numpy.fromstring(buffer, dtype=numpy.uint16)
                        gtk.gdk.threads_enter()
                        self.myPixbuf = gtk.gdk.pixbuf_new_from_data(self.displayData.tostring(),
                                                                     gtk.gdk.COLORSPACE_RGB, False, 8,
                                                                     self.viewWidth, self.viewHeight,
                                                                     self.viewWidth * 3)
                        self.image.set_from_pixbuf(self.myPixbuf)
                        self.image.show()
                        gtk.gdk.threads_leave()
                        scan += 1
                except Queue.Empty:
                    print 'myDisplay finished!'
                    pass

        def quitGUI(obj):
            print 'Currently active threads: %s' % threading.enumerate()
            gtk.main_quit()

        if __name__ == '__main__':
            # Create socket (IPv4 protocol, datagram (UDP)) and bind to address
            socketUDP = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            host = '192.168.1.5'
            port = 1024
            socketUDP.bind((host, port))

            # Data parameters
            samplesPerScan = 256
            packetsPerSecond = 1200
            packetSize = 512
            duration = 1  # For now, set a fixed duration to log data
            numScans = int(packetsPerSecond * duration)

            # Create array to store data
            data = numpy.zeros((samplesPerScan, numScans), dtype=numpy.uint16)

            # Create queue for displaying from
            readDataQueue = Queue.Queue(numScans)

            # Build GUI from Glade XML file
            builder = gtk.Builder()
            builder.add_from_file('GroundVue.glade')
            window = builder.get_object('mainwindow')
            window.connect('destroy', quitGUI)
            view = builder.get_object('viewport')
            image = gtk.Image()
            view.add(image)
            viewArea = (1200, samplesPerScan)

            # Instantiate & start threads
            myServer = readFromUDPSocket(socketUDP, readDataQueue, packetSize, numScans)
            myDisplay = displayWithGTK(readDataQueue, image, viewArea)
            myServer.start()
            myDisplay.start()

            gtk.gdk.threads_enter()
            gtk.main()
            gtk.gdk.threads_leave()
            print 'gtk.main finished!'

    Read the article

  • .NET threading: how can I capture an abort on an unstarted thread?

    - by Groxx
    I have a chunk of threads I wish to run in order, on an ASP site running .NET 2.0 with Visual Studio 2008 (no idea how much all that matters, but there it is), and they may have aborted-clean-up code which should be run regardless of how far through their task they are. So I make a thread like this:

        Thread t = new Thread(delegate()
        {
            try
            {
                /* do things */
                System.Diagnostics.Debug.WriteLine("try");
            }
            catch (ThreadAbortException)
            {
                /* cleanup */
                System.Diagnostics.Debug.WriteLine("catch");
            }
        });

    Now, if I wish to abort the set of threads part way through, the cleanup may still be desirable later on down the line. Looking through MSDN implies you can .Abort() a thread that has not started, and then .Start() it, at which point it will receive the exception and perform normally. Or you can .Join() the aborted thread to wait for it to finish aborting. Presumably you can combine them. http://msdn.microsoft.com/en-us/library/ty8d3wta(v=VS.80).aspx

        "To wait until a thread has aborted, you can call the Join method on the thread after calling the Abort method, but there is no guarantee the wait will end. If Abort is called on a thread that has not been started, the thread will abort when Start is called. If Abort is called on a thread that is blocked or is sleeping, the thread is interrupted and then aborted."

    Now, when I debug and step through this code:

        t.Abort(); // ThreadState == Unstarted | AbortRequested
        t.Start(); // throws ThreadStartException: "Thread failed to start."
                   // so I comment it out, and
        t.Join();  // throws ThreadStateException: "Thread has not been started."

    At no point do I see any output, nor do any breakpoints on either the try or catch block get reached. Oddly, ThreadStartException is not listed as a possible throw of .Start(), from here: http://msdn.microsoft.com/en-us/library/a9fyxz7d(v=VS.80).aspx (or any other version). I understand this could be avoided by having a start parameter, which states if the thread should jump to cleanup code, and foregoing the Abort call (which is probably what I'll do). And I could .Start() the thread, and then .Abort() it. But as an indeterminate amount of time may pass between .Start and .Abort, I'm considering it unreliable, and the documentation seems to say my original method should work. Am I missing something? Is the documentation wrong?

    edit: ow. And you can't call .Start(param) on a non-parameterized Thread(Start). Is there a way to find out if a thread is parameterized or not, aside from trial and error? I see a private m_Delegate, but nothing public...

    Read the article

  • stop thread that does not get interrupted

    - by prmatta
    I have a thread that sits and reads objects off of an ObjectInputStream:

        public void run() {
            try {
                ois = new ObjectInputStream(clientSocket.getInputStream());
                Object o;
                while ((o = ois.readObject()) != null) {
                    // do something with object
                }
            } catch (Exception ex) {
                // Log exception
            }
        }

    readObject does not throw InterruptedException and, as far as I can tell, no exception is thrown when this thread is interrupted. How do I stop this thread?
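
    A common way out (a sketch, not the only one): the thread is blocked in socket I/O rather than in a wait, so interrupt() never reaches it; closing the underlying socket from another thread makes the pending readObject() fail with an IOException, which the existing catch block will see. The field names below mirror the question and are assumed to be reachable from a shutdown method.

        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.net.Socket;

        public class Reader implements Runnable {
            private final Socket clientSocket;
            private volatile boolean stopped = false;
            private ObjectInputStream ois;

            public Reader(Socket clientSocket) {
                this.clientSocket = clientSocket;
            }

            // Call from another thread to stop the reader: closing the socket makes
            // the blocked readObject() throw, and the flag records that it was deliberate.
            public void shutdown() {
                stopped = true;
                try {
                    clientSocket.close();
                } catch (IOException ignored) {
                }
            }

            @Override
            public void run() {
                try {
                    ois = new ObjectInputStream(clientSocket.getInputStream());
                    Object o;
                    while ((o = ois.readObject()) != null) {
                        // do something with the object
                    }
                } catch (Exception ex) {
                    if (!stopped) {
                        ex.printStackTrace();  // only log unexpected failures
                    }
                }
            }
        }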

    Read the article

  • Does add() on LinkedBlockingQueue notify waiting threads?

    - by obvio171
    I have a consumer thread taking elements from a LinkedBlockingQueue, and I make it sleep manually when it's empty. I use peek() to see if the queue is empty, because I have to do some things before sending the thread to sleep, and I do the sleeping with queue.wait(). So, when I'm in another thread and add() an element to the queue, does that automatically notify the thread that was wait()ing on the queue?
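
    For reference, a sketch of the usual pattern: add() on a LinkedBlockingQueue signals the queue's internal locks, not the object monitor that wait() uses, so the peek()/wait() combination can miss insertions. Doing the pre-sleep work and then calling take(), which blocks until a producer inserts something, avoids the hand-rolled notification entirely (String is just a placeholder element type):

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class Consumer implements Runnable {
            private final BlockingQueue<String> queue;

            public Consumer(BlockingQueue<String> queue) {
                this.queue = queue;
            }

            @Override
            public void run() {
                try {
                    while (true) {
                        // take() parks the thread until an element is available,
                        // so no explicit wait()/notify() bookkeeping is needed.
                        String element = queue.take();
                        process(element);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();  // restore the interrupt and exit
                }
            }

            private void process(String element) {
                System.out.println("got " + element);
            }
        }

    The producer side then simply calls put() or add() on the same LinkedBlockingQueue instance, e.g. new Thread(new Consumer(queue)).start().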

    Read the article

  • Reading ResultSet from multiple threads

    - by superdario
    Hello, In the database I have a definition table that the application reads once at startup. This definition table rarely changes, so it makes sense to read it once and restart the application every time it changes. However, after the table is read (put into a ResultSet), it will be read by multiple handlers running in their own threads. How do you suggest accomplishing this? My idea was to populate a CachedRowSet, and then create a copy of this set (through the createCopy() method) for each handler every time a new request comes in. Do you think this is wise? Does it offer good performance? Thanks.
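
    A sketch of one alternative to copying a CachedRowSet per request: materialize the rows once into an immutable list of value objects at startup and hand every handler the same read-only list, so no per-request copy is needed. The table and column names below are invented for illustration.

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        // Loads the definition table once into an immutable list of value objects.
        public final class DefinitionCache {
            public static final class Definition {
                public final String code;
                public final String label;
                Definition(String code, String label) {
                    this.code = code;
                    this.label = label;
                }
            }

            private final List<Definition> definitions;

            public DefinitionCache(Connection connection) throws SQLException {
                List<Definition> rows = new ArrayList<Definition>();
                Statement stmt = connection.createStatement();
                try {
                    ResultSet rs = stmt.executeQuery("SELECT code, label FROM definition");
                    while (rs.next()) {
                        rows.add(new Definition(rs.getString("code"), rs.getString("label")));
                    }
                    rs.close();
                } finally {
                    stmt.close();
                }
                // Never modified after construction, so it is safe to share between threads.
                this.definitions = Collections.unmodifiableList(rows);
            }

            public List<Definition> getDefinitions() {
                return definitions;  // read-only, safe to hand to any number of handlers
            }
        }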

    Read the article

  • Can getAttribute() method of Tomcat ServletContext implementation be called without synchronization?

    - by oo_olo_oo
    I would like to read some parameters during servlet initialization (in the init() method) and store them among the servlet context attributes (using getServletContext().setAttribute()). I would like to read these parameters later, during request processing (using getServletContext().getAttribute()), so multiple threads could do this simultaneously. My question is whether such an approach is safe. Can I be sure that multi-threaded calls to getAttribute() don't mess up any internal state of the servlet context? Please take into account that I'm not going to call setAttribute() anywhere besides initialization, so only calls to getAttribute() are going to be made from multiple threads. But depending on the internal implementation, even this could be dangerous. So, any information about Tomcat's implementation would be appreciated.
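
    For what it's worth, the pattern being described looks like the sketch below (the attribute name and Parameters type are invented). The property worth preserving is that the attribute is set once in init(), before any request thread reads it, and the stored object itself is never mutated afterwards.

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.ServletConfig;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Write-once-in-init, read-many pattern.
        public class ConfigServlet extends HttpServlet {

            @Override
            public void init(ServletConfig config) throws ServletException {
                super.init(config);
                // Published once, before any request reaches service().
                getServletContext().setAttribute("appParams", loadParameters());
            }

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // Read-only access from request threads.
                Parameters params = (Parameters) getServletContext().getAttribute("appParams");
                resp.setContentType("text/plain");
                PrintWriter out = resp.getWriter();
                out.println(params);
            }

            private Parameters loadParameters() {
                return new Parameters();
            }

            // Immutable placeholder for whatever the real parameters are.
            public static class Parameters {
            }
        }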

    Read the article

  • Catching the redirected address from NSURLConnection

    - by Vic
    I'm working on software that follows an HTTP redirect which is dynamically calculated by the server depending on a parameter. I don't want to show the primary server in Mobile Safari, but rather the redirected address only. The following code works:

        request = [NSMutableURLRequest requestWithURL:originalUrl
                                          cachePolicy:NSURLRequestReloadIgnoringCacheData
                                      timeoutInterval:10];
        [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
        // Extract the redirected URL
        target = [response URL];

    The problem is that the server requires several seconds to answer. sendSynchronousRequest completely blocks the app for that time, which is messy; I can't even display the "Busy" animation. Does anyone know how I can retrieve the redirected address asynchronously, without Safari appearing in the meantime with the redirecting server's URL, or how to display some sort of "Be patient" animation during the sendSynchronousRequest? What disadvantages would passing sendSynchronousRequest to another thread have?

    Read the article

  • How to use locks/synchronization here

    - by MasterGberry
    I have this code block here and I need to make sure rankedPlayersWaitingForMatch is synchronized between threads properly. I was going to use synchronized, but I don't think that will work here because of the variable being used in the if statement. I read online about final Lock lock = new ReentrantLock(); but I am a bit confused about how to use it properly in this case with the try/finally block. Can I get a quick example? Thanks

        // start synchronization
        if (rankedPlayersWaitingForMatch.get(rankedType).size() >= 2) {
            Player player1 = rankedPlayersWaitingForMatch.get(rankedType).remove();
            Player player2 = rankedPlayersWaitingForMatch.get(rankedType).remove();
            // end synchronization
            // ... I don't want this all to be synchronized, just after the first 2 remove()
        } else {
            // end synchronization
            // ...
        }
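
    A sketch of the requested lock/try/finally shape, with placeholder types standing in for the real ones: only the size check and the two remove() calls run under the ReentrantLock, the unlock sits in finally so it always happens, and the longer match-setup work runs after the lock is released.

        import java.util.ArrayDeque;
        import java.util.HashMap;
        import java.util.Map;
        import java.util.Queue;
        import java.util.concurrent.locks.Lock;
        import java.util.concurrent.locks.ReentrantLock;

        public class Matchmaker {
            // Placeholder player type mirroring the name in the question.
            static class Player { }

            private final Lock lock = new ReentrantLock();
            // All access to this map happens while holding the lock.
            private final Map<String, Queue<Player>> rankedPlayersWaitingForMatch =
                    new HashMap<String, Queue<Player>>();

            public void addWaitingPlayer(String rankedType, Player p) {
                lock.lock();
                try {
                    Queue<Player> waiting = rankedPlayersWaitingForMatch.get(rankedType);
                    if (waiting == null) {
                        waiting = new ArrayDeque<Player>();
                        rankedPlayersWaitingForMatch.put(rankedType, waiting);
                    }
                    waiting.add(p);
                } finally {
                    lock.unlock();
                }
            }

            public void tryStartMatch(String rankedType) {
                Player player1 = null;
                Player player2 = null;

                lock.lock();
                try {
                    // Only the check-and-remove is guarded; everything after runs unlocked.
                    Queue<Player> waiting = rankedPlayersWaitingForMatch.get(rankedType);
                    if (waiting != null && waiting.size() >= 2) {
                        player1 = waiting.remove();
                        player2 = waiting.remove();
                    }
                } finally {
                    lock.unlock();   // always released, even if remove() throws
                }

                if (player1 != null && player2 != null) {
                    startMatch(player1, player2);   // long-running work outside the lock
                }
            }

            private void startMatch(Player p1, Player p2) {
                // ... create the match ...
            }
        }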

    Read the article

  • how to share a variable between two threads

    - by prmatta
    I just inherited some code; two threads within this code need to perform a system task. One thread should do the system task before the other thread. They should not be performing the system task together. The two threads do not have references to each other. Now, I know I can use some sort of semaphore to achieve this. But my question is: what is the right way to get both threads to access this semaphore? I could create a static variable/method in a new class:

        public class SharedSemaphore {
            private static Semaphore s = new Semaphore(1, true);

            public static void performSystemTask() {
                s.acquire();
            }

            public static void donePerformingSystemTask() {
                s.release();
            }
        }

    This would work (right?), but it doesn't seem like the right thing to do, because the threads now have access to a semaphore without ever having a reference to it. This sort of thing doesn't seem like good programming practice. Am I wrong?
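
    One alternative to the static holder (a sketch with invented worker names): construct a single Semaphore wherever the threads are created and pass the same instance to both, so the shared dependency is explicit rather than global.

        import java.util.concurrent.Semaphore;

        public class SystemTaskDemo {
            // Worker that needs exclusive access to the system task.
            static class Worker implements Runnable {
                private final Semaphore systemTaskPermit;
                private final String name;

                Worker(String name, Semaphore systemTaskPermit) {
                    this.name = name;
                    this.systemTaskPermit = systemTaskPermit;
                }

                @Override
                public void run() {
                    try {
                        systemTaskPermit.acquire();
                        try {
                            System.out.println(name + " performing the system task");
                        } finally {
                            systemTaskPermit.release();
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }

            public static void main(String[] args) {
                Semaphore permit = new Semaphore(1, true);   // fair, one permit
                new Thread(new Worker("first", permit)).start();
                new Thread(new Worker("second", permit)).start();
            }
        }

    Whether the ordering requirement (one thread strictly before the other) needs more than a semaphore, for example a CountDownLatch, is a separate question.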

    Read the article

  • Python: How to close a UDP socket while it is waiting for data in recv?

    - by alexroat
    Hello, let's consider this code in Python:

        import socket
        import threading
        import sys
        import select

        class UDPServer:
            def __init__(self):
                self.s = None
                self.t = None

            def start(self, port=8888):
                if not self.s:
                    self.s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                    self.s.bind(("", port))
                    self.t = threading.Thread(target=self.run)
                    self.t.start()

            def stop(self):
                if self.s:
                    self.s.close()
                    self.t.join()
                    self.t = None

            def run(self):
                while True:
                    try:
                        # receive data
                        data, addr = self.s.recvfrom(1024)
                        self.onPacket(addr, data)
                    except:
                        break
                self.s = None

            def onPacket(self, addr, data):
                print addr, data

        us = UDPServer()
        while True:
            sys.stdout.write("UDP server> ")
            cmd = sys.stdin.readline()
            if cmd == "start\n":
                print "starting server..."
                us.start(8888)
                print "done"
            elif cmd == "stop\n":
                print "stopping server..."
                us.stop()
                print "done"
            elif cmd == "quit\n":
                print "Quitting ..."
                us.stop()
                break

        print "bye bye"

    It runs an interactive shell with which I can start and stop a UDP server. The server is implemented through a class which launches a thread containing an infinite loop of recvfrom and the onPacket callback, inside a try/except block that should detect the error and exit the loop. What I expect is that when I type "stop" on the shell, the socket is closed and an exception is raised by the recvfrom function because of the invalidation of the file descriptor. Instead, it seems that recvfrom still blocks the thread waiting for data even after the close call. Why this strange behavior? I've always used this pattern to implement a UDP server in C++ and Java and it has always worked. I've also tried select, passing a list with the socket as the xread argument, in order to get a file-descriptor error event from select instead of from recvfrom, but select seems to be insensitive to the close too. I need a single piece of code which maintains the same behavior on Linux and Windows with Python 2.5 - 2.6. Thanks.

    Read the article

  • Have threads run indefinitely in a java application

    - by TP
    I am trying to program a game in which I have a Table class and each person sitting at the table is a separate thread. The game involves the people passing tokens around and then stopping when the party chime sounds. How do I program the run() method so that once I start the person threads, they do not die and stay alive until the end of the game? One solution I tried was having a while (true) {} loop in the run() method, but that increases my CPU utilization to around 60-70 percent. Is there a better method?
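
    A sketch of one way to keep the person threads alive without spinning (the names are invented): each person blocks on its own token queue, so it uses no CPU until a token arrives, and a special game-over token tells it to exit when the party chime sounds.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class Person implements Runnable {
            // Special token used to tell a person the game has ended.
            static final Object GAME_OVER = new Object();

            private final BlockingQueue<Object> inbox = new LinkedBlockingQueue<Object>();
            private Person neighbour;          // set before the thread is started

            public void setNeighbour(Person neighbour) {
                this.neighbour = neighbour;
            }

            public void give(Object token) {
                inbox.add(token);
            }

            @Override
            public void run() {
                try {
                    while (true) {
                        Object token = inbox.take();   // blocks, no busy-waiting
                        if (token == GAME_OVER) {
                            break;                     // party chime: stop passing
                        }
                        neighbour.give(token);         // pass the token along
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }

    The Table would then give(GAME_OVER) to every person when the chime sounds, and each thread finishes on its own.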

    Read the article

  • how to write silverlight threading function in another file or project

    - by Piyush
    I am using a three-tier architecture. I have two projects, SilverlightUI and UIController: SilverlightUI contains only UI pages and controls, while UIController contains all the proxies for the WCF services. Now I have created threads to update my controls dynamically and to do processing in parallel. As per the requirement, I want to define all the thread functionality in the UIController project. What should I do? Currently what I am doing is:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            StartThreads();
        }

        private void StartThreads()
        {
            Thread _thread1 = new Thread(DoThread1);
            _thread1.Start();
        }

        public static void DoThread1()
        {
            _data1.Dispatcher.BeginInvoke(delegate()
            {
                _data1.Text = _count1.ToString();
            });
            System.Threading.Thread.Sleep(1000);
        }

    I want to write the DoThread1() method in the UIController project and call that function from Button_Click() here.

    Read the article

  • Is memory allocation in linux non-blocking?

    - by Mark
    I am curious to know whether allocating memory using the default new operator is a non-blocking operation. e.g.

        struct Node {
            int a, b;
        };
        ...
        Node *foo = new Node();

    If multiple threads tried to create a new Node and one of them was suspended by the OS in the middle of allocation, would it block the other threads from making progress? The reason I ask is that I had a concurrent data structure that created new nodes. I then modified the algorithm to recycle the nodes. The throughput performance of the two algorithms was virtually identical on a 24-core machine. However, I then created an interference program that ran on all the system cores in order to create as much OS pre-emption as possible. The throughput of the algorithm that created new nodes decreased by a factor of 5 relative to the algorithm that recycled nodes. I'm curious to know why this would occur. Thanks. Edit: pointing me to the code for the C++ memory allocator for Linux would be helpful as well. I tried looking before posting this question, but had trouble finding it.

    Read the article

  • Is it safe to draw three separate QImages in three separate QThreads?

    - by yan bellavance
    I have a QMainWindow with three widgets inside that are promoted to a class containing a subclassed QThread. They each draw on a local QImage in their respective QThread, which is sent with a signal once it's drawn and then rendered by calling update() from the slot (as in the Mandelbrot example). Is this safe or dangerous? They do not share any data, at least none that I am generating; I am wondering what data they could be sharing that is outside of my control, i.e. data that is generated by Qt automatically.

    Read the article

  • How to improve multi-threaded access to Cache (custom implementation)

    - by Andy
    I have a custom Cache implementation which allows caching TCacheable<TKey> descendants using the LRU (Least Recently Used) cache replacement algorithm. Every time an element is accessed, it is bubbled up to the top of the LRU queue using the following synchronized function:

        // a single instance is created to handle all TCacheable<T> elements
        public class Cache
        {
            private object syncQueue = new object();

            private void topQueue(TCacheable<T> el)
            {
                lock (syncQueue)
                    if (newest != el)
                    {
                        if (el.elder != null) el.elder.newer = el.newer;
                        if (el.newer != null) el.newer.elder = el.elder;
                        if (oldest == el) oldest = el.newer;
                        if (oldest == null) oldest = el;
                        if (newest != null) newest.newer = el;

                        el.newer = null;
                        el.elder = newest;
                        newest = el;
                    }
            }
        }

    The bottleneck in this function is the lock() operator, which limits cache access to just one thread at a time. Question: is it possible to get rid of lock(syncQueue) in this function while still preserving the queue integrity?
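
    Making a doubly-linked LRU queue fully lock-free is genuinely hard; a common compromise, sketched below in Java purely to illustrate the idea, is to stop reordering on every access and instead stamp each entry with an atomic counter, scanning for the stalest entry only at eviction time. Reads then never contend on a queue lock, at the cost of approximate rather than strict LRU order.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.AtomicLong;

        // Approximate LRU: get() only bumps an atomic counter, so readers never block
        // each other; eviction scans for the smallest stamp.
        public class ApproximateLruCache<K, V> {
            private static final class Entry<V> {
                final V value;
                final AtomicLong lastAccess = new AtomicLong();
                Entry(V value) { this.value = value; }
            }

            private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<K, Entry<V>>();
            private final AtomicLong clock = new AtomicLong();
            private final int capacity;

            public ApproximateLruCache(int capacity) {
                this.capacity = capacity;
            }

            public V get(K key) {
                Entry<V> e = map.get(key);
                if (e == null) {
                    return null;
                }
                e.lastAccess.set(clock.incrementAndGet());   // lock-free "bubble up"
                return e.value;
            }

            public void put(K key, V value) {
                Entry<V> e = new Entry<V>(value);
                e.lastAccess.set(clock.incrementAndGet());
                map.put(key, e);
                while (map.size() > capacity) {
                    evictStalest();
                }
            }

            private void evictStalest() {
                K stalestKey = null;
                long stalest = Long.MAX_VALUE;
                for (Map.Entry<K, Entry<V>> me : map.entrySet()) {
                    long stamp = me.getValue().lastAccess.get();
                    if (stamp < stalest) {
                        stalest = stamp;
                        stalestKey = me.getKey();
                    }
                }
                if (stalestKey != null) {
                    map.remove(stalestKey);
                }
            }
        }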

    Read the article
