Search Results

Search found 1638 results on 66 pages for 'multithreading'.

Page 23/66 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Asynchronous vs Synchronous vs Threading in an iPhone App

    - by Coocoo4Cocoa
    I'm in the design stage for an app which will use a REST web service, and I have a dilemma about using asynchronous vs synchronous requests vs threading. Here's the scenario: say you have three options to drill down into, each one having its own REST-based resource. I could lazily load each one with a synchronous request, but that will block the UI and prevent the user from hitting a back navigation button while data is retrieved. This applies almost anywhere except when your application requires a login screen. For that reason alone I can't see any case for synchronous HTTP requests over asynchronous ones. The only time a synchronous request makes sense is when a worker thread makes it and notifies the main thread when the request is done; that prevents the block. The question then is benchmarking your code to see which has more overhead, a threaded synchronous request or an asynchronous request.

    The problem with asynchronous requests is that you need to set up a smart notification or delegate system, since you can have multiple requests for multiple resources in flight at any given time. The other problem with them is that if I have a class, say a singleton which is handling all of my data, I can't use asynchronous requests in a getter method. Meaning the following won't work:

        - (NSArray *)users { if(users == nil) users = do_async_request // NO GOOD return users; }

    whereas the following will:

        - (NSArray *)users { if(users == nil) users = do_sync_request // OK. return users; }

    You also might have priority. What I mean by priority is that if you look at Apple's Mail application on the iPhone, you'll notice it first sucks down your entire POP/IMAP tree before making a second request to retrieve the first 2 lines (the default) of your message. I suppose my question to you experts is this: when are you using asynchronous, synchronous, threads -- and when are you using async/sync in a thread? What kind of delegation system do you have set up to know what to do when an async request completes? Are you prioritizing your async requests? There's a gamut of solutions to this all too common problem. It's simple to hack something out. The problem is, I don't want to hack, and I want to have something that's simple and easy to maintain.

    Read the article

  • How to log correct context with Threadpool threads using log4net?

    - by myotherme
    I am trying to find a way to log useful context from a bunch of threads. The problem is that a lot of the code runs on events that arrive via threadpool threads (as far as I can tell), so their names bear no relation to any context. The problem can be demonstrated with the following code:

        class Program {
            private static readonly log4net.ILog log = log4net.LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

            static void Main(string[] args) {
                new Thread(TestThis).Start("ThreadA");
                new Thread(TestThis).Start("ThreadB");
                Console.ReadLine();
            }

            private static void TestThis(object name) {
                var nameStr = (string)name;
                Thread.CurrentThread.Name = nameStr;
                log4net.ThreadContext.Properties["ThreadContext"] = nameStr;
                log4net.LogicalThreadContext.Properties["LogicalThreadContext"] = nameStr;
                log.Debug("From Thread itself");
                ThreadPool.QueueUserWorkItem(x => log.Debug("From threadpool Thread: " + nameStr));
            }
        }

    The conversion pattern is:

        %date [%thread] %-5level %logger [%property] - %message%newline

    The output is like so:

        2010-05-21 15:08:02,357 [ThreadA] DEBUG LogicalContextTest.Program [{LogicalThreadContext=ThreadA, log4net:HostName=xxx, ThreadContext=ThreadA}] - From Thread itself
        2010-05-21 15:08:02,357 [ThreadB] DEBUG LogicalContextTest.Program [{LogicalThreadContext=ThreadB, log4net:HostName=xxx, ThreadContext=ThreadB}] - From Thread itself
        2010-05-21 15:08:02,404 [7] DEBUG LogicalContextTest.Program [{log4net:HostName=xxx}] - From threadpool Thread: ThreadA
        2010-05-21 15:08:02,420 [16] DEBUG LogicalContextTest.Program [{log4net:HostName=xxx}] - From threadpool Thread: ThreadB

    As you can see, the last two rows have no names or useful information to distinguish the two threads, other than manually adding the name to the message (which I want to avoid). How can I get the name/context into the log for the threadpool threads without adding it to the message at every call?

    Read the article

  • Does a multithreaded crawler in Python really speed things up?

    - by beagleguy
    I was looking to write a little web crawler in Python. I was starting to investigate writing it as a multithreaded script, with one pool of threads downloading and one pool processing results. Given the GIL, would it actually do simultaneous downloading? How does the GIL affect a web crawler? Would each thread pick some data off the socket, then move on to the next thread, let it pick some data off the socket, and so on? Basically I'm asking: is a multithreaded crawler in Python really going to buy me much performance over a single-threaded one? Thanks!

    Read the article

  • Best practices for Java logging from multiple threads?

    - by Jason S
    I want to have a diagnostic log that is produced by several tasks managing data. These tasks may be in multiple threads. Each task needs to write an element (possibly with subelements) to the log; get in and get out quickly. If this were a single-task situation I'd use XMLStreamWriter as it seems like the best match for simplicity/functionality without having to hold a ballooning XML document in memory. But it's not a single-task situation, and I'm not sure how to best make sure this is "threadsafe", where "threadsafe" in this application means that each log element should be written to the log correctly and serially (one after the other and not interleaved in any way). Any suggestions?

    I have a vague intuition that the way to go is to use a queue of log elements (with each one able to be produced quickly: my application is busy doing real work that's performance-sensitive), and have a separate thread which handles the log elements and sends them to a file so the logging doesn't interrupt the producers. The logging doesn't necessarily have to be XML, but I do want it to be structured and machine-readable.

    edit: I put "threadsafe" in quotes. Log4j seems to be the obvious choice (new to me but old to the community), why reinvent the wheel...
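
    A minimal sketch of the queue-plus-writer-thread idea described above, using java.util.concurrent; the String element type and the Writer-based output here are assumptions for illustration, not part of the question:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // Producers enqueue pre-built log entries; a single writer thread drains the
        // queue, so entries reach the file serially and never interleave.
        class AsyncStructuredLog {
            private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

            AsyncStructuredLog(final java.io.Writer out) {
                Thread writer = new Thread(new Runnable() {
                    public void run() {
                        try {
                            while (true) {
                                String element = queue.take();   // blocks until an element arrives
                                out.write(element);
                                out.write('\n');
                                out.flush();
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();  // shut down quietly
                        } catch (java.io.IOException e) {
                            e.printStackTrace();
                        }
                    }
                }, "log-writer");
                writer.setDaemon(true);
                writer.start();
            }

            // Called by the worker tasks; cheap and non-blocking for an unbounded queue.
            void log(String element) {
                queue.offer(element);
            }
        }

    Because only the single writer thread touches the output, serialization of elements falls out of the design rather than from locking in the producers.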

    Read the article

  • Rendering to a single Bitmap object from multiple threads

    - by Lee Treveil
    What I'm doing is rendering a number of bitmaps to a single bitmap. There could be hundreds of images, and the bitmap being rendered to could be over 1000x1000 pixels. I'm hoping to speed up this process by using multiple threads, but since the Bitmap object is not thread-safe it can't be rendered to directly from several threads at once. What I'm thinking is to split the large bitmap into sections, one per CPU, render them separately, then join them back together at the end. I haven't done this yet in case you guys/girls have any better suggestions. Any ideas? Thanks

    Read the article

  • What is shared by multiple threads in the same process?

    - by skydoor
    I found that each thread still has its own registers. Each thread also has its own stack, but other threads can read and write that stack memory. My question is: what is shared by the threads in the same process? What I can imagine is 1) the address space of the process; 2) stack, registers; 3) variables. Can anybody elaborate on this and add more?
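
    A minimal Java sketch of the distinction being asked about (the class and field names are made up for illustration): heap objects and static fields are visible to every thread in the process, while local variables live on each thread's own stack:

        // Both threads see the same `shared` counter (heap/static data is process-wide),
        // but each run() gets its own `local` variable on its own stack.
        class SharedVsPerThread {
            static int shared = 0;   // one copy per process, visible to all threads

            public static void main(String[] args) throws InterruptedException {
                Runnable work = new Runnable() {
                    public void run() {
                        int local = 0;                   // one copy per thread, on that thread's stack
                        for (int i = 0; i < 1000; i++) {
                            local++;
                            synchronized (SharedVsPerThread.class) {
                                shared++;                // shared, so updates need synchronization
                            }
                        }
                        System.out.println(Thread.currentThread().getName() + " local=" + local);
                    }
                };
                Thread a = new Thread(work, "A");
                Thread b = new Thread(work, "B");
                a.start(); b.start();
                a.join(); b.join();
                System.out.println("shared=" + shared);  // 2000: both threads updated the same field
            }
        }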

    Read the article

  • Of these 3 methods for reading linked lists from shared memory, why is the 3rd fastest?

    - by Joseph Garvin
    I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list's node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work, I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops. Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: Why is the 3rd method fastest? Method 1: Round robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED': void method_one() { std::vector<Data*> channel_cursors; for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) { Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment)); channel_cursors.push_back(current_item); } while(true) { for(std::size_t i = 0; i < channel_list.size(); ++i) { Data* current_item = channel_cursors[i]; ACQUIRE_MEMORY_BARRIER; if(current_item->state_ == NOT_FILLED_YET) { continue; } log_latency(current_item->tv_sec_, current_item->tv_usec_); channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment)); } } } Method 1 gave very low latency when then number of channels was small. But when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried... Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID on to the update list. Now we just need to monitor the single update list, and check the IDs we get from it. void method_two() { std::vector<Data*> channel_cursors; for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) { Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment)); channel_cursors.push_back(current_item); } UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment)); while(true) { if(update_cursor->state_ == NOT_FILLED_YET) { continue; } ::uint32_t update_id = update_cursor->list_id_; Data* current_item = channel_cursors[update_id]; if(current_item->state_ == NOT_FILLED_YET) { std::cerr << "This should never print." << std::endl; // it doesn't continue; } log_latency(current_item->tv_sec_, current_item->tv_usec_); channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment)); update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment)); } } Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would inexplicably often given 8ms latency! Using gettimeofday it appears that the change in update_cursor-state_ was very slow to propogate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach... Method 3: Keep the update list. But loop over all the channels continuously, and within each iteration check if the update list has updated. If it has, go with the number pushed onto it. If it hasn't, check the channel we've currently iterated to. 
void method_three() { std::vector<Data*> channel_cursors; for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) { Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment)); channel_cursors.push_back(current_item); } UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment)); while(true) { for(std::size_t i = 0; i < channel_list.size(); ++i) { std::size_t idx = i; ACQUIRE_MEMORY_BARRIER; if(update_cursor->state_ != NOT_FILLED_YET) { //std::cerr << "Found via update" << std::endl; i--; idx = update_cursor->list_id_; update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment)); } Data* current_item = channel_cursors[idx]; ACQUIRE_MEMORY_BARRIER; if(current_item->state_ == NOT_FILLED_YET) { continue; } found_an_update = true; log_latency(current_item->tv_sec_, current_item->tv_usec_); channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment)); } } } The latency of this method was as good as Method 1, but scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'found via update' part, it prints between EVERY LATENCY LOG MESSAGE. Which means things are only ever found on the update list! So I don't understand how this method can be faster than method 2. The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/e3HuL0nr

    Read the article

  • Calling Graphics.Draw and new Bitmap from memory concurrently in a thread takes a long time

    - by Abdul jalil
    Example1 public partial class Form1 : Form { public Form1() { InitializeComponent(); pro = new Thread(new ThreadStart(Producer)); con = new Thread(new ThreadStart(Consumer)); } private AutoResetEvent m_DataAvailableEvent = new AutoResetEvent(false); Queue<Bitmap> queue = new Queue<Bitmap>(); Thread pro; Thread con ; public void Producer() { MemoryStream[] ms = new MemoryStream[3]; for (int y = 0; y < 3; y++) { StreamReader reader = new StreamReader("image"+(y+1)+".JPG"); BinaryReader breader = new BinaryReader(reader.BaseStream); byte[] buffer=new byte[reader.BaseStream.Length]; breader.Read(buffer,0,buffer.Length); ms[y] = new MemoryStream(buffer); } while (true) { for (int x = 0; x < 3; x++) { Bitmap bmp = new Bitmap(ms[x]); queue.Enqueue(bmp); m_DataAvailableEvent.Set(); Thread.Sleep(6); } } } public void Consumer() { Graphics g= pictureBox1.CreateGraphics(); while (true) { m_DataAvailableEvent.WaitOne(); Bitmap bmp = queue.Dequeue(); if (bmp != null) { // Bitmap bmp = new Bitmap(ms); g.DrawImage(bmp,new Point(0,0)); bmp.Dispose(); } } } private void pictureBox1_Click(object sender, EventArgs e) { con.Start(); pro.Start(); } } when Creating bitmap and Drawing to picture box are in seperate thread then Bitmap bmp = new Bitmap(ms[x]) take 45.591 millisecond and g.DrawImage(bmp,new Point(0,0)) take 41.430 milisecond when i make bitmap from memoryStream and draw it to picture box in one thread then Bitmap bmp = new Bitmap(ms[x]) take 29.619 and g.DrawImage(bmp,new Point(0,0)) take 35.540 the code is for Example 2 is why it take more time to draw and bitmap take time in seperate thread and how to reduce the time when processing in seperate thread. i am using ANTS performance profiler 4.3 public Form1() { InitializeComponent(); pro = new Thread(new ThreadStart(Producer)); con = new Thread(new ThreadStart(Consumer)); } private AutoResetEvent m_DataAvailableEvent = new AutoResetEvent(false); Queue<MemoryStream> queue = new Queue<MemoryStream>(); Thread pro; Thread con ; public void Producer() { MemoryStream[] ms = new MemoryStream[3]; for (int y = 0; y < 3; y++) { StreamReader reader = new StreamReader("image"+(y+1)+".JPG"); BinaryReader breader = new BinaryReader(reader.BaseStream); byte[] buffer=new byte[reader.BaseStream.Length]; breader.Read(buffer,0,buffer.Length); ms[y] = new MemoryStream(buffer); } while (true) { for (int x = 0; x < 3; x++) { // Bitmap bmp = new Bitmap(ms[x]); queue.Enqueue(ms[x]); m_DataAvailableEvent.Set(); Thread.Sleep(6); } } } public void Consumer() { Graphics g= pictureBox1.CreateGraphics(); while (true) { m_DataAvailableEvent.WaitOne(); //Bitmap bmp = queue.Dequeue(); MemoryStream ms= queue.Dequeue(); if (ms != null) { Bitmap bmp = new Bitmap(ms); g.DrawImage(bmp,new Point(0,0)); bmp.Dispose(); } } } private void pictureBox1_Click(object sender, EventArgs e) { con.Start(); pro.Start(); }

    Read the article

  • Java Backgroundworker: Scope of Widget to be updated unclear

    - by erlord
    Hi all, I am trying to understand the mechanism of org.jdesktop.swingx.BackgroundWorker. Their javadoc presents the following example:

        final JLabel label;
        class MeaningOfLifeFinder implements BackgroundListener {
            public void doInBackground(BackgroundEvent evt) {
                String meaningOfLife = findTheMeaningOfLife();
                evt.getWorker().publish(meaningOfLife);
            }
            public void process(BackgroundEvent evt) {
                label.setText("" + evt.getData());
            }
            public void done(BackgroundEvent evt) {}
            public void started(BackgroundEvent evt) {}
        }
        (new MeaningOfLifeFinder()).execute();

    Apart from the fact that I doubt the result will ever get published, I wonder how label is passed to the process method, where it is being updated. I thought its scope was limited to the outside of the BackgroundListener implementation. Quite confused I am ... any answers for me? Thanks in advance
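
    What makes the javadoc example compile at all is local-variable capture: label is declared final, so the inner class can read it. A minimal sketch of the same scoping rule, shown here with the standard javax.swing.SwingWorker instead of the swingx BackgroundWorker (a substitution made only because its API is well known; the capture rule is the same either way):

        import javax.swing.JLabel;
        import javax.swing.SwingWorker;

        class CaptureDemo {
            static void startWork() {
                // A final local variable is visible inside an inner/anonymous class,
                // so the worker's process() method can update this very JLabel.
                final JLabel label = new JLabel("waiting...");

                new SwingWorker<String, String>() {
                    @Override protected String doInBackground() {
                        publish("42");          // hand an intermediate result to process()
                        return "42";
                    }
                    @Override protected void process(java.util.List<String> chunks) {
                        // Runs on the EDT; `label` is accessible because it was captured.
                        label.setText(chunks.get(chunks.size() - 1));
                    }
                }.execute();
            }
        }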

    Read the article

  • Lock-Free, Wait-Free and Wait-freedom algorithms for non-blocking multi-thread synchronization.

    - by GJ
    In multi thread programming we can find different terms for data transfer synchronization between two or more threads/tasks. When exactly we can say that some algorithem is: 1)Lock-Free 2)Wait-Free 3)Wait-Freedom I understand what means Lock-free but when we can say that some synchronization algorithm is Wait-Free or Wait-Freedom? I have made some code (ring buffer) for multi-thread synchronization and it use Lock-Free methods but: 1) Algorithm predicts maximum execution time of this routine. 2) Therad which call this routine at beginning set unique reference, what mean that is inside of this routine. 3) Other threads which are calling the same routine check this reference and if is set than count the CPU tick count (measure time) of first involved thread. If that time is to long interrupt the current work of involved thread and overrides him job. 4) Thread which not finished job because was interrupted from task scheduler (is reposed) at the end check the reference if not belongs to him repeat the job again. So this algorithm is not really Lock-free but there is no memory lock in use, and other involved threads can wait (or not) certain time before overide the job of reposed thread. Added RingBuffer.InsertLeft function: function TgjRingBuffer.InsertLeft(const link: pointer): integer; var AtStartReference: cardinal; CPUTimeStamp : int64; CurrentLeft : pointer; CurrentReference: cardinal; NewLeft : PReferencedPtr; Reference : cardinal; label TryAgain; begin Reference := GetThreadId + 1; //Reference.bit0 := 1 with rbRingBuffer^ do begin TryAgain: //Set Left.Reference with respect to all other cores :) CPUTimeStamp := GetCPUTimeStamp + LoopTicks; AtStartReference := Left.Reference OR 1; //Reference.bit0 := 1 repeat CurrentReference := Left.Reference; until (CurrentReference AND 1 = 0)or (GetCPUTimeStamp - CPUTimeStamp > 0); //No threads present in ring buffer or current thread timeout if ((CurrentReference AND 1 <> 0) and (AtStartReference <> CurrentReference)) or not CAS32(CurrentReference, Reference, Left.Reference) then goto TryAgain; //Calculate RingBuffer NewLeft address CurrentLeft := Left.Link; NewLeft := pointer(cardinal(CurrentLeft) - SizeOf(TReferencedPtr)); if cardinal(NewLeft) < cardinal(@Buffer) then NewLeft := EndBuffer; //Calcolate distance result := integer(Right.Link) - Integer(NewLeft); //Check buffer full if result = 0 then //Clear Reference if task still own reference if CAS32(Reference, 0, Left.Reference) then Exit else goto TryAgain; //Set NewLeft.Reference NewLeft^.Reference := Reference; SFence; //Try to set link and try to exchange NewLeft and clear Reference if task own reference if (Reference <> Left.Reference) or not CAS64(NewLeft^.Link, Reference, link, Reference, NewLeft^) or not CAS64(CurrentLeft, Reference, NewLeft, 0, Left) then goto TryAgain; //Calcolate result if result < 0 then result := Length - integer(cardinal(not Result) div SizeOf(TReferencedPtr)) else result := cardinal(result) div SizeOf(TReferencedPtr); end; //with end; { TgjRingBuffer.InsertLeft } RingBuffer unit you can find here: RingBuffer, CAS functions: FockFreePrimitives, and test program: RingBufferFlowTest Thanks in advance, GJ

    Read the article

  • Form gets disposed somehow

    - by mnn
    I have a client-server application in which I use classic sockets and threads for receiving/sending data and listening for clients. The application works fine, but after some random time I get an ObjectDisposedException:

        System.ObjectDisposedException: Cannot access a disposed object. Object name: 'MainForm'.
           at System.Windows.Forms.Control.MarshaledInvoke(Control caller, Delegate method, Object[] args, Boolean synchronous)
           at System.Windows.Forms.Control.Invoke(Delegate method, Object[] args)
           at System.Windows.Forms.Control.Invoke(Delegate method)

    That code is called from the client socket thread, and I use the Invoke() method to run it on the UI thread. I'm sure that I don't dispose the form manually or call Close() on it (the form is only closed by the user clicking the Close button), so I don't know what could cause it to be disposed.

    Read the article

  • Sending messages from a non-GUI worker thread to the main window

    - by bartek
    I'm using WinApi. Is SendMessage/PostMessage a good, thread-safe method of communicating with the main window? Suppose the working thread is creating a bitmap that must be displayed on the screen. The working thread allocates a bitmap, sends a message with a pointer to this bitmap and waits until the GUI thread processes it (for example using SendMessage). The working thread shares no data with other threads. Am I running into trouble with such a design? Are there any other possibilities that do not introduce thread synchronization, locking, etc.?

    Read the article

  • Core dump of a multithreaded program

    - by benjamin button
    Hi, I have regularly worked with single-threaded programs. I have never seen a multithreaded program crash, since I haven't worked on any. Is there any difference between the two core dumps? Is there any additional information provided in the core dump of a multithreaded program when compared to a single-threaded program?

    Read the article

  • Multithreading - Is this the right approach?

    - by HonorGod
    Experts - I need some advice on the following scenario. I have a configuration file with a list of tasks. Each task can have zero, one or more dependencies. I wanted to execute these tasks in parallel [right now they are being executed sequentially]. The idea is to have a main program read the configuration file and load all the tasks, then read the individual tasks and hand each one to an executor [callable] that will perform the task and return results in a Future. When a task is submitted to the executor (thread), it will wait for its dependencies to finish first and then perform its own task. Is this the right approach? Are there any other better approaches using Java 1.5 features?
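
    A minimal sketch of that idea with the Java 1.5 concurrency utilities; the Task type and its fields are placeholders for whatever the configuration file actually defines. Each task's Callable blocks on the Futures of its dependencies before doing its own work, so the executor's threads enforce the ordering:

        import java.util.*;
        import java.util.concurrent.*;

        class TaskRunner {
            // Hypothetical task description: a name, the names it depends on, and the work itself.
            static class Task {
                final String name; final List<String> deps; final Runnable work;
                Task(String name, List<String> deps, Runnable work) {
                    this.name = name; this.deps = deps; this.work = work;
                }
            }

            static void runAll(List<Task> tasks) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(4);
                final Map<String, Future<?>> futures = new HashMap<String, Future<?>>();

                for (final Task t : tasks) {
                    // Snapshot the dependency futures; assumes the list is in dependency order.
                    final List<Future<?>> depFutures = new ArrayList<Future<?>>();
                    for (String d : t.deps) depFutures.add(futures.get(d));

                    futures.put(t.name, pool.submit(new Callable<Void>() {
                        public Void call() throws Exception {
                            for (Future<?> f : depFutures) f.get();  // wait for dependencies first
                            t.work.run();                            // then do this task's own work
                            return null;
                        }
                    }));
                }

                for (Future<?> f : futures.values()) f.get();        // wait for everything to finish
                pool.shutdown();
            }
        }

    One caveat with this shape: with a small fixed pool, deep dependency chains can tie up every worker in get(), so the pool size (or a submission order that respects dependencies) needs some care.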

    Read the article

  • How to give higher priority to events generated from the main thread than to those generated from secondary threads

    - by martjno
    I have a C++ application written in wxWidgets which has a main thread (GUI) and a working thread (calculations). The working thread executes commands requested by the main thread and communicates the result to the main thread by posting an event after every step of the processing. The problem is that when the working thread is sending many events consecutively, GUI requests made by the user (e.g. interrupting the processing by clicking a button) won't be processed by the event handler until the working thread has finished. This is actually happening on OSX; on Windows it works perfectly. I've tried wxThread::SetPriority and wxThread::Yield but nothing changes. It works if I put wxThread::Sleep in the working thread, but this slows down the processing very much.

    Read the article

  • Deciding between subprocess, multiprocessing and thread in Python?

    - by user248237
    I'd like to parallelize my Python program so that it can make use of multiple processors on the machine that it runs on. My parallelization is very simple, in that all the parallel "threads" of the program are independent and write their output to separate files. I don't need the threads to exchange information, but it is imperative that I know when the threads finish, since some steps of my pipeline depend on their output. Portability is important, in that I'd like this to run on any Python version on Mac, Linux and Windows. Given these constraints, which is the most appropriate Python module for implementing this? I am trying to decide between thread, subprocess and multiprocessing, which all seem to provide related functionality. Any thoughts on this? I'd like the simplest solution that's portable. Thanks.

    Read the article

  • How can I limit access to a particular class to one caller at a time in an ASMX web service?

    - by MusiGenesis
    I have a web service method in which I create a particular type of object, use it for a few seconds, and then dispose it. Because of problems arising from multiple threads creating and using instances of this class at the same time, I need to restrict the method so that only one caller at a time ever has one of these objects. To do this, I am creating a private static object: private static object _lock = new object(); ... and then inside the web service method I do this around the critical code: lock (_lock) { using (DangerousObject do = new DangerousObject()) { do.MakeABigMess(); do.CleanItUp(); } } I'm not sure this is working, though. Do I have this right? Will this code ensure that only one instance of DangerousObject is instantiated and in use at a time? Or does each caller get their own copy of _lock, rendering my code here laughable?

    Read the article

  • Difference in output from use of synchronized keyword and join()

    - by user2964080
    I have 2 classes, public class Account { private int balance = 50; public int getBalance() { return balance; } public void withdraw(int amt){ this.balance -= amt; } } and public class DangerousAccount implements Runnable{ private Account acct = new Account(); public static void main(String[] args) throws InterruptedException{ DangerousAccount target = new DangerousAccount(); Thread t1 = new Thread(target); Thread t2 = new Thread(target); t1.setName("Ravi"); t2.setName("Prakash"); t1.start(); /* #1 t1.join(); */ t2.start(); } public void run(){ for(int i=0; i<5; i++){ makeWithdrawl(10); if(acct.getBalance() < 0) System.out.println("Account Overdrawn"); } } public void makeWithdrawl(int amt){ if(acct.getBalance() >= amt){ System.out.println(Thread.currentThread().getName() + " is going to withdraw"); try{ Thread.sleep(500); }catch(InterruptedException e){ e.printStackTrace(); } acct.withdraw(amt); System.out.println(Thread.currentThread().getName() + " has finished the withdrawl"); }else{ System.out.println("Not Enough Money For " + Thread.currentThread().getName() + " to withdraw"); } } } I tried adding synchronized keyword in makeWithdrawl method public synchronized void makeWithdrawl(int amt){ and I keep getting this output as many times I try Ravi is going to withdraw Ravi has finished the withdrawl Ravi is going to withdraw Ravi has finished the withdrawl Ravi is going to withdraw Ravi has finished the withdrawl Ravi is going to withdraw Ravi has finished the withdrawl Ravi is going to withdraw Ravi has finished the withdrawl Not Enough Money For Prakash to withdraw Not Enough Money For Prakash to withdraw Not Enough Money For Prakash to withdraw Not Enough Money For Prakash to withdraw Not Enough Money For Prakash to withdraw This shows that only Thread t1 is working... If I un-comment the the line saying t1.join(); I get the same output. So how does synchronized differ from join() ? If I don't use synchronize keyword or join() I get various outputs like Ravi is going to withdraw Prakash is going to withdraw Prakash has finished the withdrawl Ravi has finished the withdrawl Prakash is going to withdraw Ravi is going to withdraw Prakash has finished the withdrawl Ravi has finished the withdrawl Prakash is going to withdraw Ravi is going to withdraw Prakash has finished the withdrawl Ravi has finished the withdrawl Account Overdrawn Account Overdrawn Not Enough Money For Ravi to withdraw Account Overdrawn Not Enough Money For Prakash to withdraw Account Overdrawn Not Enough Money For Ravi to withdraw Account Overdrawn Not Enough Money For Prakash to withdraw Account Overdrawn So how does the output from synchronized differ from join() ?

    Read the article

  • Swingworker producing duplicate output/output out of order?

    - by Stefan Kendall
    What is the proper way to guarantee delivery when using a SwingWorker? I'm trying to route data from an InputStream to a JTextArea, and I'm running my SwingWorker with the execute method. I think I'm following the example here, but I'm getting out of order results, duplicates, and general nonsense. Here is my non-working SwingWorker:

        class InputStreamOutputWorker extends SwingWorker<List<String>,String> {
            private InputStream is;
            private JTextArea output;

            public InputStreamOutputWorker(InputStream is, JTextArea output) {
                this.is = is;
                this.output = output;
            }

            @Override
            protected List<String> doInBackground() throws Exception {
                byte[] data = new byte[4 * 1024];
                int len = 0;
                while ((len = is.read(data)) > 0) {
                    String line = new String(data).trim();
                    publish(line);
                }
                return null;
            }

            @Override
            protected void process( List<String> chunks ) {
                for( String s : chunks ) {
                    output.append(s + "\n");
                }
            }
        }
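
    One likely contributor to the duplicates in the code above (offered as a guess, not a confirmed diagnosis): new String(data) converts the whole 4 KB buffer on every pass, so bytes left over from earlier reads get republished. A minimal sketch of a doInBackground that converts only the bytes actually read, intended as a drop-in for the method in the class above and assuming the stream carries text in the default charset:

        @Override
        protected List<String> doInBackground() throws Exception {
            byte[] data = new byte[4 * 1024];
            int len;
            while ((len = is.read(data)) > 0) {
                // Convert only the len bytes returned by this read, not the whole buffer,
                // so stale bytes from previous reads are never republished.
                publish(new String(data, 0, len));
            }
            return null;
        }

    Wrapping the stream in a BufferedReader and publishing readLine() results would also sidestep multi-byte characters split across reads.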

    Read the article

  • Do condition variables still need a mutex if you're changing the checked value atomically?

    - by Joseph Garvin
    Here is the typical way to use a condition variable: // The reader(s) lock(some_mutex); if(protected_by_mutex_var != desired_value) some_condition.wait(some_mutex); unlock(some_mutex); // The writer lock(some_mutex); protected_by_mutex_var = desired_value; unlock(some_mutex); some_condition.notify_all(); But if protected_by_mutex_var is set atomically by say, a compare-and-swap instruction, does the mutex serve any purpose (other than that pthreads and other APIs require you to pass in a mutex)? Is it protecting state used to implement the condition? If not, is it safe then to do this?: // The writer protected_by_mutex_var = desired_value; some_condition.notify_all(); With the writer never directly interacting with the reader's mutex? If so, is it even necessary that different readers use the same mutex?

    Read the article

  • Interrupt an Http Request blocked in read() on Android

    - by twk
    Using the Apache Http stack on Android, I'm trying to force a thread out of a call to read. This is what the stack looks like: OSNetworkSystem.receiveStreamImpl(FileDescriptor, byte[], int, int, int) line: not available [native method] OSNetworkSystem.receiveStream(FileDescriptor, byte[], int, int, int) line: 478 PlainSocketImpl.read(byte[], int, int) line: 565 SocketInputStream.read(byte[], int, int) line: 87 SocketInputBuffer(AbstractSessionInputBuffer).fillBuffer() line: 103 SocketInputBuffer(AbstractSessionInputBuffer).read(byte[], int, int) line: 134 IdentityInputStream.read(byte[], int, int) line: 86 EofSensorInputStream.read(byte[], int, int) line: 159 Fetcher.readStream() line: 89 I've tried InputStream.close(), Thread.Interrupt(), and HttpUriRequest.abort(), without any success. Any ideas? I'm also open to some kind of non-blocking IO, but I don't see any way to do that with the HttpUriRequest object. Thanks!

    Read the article

  • Java: share data between threads

    - by ayush
    I have a Java process that reads data from a socket server, so I have a BufferedReader and a PrintWriter object corresponding to that socket. Now, in the same Java process, I have a multithreaded Java server that accepts client connections. I want to achieve a functionality where all these clients that I accept can read data from the BufferedReader object mentioned above (so that they can multiplex the data). How do I make these individual client threads read data from the single BufferedReader object? Sorry for the confusion.
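
    A minimal sketch of one common arrangement, under the assumption that "multiplex" means every client should see every line: a single thread owns the BufferedReader and copies each line into a per-client BlockingQueue, so the client threads never touch the reader directly (the class and method names here are made up for illustration):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.CopyOnWriteArrayList;
        import java.util.concurrent.LinkedBlockingQueue;

        class LineFanOut {
            private final List<BlockingQueue<String>> subscribers =
                    new CopyOnWriteArrayList<BlockingQueue<String>>();

            // Each client thread registers once and then drains its own queue.
            BlockingQueue<String> subscribe() {
                BlockingQueue<String> q = new LinkedBlockingQueue<String>();
                subscribers.add(q);
                return q;
            }

            // Only this thread ever reads from the shared BufferedReader.
            void start(final BufferedReader reader) {
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            String line;
                            while ((line = reader.readLine()) != null) {
                                for (BlockingQueue<String> q : subscribers) {
                                    q.offer(line);        // every subscriber gets its own copy
                                }
                            }
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                }, "socket-reader").start();
            }
        }

    Each client-handler thread would then loop on its queue's take() and forward whatever it receives to its own client socket.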

    Read the article

  • Java Swing Threading with Updatable JProgressBar

    - by Anthony Sparks
    First off, I've been working with Java's Concurrency package quite a bit lately, but I have found an issue that I am stuck on. I want to have an Application, and the Application can have a SplashScreen with a status bar while loading other data. So I decided to use SwingUtilities.invokeAndWait( call the splash component here ). The SplashScreen then appears with a JProgressBar and runs a group of threads. But I can't seem to get a good handle on things. I've looked over SwingWorker and tried using it for this purpose, but the thread just returns. Here is a bit of pseudo-code, and the points I'm trying to achieve:

    - Have an Application that has a SplashScreen that pauses while loading info
    - Be able to run multiple threads under the SplashScreen
    - Have the progress bar of the SplashScreen updatable, yet not exit until all threads are done

    Launching the splash screen:

        try {
            SwingUtilities.invokeAndWait( SplashScreen );
        } catch (InterruptedException e) {
        } catch (InvocationTargetException e) {
        }

    Splash screen construction:

        SplashScreen extends JFrame implements Runnable {
            public void run() {
                //run threads
                //while updating status bar
            }
        }

    I have tried many things, including SwingWorkers and Threads with CountDownLatches, and others. The CountDownLatches actually worked in the manner I wanted for the processing, but I was unable to update the GUI. When using the SwingWorkers, either the invokeAndWait was basically nullified (which is their purpose) or it still wouldn't update the GUI, even when using a PropertyChangeListener. If someone else has a couple of ideas it would be great to hear them. Thanks in advance.
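
    A minimal sketch of one way to wire this up, assuming the loading work can be expressed as a SwingWorker: the splash frame is shown on the EDT, the worker does the loading off the EDT and reports progress via setProgress, a PropertyChangeListener moves the JProgressBar, and the application only continues once done() runs. The timing and step count are placeholders for the real loading:

        import java.beans.PropertyChangeEvent;
        import java.beans.PropertyChangeListener;
        import javax.swing.*;

        class SplashLoader {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        final JFrame splash = new JFrame("Loading...");
                        final JProgressBar bar = new JProgressBar(0, 100);
                        splash.add(bar);
                        splash.pack();
                        splash.setVisible(true);

                        SwingWorker<Void, Void> loader = new SwingWorker<Void, Void>() {
                            @Override protected Void doInBackground() throws Exception {
                                for (int step = 1; step <= 10; step++) {
                                    Thread.sleep(200);          // stand-in for real loading work
                                    setProgress(step * 10);     // fires a "progress" property change
                                }
                                return null;
                            }
                            @Override protected void done() {
                                splash.dispose();               // runs on the EDT after loading
                                // ...show the main application window here...
                            }
                        };
                        loader.addPropertyChangeListener(new PropertyChangeListener() {
                            public void propertyChange(PropertyChangeEvent evt) {
                                if ("progress".equals(evt.getPropertyName())) {
                                    bar.setValue((Integer) evt.getNewValue());
                                }
                            }
                        });
                        loader.execute();
                    }
                });
            }
        }

    If the loading really is several independent threads, doInBackground can start them and wait on a CountDownLatch, calling setProgress as each one finishes.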

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >