Search Results

Search found 1638 results on 66 pages for 'multithreading'.

Page 22/66 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Java multithreaded server - each connection returns data. Processing on main thread?

    - by oliwr
    I am writing a client with an integrated server that should wait indefinitely for new connections and handle each one on its own thread. I want to process the received byte array in a system-wide available message handler on the main thread. However, currently the processing is obviously done on the client thread. I've looked at Futures and submit() of ExecutorService, but as I create my client connections within the Server, the data would be returned to the Server thread. How can I return it from there onto the main thread (in a synchronized packet store maybe?) to process it without blocking the server? My current implementation looks like this:

        public class Server extends Thread {
            private int port;
            private ExecutorService threadPool;

            public Server(int port) {
                this.port = port;
                // 50 simultaneous connections
                threadPool = Executors.newFixedThreadPool(50);
            }

            public void run() {
                try {
                    ServerSocket listener = new ServerSocket(this.port);
                    System.out.println("Listening on Port " + this.port);
                    Socket connection;
                    while (true) {
                        try {
                            connection = listener.accept();
                            System.out.println("Accepted client " + connection.getInetAddress());
                            connection.setSoTimeout(4000);
                            ClientHandler conn_c = new ClientHandler(connection);
                            threadPool.execute(conn_c);
                        } catch (IOException e) {
                            System.out.println("IOException on connection: " + e);
                        }
                    }
                } catch (IOException e) {
                    System.out.println("IOException on socket listen: " + e);
                    e.printStackTrace();
                    threadPool.shutdown();
                }
            }
        }

        class ClientHandler implements Runnable {
            private Socket connection;

            ClientHandler(Socket connection) {
                this.connection = connection;
            }

            @Override
            public void run() {
                try {
                    // Read data from the InputStream, buffered
                    int count;
                    byte[] buffer = new byte[8192];
                    InputStream is = connection.getInputStream();
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    // While there is data in the stream, read it
                    while ((count = is.read(buffer)) > 0) {
                        out.write(buffer, 0, count);
                    }
                    is.close();
                    out.close();
                    System.out.println("Disconnect client " + connection.getInetAddress());
                    connection.close();
                    // handle the received data
                    MessageHandler.handle(out.toByteArray());
                } catch (IOException e) {
                    System.out.println("IOException on socket read: " + e);
                    e.printStackTrace();
                }
            }
        }
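    One possible shape for the hand-off (a minimal sketch, not the poster's code: the MessagePump class and its method names are illustrative assumptions) is a shared BlockingQueue. Each ClientHandler enqueues the finished byte array instead of calling MessageHandler directly, and the main thread blocks on take():

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class MessagePump {
            // Shared, thread-safe hand-off point between client threads and the main thread.
            private static final BlockingQueue<byte[]> INBOX = new LinkedBlockingQueue<>();

            // Called from ClientHandler.run() in place of MessageHandler.handle(...).
            public static void post(byte[] data) throws InterruptedException {
                INBOX.put(data);
            }

            // Run this loop on the main thread; take() blocks until a worker posts data.
            public static void drainForever() throws InterruptedException {
                while (true) {
                    byte[] message = INBOX.take();
                    MessageHandler.handle(message);   // MessageHandler is the poster's own class
                }
            }
        }

    The server threads never touch the handler, so processing stays on the main thread and accept() is never blocked by message handling.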

    Read the article

  • SendMessage to window created by AllocateHWND cause deadlock

    - by user2704265
    In my Delphi project I derive a thread class TMyThread and, following advice from forums, use AllocateHWnd to create a window handle. In the TMyThread object I call SendMessage to send messages to that window handle. When the messages sent are in small volume, the application works well. However, when the messages are in large volume, the application deadlocks and stops responding. I suspect the message queue may be full: in LogWndProc there is only code to process the message, but no code to remove messages from the queue, so perhaps the already-processed messages remain in the queue until it fills up. Is that correct? The code is attached below:

        var
          hLogWnd: HWND = 0;

        procedure TForm1.FormCreate(Sender: TObject);
        begin
          hLogWnd := AllocateHWnd(LogWndProc);
        end;

        procedure TForm1.FormDestroy(Sender: TObject);
        begin
          if hLogWnd <> 0 then
            DeallocateHWnd(hLogWnd);
        end;

        procedure TForm1.LogWndProc(var Message: TMessage);
        var
          S: PString;
        begin
          if Message.Msg = WM_UPDATEDATA then
          begin
            S := PString(Message.LParam);
            try
              List1.Items.Add(S^);
            finally
              Dispose(S);
            end;
          end
          else
            Message.Result := DefWindowProc(hLogWnd, Message.Msg, Message.WParam, Message.LParam);
        end;

        procedure TMyThread.SendLog(I: Integer);
        var
          Log: PString;
        begin
          New(Log);
          Log^ := 'Log: current stag is ' + IntToStr(I);
          SendMessage(hLogWnd, WM_UPDATEDATA, 0, LPARAM(Log));
          Dispose(Log);
        end;

    Read the article

  • AppDomain.CurrentDomain.DomainUnload not raised in a console app

    - by Guy
    I have an assembly that, when accessed, spins up a single thread to process items placed on a queue. In that assembly I attach a handler to the DomainUnload event:

        AppDomain.CurrentDomain.DomainUnload += new EventHandler(CurrentDomain_DomainUnload);

    That handler joins the thread to the main thread so that all items on the queue can complete processing before the application terminates. The problem I am experiencing is that the DomainUnload event is not fired when the console application terminates. Any ideas why this would be? I'm using .NET 3.5 and C#.

    Read the article

  • Controlling thread flow

    - by owca
    I had a task to write a simple game simulating two players picking up 1-3 matches one after another until the pile is gone. I managed to do it with the computer choosing a random number of matches, but now I'd like to go further and allow humans to play the game. Here's what I already have: http://paste.pocoo.org/show/201761/ Class Player is a computer player, and PlayerMan should be a human player. The problem is that the PlayerMan thread should wait until a proper number of matches is entered, but I cannot make it work this way. The logic is as follows: the thread runs until the number of matches equals zero. If the player number is the current one, pickMatches() is called. After decreasing the number of matches on the table, the thread should wait and the other thread should be notified. I know I must use wait() and notify() but I can't place them right. The Shared class keeps the value of the current player and also the number of matches.

        public void suspendThread() {
            suspended = true;
        }

        public void resumeThread() {
            suspended = false;
        }

        @Override
        public void run() {
            int matches = 1;
            int which = 0;
            int tmp = 0;
            Shared data = this.selectData();
            String name = this.returnName();
            int number = this.getNumber();
            while (data.getMatches() != 0) {
                while (!suspended) {
                    try {
                        which = data.getCurrent();
                        if (number == which) {
                            matches = pickMatches();
                            tmp = data.getMatches() - matches;
                            data.setMatches(tmp, number);
                            if (data.getMatches() == 0) {
                                System.out.println(" " + name + " takes " + matches + " matches.");
                                System.out.println("Winner is player: " + name);
                                stop();
                            }
                            System.out.println(" " + name + " takes " + matches + " matches.");
                            if (number != 0) {
                                data.setCurrent(0);
                            } else {
                                data.setCurrent(1);
                            }
                        }
                        this.suspendThread();
                        notifyAll();
                        wait();
                    } catch (InterruptedException exc) {}
                }
            }
        }

        @Override
        synchronized public int pickMatches() {
            Scanner scanner = new Scanner(System.in);
            int n = 0;
            Shared data = this.selectData();
            System.out.println("Choose amount of matches (from 1 to 3): ");
            if (data.getMatches() == 1) {
                System.out.println("There's only 1 match left !");
                while (n != 1) {
                    n = scanner.nextInt();
                }
            } else {
                do {
                    n = scanner.nextInt();
                } while (n <= 1 && n >= 3);
            }
            return n;
        }
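    A minimal sketch of the turn-taking pattern this seems to be after (the Shared class below is a simplified stand-in for the poster's own, so its fields and methods are assumptions): both player threads synchronize on the shared object, wait() while it is not their turn, and notifyAll() after taking matches.

        // Sketch: turn-taking with wait()/notifyAll() on a shared monitor.
        class Shared {
            private int matches = 21;
            private int current = 0;          // whose turn it is: player 0 or player 1

            synchronized int take(int player, int n) throws InterruptedException {
                while (current != player && matches > 0) {
                    wait();                   // not my turn yet: release the lock and sleep
                }
                if (matches > 0) {
                    matches -= Math.min(n, matches);
                    current = 1 - player;     // pass the turn
                }
                notifyAll();                  // wake the other player's thread
                return matches;
            }
        }

    Each player's run() then simply loops calling take(myNumber, pickMatches()) until it returns zero; no suspended flags or Thread.sleep polling are needed.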

    Read the article

  • Raise event from http listener (Async listener handler)

    - by Sean
    Hello, I have created a simple web server, leveraging the .NET HttpListener class. I am using ThreadPool.QueueUserWorkItem() to spawn a thread that listens for incoming requests. The threaded method uses HttpListener.BeginGetContext(callback, listener), and in the callback method I resume with HttpListener.EndGetContext() as well as raise an event to notify the UI that the listener received data. This is the question - how to raise that event? Initially I used the ThreadPool:

        ThreadPool.QueueUserWorkItem(state => ReceivedRequest(httpListenerContext, receivedRequestArgs));

    But then I started to doubt; maybe it should be a dedicated thread (as opposed to waiting for a thread from the pool):

        new Thread(() => ReceivedRequest(httpListenerContext, receivedRequestArgs)).Start();

    Thoughts? Thanks.

    Read the article

  • Deadlock sample in .NET?

    - by DotNetBeginner
    Can anybody give a simple deadlock code sample in C#? And please tell me the simplest way to find the deadlock in that sample (maybe a tool that will detect the deadlock in the given code). NOTE: I have VS 2008.
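    The question asks for C#, but the classic shape is language-agnostic: two threads acquire the same two locks in opposite order. A minimal sketch of that lock-ordering deadlock, written here in Java purely for illustration (it is not C# and not tied to any particular detection tool):

        public class DeadlockDemo {
            private static final Object lockA = new Object();
            private static final Object lockB = new Object();

            public static void main(String[] args) {
                // Thread 1 takes lockA then lockB; Thread 2 takes lockB then lockA.
                // If each grabs its first lock before the other releases, both wait forever.
                Thread t1 = new Thread(() -> {
                    synchronized (lockA) {
                        pause();
                        synchronized (lockB) { System.out.println("t1 got both locks"); }
                    }
                });
                Thread t2 = new Thread(() -> {
                    synchronized (lockB) {
                        pause();
                        synchronized (lockA) { System.out.println("t2 got both locks"); }
                    }
                });
                t1.start();
                t2.start();
            }

            private static void pause() {
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            }
        }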

    Read the article

  • How do you implement Software Transactional Memory?

    - by Joseph Garvin
    In terms of actual low level atomic instructions and memory fences (I assume they're used), how do you implement STM? The part that's mysterious to me is that given some arbitrary chunk of code, you need a way to go back afterward and determine if the values used in each step were valid. How do you do that, and how do you do it efficiently? This would also seem to suggest that just like any other 'locking' solution you want to keep your critical sections as small as possible (to decrease the probability of a conflict), am I right? Also, can STM simply detect "another thread entered this area while the computation was executing, therefore the computation is invalid" or can it actually detect whether clobbered values were used (and thus by luck sometimes two threads may execute the same critical section simultaneously without need for rollback)?
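    One common answer, reduced to a sketch: give every transactional location a version number, record the versions observed in a read set, buffer writes, and at commit time re-check the read set under a lock; if any version moved, abort and retry. The Java classes below are illustrative assumptions that show only this validation step, not a production STM (real implementations such as TL2 use a global version clock, per-location versioned locks, and memory fences):

        import java.util.HashMap;
        import java.util.Map;

        // Minimal sketch of commit-time validation for an STM.
        class StmValidationSketch {
            static class Ref {
                volatile int value;
                volatile long version;   // bumped on every committed write
            }

            static class Transaction {
                private final Map<Ref, Long> readSet = new HashMap<>();
                private final Map<Ref, Integer> writeSet = new HashMap<>();

                int read(Ref ref) {
                    readSet.put(ref, ref.version);   // remember the version we saw
                    return ref.value;
                }

                void write(Ref ref, int newValue) {
                    writeSet.put(ref, newValue);     // buffer the write; nothing is visible yet
                }

                boolean commit() {
                    synchronized (StmValidationSketch.class) {   // stand-in for finer-grained locking
                        for (Map.Entry<Ref, Long> e : readSet.entrySet()) {
                            if (e.getKey().version != e.getValue()) {
                                return false;        // something we read was overwritten: abort, caller retries
                            }
                        }
                        for (Map.Entry<Ref, Integer> e : writeSet.entrySet()) {
                            e.getKey().value = e.getValue();
                            e.getKey().version++;    // publish the new value under a new version
                        }
                        return true;
                    }
                }
            }
        }

    This also bears on the last part of the question: because validation compares versions rather than merely detecting that another thread entered the section, two transactions touching disjoint locations (or reading values nobody overwrote) can both commit without a rollback.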

    Read the article

  • Do .NET Timers Run Asynchronously?

    - by MrEdmundo
    I have a messaging aspect of my application using Jabber-net (an XMPP library). What I would like to do, if for some reason the connection to the server is lost, is keep trying to reconnect every minute or so. If I start a Timer to wait for a period of time before the next attempt, does that timer run asynchronously, with the resulting Tick event joining the main thread, or would I need to start my own thread and start the timer from within there?

    Read the article

  • C# thread with multiple parameters

    - by Lucas B
    Does anyone know how to pass multiple parameters into a Thread.Start routine? I thought of extending the class, but the C# Thread class is sealed. Here is what I think the code would look like:

        ...
        Thread standardTCPServerThread = new Thread(startSocketServerAsThread);
        standardServerThread.Start(orchestrator, initializeMemberBalance, arg, 60000);
        ...
        }

        static void startSocketServerAsThread(ServiceOrchestrator orchestrator, List<int> memberBalances, string arg, int port)
        {
            startSocketServer(orchestrator, memberBalances, arg, port);
        }

    Thank you in advance. BTW, I start a number of threads with different orchestrators, balances and ports. Please consider thread safety also.

    Read the article

  • Is it possible that a single-threaded program is executed simultaneously on more than one CPU core?

    - by Wolfgang Plaschg
    When I run a single-threaded program that I have written on my quad-core Intel, I can see in the Windows Task Manager that actually all four cores of my CPU are more or less active. One core is more active than the other three, but there is also activity on those. There's no other program running (besides the OS kernel, of course) that would plausibly account for that activity. And when I close my program, the activity on all cores drops down to nearly zero. All that is left is a little "noise" on the cores, so I'm pretty sure all the visible activity comes directly or indirectly (like invoking system routines) from my program. Is it possible that the OS or the cores themselves try to balance some code or execution across all four cores, even though it's not a multithreaded program? Do you have any links that document this technique? Some info about the program: it's a console app written in Qt, and the Task Manager states that only one thread is running. Maybe Qt uses threads, but I don't use signals or slots, nor any GUI. Link to Task Manager screenshot: http://img97.imageshack.us/img97/6403/taskmanager.png This question is language-agnostic and not tied to Qt/C++; I just want to know whether Windows or Intel balance single-threaded code across all cores. If they do, how does this technique work? All I can think of is that kernel routines, like reading from disk etc., are scheduled on other cores, but this won't improve performance significantly since the code still has to run synchronously with the kernel API calls. EDIT: Do you know any tools that do a better analysis of single- and/or multi-threaded programs than the poor Windows Task Manager?

    Read the article

  • How can I limit access to a particular class to one caller at a time in a web service?

    - by MusiGenesis
    I have a web service method in which I create a particular type of object, use it for a few seconds, and then dispose it. Because of problems arising from multiple threads creating and using instances of this class at the same time, I need to restrict the method so that only one caller at a time ever has one of these objects. To do this, I am creating a private static object:

        private static object _lock = new object();

    ... and then inside the web service method I do this around the critical code:

        lock (_lock)
        {
            using (DangerousObject do = new DangerousObject())
            {
                do.MakeABigMess();
                do.CleanItUp();
            }
        }

    I'm not sure this is working, though. Do I have this right? Will this code ensure that only one instance of DangerousObject is instantiated and in use at a time?

    Read the article

  • Controllers and threads

    - by user72185
    Hi, I'm seeing this code in a project (ASP.NET MVC 2.0) and I wonder if it is safe to do:

        class MyController
        {
            void ActionResult SomeAction()
            {
                System.Threading.Thread newThread = new System.Threading.Thread(AsyncFunc);
                newThread.Start();
            }

            void AsyncFunc()
            {
                string someString = HttpContext.Request.UrlReferrer.Authority
                    + Url.Action("Index", new { controller = "AnotherAction" });
            }
        }

    Is the controller reused, possibly changing the content of HttpContext.Request and Url, or is this fine (except for not using the thread pool)? Thanks for the info!

    Read the article

  • Abort SAX parsing mid-document?

    - by CSharperWithJava
    I'm parsing a very simple XML schema with a SAX parser in Android. An example file would be:

        <Lists>
          <List name="foo">
            <Note title="note 1" .../>
            <Note title="note 2" .../>
          </List>
          <List name="bar">
            <Note title="note 3" .../>
          </List>
        </Lists>

    The ... represents more note data as attributes that aren't important to the question. I use a SAX parser to parse the document and only implement the startElement and endElement methods of the HandlerBase to handle Note and List nodes. However, in some cases the files can be very large and take some time to process. I'd like to be able to abort the parsing process at any time (i.e. when the user presses a cancel button). The best way I've come up with is to throw an exception from my startElement method when certain conditions are met (i.e. a boolean stopParsing is true). Is there a better way to do this? I've always used DOM-style parsers, so I don't fully understand the SAX parser. One final note: I'm running this on Android, so I will have the parser running on a worker thread to keep the UI responsive. If you know how I can kill the thread safely while the parser is running, that would answer my question as well.
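    For what it's worth, the exception route described is the usual way to stop a SAX parse early; a minimal sketch (the handler name and cancel flag are illustrative assumptions) looks like this:

        import org.xml.sax.Attributes;
        import org.xml.sax.SAXException;
        import org.xml.sax.helpers.DefaultHandler;

        // Sketch: cooperative cancellation of a SAX parse by throwing from a callback.
        class CancellableHandler extends DefaultHandler {
            // Set from the UI thread when the user presses cancel.
            volatile boolean cancelled = false;

            @Override
            public void startElement(String uri, String localName, String qName, Attributes attrs)
                    throws SAXException {
                if (cancelled) {
                    throw new SAXException("Parse cancelled by user");  // unwinds out of parse()
                }
                // ... normal handling of List / Note elements ...
            }
        }

        // On the worker thread:
        //   try { parser.parse(input, handler); }
        //   catch (SAXException e) { /* either a real error or a user cancel: clean up and exit */ }

    Because the worker thread simply returns after the exception, there is no need to kill it forcibly.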

    Read the article

  • Pipe data from InputStream to OutputStream in Java

    - by Wangnick
    Dear all, I'd like to send a file contained in a ZIP archive unzipped to an external program for further decoding and to read the result back into Java.

        ZipInputStream zis = new ZipInputStream(new FileInputStream(ZIPPATH));
        Process decoder = new ProcessBuilder(DECODER).start();

        ???

        BufferedReader br = new BufferedReader(new InputStreamReader(
                decoder.getInputStream(), "us-ascii"));
        for (String line = br.readLine(); line != null; line = br.readLine()) {
            ...
        }

    What do I need to put into ??? to pipe the zis content to decoder.getOutputStream()? I guess a dedicated thread is needed, as the decoder process might block when its output is not consumed.
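    A dedicated pump thread is indeed the usual shape. A minimal sketch that could go in place of the ??? (buffer size and variable names are arbitrary; it assumes zis is already positioned on the desired entry via getNextEntry(), and that zis is final or effectively final so it can be captured):

        // Copy the unzipped entry to the decoder's stdin on a separate thread, so the
        // main thread can concurrently read the decoder's stdout without deadlocking.
        final OutputStream decoderIn = decoder.getOutputStream();
        Thread pump = new Thread(new Runnable() {
            public void run() {
                try {
                    byte[] buffer = new byte[8192];
                    int count;
                    while ((count = zis.read(buffer)) > 0) {
                        decoderIn.write(buffer, 0, count);
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    try { decoderIn.close(); } catch (IOException ignored) {}  // signals EOF to the decoder
                }
            }
        });
        pump.start();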

    Read the article

  • Limiting one of each Runnable type in ExecutorService queue.

    - by Andrew
    I have an Executors.newFixedThreadPool(1) that I send several different tasks to (all implementing Runnable), and they get queued up and run sequentially, correct? What is the best way to allow only one of each task type to be either running or queued up at any one time? I want to ignore all tasks sent to the ExecutorService that are already in the queue.
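    One possible shape (a sketch; the wrapper class and the choice of the task's Class object as the key are assumptions): track which task types are pending in a concurrent set, submit only when the key was newly added, and remove the key when the task finishes.

        import java.util.Set;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        // Sketch: drop a task if one of the same type is already queued or running.
        class DedupingExecutor {
            private final ExecutorService pool = Executors.newFixedThreadPool(1);
            private final Set<Class<?>> pending = ConcurrentHashMap.newKeySet();

            void submit(final Runnable task) {
                final Class<?> key = task.getClass();    // "one of each type" keyed by class here
                if (!pending.add(key)) {
                    return;                              // same type already queued or running: ignore it
                }
                pool.execute(new Runnable() {
                    public void run() {
                        try {
                            task.run();
                        } finally {
                            pending.remove(key);         // allow the next task of this type in
                        }
                    }
                });
            }
        }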

    Read the article

  • Using ThreadPool.QueueUserWorkItem - thread unexpectedly exits

    - by alex
    I have the following method:

        public void PutFile(string ID, Stream content)
        {
            try
            {
                ThreadPool.QueueUserWorkItem(o => putFileWorker(ID, content));
            }
            catch (Exception ex)
            {
                OnPutFileError(this, new ExceptionEventArgs { Exception = ex });
            }
        }

    The putFileWorker method looks like this:

        private void putFileWorker(string ID, Stream content)
        {
            // Get bucket name:
            var bucketName = getBucketName(ID)
                .ToLower();

            // Get file key
            var fileKey = getFileKey(ID);

            try
            {
                // If the bucket doesn't exist, create it
                if (!Amazon.S3.Util.AmazonS3Util.DoesS3BucketExist(bucketName, s3client))
                    s3client.PutBucket(new PutBucketRequest { BucketName = bucketName, BucketRegion = S3Region.EU });

                PutObjectRequest request = new PutObjectRequest();
                request.WithBucketName(bucketName)
                    .WithKey(fileKey)
                    .WithInputStream(content);

                S3Response response = s3client.PutObject(request);
                var xx = response.Headers;

                OnPutFileCompleted(this, new ValueEventArgs { Value = ID });
            }
            catch (Exception e)
            {
                OnPutFileError(this, new ExceptionEventArgs { Exception = e });
            }
        }

    I've created a little console app to test this. I wire up event handlers for the OnPutFileError and OnPutFileCompleted events. If I call my PutFile method and step into this, it gets to the "// If the bucket doesn't exist, create it" line, then exits. No exception, no errors, nothing. It doesn't complete (I've set breakpoints on my event handlers too) - it just exits. If I run the same method without the ThreadPool.QueueUserWorkItem then it runs fine... Am I missing something?

    Read the article

  • Python thread pool similar to the multiprocessing Pool?

    - by Martin
    Is there a Pool class for worker threads, similar to the multiprocessing module's Pool class? I like, for example, the easy way to parallelize a map function:

        def long_running_func(p):
            c_func_no_gil(p)

        p = multiprocessing.Pool(4)
        xs = p.map(long_running_func, range(100))

    However, I would like to do it without the overhead of creating new processes. I know about the GIL. However, in my use case, the function will be an IO-bound C function for which the Python wrapper will release the GIL before the actual function call. Do I have to write my own threading pool?

    Read the article

  • Some more multitasking Java issues

    - by owca
    I had a task to write a simple game simulating two players picking up 1-3 matches one after another until the pile is gone. I managed to do it with the computer choosing a random number of matches, but now I'd like to go further and allow humans to play the game. Here's what I already have: http://paste.pocoo.org/show/200660/ Class Player is a computer player, and PlayerMan should be a human player. The problem is that the PlayerMan thread should wait until a proper number of matches is entered, but I cannot make it work this way. When I type the values it sometimes catches them and decreases the number of matches, but that's not exactly what I was up to :) The logic is: I check the value of the current player. If it corresponds to that of the currently active thread, I use a Scanner to read the number of matches; otherwise I wait one second (I know it's a harsh solution, but I have no other idea how to do it). The Shared class keeps the value of the current player and also the number of matches. By the way, is there any way I can make the Player and Shared attributes private instead of public and still make the code work? CONSOLE and INPUT_DIALOG are just for choosing the way of entering values.

        class PlayerMan extends Player {
            static final int CONSOLE = 0;
            static final int INPUT_DIALOG = 1;
            private int input;

            public PlayerMan(String name, Shared data, int c) {
                super(name, data);
                input = c;
            }

            @Override
            public void run() {
                Scanner scanner = new Scanner(System.in);
                int n = 0;
                System.out.println("Matches on table: " + data.matchesAmount);
                System.out.println("which: " + data.which);
                System.out.println("number: " + number);
                while (data.matchesAmount != 0) {
                    if (number == data.which) {
                        System.out.println("Choose amount of matches (from 1 to 3): ");
                        n = scanner.nextInt();
                        if (data.matchesAmount == 1) {
                            System.out.println("There's only 1 match left !");
                            while (n != 1) {
                                n = scanner.nextInt();
                            }
                        } else {
                            do {
                                n = scanner.nextInt();
                            } while (n <= 1 && n >= 3);
                        }
                        data.matchesAmount = data.matchesAmount - n;
                        System.out.println(" " + name + " takes " + n + " matches.");
                        if (number != 0) {
                            data.which = 0;
                        } else {
                            data.which = 1;
                        }
                    } else {
                        try {
                            Thread.sleep(1000);
                        } catch (InterruptedException exc) {
                            System.out.println("End of thread.");
                            return;
                        }
                    }
                    System.out.println("Matches on table: " + data.matchesAmount);
                }
                if (data.matchesAmount == 0) {
                    System.out.println("Winner is player: " + name);
                    stop();
                }
            }
        }

    Read the article

  • Clojure best way to achieve multiple threads?

    - by user286702
    Hello! I am working on a MUD client written in Clojure. Right now I need two different threads: one that receives input from the user and sends it out to the MUD (via a simple Socket), and one that reads output from the MUD and displays it to the user. Should I just use Java Threads, or is there some Clojure-specific feature I should be turning to?

    Read the article

  • Asynchronous vs Synchronous vs Threading in an iPhone App

    - by Coocoo4Cocoa
    I'm in the design stage for an app which will utilize a REST web service, and I have a bit of a dilemma as far as using asynchronous vs. synchronous requests vs. threading. Here's the scenario. Say you have three options to drill down into, each one having its own REST-based resource. I could lazily load each one with a synchronous request, but that will block the UI and prevent the user from hitting a back navigation button while data is retrieved. This case applies almost anywhere except for when your application requires a login screen. I can't see any reason to use synchronous HTTP requests vs. asynchronous because of that reason alone. The only time it makes sense is to have a worker thread make your synchronous request and notify the main thread when the request is done. This will prevent the blocking. The question then is benchmarking your code and seeing which has more overhead, a threaded synchronous request or an asynchronous request. The problem with asynchronous requests is you need to set up either a smart notification or delegate system, as you can have multiple requests for multiple resources happening at any given time. The other problem with them is if I have a class, say a singleton which is handling all of my data, I can't use asynchronous requests in a getter method. Meaning the following won't go:

        - (NSArray *)users {
            if (users == nil)
                users = do_async_request // NO GOOD
            return users;
        }

    whereas the following will:

        - (NSArray *)users {
            if (users == nil)
                users = do_sync_request // OK.
            return users;
        }

    You also might have priority. What I mean by priority is, if you look at Apple's Mail application on the iPhone, you'll notice they first suck down your entire POP/IMAP tree before making a second request to retrieve the first 2 lines (the default) of your message. I suppose my question to you experts is this: when are you using asynchronous, synchronous, threads -- and when are you using async/sync in a thread? What kind of delegation system do you have set up to know what to do when an async request completes? Are you prioritizing your async requests? There's a gamut of solutions to this all-too-common problem. It's simple to hack something out. The problem is, I don't want to hack, and I want to have something that's simple and easy to maintain.

    Read the article

  • Best way to fetch data from a single database table with multiple threads?

    - by Ravi Bhatt
    Hi, we have a system where we collect data every second on user activity on multiple web sites. We dump that data into a database X (say MS SQL Server). We now need to fetch data from this single table in database X and insert it into database Y (say MySQL). We want to fetch time-based data from database X through multiple threads so that we fetch as fast as we can. Once fetched and stored in database Y, we will delete the data from database X. Are there any best practices for this sort of design? Any specific things to take care of in table design, like sharding or something? Are there any other things we need to take care of to make sure we fetch it as fast as we can from threads running on multiple machines? Thanks in advance! Ravi

    Read the article

  • Django - Threading in views without hanging the server

    - by bobthabuilda
    One of my applications in my Django project requires each request/visitor to that instance to have its own thread. This might sound confusing, so I'll describe what I'm looking to accomplish in a case-based scenario, with steps:

    1. User visits the application
    2. Thread starts
    3. Until the thread finishes, that user's server instance hangs
    4. Once the thread completes, a response is delivered to the user
    5. Other visitors to the site should not be affected by any other users using the application

    How can I accomplish something like this? If possible, I'd like to find a lightweight solution.

    Read the article

  • Volatile or synchronized for primitive type?

    - by DKSRathore
    In Java, assignment is atomic if the size of the variable is less than or equal to 32 bits, but it is not if the variable is larger than 32 bits. Which would be more efficient to use for a double or long assignment, volatile or synchronized? For example:

        volatile double x = y;

    synchronized is not applicable to a primitive argument. How do I use synchronized in this case? Of course I don't want to lock my whole class, so that should not be used.
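    For context, a small sketch of the usual options (the field names are illustrative assumptions): a volatile long or double makes single reads and writes of that field atomic and visible, synchronizing on a private lock object (rather than on the class or on this) covers compound read-modify-write updates, and the java.util.concurrent.atomic classes are a third option.

        import java.util.concurrent.atomic.AtomicLong;

        // Sketch of three ways to make 64-bit reads/writes safe across threads.
        class Counters {
            // 1) volatile: individual reads and writes of this field are atomic and visible.
            private volatile long lastTimestamp;

            void setLastTimestamp(long t) { lastTimestamp = t; }
            long getLastTimestamp()       { return lastTimestamp; }

            // 2) synchronized on a private lock: needed when the update is compound.
            private final Object lock = new Object();
            private double total;

            void addToTotal(double amount) {
                synchronized (lock) { total += amount; }
            }
            double getTotal() {
                synchronized (lock) { return total; }
            }

            // 3) AtomicLong: lock-free atomic updates for long values.
            private final AtomicLong hits = new AtomicLong();

            void recordHit() { hits.incrementAndGet(); }
        }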

    Read the article

< Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >