Search Results

Search found 1638 results on 66 pages for 'multithreading'.

Page 37 of 66

  • Threading vs single thread

    - by user177883
    Is it always guaranteed that a multi-threaded application will run faster than a single-threaded one? I have two threads that populate data from a data source, but for different entities (e.g. a database, from two different tables), and the single-threaded version of the application seems to run faster than the version with two threads. What could the reason be? When I look at the performance monitor, both CPUs are very spiky. Is this due to context switching? What are the best practices for fully utilizing the CPU? I hope this is not ambiguous.

    Read the article

  • Why are functional languages considered a boon for multi threaded environments?

    - by Billy ONeal
    I hear a lot about functional languages and how they scale well because there is no state around a function, and therefore that functions can be massively parallelized. However, this makes little sense to me because almost all real-world practical programs need or have state to take care of. I also find it interesting that most major scaling libraries, e.g. MapReduce, are typically written in imperative languages like C or C++. I'd like to hear from the functional camp where this hype is coming from.

    Read the article

  • help me reason about F# threads

    - by Kevin Cantu
    In goofing around with some F# (via MonoDevelop), I have written a routine which lists files in a directory with one thread:

        let rec loop (path:string) =
            Array.append
                ( path |> Directory.GetFiles )
                ( path |> Directory.GetDirectories |> Array.map loop |> Array.concat )

    And then an asynchronous version of it:

        let rec loopPar (path:string) =
            Array.append
                ( path |> Directory.GetFiles )
                ( let paths = path |> Directory.GetDirectories
                  if paths <> [||] then
                      [| for p in paths -> async { return (loopPar p) } |]
                      |> Async.Parallel
                      |> Async.RunSynchronously
                      |> Array.concat
                  else [||] )

    On small directories, the asynchronous version works fine. On bigger directories (e.g. many thousands of directories and files), the asynchronous version seems to hang. What am I missing? I know that creating thousands of threads is never going to be the most efficient solution -- I only have 8 CPUs -- but I am baffled that for larger directories the asynchronous function just doesn't respond (even after a half hour). It doesn't visibly fail, though, which baffles me. Is there a thread pool which is exhausted? How do these threads actually work?

    Read the article

  • Java methods and race conditions in a JSP/servlets application

    - by A.S al-shammari
    Hi. Suppose that I have a method called doSomething() and I want to use this method in a multithreaded application (each servlet inherits from HttpServlet). I'm wondering if it is possible that a race condition will occur in the following cases: doSomething() is not a static method and writes values to a database; doSomething() is a static method but does not write values to a database. I have noticed that many methods in my application may lead to a race condition or a dirty read/write. For example, I have a poll system, and for each voting operation a certain method will change a single cell value for that poll, like this: [poll_id | poll_data] [1 | {choice_1 : 10, choice_2 : 20}]. Will the JSP/servlets app solve these issues by itself, or do I have to solve all that myself? Thanks.
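
    A minimal Java sketch of the race the question is about: servlet instances are shared across request threads, so any read-modify-write on shared state (a static counter here stands in for the poll cell) can interleave. The servlet and field names are hypothetical.

        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical servlet: one instance of this class serves many request threads.
        public class VoteServlet extends HttpServlet {
            private static int choice1Votes = 0;               // shared by all request threads
            private static final Object lock = new Object();

            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
                // Unsafe: two threads can read the same value and both write value + 1,
                // losing a vote. Static vs. non-static does not change this; what matters
                // is that the data is shared and the update is read-modify-write.
                choice1Votes = choice1Votes + 1;
            }

            // One way to make the update atomic: synchronize on a shared lock.
            static void recordVoteSafely() {
                synchronized (lock) {
                    choice1Votes = choice1Votes + 1;
                }
            }
        }

    The container does not serialize these calls for you, and the same reasoning applies to the poll row in the database, where the usual fix is a single atomic UPDATE or an explicit transaction rather than read-then-write from Java.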

    Read the article

  • C++ threaded class design from non-threaded class

    - by macs
    I'm working on a library that does audio encoding/decoding. The encoder shall be able to use multiple cores (i.e. multiple threads, using the Boost library), if available. What I have right now is a class that performs all encoding-relevant operations. The next step I want to take is to make that class threaded, so I'm wondering how to do this. I thought about writing a thread class, creating n threads for n cores and then calling the encoder with the appropriate arguments. But maybe this is overkill and there is no need for another class, in which case I would make use of the "user interface" for thread creation. I hope there are some suggestions.

    Read the article

  • Simple multi-threading - combining statements to two lines.

    - by Adam
    If I have:

        ThreadStart starter = delegate { MessageBox.Show("Test"); };
        new Thread(starter).Start();

    How can I combine this into one line of code? I've tried:

        new Thread(delegate { MessageBox.Show("Test"); }).Start();

    But I get this error: The call is ambiguous between the following methods or properties: 'System.Threading.Thread.Thread(System.Threading.ThreadStart)' and 'System.Threading.Thread.Thread(System.Threading.ParameterizedThreadStart)'

    Read the article

  • My Thread Programs Crash

    - by zp26
    I have a problem with threads in Objective-C. The code below contains a recv call that blocks while waiting for data. My intention is to launch a thread in parallel with the program so that this statement does not block the application. I put this code in my program, but when the switch is activated the program crashes. Here is the code:

        -(IBAction)Chat{
            if(switchChat.on){
                buttonInvio.enabled = TRUE;
                fieldInvio.enabled = TRUE;
                [NSThread detachNewThreadSelector:@selector(riceviDatiServer) toTarget:self withObject:nil];
            }
            else {
                buttonInvio.enabled = FALSE;
                fieldInvio.enabled = FALSE;
            }
        }

        -(void)riceviDatiServer{
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            int ricevuti;
            NSString *datiRicevuti;
            ricevuti = recv(temp, &datiRicevuti, datiRicevuti.length, 0);
            labelRicezione.text = [[NSString alloc] initWithFormat:@"%s.... %d", datiRicevuti, ricevuti];
            [pool release];
        }

    Read the article

  • ReaderWriterLockSlim and Pulse/Wait

    - by Jono
    Is there an equivalent of Monitor.Pulse and Monitor.Wait that I can use in conjunction with a ReaderWriterLockSlim? I have a class where I've encapsulated multi-threaded access to an underlying queue. To enqueue something, I acquire a lock that protects the underlying queue (and a couple of other objects) then add the item and Monitor.Pulse the locked object to signal that something was added to the queue.

        public void Enqueue(ITask task)
        {
            lock (mutex)
            {
                underlying.Enqueue(task);
                Monitor.Pulse(mutex);
            }
        }

    On the other end of the queue, I have a single background thread that continuously processes messages as they arrive on the queue. It uses Monitor.Wait when there are no items in the queue, to avoid unnecessary polling. (I consider this to be good design, but any flames (within reason) are welcome if they help me learn otherwise.)

        private void DequeueForProcessing(object state)
        {
            while (true)
            {
                ITask task;
                lock (mutex)
                {
                    while (underlying.Count == 0)
                    {
                        Monitor.Wait(mutex);
                    }
                    task = underlying.Dequeue();
                }
                Process(task);
            }
        }

    As more operations are added to this class (requiring read-only access to the lock protected underlying), someone suggested using ReaderWriterLockSlim. I've never used the class before, and assuming it can offer some performance benefit, I'm not against it, but only if I can keep the Pulse/Wait design.

    Read the article

  • Java Executor: Small tasks or big ones?

    - by Arash Shahkar
    Consider one big task which could be broken into hundreds of small, independently runnable tasks. To be more specific, each small task sends a light network request and decides upon the answer received from the server. These small tasks are not expected to take longer than a second, and involve a few servers in total. I have in mind two approaches to implement this using the Executor framework, and I want to know which one is better and why: create a few tasks, say 5 to 10, each doing a bunch of sends and receives; or create a single task (Callable or Runnable) for each send and receive and schedule all of them (hundreds) to be run by the executor. I'm sorry if my question shows that I'm too lazy to test these and see for myself what's better (at least performance-wise). My question, while looking for an answer to this specific case, has a more general aspect: in situations like these, when you want to use an executor to do all the scheduling and other work, is it better to create lots of small tasks or to group them into a smaller number of bigger tasks?
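
    A minimal sketch of the second option, one small Callable per send/receive submitted to a bounded pool; the pool size, not the task count, is what limits concurrency. The request helper and URLs are hypothetical stand-ins.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class SmallTasksDemo {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(10);   // bounded concurrency
                List<String> urls = List.of("http://server-a/ping", "http://server-b/ping"); // hypothetical

                List<Future<Boolean>> results = new ArrayList<>();
                for (String url : urls) {
                    // One task per request; hundreds of these are cheap to queue.
                    results.add(pool.submit((Callable<Boolean>) () -> sendAndCheck(url)));
                }
                for (Future<Boolean> f : results) {
                    System.out.println(f.get());                           // waits for that task
                }
                pool.shutdown();
            }

            // Hypothetical placeholder for the light network request in the question.
            private static boolean sendAndCheck(String url) {
                return url.length() % 2 == 0;
            }
        }

    With a bounded pool the grouping largely stops mattering for throughput; many small tasks tend to balance load better when some requests are slower than others, at the cost of a little per-task overhead.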

    Read the article

  • Several clients waiting for the same event

    - by ff8mania
    I'm developing a communication API to be used by a lot of generic clients to communicate with a proprietary system. This proprietary system exposes an API, and I use particular classes to send and wait for messages from this system; the system alerts me that a message is ready using an event named OnMessageArrived. My idea is to expose a simple SendSyncMessage(message) method that lets the user/client simply send a message and get the response back. The client:

        using ( Communicator c = new Communicator() )
        {
            response = c.SendSync(message);
        }

    The Communicator class is done in this way:

        public class Communicator : IDisposable
        {
            // Proprietary system object
            ExternalSystem c;
            String currentRespone;
            Guid currentGUID;
            private readonly ManualResetEvent _manualResetEvent;
            private ManualResetEvent _manualResetEvent2;
            String systemName = "system";
            String ServerName = "server";

            public Communicator()
            {
                _manualResetEvent = new ManualResetEvent(false);
                // These methods are from the proprietary system API
                c = SystemInstance.CreateInstance();
                c.Connect(systemName, ServerName);
            }

            private void ConnectionStarter( object data )
            {
                c.OnMessageArrivedEvent += c_OnMessageArrivedEvent;
                _manualResetEvent.WaitOne();
                c.OnMessageArrivedEvent -= c_OnMessageArrivedEvent;
            }

            public String SendSync( String Message )
            {
                Thread _internalThread = new Thread(ConnectionStarter);
                _internalThread.Start(c);
                _manualResetEvent2 = new ManualResetEvent(false);
                String toRet;
                int messageID;
                currentGUID = Guid.NewGuid();
                c.SendMessage(Message, "Request", currentGUID.ToString());
                _manualResetEvent2.WaitOne();
                toRet = currentRespone;
                return toRet;
            }

            void c_OnMessageArrivedEvent( int Id, string root, string guid, int TimeOut, out int ReturnCode )
            {
                if ( !guid.Equals(currentGUID.ToString()) )
                {
                    _manualResetEvent2.Set();
                    ReturnCode = 0;
                    return;
                }
                object newMessage;
                c.FetchMessage(Id, 7, out newMessage);
                currentRespone = newMessage.ToString();
                ReturnCode = 0;
                _manualResetEvent2.Set();
            }
        }

    I'm really a noob at using wait handles, but my idea was to create an instance that sends the message and waits for an event. When an event arrives, it checks whether the message is the one I expect (checking the unique GUID); otherwise it continues to wait for the next event. This is because there can be (and usually are) a lot of clients working concurrently, and I want them to work in parallel. As implemented at the moment, if I run client 1, client 2 and client 3, client 2 starts sending messages as soon as client 1 has finished, and client 3 as soon as client 2 has finished: not what I'm trying to do. Can you help me fix my code and reach my target? Thanks!

    Read the article

  • Is System.nanoTime() consistent across threads?

    - by obvio171
    I want to count the time elapsed between two events in nanoseconds. To do that, I can use System.nanoTime() as mentioned here. The problem is that the two events are happening in different threads. Since nanoTime() doesn't return an absolute timestamp but instead can only be used to calculate time differences, I'd like to know if the values I get on the two different threads are consistent with the physical time elapsed between the two events.
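
    A small sketch of the measurement the question describes: take one System.nanoTime() reading on each of two threads in the same JVM and subtract. The CountDownLatch is only there so the second thread reliably sees the value written by the first; the "events" themselves are simulated.

        import java.util.concurrent.CountDownLatch;

        public class CrossThreadTiming {
            public static void main(String[] args) throws InterruptedException {
                final long[] start = new long[1];
                final CountDownLatch started = new CountDownLatch(1);

                Thread producer = new Thread(() -> {
                    start[0] = System.nanoTime();   // event A happens on this thread
                    started.countDown();            // publishes the timestamp (happens-before)
                });
                producer.start();

                started.await();                    // wait until event A's timestamp is visible
                long end = System.nanoTime();       // event B happens on the main thread
                System.out.println("elapsed ns: " + (end - start[0]));
            }
        }

    The documentation ties nanoTime differences to the same JVM instance rather than to a particular thread, so within one process the subtraction is intended to be meaningful; the historical caveat was hardware with unsynchronized per-core timers.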

    Read the article

  • Limiting the number of threads executing a method at a single time.

    - by Steve_
    We have a situation where we want to limit the number of parallel requests our application can make to its application server. We have potentially 100+ background threads running that will at some point want to make a call to the application server, but we only want 5 threads to be able to call SendMessage() (or whatever the method will be) at any one time. What is the best way of achieving this? I have considered using some sort of gatekeeper object that blocks threads coming into the method until the number of threads executing in it has dropped below the threshold. Would this be a reasonable solution, or am I overlooking the fact that this might be dirty/dangerous? We are developing in C# .NET 3.5. Thanks, Steve
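
    The "gatekeeper object" described above is essentially a counting semaphore, which blocks callers once a fixed number of permits are in use; .NET 3.5 ships one as System.Threading.Semaphore. As a language-neutral illustration, a minimal Java sketch of the pattern (class and method names are made up):

        import java.util.concurrent.Semaphore;

        public class AppServerClient {
            // Gatekeeper: at most 5 threads inside sendMessage() at any one time.
            private static final Semaphore gate = new Semaphore(5);

            public void sendMessage(String message) throws InterruptedException {
                gate.acquire();                 // blocks while 5 calls are already in flight
                try {
                    callApplicationServer(message);
                } finally {
                    gate.release();             // always release, even on failure
                }
            }

            // Hypothetical stand-in for the real server call.
            private void callApplicationServer(String message) { }
        }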

    Read the article

  • How to manage db connections on server?

    - by simpatico
    I have a severe problem with database connections in my web application. Since I use a single database connection for the whole application, obtained from a singleton Database class, concurrent db operations (two users) cause the database to roll back the transactions. This is the static method used: all threads/servlets call static Database.doSomething(...) methods, which in turn call the method below.

        private static /* synchronized */ Connection getConnection(final boolean autoCommit) throws SQLException {
            if (con == null) {
                con = new MyRegistrationBean().getConnection();
            }
            con.setAutoCommit(true); // TODO
            return con;
        }

    What's the recommended way to manage this db connection (or these connections) so that I don't run into the same problem?
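
    The usual fix is to stop sharing one Connection across threads and give each operation its own connection, typically borrowed from a pool exposed as a javax.sql.DataSource by the container or a pooling library. A rough sketch of the per-operation pattern, assuming a configured DataSource (the JNDI name and the poll table are hypothetical):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class Database {
            private static volatile DataSource ds;

            private static DataSource dataSource() throws Exception {
                if (ds == null) {
                    // Hypothetical JNDI name; in practice this comes from the container config.
                    ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/appDb");
                }
                return ds;
            }

            public static void doSomething(int pollId) throws Exception {
                // Each call borrows its own connection and returns it when done,
                // so concurrent servlet threads never share a transaction.
                try (Connection con = dataSource().getConnection();
                     PreparedStatement ps = con.prepareStatement(
                             "UPDATE poll SET votes = votes + 1 WHERE poll_id = ?")) {
                    ps.setInt(1, pollId);
                    ps.executeUpdate();
                }
            }
        }

    Each request then commits or rolls back only its own work, and the commented-out synchronized on getConnection becomes unnecessary.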

    Read the article

  • How do I create a Thread Manager for an Android App?

    - by MrBuBBLs
    Hi, I would like to know how to start and code a thread manager for my Android app. My app is going to fill a list using network I/O and I have to manage threads for that. I've never done this before and I don't know where to start. I've heard about thread pools and other things, but I'm quite confused. Could someone please help me find my way through? Thanks
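
    One common approach is to let a small ExecutorService act as the thread manager: submit one job per network fetch and post each result back to the main thread. A rough sketch under that assumption (the fetch method and list update are hypothetical placeholders; newer Android code would more likely reach for coroutines or WorkManager):

        import android.os.Handler;
        import android.os.Looper;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class ListLoader {
            private final ExecutorService pool = Executors.newFixedThreadPool(4); // the "manager"
            private final Handler mainThread = new Handler(Looper.getMainLooper());

            public void loadItem(final String url) {
                pool.execute(new Runnable() {
                    @Override
                    public void run() {
                        final String item = fetchOverNetwork(url);   // runs on a pool thread
                        mainThread.post(new Runnable() {
                            @Override
                            public void run() {
                                addItemToList(item);                 // runs on the UI thread
                            }
                        });
                    }
                });
            }

            public void shutdown() { pool.shutdown(); }

            // Hypothetical placeholders for the app's own network call and list adapter update.
            private String fetchOverNetwork(String url) { return url; }
            private void addItemToList(String item) { }
        }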

    Read the article

  • How to automatically run in the background?

    - by Hun1Ahpu
    I'm not sure whether this has been implemented somewhere already; I hope it has. But I know that in .NET programmers have to manually run time-consuming tasks on a background thread. So every time we handle some UI event and realize that it will take some time, we also realize that it will hang the UI thread and our application. And then we set up all this background-work machinery and handle callbacks or whatever. So my question is: is there, in some language/platform, a mechanism that will automatically run time-consuming tasks in the background and do all the related work itself? So we would just write the code for handling a specific UI event, and this code would somehow be detected as time-consuming and executed in the background. And if there isn't, then why not?

    Read the article

  • How to maintain a pool of names ?

    - by Jacques René Mesrine
    I need to maintain a list of userids (proxy accounts) which will be dished out to multithreaded clients. Basically the clients will use the userids to perform actions; but for this question, it is not important what these actions are. When a client gets hold of a userid, it is not available to other clients until the action is completed. I'm trying to think of a concurrent data structure to maintain this pool of userids. Any ideas? Would a ConcurrentQueue do the job? Clients will dequeue a userid and add it back when they are finished with it.
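
    A blocking queue is the usual structure for this check-out/check-in pattern: taking a userid removes it from the pool, so no other client can get it until it is put back, and takers block when the pool is empty. The question mentions .NET's ConcurrentQueue; as a language-neutral illustration, a small Java sketch of the same idea:

        import java.util.List;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class UserIdPool {
            private final BlockingQueue<String> pool;

            public UserIdPool(List<String> userIds) {
                this.pool = new ArrayBlockingQueue<>(userIds.size(), true, userIds); // fair hand-out
            }

            public String checkOut() throws InterruptedException {
                return pool.take();          // blocks until some userid is free
            }

            public void checkIn(String userId) {
                pool.offer(userId);          // returns the userid to the pool
            }
        }

    A client takes an id, performs its action, and returns the id in a finally block so a failure never permanently removes an account from the pool.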

    Read the article

  • Thread.sleep vs Monitor.Wait vs RegisteredWaitHandle?

    - by Royi Namir
    (The following items have different goals, but I'm interested in knowing how they "pause".) Questions: Thread.Sleep - does it impact performance on the system? Does it tie up a thread while it waits? What about Monitor.Wait - what is the difference in the way they "wait"? Do they tie up a thread while they wait? What about RegisteredWaitHandle? This method accepts a delegate that is executed when a wait handle is signaled; while it's waiting, it doesn't tie up a thread. So some threads are paused and can be woken by a delegate, while others just wait? Or spin? Can someone please make things clearer? Edit: http://www.albahari.com/threading/part2.aspx

    Read the article

  • c++11 atomic ordering: extended total order memory_order_seq_cst for locks

    - by itaj
    There's this note in c++11 29.3-p3: [ Note: Although it is not explicitly required that S include locks, it can always be extended to an order that does include lock and unlock operations, since the ordering between those is already included in the "happens before" ordering. - end note ] What does it mean by "always"? I can understand that any certain implementation can be designed to support such an extended S. But in some general implementation that wasn't designed for it, I don't see that S can be extended so. I had sent this question to comp.std.c++ but got no answers there. http://groups.google.com/group/comp.std.c++/browse_frm/thread/5242fa70d0594d1b#

    Read the article

  • Fixed strptime exception with thread lock, but slows down the program

    - by eWizardII
    I have the following code, which runs inside a thread (the full code is here - https://github.com/eWizardII/homobabel/blob/master/lovebird.py):

        for null in range(0, 1):
            while True:
                try:
                    with open('C:/Twitter/tweets/user_0_' + str(self.id) + '.json', mode='w') as f:
                        f.write('[')
                        threadLock.acquire()
                        for i, seed in enumerate(Cursor(api.user_timeline, screen_name=self.ip).items(200)):
                            if i > 0:
                                f.write(", ")
                            f.write("%s" % (json.dumps(dict(sc=seed.author.statuses_count))))
                            j = j + 1
                        threadLock.release()
                        f.write("]")
                except tweepy.TweepError, e:
                    with open('C:/Twitter/tweets/user_0_' + str(self.id) + '.json', mode='a') as f:
                        f.write("]")
                    print "ERROR on " + str(self.ip) + " Reason: ", e
                    with open('C:/Twitter/errors_0.txt', mode='a') as a_file:
                        new_ii = "ERROR on " + str(self.ip) + " Reason: " + str(e) + "\n"
                        a_file.write(new_ii)
                break

    Now, without the thread lock, I generate the following error:

        Exception in thread Thread-117:
        Traceback (most recent call last):
          File "C:\Python27\lib\threading.py", line 530, in __bootstrap_inner
            self.run()
          File "C:/Twitter/homobabel/lovebird.py", line 62, in run
            for i, seed in enumerate(Cursor(api.user_timeline,screen_name=self.ip).items(200)):
          File "build\bdist.win-amd64\egg\tweepy\cursor.py", line 110, in next
            self.current_page = self.page_iterator.next()
          File "build\bdist.win-amd64\egg\tweepy\cursor.py", line 85, in next
            items = self.method(page=self.current_page, *self.args, **self.kargs)
          File "build\bdist.win-amd64\egg\tweepy\binder.py", line 196, in _call
            return method.execute()
          File "build\bdist.win-amd64\egg\tweepy\binder.py", line 182, in execute
            result = self.api.parser.parse(self, resp.read())
          File "build\bdist.win-amd64\egg\tweepy\parsers.py", line 75, in parse
            result = model.parse_list(method.api, json)
          File "build\bdist.win-amd64\egg\tweepy\models.py", line 38, in parse_list
            results.append(cls.parse(api, obj))
          File "build\bdist.win-amd64\egg\tweepy\models.py", line 49, in parse
            user = User.parse(api, v)
          File "build\bdist.win-amd64\egg\tweepy\models.py", line 86, in parse
            setattr(user, k, parse_datetime(v))
          File "build\bdist.win-amd64\egg\tweepy\utils.py", line 17, in parse_datetime
            date = datetime(*(time.strptime(string, '%a %b %d %H:%M:%S +0000 %Y')[0:6]))
          File "C:\Python27\lib\_strptime.py", line 454, in _strptime_time
            return _strptime(data_string, format)[0]
          File "C:\Python27\lib\_strptime.py", line 300, in _strptime
            _TimeRE_cache = TimeRE()
          File "C:\Python27\lib\_strptime.py", line 188, in __init__
            self.locale_time = LocaleTime()
          File "C:\Python27\lib\_strptime.py", line 77, in __init__
            raise ValueError("locale changed during initialization")
        ValueError: locale changed during initialization

    The problem is that with the thread lock on, each thread basically runs serially, and each loop takes far too long for there to be any advantage in having threads at all. So if there isn't a way to get rid of the thread lock, is there a way to make the for loop inside the try statement run faster?

    Read the article

  • Pass a Message From Thread to Update UI

    - by Jay Dee
    I've created a new thread for a file browser. The thread reads the contents of a directory. What I want to do is update the UI thread to draw a graphical representation of the files and folders. I know I can't update the UI from within a new thread, so what I want to do is: while the file-scanning thread iterates through a directory's files and folders, pass a file path string back to the UI thread. The handler in the UI thread then draws the graphical representation of the file passed back.

        public class New_Project extends Activity implements Runnable {

            private Handler handler = new Handler() {
                @Override
                public void handleMessage(Message msg) {
                    Log.d("New Thread", "Proccess Complete.");
                    Intent intent = new Intent();
                    setResult(RESULT_OK, intent);
                    finish();
                }
            };

            public void getFiles() {
                //if (!XMLEFunctions.canReadExternal(this)) return;
                pd = ProgressDialog.show(this, "Reading Directory.", "Please Wait...", true, false);
                Log.d("New Thread", "Called");
                Thread thread = new Thread(this);
                thread.start();
            }

            public void run() {
                Log.d("New Thread", "Reading Files");
                getFiles();
                handler.sendEmptyMessage(0);
            }

            public void getFiles() {
                for (int i = 0; i <= allFiles.length - 1; i++) {
                    // I WANT TO PASS THE FILE PATH BACK TO A HANDLER IN THE UI
                    // SO IT CAN BE DRAWN.
                    **passFilePathBackToBeDrawn(allFiles[i].toString());**
                }
            }
        }
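
    One way to do what the capitalized comment asks: attach the path string to a Message from the worker thread and read it back in handleMessage on the UI thread. A sketch of just that part, with the per-file drawing method left as a hypothetical stub:

        import android.app.Activity;
        import android.os.Bundle;
        import android.os.Handler;
        import android.os.Message;

        public class FileScanActivity extends Activity {

            // Runs on the UI thread; draws each file path the worker sends back.
            private final Handler handler = new Handler() {
                @Override
                public void handleMessage(Message msg) {
                    String path = msg.getData().getString("path");
                    if (path != null) {
                        drawFile(path);                      // hypothetical per-file UI update
                    }
                }
            };

            private void startScan(final java.io.File[] allFiles) {
                new Thread(new Runnable() {
                    @Override
                    public void run() {
                        for (java.io.File file : allFiles) {
                            Message msg = handler.obtainMessage();
                            Bundle data = new Bundle();
                            data.putString("path", file.toString());
                            msg.setData(data);
                            handler.sendMessage(msg);        // delivered on the UI thread
                        }
                    }
                }).start();
            }

            private void drawFile(String path) {
                // hypothetical: add this file to the on-screen representation
            }
        }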

    Read the article

  • Polling servers at the same port - Threads and Java

    - by John
    Hi there. I'm currently busy working on an IP ban tool for the early versions of Call of Duty 1. (Apparently such a feature wasn't implemented in these versions.) I've finished a single-threaded application, but it won't perform well enough for multiple servers, which is why I am trying to implement threading. Right now, each server has its own thread. I have a Networking class which has a method, "GetStatus"; this method is synchronized. It uses a DatagramSocket to communicate with the server. Since this method is static and synchronized, I shouldn't get into trouble and receive a whole bunch of "Address already in use" exceptions. However, I have a second method named "SendMessage". This method is supposed to send a message to the server. How can I make sure "SendMessage" cannot be invoked while there's already a thread running in "GetStatus", and the other way around? If I make both synchronized, won't I still get in trouble if Thread A opens a socket on Port 99999 and invokes "SendMessage" while Thread B opens a socket on the same port and invokes "GetStatus"? (Game servers are usually hosted on the same ports.) I guess what I am really after is a way to make an entire class synchronized, so that only one method can be invoked and run at a time by a single thread. I hope that what I am trying to accomplish/avoid is made clear in this text. Any help is greatly appreciated.
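
    Having only one of the two methods running at a time, across all threads, is just a matter of both methods synchronizing on the same lock. For static methods, marking them synchronized already does this, because they share the class object as their lock. A sketch with the network details stubbed out:

        public final class Networking {

            // Both methods lock on Networking.class, so a thread inside getStatus()
            // excludes any thread trying to enter sendMessage(), and vice versa.
            public static synchronized String getStatus(String host, int port) {
                // open a DatagramSocket, query the server, parse the reply (omitted)
                return "status of " + host + ":" + port;
            }

            public static synchronized void sendMessage(String host, int port, String message) {
                // open a DatagramSocket, send the message (omitted)
            }

            private Networking() { }
        }

    Note this serializes calls for every server, not just for a contested port; if that becomes a bottleneck, a per-port lock object (for example held in a ConcurrentHashMap keyed by port) gives finer-grained exclusion.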

    Read the article

  • Java - Call to start method on a thread: how does it route to the Runnable interface's run()?

    - by Bhaskar
    OK, I know the two standard ways to create a new thread and run it in Java: (1) implement Runnable in a class, define the run method, and pass an instance of the class to a new Thread; when the start method on the thread instance is called, the run method of the class instance will be invoked. (2) Let the class derive from Thread so it can override the run() method; then, when a new instance's start method is called, the call is routed to the overridden method. In both cases, basically, a new Thread object is created and its start method invoked. However, while in the second approach the mechanism by which the call is routed to the user-defined run() method is very clear (it's simple runtime polymorphism at play), I don't understand how the call to the start method on the Thread object gets routed to the run() method of the class implementing the Runnable interface. Does the Thread class have a private field of type Runnable which it checks first, invoking that object's run method if it is set? That would be a strange mechanism, IMO. How does the call to start() on a thread get routed to the run method of the Runnable implemented by the class whose object is passed as a parameter when constructing the thread?
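
    That guess is essentially how the stock implementation behaves: Thread stores the Runnable passed to its constructor in a private target field, and Thread.run() delegates to it, while start() asks the VM to begin a new thread that then calls run(). A simplified sketch of the idea, not the real JDK source (which has more fields and native plumbing):

        // Simplified model of java.lang.Thread's routing; names mirror the idea,
        // not the exact JDK sources.
        class SimplifiedThread {
            private final Runnable target;   // set when a Runnable is passed to the constructor

            SimplifiedThread(Runnable target) {
                this.target = target;
            }

            // Subclasses that extend Thread override this; otherwise the default
            // implementation forwards to the Runnable, if one was supplied.
            public void run() {
                if (target != null) {
                    target.run();
                }
            }

            public void start() {
                // The real Thread.start() asks the JVM to create an OS thread, and that
                // new thread invokes this object's run(); a real Thread stands in here.
                new java.lang.Thread(this::run).start();
            }
        }

    So both approaches end up in the same run() call; the only difference is whether run() was overridden in a subclass or falls back to delegating to the stored Runnable.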

    Read the article
