Search Results

Search found 1638 results on 66 pages for 'multithreading'.

Page 40 of 66

  • Are memory barriers necessary for atomic reference counting shared immutable data?

    - by Dietrich Epp
I have some immutable data structures that I would like to manage using reference counts, sharing them across threads on an SMP system. Here's what the release code looks like:

        void avocado_release(struct avocado *p) {
            if (atomic_dec(p->refcount) == 0) {
                free(p->pit);
                free(p->juicy_innards);
                free(p);
            }
        }

    Does atomic_dec need a memory barrier in it? If so, what kind of memory barrier? Additional notes: The application must run on PowerPC and x86, so any processor-specific information is welcomed. I already know about the GCC atomic builtins. As for immutability, the refcount is the only field that changes over the duration of the object.

    Read the article

  • Can a Python script know that another instance of the same script is running... and then talk to it?

    - by Justin Grant
    I'd like to prevent multiple instances of the same long-running Python command-line script from running at the same time, and I'd like the new instance to be able to send data to the original instance before the new instance commits suicide. How can I do this in a cross-platform way? Specifically, I'd like to enable the following behavior:
      - "foo.py" is launched from the command line, and it will stay running for a long time (days or weeks) until the machine is rebooted or the parent process kills it.
      - every few minutes the same script is launched again, but with different command-line parameters
      - when launched, the script should see if any other instances are running
      - if other instances are running, then instance #2 should send its command-line parameters to instance #1, and then instance #2 should exit
      - instance #1, if it receives command-line parameters from another script, should spin up a new thread and (using the command-line parameters sent in the step above) start performing the work that instance #2 was going to perform.
    So I'm looking for two things: how can a Python program know another instance of itself is running, and then how can one Python command-line program communicate with another? Making this more complicated, the same script needs to run on both Windows and Linux, so ideally the solution would use only the Python standard library and not any OS-specific calls. Although if I need to have a Windows codepath and an *nix codepath (and a big if statement in my code to choose one or the other), that's OK if a "same code" solution isn't possible. I realize I could probably work out a file-based approach (e.g. instance #1 watches a directory for changes and each instance drops a file into that directory when it wants to do work), but I'm a little concerned about cleaning up those files after a non-graceful machine shutdown. I'd ideally be able to use an in-memory solution. But again I'm flexible; if a persistent-file-based approach is the only way to do it, I'm open to that option.

    More details: I'm trying to do this because our servers are using a monitoring tool which supports running Python scripts to collect monitoring data (e.g. results of a database query or web service call) which the monitoring tool then indexes for later use. Some of these scripts are very expensive to start up but cheap to run after startup (e.g. making a DB connection vs. running a query). So we've chosen to keep them running in an infinite loop until the parent process kills them. This works great, but on larger servers 100 instances of the same script may be running, even if they're only gathering data every 20 minutes each. This wreaks havoc with RAM, DB connection limits, etc. We want to switch from 100 processes with 1 thread to one process with 100 threads, each executing the work that, previously, one script was doing. But changing how the scripts are invoked by the monitoring tool is not possible. We need to keep invocation the same (launch a process with different command-line parameters) but change the scripts to recognize that another one is active, and have the "new" script send its work instructions (from the command-line params) over to the "old" script.

    Read the article

  • .NET threading: how can I capture an abort on an unstarted thread?

    - by Groxx
    I have a chunk of threads I wish to run in order, on an ASP site running .NET 2.0 with Visual Studio 2008 (no idea how much all that matters, but there it is), and they may have abort-cleanup code which should be run regardless of how far through their task they are. So I make a thread like this:

        Thread t = new Thread(delegate()
        {
            try
            {
                /* do things */
                System.Diagnostics.Debug.WriteLine("try");
            }
            catch (ThreadAbortException)
            {
                /* cleanup */
                System.Diagnostics.Debug.WriteLine("catch");
            }
        });

    Now, if I wish to abort the set of threads part way through, the cleanup may still be desirable later on down the line. Looking through MSDN implies you can .Abort() a thread that has not started, and then .Start() it, at which point it will receive the exception and perform normally. Or you can .Join() the aborted thread to wait for it to finish aborting. Presumably you can combine them. From http://msdn.microsoft.com/en-us/library/ty8d3wta(v=VS.80).aspx :

        "To wait until a thread has aborted, you can call the Join method on the thread after calling the Abort method, but there is no guarantee the wait will end. If Abort is called on a thread that has not been started, the thread will abort when Start is called. If Abort is called on a thread that is blocked or is sleeping, the thread is interrupted and then aborted."

    Now, when I debug and step through this code:

        t.Abort(); // ThreadState == Unstarted | AbortRequested
        t.Start(); // throws ThreadStartException: "Thread failed to start."
                   // so I comment it out, and
        t.Join();  // throws ThreadStateException: "Thread has not been started."

    At no point do I see any output, nor do any breakpoints on either the try or catch block get reached. Oddly, ThreadStartException is not listed as a possible throw of .Start() here: http://msdn.microsoft.com/en-us/library/a9fyxz7d(v=VS.80).aspx (or any other version). I understand this could be avoided by having a start parameter which states whether the thread should jump to cleanup code, and foregoing the Abort call (which is probably what I'll do). And I could .Start() the thread and then .Abort() it. But as an indeterminate amount of time may pass between .Start and .Abort, I'm considering it unreliable, and the documentation seems to say my original method should work. Am I missing something? Is the documentation wrong? edit: ow. And you can't call .Start(param) on a non-parameterized Thread(Start). Is there a way to find out if a thread is parameterized or not, aside from trial and error? I see a private m_Delegate, but nothing public...
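    For what it's worth, a sketch of the cooperative alternative the question already hints at (a flag plus finally, instead of aborting an unstarted thread); the class and member names below are illustrative, not from the original post:

        using System;
        using System.Threading;

        // A worker whose cleanup runs whether it finishes normally, is cancelled
        // mid-task, or is cancelled before it ever starts (Cancel, then Start,
        // and the finally block still executes).
        class CancellableWorker
        {
            private volatile bool cancelRequested;   // volatile so the worker sees the change promptly
            private readonly Thread thread;

            public CancellableWorker()
            {
                thread = new Thread(Run);
                thread.IsBackground = true;
            }

            public void Start()  { thread.Start(); }
            public void Cancel() { cancelRequested = true; }          // safe before or after Start
            public void Join()   { if (thread.IsAlive) thread.Join(); }

            private void Run()
            {
                try
                {
                    while (!cancelRequested)
                    {
                        // do one unit of work here
                        Thread.Sleep(100);
                    }
                }
                finally
                {
                    // cleanup: the equivalent of the catch (ThreadAbortException) block above
                    System.Diagnostics.Debug.WriteLine("cleanup");
                }
            }
        }

    With this shape, calling Cancel() before Start() and then starting the thread still runs the finally block, which is roughly the behaviour the MSDN quote promises for Abort on an unstarted thread.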

    Read the article

  • Threading calls to a web service in a web service (.NET 2.0)

    - by Ryan Ternier
    Got a question regarding best practices for doing parallel web service calls, in a web service. Our portal will get a message, split that message into 2 messages, and then do 2 calls to our broker. These need to be on separate threads to lower the timeout. One solution is to do something similar to this (pseudo code):

        XmlNode DNode = GetaGetDemoNodeSomehow();
        XmlNode ENode = GetAGetElNodeSomehow();
        XmlNode elResponse;
        XmlNode demResponse;
        Thread dThread = new Thread(delegate()
        {
            // Web service call
            GetDemographics d = new GetDemographics();
            demResponse = d.HIALRequest(DNode);
        });
        Thread eThread = new Thread(delegate()
        {
            // Web service call
            GetEligibility ge = new GetEligibility();
            elResponse = ge.HIALRequest(ENode);
        });
        dThread.Start();
        eThread.Start();
        dThread.Join();
        eThread.Join();
        // combine the resulting XML and return it.
        // Maybe throw a bit of logging in to make architecture happy

    Another option we thought of is to create a worker class, pass it the service information and have it execute. This would give us a bit more control over what is going on, but could add additional overhead. Another option brought up would be 2 asynchronous calls, managing the returns through a loop; when the calls are completed (success or error) the loop picks it up and ends. The portal service will be called about 50,000 times a day. I don't want to gold plate this sucker; I'm looking for something lightweight. The services being called on the broker do have timeout limits set, and are already heavily logged and audited, so I'm not worried about that part. This is .NET 2.0, and as much as I would love to upgrade I can't right now. So please leave anything beyond 2.0 out.
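    A minimal sketch of the worker-class option mentioned above, using only .NET 2.0-era Thread APIs; the ServiceCall delegate is an assumption standing in for the real broker proxies, and the usage comments mirror the pseudo code:

        using System;
        using System.Threading;
        using System.Xml;

        // Hypothetical delegate standing in for one broker call.
        delegate XmlNode ServiceCall(XmlNode request);

        class ServiceWorker
        {
            private readonly ServiceCall call;
            private readonly XmlNode request;
            public XmlNode Response;
            public Exception Error;

            public ServiceWorker(ServiceCall call, XmlNode request)
            {
                this.call = call;
                this.request = request;
            }

            // Runs on its own thread; captures the result or the exception so the
            // caller can inspect both after Join.
            public void Run()
            {
                try { Response = call(request); }
                catch (Exception ex) { Error = ex; }
            }
        }

        // Usage sketch:
        // ServiceWorker demo = new ServiceWorker(delegate(XmlNode r) { return new GetDemographics().HIALRequest(r); }, DNode);
        // ServiceWorker elig = new ServiceWorker(delegate(XmlNode r) { return new GetEligibility().HIALRequest(r); }, ENode);
        // Thread t1 = new Thread(demo.Run); Thread t2 = new Thread(elig.Run);
        // t1.Start(); t2.Start(); t1.Join(); t2.Join();
        // // check demo.Error / elig.Error, then combine demo.Response and elig.Response

    Capturing the exception matters here: in .NET 2.0 an unhandled exception on a worker thread tears down the whole process.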

    Read the article

  • Efficient implementation of threads in the given scenario

    - by shadeMe
    I've got a winforms application that is set up in the following manner: 2 buttons, a textbox, a collection K, function X and another function, Y. Function X parses a large database and enumerates some of its data into the global collection. Button 1 calls function X. Function Y walks through the above collection and prints out the data in the textbox. Button 2 calls function Y. I'd like to call function X through a worker thread in such a way that:
      - The form remains responsive to user input. This comes intrinsically from the use of a separate thread.
      - There is never more than a single instance of function X running at any point in time.
      - K can be accessed by both functions at all times.
    What would be the most efficient implementation of the above environment?
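    One possible shape for this, sketched with plain WinForms-era threading primitives; ParseDatabase and the button handlers are stand-ins for the question's function X, function Y and collection K, and the class is sketched without the Form base class to keep it self-contained:

        using System.Collections.Generic;
        using System.Threading;

        class MainFormLogic
        {
            private readonly List<string> K = new List<string>();   // shared collection K
            private readonly object kLock = new object();           // guards K
            private int xRunning;                                    // 0 = idle, 1 = X running

            // Button 1: run function X on a worker thread, but never two at once.
            void Button1_Click(object sender, System.EventArgs e)
            {
                if (Interlocked.CompareExchange(ref xRunning, 1, 0) != 0)
                    return; // an instance of X is already running

                ThreadPool.QueueUserWorkItem(delegate
                {
                    try
                    {
                        foreach (string item in ParseDatabase())   // hypothetical "function X"
                        {
                            lock (kLock) { K.Add(item); }
                        }
                    }
                    finally
                    {
                        Interlocked.Exchange(ref xRunning, 0);
                    }
                });
            }

            // Button 2: function Y reads K under the same lock, on the UI thread.
            void Button2_Click(object sender, System.EventArgs e)
            {
                string[] snapshot;
                lock (kLock) { snapshot = K.ToArray(); }
                // textBox1.Text = string.Join(System.Environment.NewLine, snapshot);
            }

            IEnumerable<string> ParseDatabase() { yield break; } // placeholder for the real parse
        }

    The Interlocked guard guarantees at most one X at a time, the lock around K keeps both functions safe to call at any point, and the UI thread is never blocked by X.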

    Read the article

  • C++ Thread Safe Integer

    - by Paul Ridgway
    Hello everyone, I have currently created a C++ class for a thread-safe integer which simply stores an integer privately and has public get and set functions which use a boost::mutex to ensure that only one change at a time can be applied to the integer. Is this the most efficient way to do it? I have been informed that mutexes are quite resource intensive. The class is used a lot, very rapidly, so it could well be a bottleneck... Googling "C++ Thread Safe Integer" returns unclear views and opinions on the thread safety of integer operations on different architectures. Some say that a 32-bit int on a 32-bit arch is safe, but 64 on 32 isn't, due to 'alignment'. Others say it is compiler/OS specific (which I don't doubt). I am using Ubuntu 9.10 on 32-bit machines, some of which have dual cores, so threads may be executed simultaneously on different cores in some cases, and I am using GCC 4.4's g++ compiler. Thanks in advance...

    Read the article

  • Multi-Threaded Application - Help with some pseudo code!!

    - by HonorGod
    I am working on a multi-threaded application and need help with some pseudo-code. To make it simpler to implement, I will try to explain it in simple terms as a test case. Here is the scenario:
      - I have an array list of strings (say 100 strings).
      - I have a Reader Class that reads the strings and passes them to a Writer Class that prints the strings to the console.
    Right now this runs in a Single Thread Model. I wanted to make this multi-threaded, but with the following features:
      - Ability to set MAX_READERS
      - Ability to set MAX_WRITERS
      - Ability to set BATCH_SIZE
    So basically the code should instantiate that many Readers and Writers and do the work in parallel. Any pseudo code will really be helpful to keep me going!
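    Since the question doesn't name a language, here is a rough C# sketch of one way to wire it up (the class names and batching scheme are assumptions): readers pull BATCH_SIZE strings at a time from the source list into a shared queue, and writers drain the queue to the console.

        using System;
        using System.Collections.Generic;
        using System.Threading;

        class ReaderWriterPipeline
        {
            const int MAX_READERS = 2, MAX_WRITERS = 3, BATCH_SIZE = 10;

            static readonly Queue<string> pending = new Queue<string>(); // work handed to writers
            static readonly object gate = new object();
            static int readersLeft = MAX_READERS;

            static void Main()
            {
                List<string> source = new List<string>();
                for (int i = 0; i < 100; i++) source.Add("string " + i);

                int next = 0; // index of the next unread item, shared by all readers
                List<Thread> threads = new List<Thread>();

                for (int r = 0; r < MAX_READERS; r++)
                    threads.Add(new Thread(delegate()
                    {
                        while (true)
                        {
                            List<string> batch = new List<string>();
                            lock (source)
                            {
                                if (next >= source.Count) break;
                                int take = Math.Min(BATCH_SIZE, source.Count - next);
                                batch.AddRange(source.GetRange(next, take));
                                next += take;
                            }
                            lock (gate)
                            {
                                foreach (string s in batch) pending.Enqueue(s);
                                Monitor.PulseAll(gate);
                            }
                        }
                        lock (gate) { readersLeft--; Monitor.PulseAll(gate); }
                    }));

                for (int w = 0; w < MAX_WRITERS; w++)
                    threads.Add(new Thread(delegate()
                    {
                        while (true)
                        {
                            string item;
                            lock (gate)
                            {
                                while (pending.Count == 0 && readersLeft > 0) Monitor.Wait(gate);
                                if (pending.Count == 0) return;   // readers done and queue drained
                                item = pending.Dequeue();
                            }
                            Console.WriteLine(item);
                        }
                    }));

                foreach (Thread t in threads) t.Start();
                foreach (Thread t in threads) t.Join();
            }
        }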

    Read the article

  • What Use are Threads Outside of Parallel Problems on MultiCore Systems?

    - by Robert S. Barnes
    Threads make the design, implementation and debugging of a program significantly more difficult. Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single-core system. I can understand threading something like an MPEG2 decoder that's going to run on a multicore CPU (which I've done), but what can justify the significant development costs threading entails when you're talking about a single-core system, or even a multicore system if your task doesn't gain significant performance from a parallel implementation? Or, more succinctly, what kinds of non-performance-related problems justify threading? Edit: Well, I just ran across one instance that's not CPU-limited but where threads make a big difference: "TCP, HTTP and the Multi-Threading Sweet Spot". Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high-latency network connection. Non-blocking I/O would use significantly less local CPU resources, but would be much more difficult to design and implement.

    Read the article

  • Protecting critical sections based on a condition in C#

    - by NAADEV
    Hello, I'm dealing with a curious scenario. I'm using Entity Framework to save (insert/update) into a SQL database in a multithreaded environment. The problem is that I need to access the database to see whether a record with a particular key has already been created (in order to set a field value to "executing") or whether it's new (to set a different value, "pending"). Those records are identified by a unique guid. I've solved this problem by setting a lock, since I know the entity will not be present in any other process (in other words, I will not have the same guid in different processes), and it seems to be working fine. It looks something like this:

        static readonly object LockableObject = new object();

        static void SaveElement(Entity e)
        {
            lock (LockableObject)
            {
                Entity e2 = Repository.FindByKey(e);
                if (e2 == null)
                {
                    Repository.Insert(e);
                }
                else
                {
                    Repository.Update(e);
                }
            }
        }

    But this implies that when I have a huge amount of requests to be saved, they will be queued. I wonder if there is something like this (please, take it just as an idea):

        static void SaveElement(Entity e)
        {
            using (ThisWouldBeAClassToProtectBasedOnACondition protector = new ThisWouldBeAClassToProtectBasedOnACondition(e => e.UniqueId))
            {
                Entity e2 = Repository.FindByKey(e);
                if (e2 == null)
                {
                    Repository.Insert(e);
                }
                else
                {
                    Repository.Update(e);
                }
            }
        }

    The idea would be having a kind of protection that protects based on a condition, so each entity e would have its own lock based on its e.UniqueId property. Any idea?
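    One common shape for "a lock per condition" is a lock object per key; a minimal sketch follows (note it never evicts lock objects, so a real version would need eviction or lock striping if the set of guids grows without bound):

        using System;
        using System.Collections.Generic;

        // Per-key locking: saves for different UniqueIds proceed in parallel,
        // while two saves of the same UniqueId still serialize.
        static class KeyedLock
        {
            static readonly Dictionary<Guid, object> locks = new Dictionary<Guid, object>();
            static readonly object gate = new object();

            public static object For(Guid key)
            {
                lock (gate)
                {
                    object o;
                    if (!locks.TryGetValue(key, out o))
                    {
                        o = new object();
                        locks[key] = o;
                    }
                    return o;
                }
            }
        }

        // Usage sketch (Repository and Entity are the question's types):
        // static void SaveElement(Entity e)
        // {
        //     lock (KeyedLock.For(e.UniqueId))
        //     {
        //         Entity e2 = Repository.FindByKey(e);
        //         if (e2 == null) Repository.Insert(e);
        //         else Repository.Update(e);
        //     }
        // }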

    Read the article

  • Handling activity destruction in a multithreaded Android app

    - by Jayesh
    Hi, I have a multithreaded app where background threads are used to load data over the network or from disk/db. Every once in a while the user will perform some action, e.g. fetch news over the network, which will spawn a background AsyncTask, but for some reason the user will quit the app (press the back button so that the activity gets destroyed). In most such scenarios, I make appropriate checks in the background thread after it returns from network I/O, so that it won't crash by accessing members of the activity that has been destroyed by then. However, some corner cases are left where crashes happen, because the background thread accesses some member of the activity that is now null. Do other Android developers have some generic/recommended framework to handle such scenarios? These are the times when I wish Android would guarantee termination of all threads when an activity is destroyed (in the same way that a regular Linux process cleans up when it quits)... but I guess the Android devs had good reasons for not exposing process lifetimes through the API.

    Read the article

  • Trying to run multiple HTTP requests in parallel, but being limited by Windows (registry)

    - by Nailuj
    I'm developing an application (winforms, C#, .NET 4.0) where I access a lookup functionality from a 3rd party through a simple HTTP request. I call a url with a parameter, and in return I get a small string with the result of the lookup. Simple enough. The challenge, however, is that I have to do lots of these lookups (a couple of thousand), and I would like to limit the time needed. Therefore I would like to run requests in parallel (say 10-20). I use a ThreadPool to do this, and the short version of my code looks like this:

        public void startAsyncLookup(Action<LookupResult> returnLookupResult)
        {
            this.returnLookupResult = returnLookupResult;
            foreach (string number in numbersToLookup)
            {
                ThreadPool.QueueUserWorkItem(lookupNumber, number);
            }
        }

        public void lookupNumber(Object threadContext)
        {
            string numberToLookup = (string)threadContext;
            string url = @"http://some.url.com/?number=" + numberToLookup;
            WebClient webClient = new WebClient();
            Stream responseData = webClient.OpenRead(url);
            LookupResult lookupResult = parseLookupResult(responseData);
            returnLookupResult(lookupResult);
        }

    I fill up numbersToLookup (a List<String>) from another place, call startAsyncLookup and provide it with a call-back function returnLookupResult to return each result. This works, but I found that I'm not getting the throughput I want. Initially I thought it might be the 3rd party having a poor system on their end, but I excluded this by trying to run the same code from two different machines at the same time. Each of the two took as long as one did alone, so I could rule that out. A colleague then tipped me off that this might be a limitation in Windows. I googled a bit, and found amongst others this post saying that by default Windows limits the number of simultaneous requests to the same web server to 4 for HTTP 1.0 and to 2 for HTTP 1.1 (for HTTP 1.1 this is actually according to the specification (RFC2068)). The same post also provided a way to increase these limits: by adding two registry values to [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings] (MaxConnectionsPerServer and MaxConnectionsPer1_0Server), I could control this myself. So I tried this (set both to 20), restarted my computer, and tried to run my program again. Sadly though, it didn't seem to help. I also kept an eye on the Resource Monitor (see screen shot) while running my batch lookup, and I noticed that my application (the one with the title blacked out) still only used two TCP connections. So, the question is, why isn't this working? Is the post I linked to using the wrong registry values? Is it perhaps no longer possible to "hack" this in Windows (I'm on Windows 7)? Any ideas would be highly appreciated :) And just in case anyone should wonder, I have also tried different settings for MaxThreads on ThreadPool (everything from 10 to 100), and this didn't seem to affect my throughput at all, so the problem shouldn't be there either.
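    Worth noting: those registry values govern WinINet (Internet Explorer), while WebClient/HttpWebRequest enforce their own per-host limit (2 by default) through System.Net. A sketch of raising it in code follows; the value 20 is just the degree of parallelism aimed for above, not a recommendation:

        using System.Net;

        class LookupSetup
        {
            public static void Configure()
            {
                // System.Net (WebClient/HttpWebRequest) caps concurrent connections
                // per host at 2 by default, independently of the WinINet registry values.
                ServicePointManager.DefaultConnectionLimit = 20;

                // Equivalent app.config setting:
                // <system.net>
                //   <connectionManagement>
                //     <add address="*" maxconnection="20" />
                //   </connectionManagement>
                // </system.net>
            }
        }

    Calling something like this once before queuing the lookups should let more than two requests to the same host run at a time.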

    Read the article

  • Catching the redirected address from NSURLConnection

    - by Vic
    I'm working on a piece of software that follows an HTTP redirection which is dynamically calculated by the server depending on a parameter. I don't want to show the primary server in Mobile Safari, but rather the redirected address only. The following code works:

        request = [NSMutableURLRequest requestWithURL:originalUrl cachePolicy:NSURLRequestReloadIgnoringCacheData timeoutInterval:10];
        [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
        // Extract the redirected URL
        target = [response URL];

    The problem is that the server requires several seconds to answer. sendSynchronousRequest blocks the app completely for this time, which is messy; I can't even display the "Busy" animation. Does anyone know how I can retrieve the redirected address asynchronously, without Safari appearing in the meanwhile with the redirecting server URL, or how to display some sort of "Be patient" animation during the sendSynchronousRequest? What disadvantages would running sendSynchronousRequest on another thread have?

    Read the article

  • Java thread - run() and start() methods

    - by JavaUser
    Please explain the output of the code below. If I call th1.run(), the output is:

        EXTENDS RUN
        RUNNABLE RUN

    If I call th1.start(), the output is:

        RUNNABLE RUN
        EXTENDS RUN

    Why this inconsistency? Please explain.

        class ThreadExample extends Thread {
            public void run() {
                System.out.println("EXTENDS RUN");
            }
        }

        class ThreadExampleRunnable implements Runnable {
            public void run() {
                System.out.println("RUNNABLE RUN ");
            }
        }

        class ThreadExampleMain {
            public static void main(String[] args) {
                ThreadExample th1 = new ThreadExample();
                //th1.start();
                th1.run();
                ThreadExampleRunnable th2 = new ThreadExampleRunnable();
                th2.run();
            }
        }

    Read the article

  • Are there any tools to optimize the number of consumer and producer threads on a JMS queue?

    - by lindelof
    I'm working on an application that is distributed over two JBoss instances and that produces/consumes JMS messages on several JMS queues. When we configured the application we had to determine which threading model we would use, in particular the number of producing and consuming threads per queue. We have done this in a rather ad-hoc fashion but after reading the most recent columns by Herb Sutter in Dr Dobbs (in particular this one) I would like to size our threads in a more rigorous manner. Are there any methods/tools to measure the throughput of JMS queues (in particular JBoss Messaging queues) as a function of the number of producing/consuming threads?

    Read the article

  • Why is the output of the following code different when I compile or run it more than once?

    - by Sanjeev
        class Name implements Runnable {
            public void run() {
                for (int x = 1; x <= 3; x++) {
                    System.out.println("Run by " + Thread.currentThread().getName() + ", x is " + x);
                }
            }
        }

        public class Threadtest {
            public static void main(String[] args) {
                // Make one Runnable
                Name nr = new Name();
                Thread one = new Thread(nr);
                Thread two = new Thread(nr);
                Thread three = new Thread(nr);
                one.setName("A");
                two.setName("B");
                three.setName("C");
                one.start();
                two.start();
                three.start();
            }
        }

    The output is different each time I compile and run it. I don't know why; any idea?

    Read the article

  • Best ASP.NET Background Service Implementation

    - by Jason N. Gaylord
    What's the best implementation for more than one background service in an ASP.NET application?
      - Timer callback:
            Timer timer = new Timer(new TimerCallback(MyWorkCallback), HttpContext, 5000, 5000);
      - Thread or ThreadPool:
            Thread thread = new Thread(Work);
            thread.IsBackground = true;
            thread.Start();
      - BackgroundWorker:
            BackgroundWorker worker = new BackgroundWorker();
            worker.DoWork += new DoWorkEventHandler(DoMyWork);
            worker.RunWorkerCompleted += new RunWorkerCompletedEventHandler(DoMyWork_Completed);
            worker.RunWorkerAsync();
      - Caching, like http://www.codeproject.com/KB/aspnet/ASPNETService.aspx (located in Jeff Atwood's post here)
    I need to run multiple background "services" at a given time. One service may run every 5 minutes, where another may run once a day. There will never be more than 10 services running at a time.
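    A sketch of one way to host several such services on different intervals behind a single object (the names are illustrative; the usual caveat applies that IIS can recycle the AppDomain, so anything critical needs persistence or an external scheduler):

        using System;
        using System.Collections.Generic;
        using System.Threading;

        // Hosts several periodic "services". Each service is a callback plus an
        // interval; a per-service flag keeps a slow callback from overlapping itself.
        class BackgroundServiceHost : IDisposable
        {
            private readonly List<Timer> timers = new List<Timer>();

            public void Add(TimeSpan interval, Action work)
            {
                int running = 0;
                TimerCallback cb = delegate
                {
                    // Skip this tick if the previous run of the same service hasn't finished.
                    if (Interlocked.CompareExchange(ref running, 1, 0) != 0) return;
                    try { work(); }
                    catch (Exception) { /* log; never let an exception tear down the host */ }
                    finally { Interlocked.Exchange(ref running, 0); }
                };
                timers.Add(new Timer(cb, null, interval, interval));
            }

            public void Dispose()
            {
                foreach (Timer t in timers) t.Dispose();
            }
        }

        // Usage sketch (service methods are hypothetical):
        // BackgroundServiceHost host = new BackgroundServiceHost();
        // host.Add(TimeSpan.FromMinutes(5), DoCleanup);
        // host.Add(TimeSpan.FromDays(1), SendDailyReport);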

    Read the article

  • Several clients waiting for the same event

    - by ff8mania
    I'm developing a communication API to be used by a lot of generic clients to communicate with a proprietary system. This proprietary system exposes an API, and I use particular classes to send and wait for messages from this system: the system alerts me that a message is ready using an event. The event is named OnMessageArrived. My idea is to expose a simple SendSyncMessage(message) method that lets the user/client simply send a message, with the method returning the response. The client:

        using (Communicator c = new Communicator())
        {
            response = c.SendSync(message);
        }

    The Communicator class is done this way:

        public class Communicator : IDisposable
        {
            // Proprietary system object
            ExternalSystem c;
            String currentRespone;
            Guid currentGUID;
            private readonly ManualResetEvent _manualResetEvent;
            private ManualResetEvent _manualResetEvent2;
            String systemName = "system";
            String ServerName = "server";

            public Communicator()
            {
                _manualResetEvent = new ManualResetEvent(false);
                // These methods are from the proprietary system API
                c = SystemInstance.CreateInstance();
                c.Connect(systemName, ServerName);
            }

            private void ConnectionStarter(object data)
            {
                c.OnMessageArrivedEvent += c_OnMessageArrivedEvent;
                _manualResetEvent.WaitOne();
                c.OnMessageArrivedEvent -= c_OnMessageArrivedEvent;
            }

            public String SendSync(String Message)
            {
                Thread _internalThread = new Thread(ConnectionStarter);
                _internalThread.Start(c);
                _manualResetEvent2 = new ManualResetEvent(false);
                String toRet;
                int messageID;
                currentGUID = Guid.NewGuid();
                c.SendMessage(Message, "Request", currentGUID.ToString());
                _manualResetEvent2.WaitOne();
                toRet = currentRespone;
                return toRet;
            }

            void c_OnMessageArrivedEvent(int Id, string root, string guid, int TimeOut, out int ReturnCode)
            {
                if (!guid.Equals(currentGUID.ToString()))
                {
                    _manualResetEvent2.Set();
                    ReturnCode = 0;
                    return;
                }
                object newMessage;
                c.FetchMessage(Id, 7, out newMessage);
                currentRespone = newMessage.ToString();
                ReturnCode = 0;
                _manualResetEvent2.Set();
            }
        }

    I'm fairly new to wait handles, but my idea was to create an instance that sends the message and waits for an event. As soon as the event arrives, it checks whether the message is the one I expect (checking the unique guid); otherwise it continues to wait for the next event. This is because there can be (and usually are) a lot of clients working concurrently, and I want them to work in parallel. With my current implementation, if I run client 1, client 2 and client 3, client 2 only starts sending messages once client 1 has finished, and client 3 only once client 2 has finished: not what I'm trying to do. Can you help me fix my code and reach my goal? Thanks!
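    One way to let callers wait concurrently is a pending-request entry per guid, so each synchronous sender waits on its own event instead of a single shared one. A rough sketch, with the usage comments being assumptions about how it would slot into the Communicator above:

        using System.Collections.Generic;
        using System.Threading;

        class PendingRequest
        {
            internal readonly ManualResetEvent Done = new ManualResetEvent(false);
            public string Response;                       // filled in by the event handler
            public string WaitForResponse() { Done.WaitOne(); return Response; }
        }

        class PendingRequests
        {
            private readonly Dictionary<string, PendingRequest> pending = new Dictionary<string, PendingRequest>();
            private readonly object gate = new object();

            public PendingRequest Register(string guid)
            {
                PendingRequest r = new PendingRequest();
                lock (gate) { pending[guid] = r; }
                return r;
            }

            // Returns false if the guid belongs to some other caller's request.
            public bool Complete(string guid, string response)
            {
                PendingRequest r;
                lock (gate)
                {
                    if (!pending.TryGetValue(guid, out r)) return false;
                    pending.Remove(guid);
                }
                r.Response = response;
                r.Done.Set();
                return true;
            }
        }

        // Rough usage inside the Communicator (assumptions, not the real API):
        //   PendingRequest req = requests.Register(currentGUID.ToString());
        //   c.SendMessage(Message, "Request", currentGUID.ToString());
        //   return req.WaitForResponse();
        // and in c_OnMessageArrivedEvent, after FetchMessage:
        //   requests.Complete(guid, newMessage.ToString());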

    Read the article

  • Ruby: would using Fibers increase my DB insert throughput?

    - by Zombies
    Currently I am using Ruby 1.9.1 and the 'ruby-mysql' gem, which unlike the 'mysql' gem is written in Ruby only. This is pretty slow actually, as it seems to insert at a rate of almost 1 per second (SLOOOOOWWWWWW). And I have a lot of inserts to make too; it's pretty much what this script ultimately does. I am using just 1 connection (since I am using just one thread). I am hoping to speed things up by creating a fiber that will:
      - create a new DB connection
      - insert 1-3 records
      - close the DB connection
    I would imagine launching 20-50 of these would greatly increase DB throughput. Am I correct to go down this route? I feel that this is the best option, as opposed to refactoring all of my DB code :(

    Read the article

  • performSelectorInBackground, notify other viewcontroller when done.

    - by Michiel
    Hi, I have a method used to save an image when the user clicks Save. I use performSelectorInBackground to save the image, the view controller is popped and the previous view controller is shown. I want the table (on the previous UIViewController) to reload its data when the image saving is done. How can I do this? The save method is called like this:

        [self performSelectorInBackground:@selector(saveImage) withObject:nil];
        [self.navigationController popViewControllerAnimated:YES];

    Read the article

  • Why are functional languages considered a boon for multi-threaded environments?

    - by Billy ONeal
    I hear a lot about functional languages, and how they scale well because there is no state around a function, and therefore that function can be massively parallelized. However, this makes little sense to me because almost all real-world practical programs need/have state to take care of. I also find it interesting that most major scaling libraries, e.g. MapReduce, are typically written in imperative languages like C or C++. I'd like to hear from the functional camp where this hype I'm hearing is coming from...

    Read the article
