Search Results

Search found 1638 results on 66 pages for 'multithreading'.

Page 31/66 | < Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >

  • Python threads all executing on a single core

    - by Rob Lourens
    I have a Python program that spawns many threads, runs 4 at a time, and each performs an expensive operation. Pseudocode:

        for object in list:
            t = Thread(target=process, args=(object))
            # if fewer than 4 threads are currently running, t.start()
            # otherwise, add t to a queue

    But when the program is run, Activity Monitor in OS X shows that 1 of the 4 logical cores is at 100% and the others are at nearly 0. Obviously I can't force the OS to do anything, but I've never had to pay attention to performance in multi-threaded code like this before, so I was wondering if I'm just missing or misunderstanding something. Thanks.
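    For CPU-bound work like this, CPython's global interpreter lock keeps only one thread executing Python bytecode at a time, which matches the one-busy-core symptom. A minimal sketch of the same "4 at a time" pattern using processes instead of threads (assuming process() is CPU-bound and picklable; the names here are illustrative, not from the original post):

        from multiprocessing import Pool

        def process(item):
            # stand-in for the expensive, CPU-bound operation
            return item * item

        if __name__ == "__main__":
            items = range(100)
            pool = Pool(processes=4)            # at most 4 workers run at once
            results = pool.map(process, items)
            pool.close()
            pool.join()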

    Read the article

  • Thread-safe data structure design

    - by Inso Reiges
    Hello, I have to design a data structure that is to be used in a multi-threaded environment. The basic API is simple: insert element, remove element, retrieve element, check that element exists. The structure's implementation uses implicit locking to guarantee the atomicity of a single API call.

    After I implemented this it became apparent that what I really need is atomicity across several API calls. For example, if a caller needs to check the existence of an element before trying to insert it, he can't do that atomically even if each single API call is atomic:

        if (!data_structure.exists(element)) {
            data_structure.insert(element);
        }

    The example is somewhat awkward, but the basic point is that we can't trust the result of the "exists" call anymore after we return from the atomic context (the generated assembly clearly shows a minor chance of a context switch between the two calls).

    What I currently have in mind to solve this is exposing the lock through the data structure's public API. This way clients will have to explicitly lock things, but at least they won't have to create their own locks. Is there a better commonly-known solution to these kinds of problems? And as long as we're at it, can you advise some good literature on thread-safe design?

    EDIT: I have a better example. Suppose that element retrieval returns either a reference or a pointer to the stored element and not its copy. How can a caller be protected to safely use this pointer/reference after the call returns? If you think that not returning copies is a problem, then think about deep copies, i.e. objects that should also copy other objects they point to internally. Thank you.
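    Two common ways out are to put the compound operation itself into the API (so the check-then-insert happens under one lock acquisition) or to hand the caller a scoped guard that holds the structure's lock while they work with references. A minimal C++ sketch of both ideas, assuming a set-like container; names such as ConcurrentSet, insert_if_absent and scoped_lock_guard are illustrative, not from the original post:

        #include <mutex>
        #include <set>

        template <typename T>
        class ConcurrentSet {
            std::set<T> data_;
            mutable std::mutex mutex_;
        public:
            // Compound operation: test-and-insert under a single lock,
            // so callers never need a separate exists() + insert() pair.
            bool insert_if_absent(const T& value) {
                std::lock_guard<std::mutex> lock(mutex_);
                return data_.insert(value).second;
            }

            // Expose the lock itself: the caller holds it for a whole
            // sequence of calls (and while using any references obtained).
            std::unique_lock<std::mutex> scoped_lock_guard() const {
                return std::unique_lock<std::mutex>(mutex_);
            }

            bool exists_unlocked(const T& value) const {   // caller must hold the lock
                return data_.count(value) != 0;
            }
        };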

    Read the article

  • lock shared data using c#

    - by menacheb
    Hi, I have a program (C#) with a list of tasks to do. I also have two threads: one to add tasks to the list, and one to read and remove the completed tasks from it. I'm using the lock statement each time one of the threads wants to access the list. Another thing I want to do: if the list is empty, the thread that reads from the list should sleep, and wake up when the first thread adds a task to the list. Here is the code I wrote:

        ...
        List<String> myList = new List();
        Thread writeThread, readThread;
        writeThread = new Thread(write);
        writeThread.Start();
        readThraed = new Thread(read);
        readThread.Start();
        ...

        private void write()
        {
            while(...)
            {
                ...
                lock(myList)
                {
                    myList.Add(...);
                }
                ...
                if (!readThread.IsAlive)
                {
                    readThraed = new Thread(read);
                    readThread.Start();
                }
                ...
            }
            ...
        }

        private void read()
        {
            bool noMoreTasks = false;
            while (!noMoreTasks)
            {
                lock (MyList) // synchronize with the Add func.
                {
                    if (dataFromClientList.Count > 0)
                    {
                        String task = myList.First();
                        myList.Remove(task);
                    }
                    else
                    {
                        noMoreTasks = true;
                    }
                }
                ...
            }
            readThread.Abort();
        }

    Apparently I did it wrong, and it's not performing as expected (the readThread doesn't read from the list). Does anyone know what my problem is, and how to make it right? Many thanks,
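    The sleep-until-something-is-added behaviour described here is usually built with Monitor.Wait and Monitor.Pulse on the same lock object rather than by restarting the reader thread. A minimal sketch of that pattern, assuming a single shared queue (the class and member names are illustrative, not from the original post):

        using System.Collections.Generic;
        using System.Threading;

        class TaskQueue
        {
            private readonly Queue<string> _tasks = new Queue<string>();
            private readonly object _sync = new object();

            public void Add(string task)
            {
                lock (_sync)
                {
                    _tasks.Enqueue(task);
                    Monitor.Pulse(_sync);          // wake a sleeping reader
                }
            }

            public string Take()
            {
                lock (_sync)
                {
                    while (_tasks.Count == 0)
                        Monitor.Wait(_sync);       // releases the lock and sleeps until pulsed
                    return _tasks.Dequeue();
                }
            }
        }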

    Read the article

  • Long running operations (threads) in a web (asp.net) environment

    - by rrejc
    I have an asp.net (mvc) web site. As part of its functions I will have to support some long running operations, for example:

    Initiated from user: the user can upload an (xml) file to the server. On the server I need to extract the file, do some manipulation (insert into the db), etc. This can take from one minute to ten minutes (or even more - depends on file size). Of course I don't want to block the request while the import is running; I want to redirect the user to some progress page where he will have a chance to watch the status, errors or even cancel the import. This operation will not be frequently used, but it may happen that two users try to import data at the same time, so it would be nice to run the imports in parallel. At the beginning I was thinking of creating a new thread in IIS (in the controller action) and running the import in that thread. But I am not sure if this is a good idea (to create worker threads on a web server). Should I use windows services or some other approach?

    Initiated from system:
    - I will have to periodically update the lucene index with new data.
    - I will have to send mass emails (in the future).

    Should I implement this as a job in the site and run the job via Quartz.net, or should I also create a windows service or something? What are the best practices when it comes to running site "jobs"? Thanks!

    Read the article

  • [F#] Parallelize code in nested loops

    - by Juliet
    You always hear that functional code is inherently easier to parallelize than non-functional code, so I decided to write a function which does the following: given an input of strings, total up the number of occurrences of each character. So, given the input [ "aaaaa"; "bbb"; "ccccccc"; "abbbc" ], the method returns a: 6; b: 6; c: 8. Here's what I've written:

        (* seq<#seq<char>> -> Map<char,int> *)
        let wordFrequency input =
            input
            |> Seq.fold (fun acc text ->
                (* This inner loop can be processed on its own thread *)
                text
                |> Seq.choose (fun char -> if Char.IsLetter char then Some(char) else None)
                |> Seq.fold (fun (acc : Map<_,_>) item ->
                    match acc.TryFind(item) with
                    | Some(count) -> acc.Add(item, count + 1)
                    | None -> acc.Add(item, 1)) acc
                ) Map.empty

    This code is ideally parallelizable, because each string in input can be processed on its own thread. It's not as straightforward as it looks, since the inner loop adds items to a Map shared between all of the inputs. I'd like the inner loop factored out onto its own thread, and I don't want to use any mutable state. How would I re-write this function using an Async workflow?

    Read the article

  • Castle ActiveRecord: session error search and access to lazy loading property in different threads

    - by sc911
    Hi *, I've got a problem with a multi-threaded desktop application using Castle ActiveRecord in C#: to keep the GUI alive while searching for objects based on user input, I'm using the BackgroundWorker for the search function. Some of the properties of the objects, especially some HasMany relations, are marked as Lazy. Now, when the search is finished and the user selects a resulting object, some of the properties of this object should be displayed. But as the search was done by the BackgroundWorker in a different thread, accessing the properties fails because the session for the lazy access is no longer available. What is the best way to do the search in an extra thread to keep the GUI alive and still access all properties correctly, including those marked as lazy? Thanks for any advice! Regards sc911

    Read the article

  • Thread Local Storage and local method variables

    - by miguel
    In C#, each thread has its own stack space. If this is the case, why is the following code not thread-safe? (It is stated that this code is thread-safe in this post: Locking in C#)

        class Foo
        {
            private int count = 0;

            public void TrySomething()
            {
                count++;
            }
        }

    As count is an int (stack variable), surely this value would be isolated to an individual thread, on its own stack, and therefore thread-safe? I am probably missing something here, but I don't understand what is actually in Thread Local Storage if not stack-based variables for the thread?
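    The confusion here is between local variables (which do live on each thread's stack) and instance fields: count in the snippet above is a field of the Foo object on the heap, so every thread calling TrySomething on the same instance touches the same memory, and count++ is a non-atomic read-increment-write. A small sketch of the distinction, with an atomic alternative (illustrative only, not from the original post):

        using System.Threading;

        class Foo
        {
            private int count = 0;                 // one copy per Foo instance, on the heap

            public void TrySomething()
            {
                int local = 5;                     // per-thread: lives on the calling thread's stack
                local++;                           // safe, no other thread can see it

                Interlocked.Increment(ref count);  // shared: must be updated atomically
            }
        }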

    Read the article

  • Will multi-threading increase the speed of a calculation on a single processor?

    - by Harsha
    On a single processor, will multi-threading increase the speed of a calculation? As we all know, multi-threading is often used to increase user responsiveness, achieved by separating the UI thread from the calculation thread. But let's talk only about a console application: will multi-threading increase the speed of the calculation, i.e. do we get the result faster when we calculate with multiple threads? And what about on multiple cores - will multi-threading increase the speed there or not? Please help me. If you have any material to learn more about threading, please post it. Thanks in advance, Harsha

    Read the article

  • Can threads safely read variables set by VCL events?

    - by Tom1952
    Is it safe for a thread to READ a variable set by a Delphi VCL event? When a user clicks on a VCL TCheckbox, the main thread sets a boolean to the checkbox's Checked state:

        CheckboxState := CheckBox1.Checked;

    At any time, a thread reads that variable:

        if CheckBoxState then ...

    It doesn't matter if the thread "misses" a change to the boolean, because the thread checks the variable in a loop as it does other things, so it will see the state change eventually. Is this safe, or do I need special code? Is surrounding the read and write of the variable (in the thread and main thread respectively) with critical-section calls necessary and sufficient? As I said, it doesn't matter if the thread gets the "wrong" value, but I keep thinking that there might be a low-level problem if one thread tries to read a variable while the main thread is in the middle of writing it, or vice versa. My question is similar to this one: http://stackoverflow.com/questions/1353096/cross-thread-reading-of-a-variable. (Also related to my previous question: http://stackoverflow.com/questions/2449183/using-entercriticalsection-in-thread-to-update-vcl-label)

    Read the article

  • Queuing methods to be run on an object by different threads in Python

    - by Ben
    Let's say I have an object whose class definition looks like:

        class Command:
            foo = 5

            def run(self, bar):
                time.sleep(1)
                self.foo = bar
                return self.foo

    If this class is instantiated once, but different threads are hitting its run method (via an HTTP request, handled separately), passing in different args, what is the best method to queue them? Can this be done in the class definition itself?
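    One simple way to queue the calls inside the class itself is a per-instance lock, so whichever thread arrives second blocks until the first call has finished. A minimal sketch of that idea (just one option among several; the attribute names are illustrative):

        import threading
        import time

        class Command:
            def __init__(self):
                self.foo = 5
                self._lock = threading.Lock()   # one lock per instance

            def run(self, bar):
                with self._lock:                # calls from other threads wait here, in order of arrival
                    time.sleep(1)
                    self.foo = bar
                    return self.foo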

    Read the article

  • Using a Cross Thread Boolean to Abort Thread

    - by Jon
    Possible Duplicate: Can a C# thread really cache a value and ignore changes to that value on other threads?

    Let's say we have this code:

        bool KeepGoing = true;

        DataInThread = new Thread(new ThreadStart(DataInThreadMethod));
        DataInThread.Start();

        // bla bla, time goes on
        KeepGoing = false;

        private void DataInThreadMethod()
        {
            while (KeepGoing)
            {
                // Do stuff
            }
        }

    Now, the idea is that using the boolean is a safe way to terminate the thread. However, because that boolean is owned and set by the calling thread, does that cause any issue? The boolean is only used by the calling thread to stop the worker thread, so it's not like it's being used elsewhere.
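    The usual concern with this pattern is not which thread "owns" the boolean but that, without a memory barrier, the compiler/JIT is allowed to cache the value in a register, so the worker may never observe the write. Marking the field volatile (or using a cancellation token) is the common fix. A minimal sketch under that assumption (class and member names are illustrative):

        class Worker
        {
            private volatile bool _keepGoing = true;   // volatile: every read sees the latest write

            public void Stop()                         // called from the controlling thread
            {
                _keepGoing = false;
            }

            public void Run()                          // runs on the worker thread
            {
                while (_keepGoing)
                {
                    // do stuff
                }
            }
        }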

    Read the article

  • SQLserver multithreaded locking with TABLOCKX

    - by WilfriedVS
    I have a table "tbluser" with 2 fields:

        userid = integer (autoincrement)
        user = nvarchar(100)

    I have a multithreaded/multi-server application that uses this table. I want to accomplish the following:

    - Guarantee that field user is unique in my table
    - Guarantee that the combination userid/user is unique in each server's memory

    I have the following stored procedure:

        CREATE PROCEDURE uniqueuser @user nvarchar(100)
        AS
        BEGIN
            BEGIN TRAN
            DECLARE @userID int
            SET nocount ON
            SET @userID = (SELECT userID FROM tbluser WITH (TABLOCKX) WHERE [user] = @user)
            IF @userID <> ''
            BEGIN
                SELECT userID = @userID
            END
            ELSE
            BEGIN
                INSERT INTO tbluser([user]) VALUES (@user)
                SELECT userID = SCOPE_IDENTITY()
            END
            COMMIT TRAN
        END

    Basically the application calls the stored procedure and provides a username as parameter. The stored procedure either gets the userid or inserts the user if it is a new user. Am I correct to assume that the table is locked (so only one server can insert/query at a time)?

    Read the article

  • Are concurrency issues possible when using the WCF ServiceBehavior attribute set to ConcurrencyMode.Multiple?

    - by Brandon Linton
    We have a WCF service that makes a good deal of transactional NHibernate calls. Occasionally we were seeing SQL timeouts, even though the calls were updating different rows and the tables were set to row-level locking. After digging into the logs, it looks like different threads were entering the same point in the code (our transaction using block), and an update was hanging on commit. It didn't make sense, though, because we believed that the following service class attribute was forcing a unique execution thread per service call:

        [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.PerCall)]

    We recently changed the concurrency mode to ConcurrencyMode.Single and haven't yet run into any issues, but the bug was very difficult to reproduce (if anyone has any thoughts on flushing a bug like that out, let me know!). Anyway, that all brings me to my question: shouldn't an InstanceContextMode of PerCall enforce thread-safety within the service, even if the ConcurrencyMode is set to Multiple? How would it be possible for two calls to be serviced by the same service instance? Thanks!

    Read the article

  • Java multi-threading - what is the best way to monitor the activity of a number of threads?

    - by MalcomTucker
    I have a number of threads that are performing a long running task. These threads themselves have child threads that do further subdivisions of the work. What is the best way for me to track the following:

    - How many total threads my process has created
    - What state each thread is currently in
    - What part of my process each thread has currently got to

    I want to do it in as efficient a way as possible, and once threads finish, I don't want any references to them hanging around, because I need to free up memory as early as possible. Any advice?
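    One lightweight approach is a shared registry that each worker updates as it progresses and removes itself from when it finishes, so no references outlive the thread. A minimal sketch of that idea using the standard concurrency utilities (the class and method names are illustrative, not a library API):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.AtomicInteger;

        public class ThreadTracker {
            private final AtomicInteger totalCreated = new AtomicInteger();
            private final Map<String, String> phase = new ConcurrentHashMap<>();

            // Wrap each unit of work so the registry is updated automatically.
            public Runnable wrap(String name, Runnable work) {
                totalCreated.incrementAndGet();
                return () -> {
                    phase.put(name, "running");
                    try {
                        work.run();
                    } finally {
                        phase.remove(name);    // no reference survives once the thread is done
                    }
                };
            }

            // Workers call this to report which part of the process they are in.
            public void report(String name, String where) { phase.put(name, where); }

            public int getTotalCreated()           { return totalCreated.get(); }
            public Map<String, String> snapshot()  { return phase; }
        }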

    Read the article

  • Grails/Spring HttpServletRequest synchronization

    - by Jeff Storey
    I was writing a simple Grails app and I have a spot in a gsp where one of my java beans is modified:

        <g:each in="${myList}" status="i" var="myVar">
            // if the user performs some view action, update one of the myVar elements
        </g:each>

    This works, but I don't think it's quite threadsafe. myList is an http request variable, but in pages that use ajax (or other client-side manipulations) it is possible for two threads to be modifying the same request-scope variable. The Spring AbstractController class provides a setSynchronizeOnSession method. Does grails provide any equivalent functionality? If not, what's the best way to protect this non-threadsafe mutation? thanks, Jeff

    Read the article

  • Singleton & Multi-threading

    - by ronan
    Friends, I have the following class:

        class Singleton
        {
        private:
            static Singleton *p_inst;
            Singleton();

        public:
            static Singleton * instance()
            {
                if (!p_inst)
                {
                    p_inst = new Singleton();
                }
                return p_inst;
            }
        };

    Please elaborate on the precautions to take while implementing a Singleton in a multi-threaded environment. Many thanks
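    The instance() above has a race: two threads can both see p_inst as null and both construct the object. Two common C++ remedies are a function-local static (whose initialization is guaranteed to be thread-safe since C++11) or std::call_once. A minimal sketch of both, as an illustration rather than a drop-in replacement for the class above:

        #include <mutex>

        class Singleton
        {
            Singleton() = default;
        public:
            // Meyers singleton: the local static is constructed exactly once,
            // and C++11 guarantees that this initialization is thread-safe.
            static Singleton& instance()
            {
                static Singleton inst;
                return inst;
            }
        };

        // Alternative keeping the pointer, guarded by std::call_once:
        class LazySingleton
        {
            static LazySingleton* p_inst;
            static std::once_flag init_flag;
            LazySingleton() = default;
        public:
            static LazySingleton* instance()
            {
                std::call_once(init_flag, [] { p_inst = new LazySingleton(); });
                return p_inst;
            }
        };

        LazySingleton* LazySingleton::p_inst = nullptr;
        std::once_flag LazySingleton::init_flag;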

    Read the article

  • Python C API from C++ app - know when to lock

    - by Alex
    Hi everyone, I am trying to write a C++ class that calls Python methods of a class that does some I/O operations (file, stdout) at once. The problem I have run into is that my class is called from different threads: sometimes the main thread, sometimes others. Obviously I tried to apply the usual approach for Python calls in multi-threaded native applications: basically everything starts from PyEval_AcquireLock and PyEval_ReleaseLock, or just global locks. According to the documentation, when a thread is already locked a deadlock ensues, and indeed when my class is called from the main thread, or from another thread that blocks Python execution, I get a deadlock. For example: Python calls Cfunc1(), a C++ function that creates threads internally which lead to calls into "my class"; it gets stuck on PyEval_AcquireLock because Python is obviously already locked, i.e. waiting for the C++ Cfunc1 call to complete. It completes fine if I omit those locks. It also completes fine when the Python interpreter is ready for the next user command, i.e. when the thread is calling funcs in the background - not inside a native call. I am looking for a workaround: I need to distinguish whether or not taking the global lock is allowed, i.e. whether Python is not locked and is ready to receive the next command. I tried PyGILState_Ensure; unfortunately I see a hang. Any known API or solution for this? (Python 2.4)
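    For calls coming from arbitrary native threads, the PyGILState_* API (available since Python 2.3) is designed to work whether or not the current thread already holds the GIL, which removes the need to detect the lock state by hand; it does assume PyEval_InitThreads() was called once from the main thread at startup. A minimal sketch of how a C++ callback might wrap its Python work under that assumption (callable handling and error checks trimmed; not the original poster's code):

        #include <Python.h>

        // Called from any native thread, including ones Python knows nothing about.
        void call_python_method(PyObject* callable)
        {
            PyGILState_STATE state = PyGILState_Ensure();   // acquires the GIL, safe even if this thread already holds it

            PyObject* result = PyObject_CallObject(callable, NULL);
            if (result == NULL)
                PyErr_Print();                              // report the Python exception
            Py_XDECREF(result);

            PyGILState_Release(state);                      // restores the previous GIL state
        }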

    Read the article

  • How can I use DispatcherTimer inside a for loop?

    - by Jitendra Jadav
    Hello guys, I am using the WPF DispatcherTimer and I want to use it inside a for loop. How can I do that? My code is here:

        DispatcherTimer timer = new DispatcherTimer();
        timer.Tick += (s, e) =>
        {
            for (i = 0; i < 10; i++)
            {
                obsValue.Add(new Entities(i));
                timer.Interval = TimeSpan.FromSeconds(30);
                timer.Start();
            }
        };

    Thanks....
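    A DispatcherTimer raises its Tick event repeatedly on the UI thread at the configured Interval, so the usual shape is to configure and start it once and let the handler advance a counter, rather than calling Start inside the loop. A small illustrative sketch (assuming the obsValue collection and Entities type from the snippet above):

        DispatcherTimer timer = new DispatcherTimer();
        int i = 0;

        timer.Interval = TimeSpan.FromSeconds(30);   // set once, before starting
        timer.Tick += (s, e) =>
        {
            obsValue.Add(new Entities(i));           // one item per tick
            i++;
            if (i >= 10)
                timer.Stop();                        // done after 10 ticks
        };
        timer.Start();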

    Read the article

  • ASP.NET Exception Handling in background threads

    - by Xodarap
    When I do ThreadPool.QueueUserWorkItem, I don't want unhandled exceptions to kill my entire process. So I do something like:

        ThreadPool.QueueUserWorkItem(delegate()
        {
            try
            {
                FunctionIActuallyWantToCall();
            }
            catch
            {
                HandleException();
            }
        });

    Is this the recommended pattern? It seems like there should be a simpler way to do this. It's in an asp.net-mvc app, if that's relevant.

    Read the article

  • Java - when to use notify or notifyAll?

    - by mdma
    Why does java.lang.Object have two notify methods - notify and notifyAll? It seems that notifyAll does at least everything notify does, so why not just use notifyAll all the time? If notifyAll is used instead of notify, is the program still correct, and vice versa? What influences the choice between these two methods?
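    The distinction matters most when several threads wait on the same monitor for different conditions: notify wakes an arbitrary single waiter, which may not be one whose condition became true, while notifyAll wakes everyone and lets each re-check its own guard. A small illustrative sketch of the standard guarded-wait idiom (not tied to any particular program):

        class BoundedBuffer {
            private final int[] items = new int[10];
            private int count = 0;

            public synchronized void put(int x) throws InterruptedException {
                while (count == items.length)
                    wait();                 // waiters must re-check the condition after waking
                items[count++] = x;
                notifyAll();                // producers and consumers wait on the same monitor,
            }                               // so wake them all and let the guards sort it out

            public synchronized int take() throws InterruptedException {
                while (count == 0)
                    wait();
                int x = items[--count];
                notifyAll();
                return x;
            }
        }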

    Read the article

  • Parallel version of loop not faster than serial version

    - by Il-Bhima
    I'm writing a program in C++ to perform a simulation of a particular system. For each timestep, the biggest part of the execution is taken up by a single loop. Fortunately this is embarrassingly parallel, so I decided to use Boost Threads to parallelize it (I'm running on a 2-core machine). I would expect a speedup close to 2 times the serial version, since there is no locking. However I am finding that there is no speedup at all.

    I implemented the parallel version of the loop as follows:

    - Wake up the two threads (they are blocked on a barrier).
    - Each thread then performs the following:
      - Atomically fetch and increment a global counter.
      - Retrieve the particle with that index.
      - Perform the computation on that particle, storing the result in a separate array.
      - Wait on a "job finished" barrier.
    - The main thread waits on the "job finished" barrier.

    I used this approach since it should provide good load balancing (each computation may take differing amounts of time). I am really curious as to what could possibly cause this slowdown. I always read that atomic variables are fast, but now I'm starting to wonder whether they have their performance costs. If anybody has some ideas what to look for or any hints, I would really appreciate it. I've been bashing my head on it for a week, and profiling has not revealed much.
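    For reference, the scheme described above looks roughly like the following sketch (rendered with std::thread/std::atomic rather than Boost, and with the per-particle computation left as a placeholder):

        #include <atomic>
        #include <thread>
        #include <vector>

        std::vector<double> particles(100000, 1.0);
        std::vector<double> results(particles.size());
        std::atomic<std::size_t> next{0};               // global work counter

        double compute(double p) { return p * p; }      // placeholder for the real computation

        void worker()
        {
            for (;;)
            {
                std::size_t i = next.fetch_add(1);      // atomic fetch-and-increment
                if (i >= particles.size())
                    break;
                results[i] = compute(particles[i]);     // each index written by exactly one thread
            }
        }

        int main()
        {
            std::thread a(worker), b(worker);
            a.join();
            b.join();                                   // stands in for the "job finished" barrier
        }

    One thing such a sketch makes visible is that if compute() is very cheap, the cache line holding the shared counter ping-pongs between the cores on every iteration, which can easily eat the expected speedup.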

    Read the article

  • Common Lisp implementation with CFFI and thread support on Mac, Windows, and Linux?

    - by mcandre
    Goal: install Hunchentoot and be able to run Hunchentoot as a background thread. This is what I do:

    1. Install Common Lisp.
    2. Install Quicklisp.
    3. (ql:quickload "hunchentoot")
    4. (hunchentoot:start (make-instance 'hunchentoot:acceptor :port 4242))

    The last command is supposed to start Hunchentoot, then return to the interpreter for further Common Lisp forms. For CLISP, SBCL, ABCL, ECL, and CCL, I get one of two results: either Hunchentoot's dependency Bordeaux Threads fails to install, or hunchentoot:start hangs - the web page never loads, and never 404s.

    Read the article

  • What is the design principle behind Servlets being Singletons?

    - by Sandeep Jindal
    A servlet container "generally" creates one instance of a servlet and uses different threads of that same instance to serve multiple requests. (I know this can be changed using the deprecated SingleThreadModel and other features, but this is the usual way.) I thought the simple reason behind this was performance gain, as creating threads is cheaper than creating instances. But it seems this is not the reason. On the other hand, creating an instance per request has the small advantage that developers never have to worry about thread safety. I am trying to understand the reason for this decision over the trade-off of thread-safety.

    Read the article

  • Cumulative +1/-1 Cointoss crashes on 1000 iterations. Please advise; c++ boost random libraries

    - by user1731972
    Following some former advice ("Multithreaded application, am I doing it right?"), I think I have a threadsafe number generator using boost, but my program crashes when I input 1000 iterations. The output .csv file, when graphed, looks right, but I'm not sure why it's crashing. It's using _beginthread, and everyone is telling me I should use the more (convoluted) _beginthreadex, which I'm not familiar with. If someone could recommend an example, I would greatly appreciate it. Also... someone pointed out I should be passing a second parameter to my _beginthread for the array start positions, but I have no idea how to pass more than one parameter, other than attempting to use a structure, and I've read structures and _beginthread don't get along (although I could just use the boost threads...).

        #include <process.h>
        #include <windows.h>
        #include <iostream>
        #include <fstream>
        #include <time.h>
        #include <random>
        #include <boost/random.hpp> // for srand48_r(time(NULL), &randBuffer); which doesn't work
        #include <stdio.h>
        #include <stdlib.h>
        //#include <thread>

        using namespace std;
        using namespace boost;
        using namespace boost::random;

        void myThread0 (void *dummy );
        void myThread1 (void *dummy );
        void myThread2 (void *dummy );
        void myThread3 (void *dummy );

        // for random seeds
        void initialize();

        // from http://stackoverflow.com/questions/7114043/random-number-generation-in-c11-how-to-generate-how-do-they-work
        uniform_int_distribution<> two(1,2);
        typedef std::mt19937 MyRNG;  // the Mersenne Twister with a popular choice of parameters
        uint32_t seed_val;           // populate somehow
        MyRNG rng1;                  // e.g. keep one global instance (per thread)
        MyRNG rng2;
        MyRNG rng3;
        MyRNG rng4;

        // only needed for shared variables
        //CRITICAL_SECTION cs1,cs2,cs3,cs4; // global

        int main() {
            ofstream myfile;
            myfile.open ("coinToss.csv");

            int rNum;
            long numRuns;
            long count = 0;
            int divisor = 1;
            float fHolder = 0;
            long counter = 0;
            float percent = 0.0;
            //? //unsigned threadID; //HANDLE hThread;

            initialize();

            HANDLE hThread[4];
            const int size = 100000;
            int array[size];

            printf ("Runs (uses multiple of 100,000) ");
            cin >> numRuns;

            for (int a = 0; a < numRuns; a++) {
                hThread[0] = (HANDLE)_beginthread( myThread0, 0, (void*)(array) );
                hThread[1] = (HANDLE)_beginthread( myThread1, 0, (void*)(array) );
                hThread[2] = (HANDLE)_beginthread( myThread2, 0, (void*)(array) );
                hThread[3] = (HANDLE)_beginthread( myThread3, 0, (void*)(array) );

                // waits for threads to finish before continuing
                WaitForMultipleObjects(4, hThread, TRUE, INFINITE);

                // closes handles I guess?
                CloseHandle( hThread[0] );
                CloseHandle( hThread[1] );
                CloseHandle( hThread[2] );
                CloseHandle( hThread[3] );

                // dump array into calculations, average array into fHolder
                // this could be split into threads as well
                for (int p = 0; p < size; p++) {
                    counter += array[p] == 2 ? 1 : -1;
                    //cout << array[p] << endl; //cout << counter << endl;
                }

                // this fHolder calculation didn't work
                //fHolder = counter / size;
                // so I had to use this
                cout << counter << endl;
                fHolder = counter;
                fHolder = fHolder / size;
                myfile << fHolder << endl;
            }
        }

        void initialize() {
            // seed value needs to be supplied
            //rng1.seed(seed_val*1);
            rng1.seed((unsigned int)time(NULL));
            rng2.seed(((unsigned int)time(NULL))*2);
            rng3.seed(((unsigned int)time(NULL))*3);
            rng4.seed(((unsigned int)time(NULL))*4);
        };

        void myThread0 (void *param) {
            //EnterCriticalSection(&cs1); // acquire the critical section object
            int *i = (int *)param;
            for (int x = 0; x < 25000; x++) {
                //doesn't work, part of merssene twister //i[x] = next();
                i[x] = two(rng1);
                //original srand //i[x] = rand() % 2 + 1;
                //doesn't work for some reason //uint_dist2(rng); //i[x] = qrand() % 2 + 1;
                //cout << i[x] << endl;
            }
            //LeaveCriticalSection(&cs1); // release the critical section object
        }

        void myThread1 (void *param) {
            //EnterCriticalSection(&cs2); // acquire the critical section object
            int *i = (int *)param;
            for (int x = 25000; x < 50000; x++) {
                //param[x] = rand() % 2 + 1;
                i[x] = two(rng2);
                //i[x] = rand() % 2 + 1; //cout << i[x] << endl;
            }
            //LeaveCriticalSection(&cs2); // release the critical section object
        }

        void myThread2 (void *param) {
            //EnterCriticalSection(&cs3); // acquire the critical section object
            int *i = (int *)param;
            for (int x = 50000; x < 75000; x++) {
                i[x] = two(rng3);
                //i[x] = rand() % 2 + 1; //cout << i[x] << endl;
            }
            //LeaveCriticalSection(&cs3); // release the critical section object
        }

        void myThread3 (void *param) {
            //EnterCriticalSection(&cs4); // acquire the critical section object
            int *i = (int *)param;
            for (int x = 75000; x < 100000; x++) {
                i[x] = two(rng4);
                //i[x] = rand() % 2 + 1; //cout << i[x] << endl;
            }
            //LeaveCriticalSection(&cs4); // release the critical section object
        }

    Read the article
