Search Results

Search found 92 results on 4 pages for 'multiprocessing'.

Page 2/4 | < Previous Page | 1 2 3 4  | Next Page >

  • Do Managers in Python Multiprocessing module lock the shared data?

    - by AnonProcess
    This question has been asked before: http://stackoverflow.com/questions/2936626/how-to-share-a-dictionary-between-multiple-processes-in-python-without-locking However, I have several doubts regarding the program given in the answer. The issue is that the following step isn't atomic: d['blah'] += 1. Even with locking, the answer provided in that question would lead to random results, since Process 1 reads the value of d['blah'], saves it on the stack, increments it, and writes it back; in between, Process 2 can read d['blah'] as well. Locking means that while d['blah'] is being written or read, no other process can access it. Can someone clarify my doubts?
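
    What the linked answer leaves implicit is that a Manager proxy only makes individual method calls atomic, not the whole read-increment-write sequence. A minimal sketch of the usual workaround (not taken from that answer), holding an explicit multiprocessing.Lock across the full update:

        from multiprocessing import Process, Manager, Lock

        def worker(d, lock):
            for _ in range(1000):
                with lock:                  # the whole read-increment-write happens under the lock
                    d['blah'] += 1

        if __name__ == '__main__':
            manager = Manager()
            d = manager.dict({'blah': 0})
            lock = Lock()
            procs = [Process(target=worker, args=(d, lock)) for _ in range(4)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            print(d['blah'])                # 4000 every run once the update is fully locked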

    Read the article

  • Python - question regarding the concurrent use of `multiprocessing`.

    - by orokusaki
    I want to use Python's multiprocessing to do concurrent processing without using locks (locks to me are the opposite of multiprocessing), because I want to build up multiple reports from different resources at the exact same time during a web request (this normally takes about 3 seconds, but with multiprocessing I can do it in 0.5 seconds). My problem is that if I expose such a feature to the web and 10 users pull the same report at the same time, I suddenly have 60 interpreters open at once (which would crash the system). Is this just the common-sense result of using multiprocessing, or is there a trick to get around this potential nightmare? Thanks
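
    A hedged sketch of one way around this: create a single fixed-size pool at application startup and have every request submit its report jobs to it, so the number of worker interpreters stays bounded no matter how many users hit the endpoint (build_report, handle_request, and the pool size are made up for illustration):

        from multiprocessing import Pool

        POOL_SIZE = 6                          # hard cap on worker processes, tuned to the machine

        def build_report(source):              # hypothetical per-resource report builder
            return source, len(source)         # placeholder work

        def handle_request(pool, sources):
            # every request reuses the same bounded pool instead of spawning its own processes
            return pool.map(build_report, sources)

        if __name__ == '__main__':
            pool = Pool(POOL_SIZE)             # created once at startup, shared by all requests
            print(handle_request(pool, ['sales', 'inventory', 'billing']))
            pool.close()
            pool.join()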

    Read the article

  • Python thread pool similar to the multiprocessing Pool?

    - by Martin
    Is there a Pool class for worker threads, similar to the multiprocessing module's Pool class? I like, for example, the easy way to parallelize a map function:

        def long_running_func(p):
            c_func_no_gil(p)

        p = multiprocessing.Pool(4)
        xs = p.map(long_running_func, range(100))

    However, I would like to do it without the overhead of creating new processes. I know about the GIL, but in my use case the function will be an IO-bound C function for which the Python wrapper releases the GIL before the actual function call. Do I have to write my own threading pool?
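
    There is in fact a thread-backed clone of Pool in the standard library: multiprocessing.dummy exposes the same Pool API but uses threads. A minimal sketch, with time.sleep standing in for the questioner's GIL-releasing C call:

        from multiprocessing.dummy import Pool as ThreadPool   # same API as multiprocessing.Pool, backed by threads
        import time

        def long_running_func(p):
            time.sleep(0.01)                 # stand-in for an IO-bound C call that releases the GIL
            return p * p

        pool = ThreadPool(4)                 # 4 worker threads, no extra interpreters
        xs = pool.map(long_running_func, range(100))
        pool.close()
        pool.join()
        print(xs[:5])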

    Read the article

  • Opinions on logging in multiprocess applications

    - by chkorn
    We have written an application that spawns at least 9 parallel processes. All processes generate a lot of logging information. Currently we are using Python's QueueHandler to consolidate all logs into one file. Unfortunately this sometimes results in very messy files which are hard to read (e.g. tracking what exactly is going on in one thread). Do you think it is a viable option to separate all messages into dedicated files, or is this going to make things even messier due to the high number of files? What are your general experiences when writing log files for multiprocessed/multithreaded applications?
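
    One middle ground, sketched below under the assumption that the consolidated file stays: put the process and thread names into the format string, so a single grep per worker recovers a clean per-process view without multiplying files (the sketch logs straight to stderr for brevity; the same format string works on the listener side of a QueueHandler setup):

        import logging
        import multiprocessing

        # %(processName)s and %(threadName)s are standard LogRecord attributes,
        # so every consolidated line says exactly which worker wrote it
        logging.basicConfig(
            level=logging.DEBUG,
            format='%(asctime)s %(processName)s %(threadName)s %(levelname)s %(message)s',
        )

        def worker():
            logging.debug('starting work')

        if __name__ == '__main__':
            procs = [multiprocessing.Process(target=worker, name='worker-%d' % i) for i in range(3)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()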

    Read the article

  • Python-daemon doesn't kill its kids

    - by Brian M. Hunt
    When using python-daemon, I'm creating subprocesses like so:

        import multiprocessing

        class Worker(multiprocessing.Process):
            def __init__(self, queue):
                self.queue = queue  # we wait for things from this in Worker.run()
            ...

        q = multiprocessing.Queue()

        with daemon.DaemonContext():
            for i in xrange(3):
                Worker(q)
            while True:  # let the Workers do their thing
                q.put(_something_we_wait_for())

    When I kill the parent daemonic process (i.e. not a Worker) with a Ctrl-C or SIGTERM, etc., the children don't die. How does one kill the kids? My first thought is to use atexit to kill all the workers, like so:

        with daemon.DaemonContext():
            workers = list()
            for i in xrange(3):
                workers.append(Worker(q))

            @atexit.register
            def kill_the_children():
                for w in workers:
                    w.terminate()

            while True:  # let the Workers do their thing
                q.put(_something_we_wait_for())

    However, the children of daemons are tricky things to handle, and I'd be obliged for thoughts and input on how this ought to be done. Thank you.
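
    atexit hooks are unreliable here because a daemon killed by a signal never reaches normal interpreter shutdown. A hedged sketch (POSIX) of the usual pattern instead: install a SIGTERM/SIGINT handler in the daemon parent that terminates and joins the workers explicitly. python-daemon's DaemonContext can also route signals to such a handler via its signal_map argument; the plain-signal version below avoids that dependency and simplifies the queue loop:

        import signal
        import sys
        import multiprocessing

        def make_shutdown_handler(workers):
            def handler(signum, frame):
                for w in workers:
                    w.terminate()        # ask each child to stop
                for w in workers:
                    w.join()             # reap them before the parent exits
                sys.exit(0)
            return handler

        def work(q):
            while True:
                q.get()                  # stand-in for Worker.run()

        if __name__ == '__main__':
            q = multiprocessing.Queue()
            workers = [multiprocessing.Process(target=work, args=(q,)) for _ in range(3)]
            for w in workers:
                w.start()
            # SIGTERM and Ctrl-C now tear the children down instead of orphaning them
            signal.signal(signal.SIGTERM, make_shutdown_handler(workers))
            signal.signal(signal.SIGINT, make_shutdown_handler(workers))
            signal.pause()               # real code would run its q.put() loop here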

    Read the article

  • Python Process won't call atexit

    - by Brian M. Hunt
    I'm trying to use atexit in a Process, but unfortunately it doesn't seem to work. Here's some example code:

        import time
        import atexit
        import logging
        import multiprocessing

        logging.basicConfig(level=logging.DEBUG)

        class W(multiprocessing.Process):
            def run(self):
                logging.debug("%s Started" % self.name)
                @atexit.register
                def log_terminate():
                    # ever called?
                    logging.debug("%s Terminated!" % self.name)
                while True:
                    time.sleep(10)

        @atexit.register
        def log_exit():
            logging.debug("Main process terminated")

        logging.debug("Main process started")
        a = W()
        b = W()
        a.start()
        b.start()
        time.sleep(1)
        a.terminate()
        b.terminate()

    The output of this code is:

        DEBUG:root:Main process started
        DEBUG:root:W-1 Started
        DEBUG:root:W-2 Started
        DEBUG:root:Main process terminated

    I would expect W.run.log_terminate() to be called when a.terminate() and b.terminate() are called, and the output to be something like this (emphasis added):

        DEBUG:root:Main process started
        DEBUG:root:W-1 Started
        DEBUG:root:W-2 Started
        DEBUG:root:W-1 Terminated!
        DEBUG:root:W-2 Terminated!
        DEBUG:root:Main process terminated

    Why isn't this working, and is there a better way to log a message (from the Process context) when a Process is terminated? Thank you for your input - it's much appreciated.
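
    The likely explanation: Process.terminate() delivers SIGTERM, the child is killed by the signal's default action, and atexit handlers only run during a normal interpreter shutdown, so they are skipped. A hedged sketch (POSIX-specific) of one way to still get the log line: convert SIGTERM into SystemExit inside the child so shutdown proceeds normally and a try/finally can log the termination:

        import sys
        import time
        import signal
        import logging
        import multiprocessing

        logging.basicConfig(level=logging.DEBUG)

        class W(multiprocessing.Process):
            def run(self):
                logging.debug("%s Started" % self.name)
                # terminate() sends SIGTERM; turning it into SystemExit lets the
                # finally block (and any atexit handlers) run before the child dies
                signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))
                try:
                    while True:
                        time.sleep(10)
                finally:
                    logging.debug("%s Terminated!" % self.name)

        if __name__ == '__main__':
            a, b = W(), W()
            a.start()
            b.start()
            time.sleep(1)
            a.terminate()
            b.terminate()
            a.join()
            b.join()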

    Read the article

  • python interpreter waits for child process to die

    - by Moulik Kallupalam
    Contents of check.py:

        from multiprocessing import Process
        import time
        import sys

        def slp():
            time.sleep(30)
            f = open("yeah.txt", "w")
            f.close()

        if __name__ == "__main__":
            x = Process(target=slp)
            x.start()
            sys.exit()

    On Windows 7, from cmd, if I call python check.py, it doesn't exit immediately but instead waits for 30 seconds. And if I kill cmd, the child dies too - no "yeah.txt" is created. How do I ensure that the child continues to run even if the parent is killed, and that the parent doesn't wait for the child process to end?
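
    What is probably happening: multiprocessing registers an exit handler that joins every non-daemonic child (hence the 30-second wait), and a daemonic child would simply be terminated instead of surviving. A hedged sketch of the usual Windows workaround, launching the worker as a fully detached interpreter with subprocess (slp_worker.py is a hypothetical stand-alone script that does the sleep-and-write work):

        import sys
        import subprocess

        # Windows-specific process creation flags; DETACHED_PROCESS is not exposed
        # as a subprocess constant in older Pythons, so it is spelled out here
        DETACHED_PROCESS = 0x00000008
        CREATE_NEW_PROCESS_GROUP = 0x00000200

        if __name__ == "__main__":
            subprocess.Popen(
                [sys.executable, "slp_worker.py"],      # hypothetical worker script
                creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP,
                close_fds=True,
            )
            sys.exit()   # returns immediately; the detached child outlives this process and the console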

    Read the article

  • Starting a separate process

    - by jacquesb
    I want a script to start a new process, such that the new process continues running after the initial script exits. I expected that I could use multiprocessing.Process to start a new process, and set daemon=True so that the main script may exit while the created process continues running. But it seems that the second process is silently terminated when the main script exits. Is this expected behavior, or am I doing something wrong?
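
    That is the documented behaviour: a daemonic multiprocessing.Process is terminated when the parent exits (daemon=True here is not the same as a Unix daemon), while a non-daemonic one is joined, so the script simply waits for it instead. A hedged sketch (POSIX, Python 3.2+) of the usual way to get a genuinely detached child, where long_task.py is a made-up stand-alone worker script:

        import sys
        import subprocess

        if __name__ == "__main__":
            # start_new_session=True runs the child in its own session, so it keeps
            # running after this launcher script and its terminal go away
            subprocess.Popen(
                [sys.executable, "long_task.py"],       # hypothetical long-running worker
                start_new_session=True,
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            )
            print("launcher exiting; the child keeps running")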

    Read the article

  • Parallelization in Microsoft SQL Server 2008 R2

    - by stan31337
    We have a specific accounting and production application, called 1C, which uses a single user connection to the MS SQL 2008 R2 database, and about 500 users connect to the 1C server to perform their tasks. 1C and SQL 2008 are on separate servers. How do I configure MS SQL 2008 R2 to use parallelism effectively in this configuration? We have 24 cores, and only one is loaded at 100% on the MS SQL 2008 R2 server. We have already configured the max degree of parallelism setting following this MSDN article: http://msdn.microsoft.com/en-us/library/ms181007(v=sql.105).aspx Thank you!

    Read the article

  • Python multithreading not working on VPS server

    - by Sabirul Mostofa
    I am running a Python multithreaded application with multiple processes which scrapes data from some websites. While running on my localhost it works great, but on the VPS I am using (CentOS 5.8, 2.6 GHz with 4 cores) it performs very slowly. From the nethogs command I see that the network usage is very low: I get around 8 KBps with 15 threads. On the other hand, on my PC I get around 100-120 KBps. I have read about the Python GIL and threading limitations. It seems the GIL never releases the lock on the VPS, though it should while doing I/O. Is there any configuration on the VPS that I need to change for the threading to work properly?

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS mounted on server 'B'. A process on A writes to two files F1 and F2 in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks the head of the files, writes data, and flushes. Process B seeks the head of the files and does reads. Are there any guarantees about how the order of the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?

    Read the article

  • python multiprocess update dictionary synchronously

    - by user1050325
    I am trying to update one common dictionary from multiple processes. Could you please help me find out what the problem with this code is? I get the following output:

        inside function
        {1: 1, 2: -1}
        comes here
        inside function
        {1: 0, 2: 2}
        comes here
        {1: 0, 2: -1}

    Thanks.

        from multiprocessing import Lock, Process, Manager

        l = Lock()

        def computeCopyNum(test, val):
            l.acquire()
            test[val] = val
            print "inside function"
            print test
            l.release()
            return

        a = dict({1: 0, 2: -1})

        procs = list()
        for i in range(1, 3):
            p = Process(target=computeCopyNum, args=(a, i))
            procs.append(p)
            p.start()

        for p in procs:
            p.join()

        print "comes here"
        print a
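
    The root of the problem is that a plain dict is copied into each child, so the parent never sees the updates. A hedged sketch using a Manager proxy instead (the Lock is created in the main block and passed in, since module-level objects are not shared between processes either):

        from multiprocessing import Lock, Process, Manager

        def computeCopyNum(test, val, lock):
            with lock:
                test[val] = val                  # updates the shared proxy, visible to the parent
                print("inside function", dict(test))

        if __name__ == '__main__':
            manager = Manager()
            a = manager.dict({1: 0, 2: -1})      # a proxy shared between processes, not a per-process copy
            lock = Lock()
            procs = [Process(target=computeCopyNum, args=(a, i, lock)) for i in range(1, 3)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            print("comes here", dict(a))         # now shows {1: 1, 2: 2}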

    Read the article

  • Python: how to run several scripts (or functions) at the same time under Windows 7 with a 64-bit multicore processor

    - by Gianni
    Sorry for this question, because there are several examples on Stack Overflow already. I am writing to clarify some of my doubts, because I am quite new to the Python language. I wrote a function:

        def clipmyfile(inFile,poly,outFile):
            ...  # doing something with inFile and poly and return outFile

    Normally I do this:

        clipmyfile(inFile="File1.txt",poly="poly1.shp",outFile="res1.txt")
        clipmyfile(inFile="File2.txt",poly="poly2.shp",outFile="res2.txt")
        clipmyfile(inFile="File3.txt",poly="poly3.shp",outFile="res3.txt")
        ......
        clipmyfile(inFile="File21.txt",poly="poly21.shp",outFile="res21.txt")

    I read in this example (Run several python programs at the same time) that I can use (but probably I am wrong):

        from multiprocessing import Pool
        p = Pool(21)  # like in your example, running 21 separate processes

    to run the function at the same time and speed up my analysis. I am really honest to say that I didn't understand the next step. Thanks in advance for help and suggestions. Gianni
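
    A hedged sketch of the missing step: collect the 21 argument triples in a list and hand them to a pool's map (clipmyfile is only stubbed here, since the real clipping code belongs to the questioner; a pool sized to the CPU count is usually a better default than 21):

        from multiprocessing import Pool

        def clipmyfile(inFile, poly, outFile):
            # stand-in for the real clipping work
            return outFile

        def run_one(args):
            return clipmyfile(*args)             # unpack one (inFile, poly, outFile) triple

        # one triple per job, File1..File21 as in the question
        jobs = [("File%d.txt" % i, "poly%d.shp" % i, "res%d.txt" % i) for i in range(1, 22)]

        if __name__ == '__main__':
            pool = Pool()                        # defaults to one worker per CPU core
            results = pool.map(run_one, jobs)    # runs the 21 clips in parallel
            pool.close()
            pool.join()
            print(results)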

    Read the article

  • multiprocessing.Pool.apply_async() eating up memory

    - by Austin
    I want to use multiprocessing to make my script faster... I'm still new to this, and the Python docs assume you already understand threading and what-not. So... I have code that looks like this:

        from itertools import izip
        from multiprocessing import Pool

        p = Pool()
        for i, j in izip(hugeseta, hugesetb):
            p.apply_async(number_crunching, (i, j))

    This gives me great speed! However, hugeseta and hugesetb are really huge, and Pool keeps all of the i's and j's in memory after they've finished their job (basically, printing output to stdout). Is there any way to del i and j after they complete?
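
    A hedged sketch of one common fix: every apply_async call keeps a pending result object (and the value it eventually holds) alive in the pool's cache, whereas imap_unordered streams results back as you consume them, so nothing accumulates once each pair is done (izip/xrange from the question's Python 2 become zip/range here):

        from multiprocessing import Pool

        def number_crunching(pair):
            i, j = pair
            return i + j                         # stand-in for the real work

        if __name__ == '__main__':
            hugeseta = range(10 ** 6)
            hugesetb = range(10 ** 6)
            p = Pool()
            # results stream back and are freed as soon as the loop is done with each one,
            # instead of one pending result object piling up per submitted job
            for result in p.imap_unordered(number_crunching, zip(hugeseta, hugesetb), chunksize=1000):
                pass                             # print or otherwise consume each result here
            p.close()
            p.join()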

    Read the article

  • Running multiprocess applications from MATLAB

    - by Jacob
    I've written a multiprocess application in VC++ and tried to execute it with command-line arguments via the system command. It runs, but only on one core --- any suggestions? Update: In fact, it doesn't even see the second core. I used OpenMP and checked with omp_get_max_threads() and omp_get_thread_num(); omp_get_max_threads() seems to be 1 when I execute the application from MATLAB, but it's 2 (as expected) if I run it from the command window.

    Read the article

  • pthread_setaffinity_np: Invalid argument?

    - by hahuang65
    I've managed to get my pthreads program sort of working. Basically I am trying to manually set the affinity of 4 threads such that thread 1 runs on CPU 1, thread 2 runs on CPU 2, thread 3 runs on CPU 3, and thread 4 runs on CPU 4. After compiling, my code works for a few threads but not others (seems like thread 1 never works) but running the same compiled program a couple of different times gives me different results. For example:

        hao@Gorax:~/Desktop$ ./a.out
        Thread 3 is running on CPU 3
        pthread_setaffinity_np: Invalid argument
        Thread Thread 2 is running on CPU 2
        hao@Gorax:~/Desktop$ ./a.out
        Thread 2 is running on CPU 2
        pthread_setaffinity_np: Invalid argument
        pthread_setaffinity_np: Invalid argument
        Thread 3 is running on CPU 3
        Thread 3 is running on CPU 3
        hao@Gorax:~/Desktop$ ./a.out
        Thread 2 is running on CPU 2
        pthread_setaffinity_np: Invalid argument
        Thread 4 is running on CPU 4
        Thread 4 is running on CPU 4
        hao@Gorax:~/Desktop$ ./a.out
        pthread_setaffinity_np: Invalid argument

    My question is "Why does this happen? Also, why does the message sometimes print twice?" Here is the code:

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <pthread.h>
        #include <stdlib.h>
        #include <sched.h>
        #include <errno.h>

        #define handle_error_en(en, msg) \
                do { errno = en; perror(msg); exit(EXIT_FAILURE); } while (0)

        void *thread_function(char *message)
        {
            int s, j, number;
            pthread_t thread;
            cpu_set_t cpuset;

            number = (int)message;
            thread = pthread_self();
            CPU_SET(number, &cpuset);
            s = pthread_setaffinity_np(thread, sizeof(cpu_set_t), &cpuset);
            if (s != 0) {
                handle_error_en(s, "pthread_setaffinity_np");
            }
            printf("Thread %d is running on CPU %d\n", number, sched_getcpu());
            exit(EXIT_SUCCESS);
        }

        int main()
        {
            pthread_t thread1, thread2, thread3, thread4;
            int thread1Num = 1;
            int thread2Num = 2;
            int thread3Num = 3;
            int thread4Num = 4;
            int thread1Create, thread2Create, thread3Create, thread4Create, i, temp;

            thread1Create = pthread_create(&thread1, NULL, (void *)thread_function, (char *)thread1Num);
            thread2Create = pthread_create(&thread2, NULL, (void *)thread_function, (char *)thread2Num);
            thread3Create = pthread_create(&thread3, NULL, (void *)thread_function, (char *)thread3Num);
            thread4Create = pthread_create(&thread4, NULL, (void *)thread_function, (char *)thread4Num);

            pthread_join(thread1, NULL);
            pthread_join(thread2, NULL);
            pthread_join(thread3, NULL);
            pthread_join(thread4, NULL);

            return 0;
        }

    Read the article

  • QProcess, QEventLoop - of any use for parallel-processing

    - by dlib
    I wonder whether I could use QEventLoop (QProcess?) to parallelize multiple calls to the same function with Qt. What precisely is the difference from QtConcurrent or QThread? What are a process and an event loop, more precisely? I read that QCoreApplication must exec() as early as possible in the main() method, so I wonder why that is different from the main thread. Could you point me to an efficient reference on processes and threads with Qt? I went through the official docs and those things remain unclear. Thanks and regards.

    Read the article

  • Run a .java file using ProcessBuilder

    - by David K
    I'm a novice programmer working in Eclipse, and I need to get multiple processes running (this is going to be a simulation of a multi-computer system). My initial hackup used multiple threads in multiple classes, but now I'm trying to replace the threads with processes. From my reading, I've gleaned that ProcessBuilder is the way to go. I have tried many, many versions of the input you see below, but cannot for the life of me figure out how to use it properly. I am trying to run the .java files I previously created as classes (which I have modified). I eventually just made a dummy test.java to make sure my process is working properly - its only function is to print that it ran. My code for the two files is below. Am I using ProcessBuilder correctly? Is this the correct way to read the output of my subprocess? Any help would be much appreciated. David

    Primary process:

        package Control;

        import java.io.*;
        import java.lang.*;

        public class runSPARmatch {

            /**
             * @param args
             */
            public static void main(String args[]) {
                try {
                    ProcessBuilder broker = new ProcessBuilder("javac.exe", "test.java", "src\\Broker\\");
                    Process runBroker = broker.start();
                    Reader reader = new InputStreamReader(runBroker.getInputStream());
                    int ch;
                    while ((ch = reader.read()) != -1)
                        System.out.println((char) ch);
                    reader.close();
                    runBroker.waitFor();
                    System.out.println("Program complete");
                } catch (IOException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                } catch (InterruptedException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }

    Subprocess:

        package Broker;

        public class test {

            /**
             * @param args
             */
            public static void main(String[] args) {
                // TODO Auto-generated method stub
                System.out.println("This works");
            }
        }

    Read the article

  • Writing a search engine

    - by wvd
    Hello all, the title might be a bit misleading, but I couldn't figure out a better one. I'm writing a simple search engine which will search several sites for a specific domain. To be concrete: I'm writing a search engine for hardstyle livesets/aftermovies/tracks. To do this I will search the sites that provide livesets, tracks, and such. The problem here is speed: I need to pass the search query to 5-7 sites, get the results, and then use my own algorithm to display the results in sorted order. I could just "multithread" it, but that's easier said than done, so I have a few questions. What would be the best solution to this problem? Should I just multithread/multiprocess this application so I get a bit of a speed-up? Are there any other solutions, or am I doing something really wrong? Thanks, William van Doorn
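
    Since the per-site queries are network-bound, plain threads are usually enough here; a hedged sketch with concurrent.futures, where search_site and the site list are placeholders for the real per-site query and parsing code:

        from concurrent.futures import ThreadPoolExecutor, as_completed

        SITES = ["site-a.example", "site-b.example", "site-c.example"]   # hypothetical liveset sites

        def search_site(site, query):
            # placeholder for the real HTTP request and per-site result parsing
            return [("%s result for %s" % (site, query), 1.0)]

        def search_all(query):
            results = []
            with ThreadPoolExecutor(max_workers=len(SITES)) as pool:
                futures = [pool.submit(search_site, site, query) for site in SITES]
                for future in as_completed(futures):     # collect whichever site answers first
                    results.extend(future.result())
            return sorted(results, key=lambda r: r[1], reverse=True)     # custom ranking goes here

        if __name__ == '__main__':
            print(search_all("hardstyle liveset"))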

    Read the article

  • Parallel Programming. Boost's MPI, OpenMP, TBB, or something else?

    - by unknownthreat
    Hello, I am a total novice in parallel programming, but I do know how to program in C++. Now I am looking around for a parallel programming library. I just want to give it a try, just for fun, and right now I have found 3 APIs, but I am not sure which one I should stick with. Right now I see Boost's MPI, OpenMP, and TBB. For anyone who has experience with any of these 3 APIs (or any other parallelism API), could you please tell me the differences between them? Are there any factors to consider, like AMD or Intel architecture?

    Read the article

  • Multi-Core Programming. Boost's MPI, OpenMP, TBB, or something else?

    - by unknownthreat
    Hello, I am a total novice in multi-core programming, but I do know how to program in C++. Now I am looking around for a multi-core programming library. I just want to give it a try, just for fun, and right now I have found 3 APIs, but I am not sure which one I should stick with. Right now I see Boost's MPI, OpenMP, and TBB. For anyone who has experience with any of these 3 APIs (or any other API), could you please tell me the differences between them? Are there any factors to consider, like AMD or Intel architecture?

    Read the article

  • Incompatible types when assigning to type 'struct compartido'

    - by user1660559
    I have a problem with this code. I should create one structure and share it across 5 new processes created from the father:

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>
        #include <sys/sem.h>
        #include <time.h>

        struct compartido {
            int pid1, pid2, pid3, pid4, pid5;
            int propietario;
            int contador;
            int pidpadre;
        };

        struct compartido var;

        int main(int argc, char *argv[])
        {
            key_t llave1, llavesem;
            int idmem, idsem;

            llave1 = ftok("/tmp", 'a');
            idmem = shmget(llave1, sizeof(int), IPC_CREAT | 0600);
            if (idmem == -1) {
                perror("shmget");
                return 1;
            }
            var = shmat(idmem, 0, 0);   /* This line is giving the error */

            /* rest of the code */
        }

    The exact error it gives is:

        error: incompatible types when assigning to type 'struct compartido' from type 'void *'

    I need to put this structure in the shared variable to be able to see and modify the data from the 6 processes (5 children and the father). What am I doing wrong? Thanks in advance and best regards,

    Read the article

  • Are indivisible operations still indivisible on multiprocessor and multicore systems?

    - by Steve314
    As per the title, plus what are the limitations and gotchas. For example, on x86 processors, alignment for most data types is optional - an optimisation rather than a requirement. That means that a pointer may be stored at an unaligned address, which in turn means that pointer might be split over a cache page boundary. Obviously this could be done if you work hard enough on any processor (picking out particular bytes etc), but not in a way where you'd still expect the write operation to be indivisible. I seriously doubt that a multicore processor can ensure that other cores can guarantee a consistent all-before or all-after view of a written pointer in this unaligned-write-crossing-a-page-boundary situation. Am I right? And are there any similar gotchas I haven't thought of?

    Read the article

  • Java respawn process

    - by Bart van Heukelom
    I'm making an editor-like program. If the user chooses File-Open in the main window, I want to start a new copy of the editor process with the chosen filename as an argument. However, for that I need to know what command was used to start the first process:

        java -jar myapp.jar blabalsomearguments // --- need this information

    Open File (fileUrl):

        exec("java -jar myapp.jar blabalsomearguments fileUrl");

    I'm not looking for an in-process solution; I've already implemented that. I'd like to have the benefits that separate processes bring.

    Read the article

< Previous Page | 1 2 3 4  | Next Page >