How to handle multi-processing of libraries which already spawn sub-processes?
Posted by exhuma on Programmers, published 2012-10-22
Tags: multiprocessing
I am having trouble coming up with a good way to limit the number of sub-processes when a script that is itself multi-processed uses a library that also spawns sub-processes.
Both the library and the script are modifiable by us.
I believe the question is more about design than actual code, but for what it's worth, it's written in Python.
The goal of the library is to hide implementation details of various internet routers. To that end, the library has a "Proxy" factory method which takes the IP of a router as a parameter. The factory then probes the device using a set of possible proxies. Usually there is one proxy which immediately knows that it is able to send commands to this device. All others usually take some time to return (given a timeout).
One idea was to simply query the device for an identifier and then select the proper proxy based on that, but to do so you would already need to know how to query the device. Abstracting away that knowledge is one of the main purposes of the library, so this becomes a bit of a circular requirement: to connect to a device you need to know which proxy to use, and to know which proxy to create you need to connect to the device.
So, as we can see, probing the device is the best solution so far, apart from keeping a lookup table somewhere.
The library currently kills all remaining processes once a valid proxy has been found. And yes, there is always only one good proxy per device.
Currently there are about 12 proxies, so if one creates a proxy instance using the factory, 12 sub-processes are spawned.
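To make the setup concrete, here is a minimal sketch of what such a probing factory might look like. All names (`create_proxy`, `PROXY_CLASSES`, `probe()`) are illustrative assumptions, not the library's actual API:

```python
# Hypothetical sketch of the probing factory, NOT the library's real code:
# one process per proxy candidate, the first successful probe wins and the
# remaining probe processes are terminated.
import multiprocessing

PROXY_CLASSES = []  # placeholder for the ~12 proxy implementations


def _probe(proxy_cls, ip, result_queue):
    """Run one candidate's probe and report back if it succeeds."""
    if proxy_cls.probe(ip):                # assumed probe(ip) -> bool
        result_queue.put(proxy_cls)


def create_proxy(ip, timeout=10):
    """Return an instance of the first proxy class that answers for `ip`."""
    result_queue = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=_probe,
                                       args=(cls, ip, result_queue))
               for cls in PROXY_CLASSES]
    for w in workers:
        w.start()
    try:
        proxy_cls = result_queue.get(timeout=timeout)  # Queue.Empty on timeout
    finally:
        for w in workers:                  # kill everything still probing
            if w.is_alive():
                w.terminate()
            w.join()
    return proxy_cls(ip)
```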
So far, this has been really useful and has worked very well. But recently someone else wanted to use this library to "broadcast" a command to all devices, so he took the library and wrote his own multi-processed script. This obviously spawned 12 * n processes, where n is the number of IPs to which he broadcasted (see the sketch below).
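Purely as an illustration (re-using the hypothetical `create_proxy` from the sketch above and an assumed `send()` method), such a broadcast script could look roughly like this:

```python
# Illustrative sketch only: one process per IP, each of which triggers the
# factory's ~12 probing sub-processes, i.e. 12 * n processes in total.
import multiprocessing


def broadcast_one(ip, command):
    proxy = create_proxy(ip)   # hypothetical factory from the sketch above
    proxy.send(command)        # assumed command interface


def broadcast(ips, command):
    procs = [multiprocessing.Process(target=broadcast_one, args=(ip, command))
             for ip in ips]
    for p in procs:
        p.start()              # 12 * len(ips) probe processes now compete
    for p in procs:
        p.join()
```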
This explosion of processes has given us two problems:
- The host on which the command was executed slowed down to a near halt.
- Aborting the script with CTRL+C ground the system to a total halt. Not even the hardware console responded anymore! This may be due to some Python strangeness which still needs to be investigated. Maybe related to http://bugs.python.org/issue8296
The big underlying question is how to design a library which does multi-processing so that other applications which use this library, and want to be multi-processed themselves, do not run into system limitations.
My first thought was to require a pool to be passed to the library, and execute all tasks in that pool. In that way, the person using the library has control over the usage of system resources. But my gut tells me that there must be a better solution.
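A minimal sketch of that idea, under the same assumptions as above (a hypothetical `create_proxy` and an assumed `probe()` method on each proxy class): the caller creates one `multiprocessing.Pool` and hands it to the factory, so the total number of worker processes is bounded no matter how many devices are queried. One trade-off is that the library can no longer terminate the losing probes early; with a shared pool they simply run until their own timeout.

```python
# Sketch of the "caller-supplied pool" idea; names are illustrative.
import multiprocessing

PROXY_CLASSES = []  # placeholder for the ~12 proxy implementations


def _probe(args):
    proxy_cls, ip = args
    return proxy_cls if proxy_cls.probe(ip) else None  # assumed probe() API


def create_proxy(ip, pool):
    """Probe `ip` with every candidate on the caller's pool; return the winner."""
    tasks = [(cls, ip) for cls in PROXY_CLASSES]
    for proxy_cls in pool.imap_unordered(_probe, tasks):
        if proxy_cls is not None:
            return proxy_cls(ip)    # losing probes finish or time out on their own
    raise RuntimeError("no proxy could talk to %s" % ip)


if __name__ == '__main__':
    # The broadcasting script now creates ONE pool and shares it, so the
    # total number of worker processes stays bounded regardless of device count.
    pool = multiprocessing.Pool(processes=8)
    ips = []  # devices to broadcast to
    for ip in ips:
        proxy = create_proxy(ip, pool)
        # proxy.send(command) ...
    pool.close()
    pool.join()
```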
Disclaimer: My experience with multiprocessing is fairly limited. I have implemented a few straightforward scripts which did not require access control to shared resources, so I do not yet have any practical experience with semaphores or mutexes.
p.s.: In the future we may have enough information to do this without the probing, but the database which would contain the proper information is not yet operational. Also, the design question of multiprocessing a multi-processed library intrigues me :)