Maintaining Logging and/or stdout/stderr in Python Daemon

Posted by dave mankoff on Stack Overflow
Published on 2012-11-01T15:52:25Z Indexed on 2012/12/04 5:04 UTC

Every recipe that I've found for creating a daemon process in Python involves forking twice (for Unix) and then closing all open file descriptors. (See http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/ for an example.)
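For context, the recipe in question boils down to something like the sketch below. This is my own minimal rendering of the classic double-fork pattern, not the linked article's exact code; pid-file handling and error checking are simplified. The fd-closing step at the end is what silences the daemon.

```python
import os
import sys


def daemonize(pid_file):
    """Classic Unix double-fork daemonization (simplified sketch)."""
    if os.fork() > 0:
        os._exit(0)        # first fork: original parent exits
    os.setsid()            # become session leader, detach from tty
    if os.fork() > 0:
        os._exit(0)        # second fork: can never reacquire a tty
    os.chdir("/")
    os.umask(0)
    # Redirect the standard streams to /dev/null. After this point,
    # anything written to stdout/stderr (including tracebacks) is lost,
    # which is exactly the debugging problem described below.
    sys.stdout.flush()
    sys.stderr.flush()
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)
    with open(pid_file, "w") as f:
        f.write(str(os.getpid()))


# Usage (do not call from an interactive session; it detaches the process):
# daemonize("/tmp/mydaemon.pid")
```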

This is all simple enough, but I seem to have run into an issue. On the production machine that I am setting up, my daemon is aborting - silently, since all open file descriptors were closed. I am having a tricky time debugging the issue and am wondering what the proper way to catch and log these errors is.

What is the right way to setup logging such that it continues to work after daemonizing? Do I just call logging.basicConfig() a second time after daemonizing? What's the right way to capture stdout and stderr? I am fuzzy on the details of why all the files are closed. Ideally, my main code could just call daemon_start(pid_file) and logging would continue to work.
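One common approach (a sketch under my own assumptions, not necessarily what the original recipe intends): configure logging with a file handler only *after* daemonizing, so the log file's descriptor is opened fresh and is never on the list of descriptors that get closed, and then point `sys.stdout`/`sys.stderr` at a file-like adapter that forwards writes into the logging system, so stray prints and uncaught tracebacks are captured too. The names `StreamToLogger` and `setup_daemon_logging` are mine, for illustration.

```python
import logging
import sys


class StreamToLogger:
    """File-like object that forwards writes to a logger.

    Assigning instances to sys.stdout / sys.stderr captures stray
    print() output and uncaught-exception tracebacks in the log.
    """

    def __init__(self, logger, level):
        self.logger = logger
        self.level = level

    def write(self, message):
        # A single write may contain several lines; log each non-empty one.
        for line in message.rstrip().splitlines():
            self.logger.log(self.level, line.rstrip())

    def flush(self):
        pass  # logging handlers manage their own flushing


def setup_daemon_logging(log_path):
    # Call this AFTER daemonizing: the FileHandler opens a fresh
    # descriptor, so it is unaffected by the fd-closing step.
    # force=True (Python 3.8+) discards any handlers configured
    # before the fork, whose descriptors may now be closed.
    logging.basicConfig(
        filename=log_path,
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
        force=True,
    )
    sys.stdout = StreamToLogger(logging.getLogger("stdout"), logging.INFO)
    sys.stderr = StreamToLogger(logging.getLogger("stderr"), logging.ERROR)
```

With this in place, the main code can call `daemonize(pid_file)` followed by `setup_daemon_logging(log_path)`, and both `logging` calls and bare `print`/traceback output end up in the log file.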

© Stack Overflow or respective owner
