Python multiprocessing script hangs at the very end
While writing and stress-testing a Python script that uses the multiprocessing library, I ran into the problem that the script occasionally hangs at the end: it literally runs past the last line of code and then hangs.
The script uses Queues and Events, so I made sure to properly close the queues in the forked/spawned child processes (I tried both start methods) and to drain all queues in the parent. Nevertheless, occasionally - rarely, but it happens - the script hangs. I checked the process list, the remaining threads, and the queues; everything looked fine. Still ...
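To make the setup concrete, here is a reduced sketch of the kind of structure I mean - the names and the toy workload are illustrative, not my actual script:

import multiprocessing as mp
import queue
import time

def worker(worker_queue, result_queue, stop_event):
    # consume work items until the parent signals us to stop
    while not stop_event.is_set():
        try:
            item = worker_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        result_queue.put(item * 2)
    result_queue.close()        # no more puts from this process
    result_queue.join_thread()  # let the feeder thread flush its buffer

if __name__ == "__main__":
    worker_queue = mp.Queue()   # parent -> workers
    result_queue = mp.Queue()   # workers -> parent
    stop_event = mp.Event()
    procs = [mp.Process(target=worker,
                        args=(worker_queue, result_queue, stop_event))
             for _ in range(4)]
    for p in procs:
        p.start()
    for n in range(100):
        worker_queue.put(n)
    time.sleep(1)
    stop_event.set()
    for p in procs:
        p.join()
    # clean out all queues in the parent
    for q in (worker_queue, result_queue):
        while True:
            try:
                q.get_nowait()
            except queue.Empty:
                break
    # script ends here - and this is where it occasionally hangs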
When I hit Ctrl+C, I get the following stack trace:
^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib/python3.9/multiprocessing/util.py", line 300, in _run_finalizers
    finalizer()
  File "/usr/lib/python3.9/multiprocessing/util.py", line 224, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/usr/lib/python3.9/multiprocessing/queues.py", line 201, in _finalize_join
    thread.join()
  File "/usr/lib/python3.9/threading.py", line 1053, in join
    self._wait_for_tstate_lock()
  File "/usr/lib/python3.9/threading.py", line 1073, in _wait_for_tstate_lock
    if lock.acquire(block, timeout):
KeyboardInterrupt
(I can't move off Python 3.9, as I don't control the target system this is supposed to run on.)
So at interpreter shutdown, a queue finalizer is trying to join the queue's feeder thread, and that join never returns. Yet there's nothing obvious left to join; the only two remaining threads are MainThread and QueueFeederThread.
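That is based on a quick check just before the end of the script, something along these lines (not my exact code, but the idea):

import multiprocessing as mp
import threading

# just before the script ends: what is still alive?
print(threading.enumerate())   # [<_MainThread(MainThread, ...)>, <Thread(QueueFeederThread, ...)>]
print(mp.active_children())    # [] - all worker processes are gone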
After some more debugging I had the simple idea of not only emptying the queue before finishing the script, but also setting the queue variable to None:
worker_queue = None
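In the sketch from above, the end of the parent then becomes something like this:

# drain everything as before, then drop the last reference to the queue
while True:
    try:
        worker_queue.get_nowait()
    except queue.Empty:
        break

worker_queue = None  # as far as I can tell, this runs the queue's finalizer
                     # (the thread.join() from the traceback) right here,
                     # while the interpreter is still fully alive, instead
                     # of during atexit at interpreter shutdown

My best guess at the mechanism: multiprocessing registers that finalizer via a weakref callback, so dropping the last reference triggers it immediately during normal execution rather than leaving it for atexit._run_exitfuncs.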
This helps: the script no longer hangs at the end. At some point I'll check whether the problem is fixed in newer Python versions.