I joined Michael Kennedy on the excellent Talk Python podcast, episode #304, last week to chat about the Omnilib Project, an organization of open source packages I started, and how they fit into the modern world of Python and AsyncIO. We also discuss how I got started in programming and Python, with a rare glimpse into just how nerdy I was as a child. 😅
Listen in Overcast, any podcast player, or right here:
Back in 2019 (remember the before times?) I gave a talk at North Bay Python on the topics of coroutines, AsyncIO, and how they work. It’s a crazy journey from bytecode and runtime instructions to building our own terrible event loop in pure Python using nothing but generators and tears. Enjoy!
And if that was too slow, don’t miss the shorter rendition of this talk that I gave at PyCascades 2020, right before COVID ruined everything.
A week ago today, I gave a talk at PyCon Australia in Sydney. I discussed refactoring in Python, and how to build refactoring tools using nothing but the standard library, building up from concepts to syntax trees to how the lib2to3 module works. The talk finished with the announcement of Bowler, the open source refactoring framework I built at Facebook using those same concepts. I really appreciated the questions from the audience and in the halls afterwards, and we really enjoyed our time in Sydney. Thanks to everyone that made PyCon so great!
This past Friday, I presented a talk at PyCon US 2018 – the first time I’ve been fortunate enough to attend. The talk was focused on achieving high performance from modern Python services through the use of AsyncIO and the multiprocessing module. The turnout was better than I could have ever expected, and I was really happy to hear from everyone that stopped by the Facebook booth to ask questions, discuss Facebook engineering practices, or even just say “hello”. Thank you to everyone who made my first PyCon amazing!
A short quote, and timely announcement:
I’m currently preparing a talk for PyCon 2018 on using asyncio with multiprocessing for highly parallel monitoring and/or scraping workloads. To go with this talk, I’m working on some simple example code that I hope to publish on GitHub. This will be my first major conference talk, so I’m both excited and absolutely terrified! 😅
I’m looking forward to giving that talk, and will post a video here afterwards!
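In the meantime, here’s a minimal sketch of the general pattern the talk covers: driving blocking work in a process pool from an asyncio event loop. The `check` function, the placeholder URLs, and the pool size are all illustrative assumptions, not the actual example code from the talk.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor


def check(url):
    # stand-in for a blocking monitoring or scraping task
    return f"checked {url}"


async def main():
    loop = asyncio.get_running_loop()
    urls = ["a.example", "b.example", "c.example"]
    with ProcessPoolExecutor(max_workers=3) as pool:
        # fan the blocking work out to worker processes, and await
        # all of the results without blocking the event loop
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, check, url) for url in urls)
        )
    return results


if __name__ == "__main__":
    print(asyncio.run(main()))
```

The key idea is that `run_in_executor` wraps each blocking call in a future the event loop can await, so the CPU-bound work happens in worker processes while the loop stays free to schedule other tasks.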
I’ve recently been working on a parallel processing task in Python, using the multiprocessing module’s Pool class to manage multiple worker processes. Work comes in large batches, so there are frequent periods (especially right after startup) where all of the workers are idle. Unfortunately, when workers are idle, Python’s KeyboardInterrupt is not handled correctly by the multiprocessing module, which results in not only a lot of stacktraces spewed to the console, but also means the parent process will hang indefinitely.
There are quite a few suggestions for mitigating this issue, such as those given in this question on Stack Overflow. Many places point to Bryce Boe’s article, where he advocates rolling your own replacement for the multiprocessing module’s Pool class, but that seems to not only invite bugs and added maintenance overhead, but also doesn’t address the root cause.
I have figured out (what I think is) a better solution to the problem, and have not found anyone else mentioning it online, so I have decided to share that here. It not only solves the problem of handling the interrupt for both idle and busy worker processes, but also precludes the need for worker processes to even care about KeyboardInterrupt in the first place.
The solution is to prevent the child processes from ever receiving KeyboardInterrupt in the first place, and to leave it completely up to the parent process to catch the interrupt and clean up the process pool as it sees fit. In my opinion this is the best solution, because it reduces the amount of error handling code in the child process, and prevents needless error spew from idle workers.
The following example shows how to do this, and how it works with both idle and busy workers:
```python
#!/usr/bin/env python
# Copyright (c) 2011 John Reese
# Licensed under the MIT License

import multiprocessing
import signal
import time


def init_worker():
    # ignore SIGINT in worker processes; the parent handles it
    signal.signal(signal.SIGINT, signal.SIG_IGN)


def run_worker():
    time.sleep(15)


def main():
    print("Initializing 5 workers")
    pool = multiprocessing.Pool(5, init_worker)

    print("Starting 3 jobs of 15 seconds each")
    for i in range(3):
        pool.apply_async(run_worker)

    try:
        print("Waiting 10 seconds")
        time.sleep(10)
    except KeyboardInterrupt:
        print("Caught KeyboardInterrupt, terminating workers")
        pool.terminate()
        pool.join()
    else:
        print("Quitting normally")
        pool.close()
        pool.join()


if __name__ == "__main__":
    main()
```
This code is also available on GitHub as jreese/multiprocessing-keyboardinterrupt. If you think there’s a better way to accomplish this, please feel free to fork it and submit a pull request. Otherwise, hopefully this helps settle this issue for good.
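For what it’s worth, the same trick carries over to the newer concurrent.futures API, whose ProcessPoolExecutor accepts an initializer argument (in Python 3.7 and later). This is a hedged sketch of that variant, not part of the original example; the function names and sleep durations are placeholders.

```python
import signal
import time
from concurrent.futures import ProcessPoolExecutor


def init_worker():
    # ignore SIGINT in workers; only the parent handles Ctrl-C
    signal.signal(signal.SIGINT, signal.SIG_IGN)


def run_worker(seconds):
    time.sleep(seconds)
    return seconds


if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2, initializer=init_worker) as pool:
        try:
            print(list(pool.map(run_worker, [1, 1])))
        except KeyboardInterrupt:
            print("Caught KeyboardInterrupt, shutting down")
            pool.shutdown(wait=False)
```

As with the Pool version, the initializer runs once in each child before any work is dispatched, so an interrupt only ever reaches the parent.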