Description

The purpose is twofold:
1. clarify the interface between different event loops (there can be only one SIGCHLD handler)
2. allow reimplementing the sigchld handler using a different strategy (waitpid(-1), polling, blocking waitpid in a separate thread, ...)
I included two implementations: one which calls waitpid(-1) and one which avoids it.
Point 1. is needed to allow multiple event loop implementations to interoperate within the same process. I wrote an implementation for the GLib event loop too: https://bitbucket.org/a_ba/gbulb/commits/2f409adad942107afb7bc96994235e30caa4c17c
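A rough sketch of the shared watcher interface this description implies (the names here are illustrative, not necessarily the exact ones in the patch): every SIGCHLD strategy exposes the same three operations, so different event loops can cooperate through one watcher.

```python
import abc


class AbstractChildWatcher(abc.ABC):
    """Minimal interface a SIGCHLD-handling strategy would expose."""

    @abc.abstractmethod
    def add_child_handler(self, pid, callback, *args):
        """Arrange for callback(pid, returncode, *args) when `pid` exits."""

    @abc.abstractmethod
    def remove_child_handler(self, pid):
        """Forget the handler registered for `pid`; True if one was set."""

    @abc.abstractmethod
    def set_loop(self, loop):
        """Detach from the current event loop and attach to `loop`."""
```

Both the waitpid(-1) implementation and the waitpid(pid) implementation would subclass this, so a GLib loop (as in the gbulb link above) only needs to drive the same interface.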
Patch Set 1
Patch Set 2: second try
Patch Set 3: new unit tests + bugfix
Total comments: 1
Patch Set 4: merged upstream changes
Patch Set 5: child watcher managed by the event loop policy
Patch Set 6: DefaultEventLoopPolicy split in two classes
Patch Set 7: FastChildWatcher refactored + race condition + unit tests
Total comments: 21
Patch Set 8: nitpicks + race condition
Total comments: 7
Patch Set 9: second nitpicks burst
Patch Set 10: remove all signal handlers on .close()

Messages

Total messages: 19
I finally got to review this (sorry, it's been a busy week with other Tulip things).

After patching in your code I ran the tests, only to find out that there are a large number of test failures. The failures are all due to the same thing: get_child_watcher() calls get_event_loop(), but the tests are intentionally written so that get_event_loop() raises an exception.

This in turn made me realize a potential problem with your approach: the child watcher references a specific event loop. This is fine in most apps, where there's only a single event loop -- but then of course there's not much of a point to the patch. :-) It clearly fails dramatically when running the tests, which create a new event loop instance for each test case (and not always of the same kind, either).

I suppose your intended use case is the one where there is one event loop per thread, and this event loop lives as long as the thread lives, and the child watcher is associated with the main thread.

Perhaps the child watcher state and construction should be part of the event loop policy class? And it should try to ensure that it only calls add_signal_handler() on the (a?) loop associated with the main thread. And it should probably be deleted when that loop is closed.

https://codereview.appspot.com/16700044/diff/40001/tests/test_unix_events.py
File tests/test_unix_events.py (right):

https://codereview.appspot.com/16700044/diff/40001/tests/test_unix_events.py#...
tests/test_unix_events.py:673: with unittest.mock.patch.object(self.loop, "add_signal_handler") as self.m_add_signal_handler:
This (and some similar lines below) doesn't fit in 80 chars.
On 2013/10/31 23:10:51, GvR wrote:
> After patching in your code I ran the tests, only to find out that there are a
> large number of test failures. The failures are all due to the same thing:
> get_child_watcher() calls get_event_loop(), but the tests are intentionally
> written so that get_event_loop() raises an exception.

get_child_watcher() instantiates a default watcher if there is none, and it relies on the loop policy to know which is the current loop. Actually I already took care of these initialisation matters in test_events.py. Unfortunately rev ed7fa3d5fb67 introduced some conflicts. I merged it in the latest patch; the tests should run well now.

> This in turn made me realize a potential problem with your approach: the child
> watcher references a specific event loop. This is fine in most apps, where
> there's only a single event loop -- but then of course there's not much of a
> point to the patch. :-) It clearly fails dramatically when running the tests,
> which create a new event loop instance for each test case (and not always of
> the same kind, either).
>
> I suppose your intended use case is the one where there is one event loop per
> thread, and this event loop lives as long as the thread lives, and the child
> watcher is associated with the main thread.

Yes, I guess this is the most common use case.

> Perhaps the child watcher state and construction should be part of the event
> loop policy class?

Actually I first considered this option when I started the patch. I did not do this to keep the changes minimal. Looking at your comments, I realise that the patch does not address the content of SelectorEventLoop.close(). It is clear that the watcher should die when its main loop dies. Now I think it is better to handle the initialisation stuff in the policy and keep the loop<=>watcher interactions as basic as possible.
> https://codereview.appspot.com/16700044/diff/40001/tests/test_unix_events.py
> File tests/test_unix_events.py (right):
>
> https://codereview.appspot.com/16700044/diff/40001/tests/test_unix_events.py#...
> tests/test_unix_events.py:673: with unittest.mock.patch.object(self.loop,
> "add_signal_handler") as self.m_add_signal_handler:
> This (and some similar lines below) doesn't fit in 80 chars.

Will take care of that.
Did some refactoring.

The main change is the introduction of the .set_loop() method in the child watcher classes. Its purpose is to detach the watcher from its event loop and reattach it to another loop. It is no longer necessary to call get_event_loop() from get_child_watcher(): the watcher is created independently from the event loop and is attached later.

watcher.set_loop() is called from policy.set_event_loop() when replacing the main thread event loop. This is the only non-obvious side effect. It can be reimplemented if not desired.

Patches #5 and #6 are functionally identical. In #6 the DefaultEventLoopPolicy class is reimplemented in unix_events to host unix-specific code.
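The detach/reattach behaviour described above can be sketched roughly like this (simplified stand-in, not the patch itself; the loop object is only assumed to have add_signal_handler()/remove_signal_handler() methods):

```python
import signal


class BaseChildWatcher:
    """Sketch of the set_loop() dance described in the message above."""

    def __init__(self, loop=None):
        self._loop = None
        self.set_loop(loop)

    def set_loop(self, loop):
        """Detach from the current loop and attach to `loop` (may be None)."""
        if self._loop is not None:
            # Unregister our SIGCHLD handler from the old loop first.
            self._loop.remove_signal_handler(signal.SIGCHLD)
        self._loop = loop
        if loop is not None:
            loop.add_signal_handler(signal.SIGCHLD, self._sig_chld)

    def _sig_chld(self):
        """Called by the loop on SIGCHLD; a real watcher reaps children here."""
```

Calling set_loop(None) detaches the watcher entirely, which is what policy.set_event_loop() would rely on when the main thread loop goes away.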
Are you done checking in new versions? :-)

I like almost everything about this version!

I have one test failure though: it seems that test_sigchld_race_condition() is run twice with different watchers, and one of those fails:

FAIL: test_sigchld_race_condition (test_unix_events.FastChildWatcherTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/unittest/mock.py", line 1112, in patched
    return func(*args, **keywargs)
  File "tests/test_unix_events.py", line 984, in test_sigchld_race_condition
    callback.assert_called_once_with(50, 4, 1, 12)
  File "/usr/local/lib/python3.4/unittest/mock.py", line 758, in assert_called_once_with
    return self.assert_called_with(*args, **kwargs)
  File "/usr/local/lib/python3.4/unittest/mock.py", line 747, in assert_called_with
    raise AssertionError(_error_message()) from cause
AssertionError: Expected call: mock(50, 4, 1, 12)
Actual call: mock(50, 255, 1, 12)

In fact, it fails both on OSX and Ubuntu, so I don't think it's just me. :-(

PS. I have a few more style nits, which I can take care of once I commit it -- I'm less fond of \ continuation, and I like trailing commas in lists spanning multiple lines (especially lists that are mutated by a diff :-).
On 2013/11/01 19:05:22, GvR wrote:
> Are you done checking in new versions? :-)
>
> I like almost everything about this version!
>
> I have one test failure though: it seems that test_sigchld_race_condition() is
> run twice with different watchers, and one of those fails:
> FAIL: test_sigchld_race_condition (test_unix_events.FastChildWatcherTests)

Yes, this was expected. I have got the failure too. Actually I don't know how to fix this race condition in plain Python without using locks. The FastChildWatcher would be better implemented in C.

> PS. I have a few more style nits, which I can take care of once I commit it --
> I'm less fond of \ continuation, and I like trailing commas in lists spanning
> multiple lines (especially lists that are mutated by a diff :-).

I don't mind ;-)
On 2013/11/01 19:27:30, ayba wrote:
> On 2013/11/01 19:05:22, GvR wrote:
> > Are you done checking in new versions? :-)
> >
> > I have one test failure though: it seems that test_sigchld_race_condition()
> > is run twice with different watchers, and one of those fails:
> > FAIL: test_sigchld_race_condition (test_unix_events.FastChildWatcherTests)
>
> yes, this was expected. I have got the failure too.

It would be nice if you said so. I can't tell from just looking at the code if you think it's ready to go in or not. :-)

> Actually I don't know how to fix this race condition in plain python without
> using locks. The FastChildWatcher should better be implemented in C.

Hm, I'm not sure I believe using C would help, unless you are planning to rely on the GIL.

It's fundamentally a race condition between two threads sharing the watcher, right? One of these must be the main thread, which is where the signal handler runs. The scenario for the race condition is that thread A (not the main thread) spawns a child and then adds a handler for the child's pid to the watcher, but in between those two actions in thread A, the child exits *and* the main thread handles the resulting SIGCHLD. The watcher doesn't have a callback for the pid yet, so it throws away the status.

Perhaps we should not throw away the status but use some kind of queue? Rather than locking, you could use the following hare-brained scheme. Give the watcher a counter of "outstanding forks". The code that forks a process should do the following dance:

- tell the watcher to bump the outstanding forks counter
- fork the child, get its pid
- register the child handler for that pid
- tell the watcher to reduce the forks counter

The final step should be done in a finally clause, or we can force it using a context manager.
Perhaps the API should be something like this:

    with watcher:
        <create child>
        pid = <get pid>
        watcher.add_child_handler(pid, <callback>)

When the fork counter is zero (i.e. when no other thread is inside one of these with-statements), the watcher is free to ignore statuses for children it doesn't know about; but as long as the counter is > 0, it should save those statuses. When add_child_handler() is called it first checks the saved statuses. When the counter goes down to zero it can clear the set of saved statuses.

Thoughts?
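The scheme proposed above can be sketched like this (hypothetical class and method names; a real watcher would call os.waitpid() from its SIGCHLD handler, which is simulated here by _child_reaped()):

```python
import threading


class ForkCountingWatcher:
    """Sketch of the 'outstanding forks' scheme from the message above."""

    def __init__(self):
        self._lock = threading.Lock()
        self._forks = 0        # threads currently inside a with-block
        self._callbacks = {}   # pid -> (callback, args)
        self._zombies = {}     # pid -> status reaped before registration

    def __enter__(self):
        with self._lock:
            self._forks += 1
        return self

    def __exit__(self, *exc_info):
        with self._lock:
            self._forks -= 1
            if not self._forks:
                # No thread can still claim a saved status: drop them all.
                self._zombies.clear()

    def add_child_handler(self, pid, callback, *args):
        with self._lock:
            if pid in self._zombies:
                # The child was reaped before we got here: fire right away.
                status = self._zombies.pop(pid)
            else:
                self._callbacks[pid] = (callback, args)
                return
        callback(pid, status, *args)

    def _child_reaped(self, pid, status):
        # Called from the SIGCHLD handler once waitpid() has reaped `pid`.
        with self._lock:
            try:
                callback, args = self._callbacks.pop(pid)
            except KeyError:
                if self._forks:
                    # A thread is between fork() and add_child_handler():
                    # save the status instead of throwing it away.
                    self._zombies[pid] = status
                return
        callback(pid, status, *args)
```

The callbacks are invoked outside the lock, so a slow callback cannot block the signal handler path of another thread.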
I'm slightly confused: watchers are context managers already, but it looks like they don't do anything. So maybe you already have the right API, you just need to implement the correct behavior. :-) (And yes, you'd need an actual threading.Lock, but only to protect the counter when you're modifying it in __enter__ and __exit__.)
> > > Are you done checking in new versions? :-)

I hope I'm finished now ;)

> > Actually I don't know how to fix this race condition in plain python without
> > using locks. The FastChildWatcher should better be implemented in C.
>
> Hm, I'm not sure I believe using C would help, unless you are planning to rely
> on the GIL.

Yes, to ensure that __enter__ and __exit__ are executed atomically. I did not see any Lock in asyncio and supposed they were to be avoided. If locking is fine, then I won't seek complexity. :-p

> It's fundamentally a race condition between two threads sharing the watcher,
> right? One of these must be the main thread, which is where the signal handler
> runs. The scenario for the race condition is that thread A (not the main
> thread) spawns a child and then adds a handler for the child's pid to the
> watcher, but in between those two actions in thread A, the child exits *and*
> the main thread handles the resulting SIGCHLD. The watcher doesn't have a
> callback for the pid yet, so it throws away the status.
>
> Perhaps we should not throw away the status but use some kind of queue?

I used a dict instead. Callbacks are fired as soon as their respective child is registered (they may not occur in the same order as waitpid()).

> Rather than locking, you could use the following hare-brained scheme. Give the
> watcher a counter of "outstanding forks". The code that forks a process should
> do the following dance:
>
> - tell the watcher to bump the outstanding forks counter
> - fork the child, get its pid
> - register the child handler for that pid
> - tell the watcher to reduce the forks counter
>
> The final step should be done in a finally clause, or we can force it using a
> context manager. Perhaps the API should be something like this:
>
>     with watcher:
>         <create child>
>         pid = <get pid>
>         watcher.add_child_handler(pid, <callback>)
>
> When the fork counter is zero (i.e. when no other thread is inside one of
> these with-statements), the watcher is free to ignore statuses for children it
> doesn't know about; but as long as the counter is > 0, it should save those
> statuses. When add_child_handler() is called it first checks the saved
> statuses. When the counter goes down to zero it can clear the set of saved
> statuses.

Yes, this is it.

> I'm slightly confused: watchers are context managers already, but it looks
> like they don't do anything. So maybe you already have the right API, you just
> need to implement the correct behavior. :-)

Exactly.

> (And yes, you'd need an actual threading.Lock, but only to protect the counter
> when you're modifying it in __enter__ and __exit__.)

This is what I did finally. I think the patch should be ready now. The race condition is fixed and the coverage should be complete.

Note: I stumbled across another race condition in the creation of the event loop policy (fixed in events.py, lines 390-406). It may happen when the first call to get_event_loop_policy() is done concurrently from multiple threads.
Thanks for being so thorough! Here are a whole bunch of silly nits about docstring style and sentence capitalization/periods, and a few significant suggestions/requests (the longer comments). If you want to leave the style nits alone that's fine, I can clean it up before committing. But I'd really like to see the waitpid refactoring and have a better solution for the default policy class name.

https://codereview.appspot.com/16700044/diff/120001/asyncio/events.py
File asyncio/events.py (right):

https://codereview.appspot.com/16700044/diff/120001/asyncio/events.py#newcode322
asyncio/events.py:322: # Child processes handling (Unix only)
Period.

https://codereview.appspot.com/16700044/diff/120001/asyncio/events.py#newcode390
asyncio/events.py:390: # Lock for protecting the on-the-fly creation of the event loop policy
Period.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py
File asyncio/unix_events.py (right):

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:26: 'FastChildWatcher', 'DefaultEventLoopPolicy']
Trailing comma, move ']' to next line.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:397: termination or interruption by a signal.
Need to have a one-line description, blank line, then the rest.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:402: (this is needed to prevent a race condition in some implementations)
Period.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:427: if a handler was removed sucessfully, False if no handler was set."""
Same request as above.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:457: def __init__(self, loop):
Blank line before.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:480: # prevent a race condition in case a child terminated
Capital, period.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:494: # long as we're able to reap a child
Period.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:509: big number of children (O(n) each time SIGCHLD is raised)
Period.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:521: # prevent a race condition in case the child is already terminated
Capital, period.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:527: else (expected_pid,)):
This expression looks odd. More so with the requirement in the other version that expected_pid must be -1, and with the other version looking so much like this version. I wonder if you could have two (protected) methods, one to wait for all pids, one to wait for a specific list of pids? And have the common code for decoding the status in the base class?

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:532: # (may happen if waitpid() is called elsewhere)
Period.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:554: # after os.waitpid() returns
Period.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:568: (O(1) each time a child terminates)
Period.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:593: "caught subprocesses termination from unknown pids: %r",
Capitalize.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:598: assert self._forks, "must use the context manager"
Capitalize.

https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:640: # it may not be registered yet
Capitalize, period.

https://codereview.appspot.com/16700044/diff/120001/asyncio/windows_events.py
File asyncio/windows_events.py (right):

https://codereview.appspot.com/16700044/diff/120001/asyncio/windows_events.py...
asyncio/windows_events.py:22: 'DefaultEventLoopPolicy']
Trailing comma, move ']' to next line.

https://codereview.appspot.com/16700044/diff/120001/asyncio/windows_events.py...
asyncio/windows_events.py:460: class DefaultEventLoopPolicy(events.BaseDefaultEventLoopPolicy):
I'm not super comfortable with having two classes with the same name defined in different platform-specific modules. (Even though we have the same pattern with SelectorEventLoop -- I'd rather we didn't use it there either.) Could you either go back to the original platform-checking code in new_event_loop(), or maybe have the classes defined with different names and aliased to DefaultEventLoopPolicy in the platform-specific import (which still must be somewhere :-)?

https://codereview.appspot.com/16700044/diff/120001/tests/test_unix_events.py
File tests/test_unix_events.py (right):

https://codereview.appspot.com/16700044/diff/120001/tests/test_unix_events.py...
tests/test_unix_events.py:693: instance = None
Blank line before. And maybe not after.
On 2013/11/03 22:02:14, GvR wrote:
> https://codereview.appspot.com/16700044/diff/120001/asyncio/unix_events.py#ne...
> asyncio/unix_events.py:527: else (expected_pid,)):
> This expression looks odd. More so with the requirement in the other version
> that expected_pid must be -1, and with the other version looking so much like
> this version.
>
> I wonder if you could have two (protected) methods, one to wait for all pids,
> one to wait for a specific list of pids? And have the common code for decoding
> the status in the base class?

That makes sense. Plus there was a race condition (again) in the way ._do_waitpid() was called in .set_loop(). I moved the while loop into _do_waitpid_all() to fix it.

> https://codereview.appspot.com/16700044/diff/120001/asyncio/windows_events.py...
> asyncio/windows_events.py:460: class
> DefaultEventLoopPolicy(events.BaseDefaultEventLoopPolicy):
> I'm not super comfortable with having two classes with the same name defined in
> different platform-specific modules. (Even though we have the same pattern with
> SelectorEventLoop -- I'd rather we didn't use it there either.) Could you
> either go back to the original platform-checking code in new_event_loop(), or
> maybe have the classes defined with different names and aliased to
> DefaultEventLoopPolicy in the platform-specific import (which still must be
> somewhere :-)?

Why not alias them both?

+class _WindowsDefaultEventLoopPolicy(events.BaseDefaultEventLoopPolicy):
+    _loop_factory = SelectorEventLoop
+
+SelectorEventLoop = _WindowsSelectorEventLoop
+DefaultEventLoopPolicy = _WindowsDefaultEventLoopPolicy
https://codereview.appspot.com/16700044/diff/140001/asyncio/locks.py
File asyncio/locks.py (right):

https://codereview.appspot.com/16700044/diff/140001/asyncio/locks.py#newcode158
asyncio/locks.py:158: # TODO: add waiters:N if > 0.
Whoops, looks like you accidentally rolled back the most recent commit to this file.

https://codereview.appspot.com/16700044/diff/140001/asyncio/unix_events.py
File asyncio/unix_events.py (right):

https://codereview.appspot.com/16700044/diff/140001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:509:
There's no need for this blank line.

https://codereview.appspot.com/16700044/diff/140001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:613: logger.warning(
Can you log the warning without holding the lock? You never know what I/O the logger might do.

https://codereview.appspot.com/16700044/diff/140001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:623: # maybe the child is already terminated
Make a sentence.

https://codereview.appspot.com/16700044/diff/140001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:627: except KeyError:
The logic here is confusing (which of the two lines do you expect to raise KeyError?). Wouldn't it be easier to have a test at the top of the function, like this?

    if pid in self._zombies:
        returncode = self._zombies.pop(pid)
        callback(pid, returncode, *args)
    else:
        self._callbacks[pid] = callback, args

Also, I think you shouldn't test _forks or use _zombies without holding the lock.

https://codereview.appspot.com/16700044/diff/140001/asyncio/unix_events.py#ne...
asyncio/unix_events.py:657: logger.warning(
Again, I'd like to see the warning logged without holding the lock.

https://codereview.appspot.com/16700044/diff/140001/tests/test_locks.py
File tests/test_locks.py (left):

https://codereview.appspot.com/16700044/diff/140001/tests/test_locks.py#oldcode5
tests/test_locks.py:5: import re
Again, looks like you accidentally erased the new changes in this file.
On 2013/11/04 18:22:32, GvR wrote:
> asyncio/locks.py:158: # TODO: add waiters:N if > 0.
> Whoops, looks like you accidentally rolled back the most recent commit to this
> file.

Looks like 'upload.py --rev default:sigchld' did not do what I expected. :o) Would be nice to have something like 'git diff default...sigchld' in the tool.

> asyncio/unix_events.py:613: logger.warning(
> Can you log the warning without holding the lock? You never know what I/O the
> logger might do.

Right.

> asyncio/unix_events.py:627: except KeyError:
> The logic here is confusing (which of the two lines do you expect to raise
> KeyError?).

Both, actually. The first line raises if the child is still alive, and the second one if it is no longer registered (though I understand it would be strange to call .add_child_handler() and .remove_child_handler() at the same time from different threads, but it is technically possible).

> Wouldn't it be easier to have a test at the top of the function, like this?
>
>     if pid in self._zombies:
>         returncode = self._zombies.pop(pid)
>         callback(pid, returncode, *args)
>     else:
>         self._callbacks[pid] = callback, args

Yes. In that case, locking is mandatory.

> Also, I think you shouldn't test _forks or use _zombies without holding the
> lock.

Sure. Feel free to replace it with a critical section if you prefer ;-)
On 2013/11/04 23:05:01, ayba wrote:
> Sure. Feel free to replace it with a critical section if you prefer ;-)

All looks good now. I will apply this to the repo soon. Thank you so much for your quick turn-around of feedback and adequate push-back!
There are failures on the buildbots because the SIGCHLD handler is not being removed and raises an exception:

$ python-release -i -m test.test_asyncio
...
----------------------------------------------------------------------
Ran 683 tests in 11.524s

OK
>>> import os; os.system('echo hello')
hello
Exception ignored when trying to write to the signal wakeup fd:
Exception ignored when trying to write to the signal wakeup fd:
OSError: [Errno 9] Bad file descriptor
0
>>> r, w = os.pipe()
>>> import subprocess
>>> subprocess.Popen(['sleep', '10'])
<subprocess.Popen object at 0xb596106c>
>>> os.read(r, 1)
Exception ignored when trying to write to the signal wakeup fd:
OSError: [Errno 9] Bad file descriptor
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/oudkerk/Repos/py-default/Lib/asyncio/unix_events.py", line 93, in _handle_signal
    self._add_callback_signalsafe(handle)
  File "/home/oudkerk/Repos/py-default/Lib/asyncio/base_events.py", line 587, in _add_callback_signalsafe
    self._write_to_self()
  File "/home/oudkerk/Repos/py-default/Lib/asyncio/selector_events.py", line 89, in _write_to_self
    self._csock.send(b'x')
AttributeError: 'NoneType' object has no attribute 'send'
On 2013/11/06 16:13:20, sbt wrote:
> There are failures on the buildbots because the SIGCHLD handler is not being
> removed and raises an exception:
>
> AttributeError: 'NoneType' object has no attribute 'send'

Whoops. I don't have time to debug right now. Should I roll back the change until we come up with a fix? Or can it wait? (I'll have more time towards the end of today.)
> Whoops. I don't have time to debug right now. Should I roll back the change
> until we come up with a fix? Or can it wait? (I'll have more time towards the
> end of today.)

It has been like that for 40 hours without anyone submitting a bug report, so I don't think there is a need to rush.

BTW, I start work at Google in London soon :-)
Hi,

the unix loop .close() method was actually removed by the patch (SIGCHLD registration is now handled by the child watcher class).

-    def close(self):
-        handler = self._signal_handlers.get(signal.SIGCHLD)
-        if handler is not None:
-            self.remove_signal_handler(signal.SIGCHLD)
-        super().close()

In fact .close() should clean all signal handlers, not only SIGCHLD. I am sending a new patch to fix this.
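The fix described above amounts to iterating over every registered signal in close() instead of special-casing SIGCHLD. A simplified stand-in (not the real SelectorEventLoop, which also manages signal.signal() and the wakeup fd):

```python
import signal


class UnixLoopSketch:
    """Stand-in illustrating the close() fix discussed above."""

    def __init__(self):
        self._signal_handlers = {}

    def add_signal_handler(self, sig, callback, *args):
        # The real loop also calls signal.signal() and signal.set_wakeup_fd().
        self._signal_handlers[sig] = (callback, args)

    def remove_signal_handler(self, sig):
        """Return True if a handler for `sig` was removed."""
        return self._signal_handlers.pop(sig, None) is not None

    def close(self):
        # Remove *every* handler, not just SIGCHLD, so no handler can fire
        # after the loop's self-pipe has been closed.
        for sig in list(self._signal_handlers):
            self.remove_signal_handler(sig)
```

This avoids exactly the buildbot failure quoted earlier: a SIGCHLD arriving after close() would otherwise try to write to the already-closed self-pipe socket.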
On 2013/11/06 18:16:11, ayba wrote:
> Hi,
> the unix loop .close() method was actually removed by the patch (SIGCHLD
> registration is now handled by the child watcher class).
>
> In fact .close() should clean all signal handlers, not only SIGCHLD. I am
> sending a new patch to fix this.

Thanks for the quick fix. Applied and pushed. Can you watch the buildbots?

PS. Since this was already committed it would have been easier (for me) if you had created a new issue with just the new patch. In fact I didn't notice the new version on this issue until 15 minutes ago. :-)
On 07/11/2013 05:28, gvanrossum@gmail.com wrote:
> > In fact .close() should clean all signal handlers, not only SIGCHLD. I am
> > sending a new patch to fix this.
>
> Thanks for the quick fix. Applied and pushed. Can you watch the
> buildbots?

They look all green.

> PS. Since this was already committed it would have been easier (for me)
> if you had created a new issue with just the new patch. In fact I didn't
> notice the new version on this issue until 15 minutes ago. :-)

Ok, next time I will know ;)