Description: Fix for http://bugs.python.org/issue20566 (Tulip issue 127)

Patch Set 1: Initial proof of concept
Patch Set 2: Fix test_as_completed_reverse_wait (by changing the desired behavior!)
Patch Set 3: Add timeout support
Patch Set 4: Cancel timeout when last future completes. Various cleanups, more comments.
Total comments: 6
Patch Set 5: Part of the new refactoring. XXX marks unresolved issues.
Patch Set 6: Add comment explaining the idea. (WIP)
Patch Set 7: Completed narrative, and added some finishing touches.
Total comments: 14
Patch Set 8: Started from scratch using a Queue. Passes all tests!
Total comments: 6
Patch Set 9: Add a few clarifying comments. Rename internal functions with leading underscore.
Patch Set 10: Add comment to peculiar import.
Total comments: 1
Patch Set 11: Update docstring. Add another test to increase coverage.

Total messages: 36
NOTE: Need more test coverage, esp. for the two "optimization" blocks and for these two lines in on_timeout():

    if resf is None:
        results[i] = resf = futures.Future(loop=loop)
https://codereview.appspot.com/61210043/diff/60001/asyncio/tasks.py
File asyncio/tasks.py (right):

https://codereview.appspot.com/61210043/diff/60001/asyncio/tasks.py#newcode513
asyncio/tasks.py:513: if completed >= len(results) and timeout_handle is not None:
Looks like a potential race condition here - an on_completion() call could occur before the timeout_handle is set.
https://codereview.appspot.com/61210043/diff/60001/asyncio/tasks.py
File asyncio/tasks.py (right):

https://codereview.appspot.com/61210043/diff/60001/asyncio/tasks.py#newcode513
asyncio/tasks.py:513: if completed >= len(results) and timeout_handle is not None:
On 2014/02/09 14:55:06, glangford wrote:
> Looks like a potential race condition here - an on_completion() call could
> occur before the timeout_handle is set.

No, it couldn't -- callbacks are run one at a time by the event loop, so none will run until the current block (at_completion()) returns or yields.
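The point above can be demonstrated in isolation with a short sketch (using modern `async`/`await` syntax rather than the era's `yield from` style): a callback scheduled with call_soon() never interrupts the currently running block; it only runs once control is yielded back to the loop.

```python
import asyncio

# Sketch: a call_soon() callback cannot run until the currently
# executing block yields control back to the event loop.
order = []

async def main():
    loop = asyncio.get_running_loop()
    loop.call_soon(lambda: order.append("callback"))
    order.append("current block")  # runs first, even though the callback
                                   # was scheduled earlier
    await asyncio.sleep(0)         # yield to the loop; the callback runs now

asyncio.run(main())
```

So code that registers a callback and then sets timeout_handle cannot be preempted by that callback, as long as it does not yield in between.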
https://codereview.appspot.com/61210043/diff/60001/asyncio/tasks.py
File asyncio/tasks.py (right):

https://codereview.appspot.com/61210043/diff/60001/asyncio/tasks.py#newcode511
asyncio/tasks.py:511: resf._copy_state(f)
Seems a shame to have to use a tricky "proxy future" mechanism to communicate out from the on_completion callback. In concurrent.futures, threading.Event is used in a dedicated helper class (_Waiter) for a similar purpose. _wait in asyncio uses a "waiter" Future in a slightly different way, just marking it done as a signal out from its callback. Did someone say they hate callbacks?? I do too. ;-)

https://codereview.appspot.com/61210043/diff/60001/asyncio/tasks.py#newcode513
asyncio/tasks.py:513: if completed >= len(results) and timeout_handle is not None:
On 2014/02/09 17:51:58, GvR wrote:
> On 2014/02/09 14:55:06, glangford wrote:
> > Looks like a potential race condition here - an on_completion() call could
> > occur before the timeout_handle is set.
>
> No, it couldn't -- callbacks are run one at a time by the event loop, so none
> will run until the current block (at_completion()) returns or yields.

Ah ok - understood.
https://codereview.appspot.com/61210043/diff/60001/asyncio/tasks.py
File asyncio/tasks.py (right):

https://codereview.appspot.com/61210043/diff/60001/asyncio/tasks.py#newcode511
asyncio/tasks.py:511: resf._copy_state(f)
On 2014/02/09 18:14:30, glangford wrote:
> Seems a shame to have to use a tricky "proxy future" mechanism to communicate
> out from the on_completion callback. In concurrent.futures, threading.Event is
> used in a dedicated helper class (_Waiter) for a similar purpose. _wait in
> asyncio uses a "waiter" Future in a slightly different way, just marking it
> done as a signal out from its callback. Did someone say they hate callbacks??
> I do too. ;-)

Not sure what you are suggesting. You have to have an object that another task can wait for, and in Tulip that object is a Future. You could use asyncio.queues.Event, but guess what -- it wraps a Future. :-)

Similar, in the threading world the lowest-level object with that functionality is the mutex.

I can't do it with just Tulip coroutines here -- those by themselves are less powerful (since ultimately they still have to use a Future).
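The "waiter Future" pattern being discussed -- a callback communicating out to a waiting task by completing a Future -- can be sketched like this. A minimal illustration using modern `async`/`await` syntax; the names `waiter` and `on_completion` are illustrative, not the patch's code.

```python
import asyncio

# Minimal sketch: the only primitive another task can wait on is a
# Future, so the done-callback signals the waiter by completing one.

async def main():
    loop = asyncio.get_running_loop()
    waiter = loop.create_future()

    def on_completion(task):
        if not waiter.done():
            waiter.set_result(task)  # wake up whoever awaits `waiter`

    task = asyncio.ensure_future(asyncio.sleep(0.01, result=42))
    task.add_done_callback(on_completion)
    finished = await waiter          # suspended until the callback fires
    return finished.result()

result = asyncio.run(main())
```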
https://codereview.appspot.com/61210043/diff/60001/asyncio/tasks.py
File asyncio/tasks.py (right):

https://codereview.appspot.com/61210043/diff/60001/asyncio/tasks.py#newcode511
asyncio/tasks.py:511: resf._copy_state(f)
On 2014/02/09 18:44:41, GvR wrote:
> Not sure what you are suggesting. You have to have an object that another task
> can wait for, and in Tulip that object is a Future. You could use
> asyncio.queues.Event, but guess what -- it wraps a Future. :-)
>
> Similar, in the threading world the lowest-level object with that
> functionality is the mutex.
>
> I can't do it with just Tulip coroutines here -- those by themselves are less
> powerful (since ultimately they still have to use a Future).

Understood. Would using asyncio Event make the control flow clearer? Or is a _ProxyFuture class of general use? Just thinking out loud. At any rate, the inline comments referring to the distinction between original futures and helper futures are really essential for subsequent maintainers.
On 2014/02/09 19:13:21, glangford wrote:
> Understood. Would using asyncio Event make the control flow clearer? Or is a
> _ProxyFuture class of general use? Just thinking out loud. At any rate, the
> inline comments referring to the distinction between original futures and
> helper futures are really essential for subsequent maintainers.

Alternately - does asyncio Queue help to hide some of the complexity here? I'm thinking the callback could put() on the queue, the code at the end could get() and yield. Haven't thought it through, but perhaps the need for helper futures and indexed slots into the completed list goes away. (?)
Well, Queue uses Futures internally... But wait, I have another refactoring in mind.
Glenn, I did the promised refactoring and added a huge narrative (perhaps excessive :-). I'd love your feedback. I still need to bring coverage up to 100%, and there are a few issues I'm not sure about. Especially the f.exception() calls before clearing 'behind' to avoid warnings -- it was the only way to get rid of such a warning in test_as_completed_with_timeout().
https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py
File asyncio/tasks.py (right):

https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py#newcode361
asyncio/tasks.py:361: assert not isinstance(fs, futures.Future) and not iscoroutine(fs)
Oops, this is not part of the current patch, it's an experiment in response to a different thread.
https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py
File asyncio/tasks.py (right):

https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py#newcode530
asyncio/tasks.py:530: # yielded already). Because we always pop an output Future off
The model is clear. I think the comments really help to sketch the early yield+callback+timeout admin+later yield machinery. My sense is that longer comments are warranted in this case, given the complexity.

https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py#newcode598
asyncio/tasks.py:598: behind = collections.deque()  # Completed input Futures.
I like the twin deques! I would probably just change the comments from "Pending output Futures" to not use "output" since that word has so many meanings. Helper or Proxy or some such special term, that has a set definition: "A <helper|proxy> future represents an original Future which has not completed. <Helper|proxy> futures that have been yielded are tracked on the ahead queue." (for example)

BTW, this is the naive single Queue model I was originally thinking of - I don't know about the legitimacy of yielding from the Queue directly which can block, but this was my mental picture (trying not to use helper futures):

    done = queues.Queue()

    def on_completion(f):
        done.put_nowait(f)

...followed later by:

    # Produce remaining futures
    for _ in range(len(todo)):
        yield from done.get()

https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py#newcode607
asyncio/tasks.py:607: resf._copy_state(f)
Subtlety - so the helper future was popped from ahead, and will graduate to the caller since they are presumably running "yield from f" at this time. Ok.

https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py#newcode608
asyncio/tasks.py:608: elif behind is not None:
Spider senses tingling about this test, and the setting of behind to None down at the bottom. Is this saying that the callback could be triggered even though the code at the bottom thinks we are done? How is that possible? This is kind of a mysterious side effect.

https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py#newcode610
asyncio/tasks.py:610: if behind is None and not ahead:
Scratching my head about this block. Is this a valid condition to be removing callbacks? Is it desirable to be removing callbacks inside the callback itself?

https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py#newcode631
asyncio/tasks.py:631: f.remove_done_callback(on_completion)
Is it necessary to remove done callbacks on timeout as well? Would be nice to do it in just one place.

https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py#newcode660
asyncio/tasks.py:660: f.remove_done_callback(on_completion)
Why only conditionally remove callbacks here? Why not just remove_done_callback() to mirror how callbacks were added?
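The single-Queue mental picture sketched in this message can be turned into a runnable, simplified form. This is a hedged translation using modern `async`/`await` syntax, not the patch itself, and it collects results rather than yielding awaitables the way the real as_completed() does.

```python
import asyncio

# Simplified sketch of the single-Queue model: each done-callback puts
# the finished future on a queue, and the consumer pulls them off in
# completion order. (Illustrative only.)

async def as_completed_simple(fs):
    done = asyncio.Queue()
    todo = [asyncio.ensure_future(f) for f in fs]
    for t in todo:
        t.add_done_callback(done.put_nowait)  # callback enqueues the future
    results = []
    for _ in range(len(todo)):
        f = await done.get()        # next future to complete
        results.append(f.result())
    return results

async def main():
    return await as_completed_simple([
        asyncio.sleep(0.09, result=3),
        asyncio.sleep(0.03, result=1),
        asyncio.sleep(0.06, result=2),
    ])

results = asyncio.run(main())
```

Note that awaiting `done.get()` in the consumer sidesteps the helper-future and indexed-slot bookkeeping, which is exactly the simplification proposed here.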
https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py
File asyncio/tasks.py (right):

https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py#newcode613
asyncio/tasks.py:613: todo.clear()
Why clear todo within the callback? Is this just a signal to the "Produce remaining futures" code to not try to remove callbacks a second time?

https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py#newcode656
asyncio/tasks.py:656: f.exception()  # Avoid warnings about unretrieved exceptions.
Mmmm...I see the general intent but it looks funky on first reading. Is "consume = f.exception()" more readable, or something like that?
https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py
File asyncio/tasks.py (right):

https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py#newcode620
asyncio/tasks.py:620: f.add_done_callback(on_completion)
<...some time later...> The "callback cleanup" effort required is painful. I wonder if some of this responsibility should be pushed back into asyncio.Future. For example, could Future._callbacks be changed from a list to a WeakSet, so that the rest of the world isn't obligated to perform callback clean up?

https://codereview.appspot.com/61210043/diff/120001/asyncio/tasks.py#newcode635
asyncio/tasks.py:635: behind.append(resf)
Does this have to be multiple "timed out dummy Futures"? If the caller to as_completed() is waiting on 10 Futures, 5 of which time out, do they need to see 5 TimeoutError exceptions? I think the previous behaviour was just 1.
You're asking lots of good questions. I will have answers, but I need to focus on some other stuff the rest of today. Please hang in there, your code review is extremely valuable.
No worries, I am just looking at it in pieces as I find time during the day.

https://codereview.appspot.com/61210043/diff/120001/tests/test_tasks.py
File tests/test_tasks.py (right):

https://codereview.appspot.com/61210043/diff/120001/tests/test_tasks.py#newco...
tests/test_tasks.py:789: it = iter(asyncio.as_completed([a, b], timeout=0.12, loop=loop))
One idea for a new test is to create a list of futures with unique sleep times [0, 0.1, 0.2, 0.3, ...] then use itertools.permutations to shuffle the order they are given to as_completed(). In all permutations, the futures should be returned in ascending order (sorted by sleep time). This will exercise the different paths in queuing. A sleep time of zero gets a quick yield, exercising that optimization. Something like:

    @asyncio.coroutine
    def sleeper(t):
        yield from asyncio.sleep(t)
        return t

    taskTimes = [0, 0.1, 0.2, 0.3, 0.4]  # tighter spacing to reduce total run time?

    for times in itertools.permutations(taskTimes):
        fs = [asyncio.async(sleeper(t) ... ) for t in times]
        # followed by building a list of values obtained from as_completed(fs),
        # and assert that the list == taskTimes

As a post condition, there could be an assert for each future that the callbacks have been cleaned up (if needed).
Ok, that's all for now from me - I will jump back in when I get a chance.

https://codereview.appspot.com/61210043/diff/120001/tests/test_tasks.py
File tests/test_tasks.py (right):

https://codereview.appspot.com/61210043/diff/120001/tests/test_tasks.py#newco...
tests/test_tasks.py:790: f = next(it)
<continuing from previous idea> A more complex setup that might be good is a fuzz test, using random delays with a defined range, and duplicate values. For example, a list of random sleep times between 0.1 and 0.5 could be created. Then, some number of those delays are duplicated (making the queues longer). As before, after the Futures are scheduled the sleep times should be yielded back in ascending order. This can run against wait() as well. That test should probably run in high volume over many hours and permutations, so it might not be part of the usual test framework.
Let me just say that if randomized tests are necessary to give us confidence in the code, the code is too complex. (If the amount of explanation needed wasn't enough of a hint about that. :-)
On 2014/02/10 22:27:45, GvR wrote:
> Let me just say that if randomized tests are necessary to give us confidence
> in the code, the code is too complex. (If the amount of explanation needed
> wasn't enough of a hint about that. :-)

Fair enough! Maybe I was more confident in code when I was younger. Now, I will take the randomized tests every time. :-)
Your Queue idea is cool! Have a look at this code:

    def as_completed(fs, *, loop=None, timeout=None):
        loop = loop if loop is not None else events.get_event_loop()
        deadline = None if timeout is None else loop.time() + timeout
        todo = {async(f, loop=loop) for f in set(fs)}
        assert timeout is None
        from .queues import Queue
        done = Queue(loop=loop)

        def on_completion(f):
            done.put_nowait(f)

        for f in todo:
            f.add_done_callback(on_completion)

        @coroutine
        def helper():
            f = yield from done.get()
            return (yield from f)

        for _ in range(len(todo)):
            yield helper()

This passes all original as_completed() tests except for test_as_completed_with_timeout(), and I guess we can make that work too. (It also preserves the weird semantics tested by test_as_completed_reverse_wait().) We can even make the "shortcut" optimization by adding this to helper() after the first line:

    if f.done():
        return f.result()

I just hope that using a Queue doesn't reintroduce the O(N**2) issue that led us here -- the Queue implementation uses Futures internally and I haven't tried to analyze it yet. And remember that the most basic version of the solution in this code review (shown near the beginning of the long comment section) is even shorter than the Queue-based version. :-)
Adding some annotations in the comments.

https://codereview.appspot.com/61210043/diff/140001/asyncio/tasks.py
File asyncio/tasks.py (right):

https://codereview.appspot.com/61210043/diff/140001/asyncio/tasks.py#newcode464
asyncio/tasks.py:464: """Return an iterator whose values, when waited for, are Futures (or coroutines).
Actually they're always coroutines. :-)

https://codereview.appspot.com/61210043/diff/140001/asyncio/tasks.py#newcode480
asyncio/tasks.py:480: from .queues import Queue
This import deserves a comment; if I write "from . import queues" at the top, as I'd like to, nothing can be imported due to a circular import dependency.

https://codereview.appspot.com/61210043/diff/140001/asyncio/tasks.py#newcode492
asyncio/tasks.py:492: return  # on_timeout() was here first.
It would *seem* this is unreachable because on_timeout() removes the on_completion callback, but actually it's possible that a Future is already complete, but its on_completion hasn't run yet -- it's scheduled through call_soon() though, and we don't have its handle, so we can't cancel it. (Would be a nice feature to add this though, the race condition is pretty nasty.)

https://codereview.appspot.com/61210043/diff/140001/asyncio/tasks.py#newcode507
asyncio/tasks.py:507: if todo and timeout is not None:
Subtle: if todo is empty we shouldn't bother with the timeout.
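The race described at newcode492 -- a Future can already be done while its on_completion callback is still pending -- is easy to see in isolation, because add_done_callback() on an already-completed future only schedules the callback through call_soon() rather than running it immediately. A small sketch in modern syntax:

```python
import asyncio

# A done-callback added to an already-completed future does not run
# immediately; it is scheduled through call_soon(), so there is a
# window where f.done() is True but the callback has not run yet.
ran = []

async def main():
    loop = asyncio.get_running_loop()
    f = loop.create_future()
    f.set_result(1)                 # the future completes first
    f.add_done_callback(lambda fut: ran.append(fut.result()))
    assert ran == []                # callback merely scheduled, not run
    await asyncio.sleep(0)          # yield; scheduled callbacks run now
    assert ran == [1]

asyncio.run(main())
```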
Nice.

https://codereview.appspot.com/61210043/diff/140001/asyncio/tasks.py
File asyncio/tasks.py (right):

https://codereview.appspot.com/61210043/diff/140001/asyncio/tasks.py#newcode480
asyncio/tasks.py:480: from .queues import Queue
On 2014/02/11 17:21:40, GvR wrote:
> This import deserves a comment; if I write "from . import queues" at the top,
> as I'd like to, nothing can be imported due to a circular import dependency.

Yes, I ran into that problem!!

https://codereview.appspot.com/61210043/diff/140001/asyncio/tasks.py#newcode487
asyncio/tasks.py:487: done.put_nowait(None)
Ah, so None is a sentinel that is picked up by wait_for_one(). Ok.
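The None-sentinel idea noted here can be sketched in isolation. This is a hypothetical simplification in modern syntax -- the single-task setup and the `on_timeout` shape are illustrative, not the patch's code -- showing how a sentinel on the queue lets the consumer distinguish "timed out" from a completed future.

```python
import asyncio

# Sketch of the sentinel pattern: on timeout, None is put on the queue
# so the getter can tell "time is up" apart from a finished future.

async def main():
    loop = asyncio.get_running_loop()
    done = asyncio.Queue()
    task = asyncio.ensure_future(asyncio.sleep(10))  # won't finish in time

    def on_timeout():
        done.put_nowait(None)       # sentinel: signals timeout to the getter

    handle = loop.call_later(0.01, on_timeout)
    task.add_done_callback(done.put_nowait)

    f = await done.get()
    handle.cancel()
    task.cancel()
    if f is None:
        return "timed out"
    return f.result()

result = asyncio.run(main())
```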
Add a few clarifying comments. Rename internal functions with leading underscore.
Add comment to peculiar import.
These tests all pass. Without the patch to use a Queue, pass 3 fails.

https://codereview.appspot.com/61210043/diff/180001/tests/test_tasks.py
File tests/test_tasks.py (right):

https://codereview.appspot.com/61210043/diff/180001/tests/test_tasks.py#newco...
tests/test_tasks.py:1631: if __name__ == '__main__':

fuzz_as_completed.py:

    #!/usr/bin/env python3
    import asyncio
    import itertools
    import random
    import sys

    @asyncio.coroutine
    def sleeper(time):
        yield from asyncio.sleep(time)
        return time

    @asyncio.coroutine
    def watcher(tasks, delay=False):
        res = []
        for t in asyncio.as_completed(tasks):
            r = yield from t
            res.append(r)
            if delay:
                process_time = random.random() / 10
                yield from asyncio.sleep(process_time)  # simulate processing delay
        #print(res)
        #assert(sorted(res) == res)
        if sorted(res) != res:
            print('FAIL', res)
            print('------------')
        else:
            print('.', end='')
            sys.stdout.flush()

    loop = asyncio.get_event_loop()

    print('Pass 1')
    # All permutations of discrete task running times must be returned
    # by as_completed in the correct order.
    task_times = [0, 0.1, 0.2, 0.3, 0.4]  # 120 permutations
    for times in itertools.permutations(task_times):
        tasks = [asyncio.Task(sleeper(t)) for t in times]
        loop.run_until_complete(asyncio.Task(watcher(tasks)))
    print()

    print('Pass 2')
    # Longer task times, with randomized duplicates. 100 tasks each time.
    longer_task_times = [x/10 for x in range(30)]
    for i in range(20):
        task_times = longer_task_times * 10
        random.shuffle(task_times)
        #print('Times', task_times[:500])
        tasks = [asyncio.Task(sleeper(t)) for t in task_times[:100]]
        loop.run_until_complete(asyncio.Task(watcher(tasks)))
    print()

    print('Pass 3')
    # Same as pass 2, but with a random processing delay (0 - 0.1s) after
    # retrieving each future from as_completed and 200 tasks. This tests
    # whether the order that callbacks are triggered is preserved through
    # to the as_completed caller.
    for i in range(20):
        task_times = longer_task_times * 10
        random.shuffle(task_times)
        #print('Times', task_times[:200])
        tasks = [asyncio.Task(sleeper(t)) for t in task_times[:200]]
        loop.run_until_complete(asyncio.Task(watcher(tasks, delay=True)))
    print()

    loop.close()
Nice fuzz test. Perhaps you can contribute it to the examples directory? We don't really have another place to put things like that. A Makefile entry to run it would be a nice reminder. (Are you a core committer or have you otherwise signed the PSF contributor form?)
Oh, and is there anything else you think I should do before committing this code? (Oh, besides writing unit tests until it hurts. :-)
On 2014/02/11 21:27:15, GvR wrote:
> Nice fuzz test. Perhaps you can contribute it to the examples directory? We
> don't really have another place to put things like that. A Makefile entry to
> run it would be a nice reminder. (Are you a core committer or have you
> otherwise signed the PSF contributor form?)

I signed the contributor form just a few weeks ago, but I am not a core committer. Happy for it to go into examples if you think it is helpful.
On 2014/02/11 21:27:52, GvR wrote:
> Oh, and is there anything else you think I should do before committing this
> code? (Oh, besides writing unit tests until it hurts. :-)

I can't think of additional things to do before commit! The Queue implementation looks lean and I don't see performance issues. If we were relentlessly optimizing, put_nowait() can be tweaked in a subclass by passing an item straight to the getter (per the inline comment). But I don't think it's necessary.
Update docstring. Add another test to increase coverage.
New version (just docstring updates and a new test). I'll push this tomorrow if nobody screams.
On 2014/02/12 04:30:12, GvR wrote:
> New version (just docstring updates and a new test).
>
> I'll push this tomorrow if nobody screams.

fyi, asyncio.wait(return_when=ALL_COMPLETED) also passes the fuzz test as expected.
Pushed. I also committed your fuzz test to the examples directory. Thanks so much for reporting this issue and guiding me towards the right fix!
On 2014/02/13 02:03:39, GvR wrote:
> Pushed. I also committed your fuzz test to the examples directory. Thanks so
> much for reporting this issue and guiding me towards the right fix!

Great news. Happy to help!
Message was sent while issue was closed.
On 2014/02/13 03:11:16, glangford wrote:
> On 2014/02/13 02:03:39, GvR wrote:
> > Pushed. I also committed your fuzz test to the examples directory. Thanks
> > so much for reporting this issue and guiding me towards the right fix!
>
> Great news. Happy to help!

BTW did you say you had a fuzz test for wait() as well?
Message was sent while issue was closed.
On 2014/02/13 03:17:40, GvR wrote:
> On 2014/02/13 03:11:16, glangford wrote:
> > On 2014/02/13 02:03:39, GvR wrote:
> > > Pushed. I also committed your fuzz test to the examples directory.
> > > Thanks so much for reporting this issue and guiding me towards the
> > > right fix!
> >
> > Great news. Happy to help!
>
> BTW did you say you had a fuzz test for wait() as well?

I did say that...but as I am reviewing it more carefully now, I realize that it doesn't test what I thought it did. Because wait returns done and pending *sets*, the test driver doesn't have visibility into the return order. So the wait test is not in shape for the examples directory.