tests have flakey coverage? #88
Thank you for pointing this out, although I'm not sure how to interpret the coverage report. It says "uncov" on the side, but the lines are green? Anyway, you are right that the coverage of these lines depends on a race. Here's a build where the 3.6 tests covered these lines but the 3.7 tests did not. There are other builds where both 3.6 and 3.7 cover them.
Here's an example where
I checked to see if there's any other difference in coverage between these two runs:
Notice that when the
So the race seems to be related to sending at the same time a connection is closed. Sometimes it raises at line 604, and sometimes it raises inside
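To make the race concrete, here is a minimal, hypothetical sketch (not taken from the actual test suite; the handler name, the port-0 binding, and the overall structure are my own assumptions) of a client send racing a server-initiated close. Which code path observes the close depends on scheduling, which is what makes coverage of those lines nondeterministic:

```python
# Hypothetical sketch: a client send racing a server that closes immediately.
# Depending on scheduling, ConnectionClosed can surface in send_message() or
# be noticed by the connection's background writer task instead.
import trio
from trio_websocket import open_websocket_url, serve_websocket, ConnectionClosed

async def close_immediately(request):
    # Accept the handshake, then close right away to race the client's send.
    ws = await request.accept()
    await ws.aclose()

async def main():
    async with trio.open_nursery() as nursery:
        # Bind to port 0 so the OS picks a free port.
        server = await nursery.start(serve_websocket, close_immediately,
                                     'localhost', 0, None)
        try:
            async with open_websocket_url(f'ws://localhost:{server.port}') as ws:
                # This may raise ConnectionClosed here, or the close may only
                # be observed later by the writer task.
                await ws.send_message('hello')
        except ConnectionClosed:
            pass
        nursery.cancel_scope.cancel()

trio.run(main)
```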
I messed around with this a bit on its own, and also in conjunction with implementing timeouts. I have a few ideas about how to improve this situation:
trio.testing may help: https://trio.readthedocs.io/en/latest/reference-testing.html#inter-task-ordering Essentially the same as using events.
This is what I usually do...
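For reference, a minimal sketch of the inter-task ordering helper linked above, trio.testing.Sequencer. The task names and the send/close framing are made up for illustration, but this is the pattern a test could use to pin down whether the close happens before or after the send, instead of relying on a race:

```python
# Sketch of trio.testing.Sequencer: each `async with seq(n)` block is forced
# to run in ascending order of n, regardless of how the scheduler interleaves
# the tasks. Plain trio.Event works too, as noted above.
import trio
from trio.testing import Sequencer

async def main():
    seq = Sequencer()

    async def closer():
        async with seq(0):
            print("close happens first")

    async def sender():
        async with seq(1):
            print("send happens second")

    async with trio.open_nursery() as nursery:
        nursery.start_soon(sender)
        nursery.start_soon(closer)

trio.run(main)
```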
e.g. the coverage tool reports a regression on an unrelated change:
https://coveralls.io/builds/20028763/source?filename=trio_websocket%2F_impl.py#L842
it's the `except` clause in `_write_pending`
if there are timing issues in the tests, that could explain intermittent errors like #68