Should we actually run fixtures concurrently? #57
Comments
#120 is a genuine bug that is partly due to the complexity of managing concurrent fixture startup -- specifically, if one fixture setup crashes, we stop creating new ones, but we don't do anything to cancel any concurrent fixtures. And fixing it is made more difficult by the complexity of our fixture setup code. If we switched to serial setup, it might make this more maintainable.
I don't think we had to take that route. The fix was ensuring the cancel scopes got passed through properly instead?
Not so much passing them through properly, but keeping track of which fixtures were currently being set up, and cancelling all of them instead of just the test.
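Not the project's actual fixture-setup code, just a minimal sketch of the behaviour under discussion (`set_up_fixtures` and `setup_fns` are hypothetical names): with a trio nursery, a crash in one concurrently running setup cancels its siblings, which is roughly what the fix described above had to arrange by hand.

```python
import trio

async def set_up_fixtures(setup_fns):
    # Hypothetical sketch: run all fixture setups concurrently.  If one
    # setup raises, the nursery cancels the setups still in progress and
    # re-raises, instead of leaving them running.
    results = {}

    async def run_one(name, fn):
        results[name] = await fn()

    async with trio.open_nursery() as nursery:
        for name, fn in setup_fns.items():
            nursery.start_soon(run_one, name, fn)
    return results
```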
Hi, I see a call on your website for people who would benefit from pytest fixtures that are set up concurrently. At my company we've been using pytest to run integration tests for some of our programs. These integration tests generally look as follows: start an empty database, create the tables, start a few services, and poll each service until it is ready.
Quite a few of these setup tasks are slow. Starting the empty database takes a couple of seconds, creating the tables takes a couple more, and finally polling the services takes a couple of seconds each, depending on the service. Naively implemented, these startup times all add up: where each individual thing is just a couple of seconds, the total runs into minutes. Besides, not every service needs the database, and services generally don't need other services. Right now we have a workaround that looks roughly like this:

```python
@pytest.fixture(scope="session", autouse=True)
def start_whole_environment():
    with start_db() as db:
        with start_x() as x:
            with start_y() as y:
                with start_z() as z:
                    # all services are launched before we start polling,
                    # so the waits overlap
                    x.poll()
                    y.poll()
                    z.poll()
                    yield (x, y, z)
                    # teardown after the whole session
                    x.terminate()
                    y.terminate()
                    z.terminate()
```

The benefit of that workaround is that all services start at about the same time, and polling is only about as slow as the slowest one. The downside is that it now always starts the entire environment, even if you just want to run a single test that only needs a single service. A solution with a simple fixture per service would be much nicer. So far this GitHub repo is the only thing I've found after searching specifically for running fixtures concurrently.
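For illustration only (not from the comment above): with per-service fixtures, each fixture stays simple, and the hope is that concurrent fixture setup would let a test that requests several of them pay only for the slowest startup. `start_x` and `start_y` are the same hypothetical helpers used in the workaround above.

```python
import pytest

# Hypothetical per-service fixtures.  Today a test that needs x, y and z
# pays for their setups one after another; with concurrent fixture setup
# they could start at the same time, as in the session-wide workaround.
@pytest.fixture(scope="session")
def service_x():
    with start_x() as x:
        x.poll()   # wait until the service is ready
        yield x

@pytest.fixture(scope="session")
def service_y():
    with start_y() as y:
        y.poll()
        yield y
```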
Upsides:
Counterarguments:
- ContextVars (see Rework fixtures #50 (comment)) – currently fixtures all share a `contextvars.Context`, so if multiple fixtures are mutating the same `ContextVar`, the results will be random. (Of course, doing this is inherently nonsensical. But nonsensical-and-consistent is still better than nonsensical-and-random.) A sketch of this hazard follows below.

Leaving this issue open for now for discussion, and to collect any positive/negative experiences people want to report.
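To make the `ContextVar` point above concrete, a hypothetical example (not from the issue itself): two setups mutating the same `ContextVar` while sharing one `contextvars.Context`.

```python
import contextvars

current_db = contextvars.ContextVar("current_db")

# Run serially, the last setup always wins, so the result is at least
# consistent.  Run concurrently in one shared Context, whichever setup
# happens to run last wins, and that depends on scheduling.
async def setup_a():
    current_db.set("a")

async def setup_b():
    current_db.set("b")
```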