
Add window states API #2473

Open · wants to merge 287 commits into main

Conversation

@proneon267 (Contributor) commented Apr 2, 2024

Fixes #1857

Design discussion: #1884

The following are the API changes:

On the interface:

  • toga.constants.WindowState:
      • WindowState.NORMAL
      • WindowState.MAXIMIZED
      • WindowState.MINIMIZED
      • WindowState.FULLSCREEN
      • WindowState.PRESENTATION
  • toga.Window:
      • Added: toga.Window.state (getter and setter)
      • Deprecated: toga.Window.full_screen (getter and setter)
  • toga.App:
      • Added: toga.App.is_in_presentation_mode (getter), toga.App.enter_presentation_mode(), toga.App.exit_presentation_mode()
      • Deprecated: toga.App.is_full_screen (getter), toga.App.set_full_screen(), toga.App.exit_full_screen()

On the backend:

  • toga.Window: added get_window_state(), set_window_state()
  • toga.App: added enter_presentation_mode(), exit_presentation_mode()
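To illustrate the shape of the interface described above, here is a minimal toy model. This is a sketch only: the enum values and property names are taken from the listing, but the `Window` class below is an invented stand-in, not toga's implementation.

```python
from enum import Enum, auto

class WindowState(Enum):
    # The five states listed above.
    NORMAL = auto()
    MAXIMIZED = auto()
    MINIMIZED = auto()
    FULLSCREEN = auto()
    PRESENTATION = auto()

class Window:
    """Toy stand-in for toga.Window showing the getter/setter shape."""

    def __init__(self):
        self._state = WindowState.NORMAL

    @property
    def state(self) -> WindowState:
        # A real getter would ask the backend via get_window_state().
        return self._state

    @state.setter
    def state(self, value: WindowState) -> None:
        # A real setter would delegate to the backend's set_window_state().
        self._state = value

window = Window()
window.state = WindowState.FULLSCREEN
print(window.state.name)  # FULLSCREEN
```

The one property replaces the boolean full_screen getter/setter pair, which is why the old API can be deprecated rather than extended.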

However, I did encounter some issues, which I have put as inline comments. I do have some other questions about rubicon for implementing the new API, but I will post them later on.

TODO: Write and modify documentation, fix issues with tests, implement for the other remaining platforms, etc.

PR Checklist:

  • All new features have been tested
  • All new features have been documented
  • I have read the CONTRIBUTING.md file
  • I will abide by the code of conduct

@proneon267 (Contributor, Author) commented:

@freakboy3742 Can you review this when you are free?

@freakboy3742 (Member) commented:

Unless there's a design or implementation issue that requires feedback, I generally don't review code until it's passing CI - this is currently failing coverage and Android tests, and I'm going to guess iOS tests as well (although #2476 is likely masking that problem).

Is there a specific design or implementation issue where you're seeking feedback prior to getting the tests to pass?

@proneon267 (Contributor, Author) commented:

I need some guidance with the Android testbed, specifically test_app.py::test_full_screen and test_presentation_mode. I explained the issue in inline comments on test_app.py.

@freakboy3742 (Member) commented:

> I need some guidance with the Android testbed, specifically test_app.py::test_full_screen and test_presentation_mode. I explained the issue in inline comments on test_app.py.

Well - I guess my first question is why is there anything to test at all? Mobile apps don't have any concept of "full screen" or "maximized"... even the representation of "window" is pretty loose. What behaviour are you implementing here?

Beyond that - if a test passes in isolation, but not as part of the suite, that usually indicates that one of the previous tests has left the test suite in an inconsistent state. A common example in the window tests is when a test leaves an open window (sometimes due to a test failing); subsequent checks of the window count then fail.

@proneon267 (Contributor, Author) commented:

> Well - I guess my first question is why is there anything to test at all? Mobile apps don't have any concept of "full screen" or "maximized"... even the representation of "window" is pretty loose. What behaviour are you implementing here?

Mostly fullscreen and presentation modes, where the navigation bar, title bar, and menu bar remain hidden.

> Beyond that - if a test passes in isolation, but not as part of the suite, that usually indicates that one of the previous tests has left the test suite in an inconsistent state. A common example in the window tests is when a test leaves an open window (sometimes due to a test failing); subsequent checks of the window count then fail.

Yes, I realize that. In this case, the tests seem to fail because they don't exit presentation mode after testing. I thought maybe adding a delay to the redraw method would work, but it doesn't. The tests only pass when I set the window state directly on the window object or run the test in isolation.

I have also tried to identify any problems with the app interface but didn't find any. The same test logic is run in test_window, but there it works properly. The app implementation of presentation mode calls the window implementation of presentation mode. So, the behaviour should be identical.
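One standard way to address this class of problem is to put the state restoration in a teardown step that runs even when the test body fails. This is a hedged sketch: the real testbed uses its own fixtures and app probe, and `FakeApp`/`run_test` here are invented names; in pytest the try/finally would live in a yield fixture.

```python
class FakeApp:
    """Invented stand-in for the app under test."""

    def __init__(self):
        self.in_presentation_mode = False

    def enter_presentation_mode(self):
        self.in_presentation_mode = True

    def exit_presentation_mode(self):
        self.in_presentation_mode = False

def run_test(app, test_body):
    # try/finally plays the role of a fixture's teardown phase: the
    # cleanup runs even if an assertion inside test_body fails, so the
    # next test never inherits a window stuck in presentation mode.
    try:
        test_body(app)
    finally:
        if app.in_presentation_mode:
            app.exit_presentation_mode()

app = FakeApp()

def body(a):
    a.enter_presentation_mode()
    assert a.in_presentation_mode

run_test(app, body)
print(app.in_presentation_mode)  # False
```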

@proneon267 (Contributor, Author) commented:

@freakboy3742 Could you also take a quick peek at the mobile platform testbed tests for test_app.py::test_full_screen and test_presentation_mode? Their implementation is identical to that of test_window.py::test_presentation_state, since the app APIs call into the window API endpoints.

@freakboy3742 (Member) commented:

> Could you also take a quick peek at the mobile platform testbed tests for test_app.py::test_full_screen and test_presentation_mode? Their implementation is identical to that of test_window.py::test_presentation_state, since the app APIs call into the window API endpoints.

I have, and I've already told you the general class of problem you're hitting. You've even hinted at the problem yourself:

> Yes, I realize that. In this case, the tests seem to fail because they don't exit presentation mode after testing.

If a single test isn't leaving the app in the same state that it found it... well, that's the source of your problem. That's what you need to fix.

As for "the implementation is identical"... well, the golden rule of computers applies: if in doubt, the computer is right. The computer doesn't have opinions. It runs the code it has. If you think something is identical, but testing is producing inconsistent results... something about your implementation isn't identical.

You're the one proposing this PR; the onus is on you to solve problems with the implementation. If you've got a specific question about the way Toga or the testbed operates, then I can do what I can to provide an explanation, but I can only dedicate time to resolving an open-ended "why doesn't it work?" question when the problem is on my personal todo list - which usually means it's something on Toga's roadmap, or it's the source of a bug that is preventing people from using the published version of Toga.

@proneon267 (Contributor, Author) commented May 23, 2024

The coverage report complains about:

    Name              Stmts   Miss Branch BrPart  Cover   Missing
    -------------------------------------------------------------
    src/toga/app.py     357      2     82      2  99.1%   846, 852
    -------------------------------------------------------------
    TOTAL              5067      2   1238      2  99.9%

toga/core/src/toga/app.py, lines 846 to 849 in 06b7f2a:

    @property
    def is_in_presentation_mode(self) -> bool:
        """Is the app currently in presentation mode?"""
        return any(window.state == WindowState.PRESENTATION for window in self.windows)

So, Line 846 is just @property. I first thought maybe the is_in_presentation_mode property is not being invoked in the test, but it is invoked several times in the tests:
assert not app.is_in_presentation_mode

assert app.is_in_presentation_mode

Next, it reports missing coverage for line 852:

toga/core/src/toga/app.py, lines 851 to 854 in 06b7f2a:

    def enter_presentation_mode(
        self,
        window_list_or_screen_window_dict: list[Window] | dict[Screen, Window],
    ) -> None:

So, line 852 is just `self,`. Here also, enter_presentation_mode() is invoked several times during the tests:

app.enter_presentation_mode({app.screens[0]: window1})

app.enter_presentation_mode([window1, window2])

So, neither of these reported coverage misses makes much sense to me.

Also, the tests expectedly fail on Python 3.13 due to Pillow.

@freakboy3742 (Member) commented:

I can't work out what's going on with the coverage report in CI (perhaps a state cache somewhere?); but if I run the tests locally (tox -m test310), the lines in app.py that report as uncovered are 836 and 842, which correspond nicely to two return conditions in set_full_screen():

toga/core/src/toga/app.py, lines 830 to 843 in 06b7f2a:

        #     DeprecationWarning,
        #     stacklevel=2,
        # )
        if self.windows is not None:
            self.exit_full_screen()
            if windows is None:
                return
            screen_window_dict = dict()
            for window, screen in zip(windows, self.screens):
                screen_window_dict[screen] = window
            self.enter_presentation_mode(screen_window_dict)
        else:
            warn("App doesn't have any windows")
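To cover the two return branches of a shim like the one quoted, each branch needs its own call path. The following is a hypothetical toy model: `FakeApp` mirrors the quoted control flow, it is not the real toga.App, and the dict built for presentation mode is simplified.

```python
import warnings

class FakeApp:
    """Toy model of the deprecated set_full_screen() shim quoted above."""

    def __init__(self, windows):
        self.windows = windows
        self.presented = None

    def exit_full_screen(self):
        pass

    def enter_presentation_mode(self, screen_window_dict):
        self.presented = screen_window_dict

    def set_full_screen(self, windows=None):
        if self.windows is not None:
            self.exit_full_screen()
            if windows is None:
                return  # branch 1: called with no windows to show
            self.enter_presentation_mode(dict(enumerate(windows)))
        else:
            warnings.warn("App doesn't have any windows")  # branch 2

app = FakeApp(windows=["main"])
app.set_full_screen()  # exercises the early return (line 836's branch)
assert app.presented is None

empty = FakeApp(windows=None)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    empty.set_full_screen(["w"])  # exercises the warning branch
assert len(caught) == 1
```

Branch coverage only counts a conditional as covered when both outcomes are taken, which is why a line that "is invoked several times" can still be reported as partially missed.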

@freakboy3742 (Member) left a review comment:

Ok - I think we're down to one last issue with the intermediate state testbed tests. I'm not surprised these intermediate tests are necessary to exercise one (or more) specific intermediate transitions - it's just not clear why that is the case. I strongly suspect the fix is to reduce the test cases to moves between three specific states, include all the "triples" that are problematic, and then document on the test that ideally we'd test all intermediate transitions, but that would be computationally prohibitive.

    @pytest.mark.parametrize(
        "intermediate_states",
        [
            (
@freakboy3742 (Member) commented:

Ok... but again... I'm not clear on what this is testing. It's now a parameterised test with 1 parameterised value... and it's not clear at all what case it's testing, or why that specific case needs to be tested.

Each of the individual transitions should be tested as part of the previous test; if there's a need to test a specific transition through an intermediate state, then that sequence should be tested (and it should be documented why it's an issue).

Ideally, we'd test all possible intermediate states; however, I'll grant that 125 test cases is excessive here, so in the name of practicality, it's acceptable to cut to the subset that we believe to be an issue... but it should be clear which "set of three" is the problem.

(And, to be clear - the same issue applies to the desktop intermediate states test)

@freakboy3742 (Member) commented:

I've also found an interesting edge case in manual testing: on GTK, going FULLSCREEN -> PRESENTATION -> NORMAL results in the restored "normal" size of the window being the full screen size. I'm guessing this might have something to do with the window not going through an intermediate NORMAL transition between FULLSCREEN and PRESENTATION, resulting in the window's "natural" size being modified.

@proneon267 (Contributor, Author) commented:

> I've also found an interesting edge case in manual testing: on GTK, going FULLSCREEN -> PRESENTATION -> NORMAL results in the restored "normal" size of the window being the full screen size. I'm guessing this might have something to do with the window not going through an intermediate NORMAL transition between FULLSCREEN and PRESENTATION, resulting in the window's "natural" size being modified.

I tried to reproduce this on X11, both through the examples/window app and manually with:

def do_state_switching(self, widget, **kwargs):
    self.main_window.state = WindowState.FULLSCREEN
    self.main_window.state = WindowState.PRESENTATION
    self.main_window.state = WindowState.NORMAL

But I could not reproduce the behavior you are describing. Could you confirm whether you did the manual testing in a Wayland environment?

@freakboy3742 (Member) commented:

> But I could not reproduce the behavior you are describing. Could you confirm whether you did the manual testing in a Wayland environment?

Yes - this was under Wayland. When I recreated the test under X, I didn't see the problem (and, FWIW, under X, it visually looked like there was a "reduce to NORMAL" step in between FULLSCREEN and PRESENTATION).

@proneon267 (Contributor, Author) commented Nov 7, 2024

I did some tests, and it seems GTK on Wayland just forgets the window's original size and position when the window is restored from any other state.

So I have created a fix accordingly. But since Wayland doesn't allow programmatic moving of windows, I cannot do anything about the forgotten position when restoring to the NORMAL state from any other state.

EDIT: That didn't work.

@proneon267 (Contributor, Author) commented:

Adding a slight delay before switching on Wayland worked :D

        # state switching.
        if IS_WAYLAND:  # pragma: no-cover-if-linux-x
            GLib.timeout_add(
                10, partial(self._apply_state, WindowState.NORMAL)
            )
@freakboy3742 (Member) commented:

Adding "magic numbers" is borderline behavior in a test; but in production code, it's really not a good idea - who is to say that 10ms is enough? It's inherently going to be machine dependent; and as machines change over time, it's unclear how performance changes will impact the chosen "magic" number.

@proneon267 (Contributor, Author) commented:

I agree. I will try to find a GTK callback event that is triggered on completion of the GTK-side processing.

@proneon267 (Contributor, Author) commented:

I tried to use Gtk.events_pending() and Gtk.main_iteration_do(), but it still didn't work:

diff --git a/gtk/src/toga_gtk/window.py b/gtk/src/toga_gtk/window.py
index f2a0e1448..1c4407f8a 100644
--- a/gtk/src/toga_gtk/window.py
+++ b/gtk/src/toga_gtk/window.py
@@ -79,20 +79,16 @@ class Window:
                     # Add slight delay to prevent glitching  on wayland during rapid
                     # state switching.
                     if IS_WAYLAND:  # pragma: no-cover-if-linux-x
-                        GLib.timeout_add(
-                            10, partial(self._apply_state, WindowState.NORMAL)
-                        )
-                    else:  # pragma: no-cover-if-linux-wayland
-                        self._apply_state(WindowState.NORMAL)
+                        while Gtk.events_pending():
+                            Gtk.main_iteration_do(False)
+                    self._apply_state(WindowState.NORMAL)
                 else:
                     self._pending_state_transition = None
             else:
                 if IS_WAYLAND:  # pragma: no-cover-if-linux-x
-                    GLib.timeout_add(
-                        10, partial(self._apply_state, self._pending_state_transition)
-                    )
-                else:  # pragma: no-cover-if-linux-wayland
-                    self._apply_state(self._pending_state_transition)
+                    while Gtk.events_pending():
+                        Gtk.main_iteration_do(False)
+                self._apply_state(self._pending_state_transition)
 
     def gtk_delete_event(self, widget, data):
         # Return value of the GTK on_close handler indicates whether the event has been

@proneon267 (Contributor, Author) commented:

I also tried GLib.idle_add(), and it didn't work either:

diff --git a/gtk/src/toga_gtk/window.py b/gtk/src/toga_gtk/window.py
index f2a0e1448..dea71f108 100644
--- a/gtk/src/toga_gtk/window.py
+++ b/gtk/src/toga_gtk/window.py
@@ -79,17 +79,15 @@ class Window:
                     # Add slight delay to prevent glitching  on wayland during rapid
                     # state switching.
                     if IS_WAYLAND:  # pragma: no-cover-if-linux-x
-                        GLib.timeout_add(
-                            10, partial(self._apply_state, WindowState.NORMAL)
-                        )
+                        GLib.idle_add(partial(self._apply_state, WindowState.NORMAL))
                     else:  # pragma: no-cover-if-linux-wayland
                         self._apply_state(WindowState.NORMAL)
                 else:
                     self._pending_state_transition = None
             else:
                 if IS_WAYLAND:  # pragma: no-cover-if-linux-x
-                    GLib.timeout_add(
-                        10, partial(self._apply_state, self._pending_state_transition)
+                    GLib.idle_add(
+                        partial(self._apply_state, self._pending_state_transition)
                     )
                 else:  # pragma: no-cover-if-linux-wayland
                     self._apply_state(self._pending_state_transition)
                    self._apply_state(self._pending_state_transition)

@proneon267 (Contributor, Author) commented:

I have also tested with the following signals, and still they do not work consistently:

  • map-event
  • realize
  • draw
  • configure-event
  • state-flags-changed

@proneon267 (Contributor, Author) commented:

I tried to find any reliable alternative to the timeout method, but unfortunately, none of them work consistently and properly.

@freakboy3742 (Member) commented:

Hrm... I don't like it, but I also don't have any better suggestions. I guess I can live with it as long as we add an inline comment about why the delay is there.

    [
        # These sets of intermediate states are specifically chosen to trigger cases that
        # will cause test failures if the implementation is incorrect on certain backends,
        # such as macOS.
@freakboy3742 (Member) commented:

We've now removed the mobile versions of these intermediate tests - but it's not clear to me why we need 6 states to test an intermediate transition. An intermediate state transition involves 3 states - the start state, the end state, and an intermediate state that needs to be honoured in some way. As I've mentioned in a couple of comments, the "full" test suite would be 125 "3 state" transitions, but that would be prohibitively complex. Why do we need chains of 6 states here?

@proneon267 (Contributor, Author) commented:

The chain of 6 states is needed to trigger the logic for handling newly requested states while a previous state is still pending application.

Currently, the test logic is:

  • Assign initial window state to window
  • Wait for window
  • Assert window state is initial state
  • Assign chain of 6 window states
  • Assign the final window state
  • Wait for window
  • Assert window state is final state

For example:

  • Initially => pending_state = None
  • FULLSCREEN is requested, but pending application => pending_state = FULLSCREEN, since the window state was NORMAL
  • MINIMIZED is requested, but FULLSCREEN is not yet applied => pending_state = MINIMIZED
  • PRESENTATION is requested, but FULLSCREEN is not yet applied => pending_state = PRESENTATION
  • Now FULLSCREEN is applied, and the code checks pending_state, i.e., PRESENTATION
  • Since current_state (FULLSCREEN) != pending_state (PRESENTATION), it applies NORMAL
  • When the NORMAL state is reached, it then applies pending_state (PRESENTATION)

Essentially, it triggers the code logic branch "request to apply a new state when there is a pending_state".

If there were only 1 or 2 intermediate states, then the code branch for "applying a new window state when there is a pending state" would not be triggered.

For example:

    @objc_method
    def windowDidEnterFullScreen_(self, notification) -> None:
        if (
            self.impl._pending_state_transition
            and self.impl._pending_state_transition != WindowState.FULLSCREEN
        ):
            # Directly exiting fullscreen without a delay will result in the error:
            # `2024-08-09 15:46:39.050 python[2646:37395] not in fullscreen state`
            # and any subsequent window state calls to the OS will not work or will be glitchy.
            self.performSelector(
                SEL("delayedFullScreenExit:"), withObject=None, afterDelay=0
            )
        else:
            self.impl._pending_state_transition = None

In the above, the chain of 6 intermediate states triggers the branch:

        if (
            self.impl._pending_state_transition
            and self.impl._pending_state_transition != WindowState.FULLSCREEN
        ):
            # Directly exiting fullscreen without a delay will result in the error:
            # `2024-08-09 15:46:39.050 python[2646:37395] not in fullscreen state`
            # and any subsequent window state calls to the OS will not work or will be glitchy.
            self.performSelector(
                SEL("delayedFullScreenExit:"), withObject=None, afterDelay=0
            )

But if there were only 1 or 2 intermediate states, then it would only trigger the following branch:

        else:
            self.impl._pending_state_transition = None
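The sequence described above can be simulated in plain Python. This is a toy model of the pending-state bookkeeping (an invented class; the real backends drive these steps from asynchronous OS notifications such as windowDidEnterFullScreen_):

```python
applied = []  # record of states actually applied, for illustration

class PendingStateWindow:
    def __init__(self):
        self.state = "NORMAL"
        self.pending = None

    def request(self, state):
        # While a previous transition is still in flight, only the most
        # recent request is remembered: MINIMIZED below is overwritten.
        self.pending = state

    def transition_complete(self, reached):
        # Analogue of the notification handler: the OS reports that
        # `reached` has been applied; check for a different pending state.
        self.state = reached
        applied.append(reached)
        if self.pending and self.pending != reached:
            # Route through NORMAL, then apply the still-pending state.
            self.state = "NORMAL"
            applied.append("NORMAL")
            self.state = self.pending
            applied.append(self.pending)
        self.pending = None

w = PendingStateWindow()
w.request("FULLSCREEN")
w.request("MINIMIZED")      # overwritten before FULLSCREEN completes
w.request("PRESENTATION")
w.transition_complete("FULLSCREEN")
print(applied)  # ['FULLSCREEN', 'NORMAL', 'PRESENTATION']
print(w.state)  # PRESENTATION
```

The MINIMIZED request vanishing from the applied list is exactly the "newest request wins" behaviour the 6-state chain is trying to exercise.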

@freakboy3742 (Member) commented:

I understand that these are the lines of code you are trying to cover, and that sequence of six transitions is one mechanism by which you can cover these lines of code. The point I'm trying to make (and this is a recurring theme in reviews of your code) is that the purpose of this exercise is not to make the test suite stop complaining about coverage. It's to ensure that the code is completely tested, and - and this is the important part - that it's obvious why the code is completely tested.

A test should be as simple as possible, as atomic as possible. This sometimes requires that there are multiple tests. The simplicity and atomicity is an essential part of the process, because it's part of making future bugs easy to identify.

As an example - let's say a future developer decides the performSelector call on entering fullscreen is clearly redundant, and "simplifies" the code. This test suite will report the failure, but it won't be clear why the test has failed. A sequence of 6 state changes, many of which aren't FULLSCREEN, will suddenly stop working. This will then require a complex debugging process to work out why that specific test is failing.

However, if the test is a sequence of 3 that clearly involves entering FULLSCREEN mode, it will be a lot more clear where the problem lies.

>   • Initially => pending_state = None
>   • FULLSCREEN is requested, but pending application => pending_state = FULLSCREEN, since the window state was NORMAL
>   • MINIMIZED is requested, but FULLSCREEN is not yet applied => pending_state = MINIMIZED
>   • PRESENTATION is requested, but FULLSCREEN is not yet applied => pending_state = PRESENTATION

My core point - this part of the sequence is already describing two distinct intermediate tests:

  • requesting MINIMIZED when FULLSCREEN has not been applied
  • requesting PRESENTATION when FULLSCREEN has not been applied.

As written, the test to this point is a test of "PRESENTATION has been requested when a MINIMIZED state has been requested but not applied because FULLSCREEN hasn't completed". So - if the test fails... is the problem with the MINIMIZED request or the PRESENTATION request?

If the state logic were so complex that this was, in fact, a problematic transition, then yes - it might need to be tested as a longer sequence - but that would also be a flag that there's a much bigger problem with the underlying logic - or, at least, that there's a need for a specific explanation in the docstring associated with the test to clarify the specific set of circumstances, not just "this is a state change known to be a problem".
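The "sequence of three" shape being suggested could be sketched like this. All names here are hypothetical: FakeWindow, run_case, and the chosen triples are illustrative stand-ins, not the testbed's actual fixtures or problem cases.

```python
# Each triple names one specific start -> intermediate -> end case, so a
# failure points directly at the single transition that broke.
PROBLEM_TRIPLES = [
    ("NORMAL", "FULLSCREEN", "MINIMIZED"),
    ("NORMAL", "FULLSCREEN", "PRESENTATION"),
]

class FakeWindow:
    """Synchronous stand-in: real windows apply states asynchronously."""

    def __init__(self):
        self.state = "NORMAL"

def run_case(window, start, intermediate, end):
    window.state = start
    window.state = intermediate  # requested; may still be in flight
    window.state = end           # queued behind the intermediate state
    assert window.state == end

for triple in PROBLEM_TRIPLES:
    run_case(FakeWindow(), *triple)
print("all cases passed")
```

Because each parametrized case is one named triple, a regression (say, someone removing the delayed fullscreen exit) fails exactly the cases whose intermediate state is FULLSCREEN, which is the debuggability the review is asking for.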

@proneon267 (Contributor, Author) commented:

I agree with you. Looking at the tests, I now realize that I have essentially mashed together two different tests into one, thereby making it harder to follow the test logic and debug on failure. I will separate these tests and report back.

Successfully merging this pull request may close these issues:

  • Window maximize & minimize API functionality

3 participants