When possible, use native OS functions to make a blocking call waiting for
an incoming event. The previous behavior was to continuously poll the event
queue with a small delay between each poll.
The blocking call uses a new optional video driver method,
WaitEventTimeout, if available. It is called only if a window that
is already shown is available. If present, that window is designated
via the variable wakeup_window to receive a wake-up event if
needed.
The WaitEventTimeout function accepts a timeout parameter. If
positive, the call waits for an event or returns once the timeout
expires without any event. If the timeout is zero, it behaves as a
poll. If the timeout is negative, the function blocks indefinitely
waiting for an event.
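A minimal sketch of that timeout convention, assuming a POSIX backend that
waits on the display connection's file descriptor; the real driver hook lives
in SDL's video backends and its exact signature may differ. poll(2) happens to
use the same convention, which is why it is used here for illustration.

    #include <poll.h>

    /* Illustrative only: wait for readability on a native event fd,
     * e.g. the X11 connection. poll() mirrors the timeout convention:
     * >0 waits up to that many milliseconds, 0 polls, <0 blocks forever. */
    static int wait_for_native_event(int fd, int timeout_ms)
    {
        struct pollfd pfd;
        pfd.fd = fd;
        pfd.events = POLLIN;

        /* >0: an event is readable, 0: timeout expired, -1: error. */
        return poll(&pfd, 1, timeout_ms);
    }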
To let the main thread see events sent from a different thread,
a "wake-up" signal is sent to the main thread if the main thread
is in a blocking state. The wake-up event is sent to the designated
wakeup_window, if present.
The wake-up event is sent only if the PushEvent call comes
from a different thread. Before sending the wake-up event,
the ID of the thread making the blocking call is saved in the
variable blocking_thread_id and compared to the current
thread's ID to decide whether the wake-up event should be sent.
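A minimal sketch of that decision, with illustrative names
(MaybeWakeBlockedThread and the stub SendWakeupEvent are not SDL functions);
the real code also guards these fields with wakeup_lock.

    #include <SDL.h>

    static SDL_threadID blocking_thread_id;  /* set when entering the blocking wait */

    /* Hypothetical stand-in for the driver's SendWakeupEvent hook: a real
     * driver would post a native message to the designated wakeup_window. */
    static void SendWakeupEvent(void)
    {
    }

    static void MaybeWakeBlockedThread(void)
    {
        /* Wake the waiter only when the push comes from another thread; a
         * push from the blocked thread itself means it is no longer blocked. */
        if (blocking_thread_id != 0 && blocking_thread_id != SDL_ThreadID()) {
            SendWakeupEvent();
        }
    }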
Two new optional video device methods are introduced:
WaitEventTimeout
SendWakeupEvent
In addition, the mutex
wakeup_lock
is defined and initialized, but only for the drivers supporting the
methods above.
If the methods are not present, the system behaves as before,
performing a periodic polling of the event queue.
The blocking call is also disabled if a joystick or sensor is detected,
falling back to the previous behavior.
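A minimal sketch of this optional-hook pattern, using simplified types
(VideoDeviceSketch and WaitForNextEvent are illustrative, not SDL's internal
video device structure):

    #include <SDL.h>

    typedef struct VideoDeviceSketch {
        /* Optional: NULL when the backend cannot block natively. */
        int  (*WaitEventTimeout)(int timeout_ms);
        void (*SendWakeupEvent)(SDL_Window *window);
        SDL_mutex *wakeup_lock;   /* created only when the hooks exist */
    } VideoDeviceSketch;

    static void WaitForNextEvent(VideoDeviceSketch *dev)
    {
        if (dev->WaitEventTimeout) {
            dev->WaitEventTimeout(-1);   /* block until an event arrives */
        } else {
            /* Fallback: the previous periodic polling of the event queue. */
            while (!SDL_PollEvent(NULL)) {
                SDL_Delay(1);
            }
        }
    }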
By default, we will minimize the window when we receive Alt+Tab with a
full-screen keyboard-grabbed window, to allow the user to escape the
full-screen application.
Some applications like remote desktop clients may want to handle Alt+Tab
themselves, so provide an opt-out via SDL_HINT_ALLOW_ALT_TAB_WHILE_GRABBED=0.
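For example, a client that forwards Alt+Tab itself might set the hint through
the public hint API before creating its window (illustrative program, not
taken from SDL):

    #include <SDL.h>

    int main(int argc, char *argv[])
    {
        /* Keep Alt+Tab for ourselves instead of minimizing the grabbed
         * full-screen window. */
        SDL_SetHint(SDL_HINT_ALLOW_ALT_TAB_WHILE_GRABBED, "0");

        SDL_Init(SDL_INIT_VIDEO);
        /* ... create the full-screen window, grab the keyboard, run ... */
        SDL_Quit();
        return 0;
    }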
Michael Roe
The mappings for keyboard scancodes on Linux do not include keypad left and right parentheses (used on some Microsoft keyboards), keypad plus/minus, LANG1 and LANG2 (used on Korean keyboards), XK86MenuKB, and F20 (remapped to Audio Mic Mute in the usual X11 config).
The 10 ms delay effectively caps input polling at 100 Hz and rendering
at 100 FPS if applications use these functions in their event loop. The
delay may also lead to dropped frames even at 60 FPS if applications are
unlucky enough to hit the delay and rendering takes longer than 6 ms.
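For illustration, a loop of roughly this shape is what gets capped; with the
native blocking wait it wakes as soon as an event arrives (illustrative loop,
not taken from SDL):

    #include <SDL.h>

    static void run_loop(void)
    {
        SDL_Event event;
        int running = 1;

        while (running) {
            /* Wait up to roughly one frame for input. With the old polling
             * fallback this wait is quantized by the internal 10 ms sleep. */
            if (SDL_WaitEventTimeout(&event, 16)) {
                if (event.type == SDL_QUIT) {
                    running = 0;
                }
                /* ... handle the event ... */
            }
            /* ... render the next frame ... */
        }
    }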
The X11 target sets mouse->last_x and last_y in EnterNotify and then calls
SDL_SendMouseMotion(), which throws away the new position because it matches
the mouse->last_x and last_y we just set. This means that if the pointer is
in the window when it is created, SDL_GetMouseState() will report a position
of 0,0 until a MotionNotify event (the pointer moves) arrives and corrects the
mouse state.
Mostly fixes Bugzilla #1612.
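A simplified sketch of why the position gets dropped (hypothetical code, not
the exact SDL_mouse.c implementation): the core mouse code computes the delta
against last_x/last_y and discards motion that does not change state.

    /* Simplified, hypothetical version of the relevant check. */
    static int SendMouseMotionSketch(int x, int y, int relative,
                                     int *last_x, int *last_y)
    {
        int xrel = x - *last_x;
        int yrel = y - *last_y;

        /* If EnterNotify already wrote x,y into last_x/last_y, the deltas
         * are zero here and the event is thrown away. */
        if (!relative && xrel == 0 && yrel == 0) {
            return 0;
        }

        *last_x = x;
        *last_y = y;
        /* ... post the motion event and update the public mouse state ... */
        return 1;
    }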
superfury
I notice that, somehow, when locking the mouse into place (using SDL_SetRelativeMouseMode), at least the movement information gets through to both mouse movement and touch movement events?
My app handles both, so when moving a touched finger across the app (using RDP from an Android device) I see the mouse moving inside the app when it shouldn't, meaning that the touch movement is properly ignored by the app (it is press-location dependent) but the mouse movement is still performed due to the mouse movement events?
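For context, one common way an app that handles both inputs avoids acting on
the same gesture twice is to skip mouse events whose `which` field is
SDL_TOUCH_MOUSEID, i.e. mouse events SDL synthesized from a touch device. A
sketch of that filter (it does not by itself address the relative-mode issue
described above):

    #include <SDL.h>

    static void handle_event(const SDL_Event *event)
    {
        switch (event->type) {
        case SDL_MOUSEMOTION:
            if (event->motion.which == SDL_TOUCH_MOUSEID) {
                return;   /* synthesized from a finger; the touch path handles it */
            }
            /* ... real mouse motion ... */
            break;
        case SDL_FINGERMOTION:
            /* ... touch motion ... */
            break;
        }
    }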
This time, we make anything we think is a MacBook trackpad report its touches
as SDL_MOUSE_TOUCHID, even though they're not _actually_ synthesized events,
and let all mouse input--even if the OS synthesized it from a multitouch
trackpad on our behalf--look like physical input. This is backwards from
reality, but produces the results most apps will expect.
Note that if you have a real touch device that doesn't appear to be the
trackpad, it'll produce real touch events with unique device ids, so it's
not a total loss here, but also note that the way we decide if it was the
trackpad is an imperfect heuristic; it happens to work out right now, but
it's not impossible that a real touchscreen could come to the Mac at some
point and (incorrectly?) call it a "mouse" input, etc.
But for now, good enough.
Fixes Bugzilla #4690.
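Given that behavior, an app that only cares about touches from devices other
than the trackpad could filter on the reported touch id. A sketch using the
public SDL_MOUSE_TOUCHID constant:

    #include <SDL.h>

    static void handle_finger(const SDL_TouchFingerEvent *finger)
    {
        if (finger->touchId == SDL_MOUSE_TOUCHID) {
            /* Trackpad touches now report this id; most apps will want to
             * rely on the corresponding mouse events instead. */
            return;
        }
        /* ... touches from other (real) touch devices ... */
    }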
Max Waine
SDL_mouse.c, if compiled for Windows, requires GetDoubleClickTime (declared in winuser.h) to compile. Without Vulkan present this fails to compile, as the include chain for winuser.h is the following.
SDL_mouse.c -> SDL_sysvideo.h -> SDL_vulkan_internal.h -> SDL_windows.h -> windows.h -> winuser.h.
The problem is that SDL_vulkan_internal.h doesn't include SDL_windows.h if Vulkan isn't present, so under MinGW/GCC this gives a -Wimplicit-function-declaration warning for GetDoubleClickTime, and under MSVC it fails to compile completely.
The solution would be to simplify the include chain by including SDL_windows.h under the same condition as GetDoubleClickTime (#ifdef __WIN32__) in SDL_mouse.c (or in another file that isn't quite so indirectly included).
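A sketch of what that fix might look like in SDL_mouse.c; the relative path is
an assumption based on SDL's usual source layout:

    #ifdef __WIN32__
    #include "../core/windows/SDL_windows.h"   /* for GetDoubleClickTime() */
    #endif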