This is obnoxious and wrong: the patch that activates the Dock before
activating the app fixes the _menu_ not responding on Catalina, but the
first window created by the app won't have keyboard focus unless a small
delay is inserted.
This obviously needs a better solution, but it gets it limping along correctly
for now.
sjordan
We did some investigation in a different direction which I would like to share. As mentioned previously, the scaling setting in the preferences plays an important role in our problem and also hints at an issue with point/pixel scaling factors.
We found an interesting correlation between our fail case and the behavior of [nsWindow.screen backingScaleFactor]. It turns out that whenever we encounter the fail case, the scale factor is zero when we print it shortly after calling SDL_CreateWindow; after some time the value changes to a non-zero one. In the success case the scaling factor is non-zero 'immediately'. Note that we don't use this factor ourselves. We also find that the window's backingScaleFactor does not show this strange behavior, even in the fail case.
We have also attempted to find out whether any event triggers the transition from zero to non-zero. We found the transition happening when we call SDL_PollEvent. We can even force it by explicitly adding an SDL_PollEvent call at an early stage, but the transition only happens if a certain amount of time has elapsed, so we need to add some sleep before the call to trigger it earlier. All of this seems to imply that the transition happens asynchronously and that SDL_PollEvent merely causes the system to update its internal state at that time.
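For concreteness, a minimal sketch of the kind of check we ran (assuming the NSWindow is obtained via SDL_GetWindowWMInfo; error handling and the sleep are omitted):

/* Diagnostic sketch, not production code: print the screen's backing scale
   factor right after window creation and again after pumping events. */
SDL_Window *window = SDL_CreateWindow("test", 0, 0, 640, 480,
                                      SDL_WINDOW_FULLSCREEN | SDL_WINDOW_OPENGL);
SDL_SysWMinfo info;
SDL_VERSION(&info.version);
if (SDL_GetWindowWMInfo(window, &info)) {
    NSWindow *nswindow = info.info.cocoa.window;
    /* Fail case on 10.15: this prints 0.0 here... */
    NSLog(@"screen scale: %f", [[nswindow screen] backingScaleFactor]);
    SDL_PollEvent(NULL);
    /* ...and only becomes non-zero after enough time has passed and events are pumped. */
    NSLog(@"screen scale after poll: %f", [[nswindow screen] backingScaleFactor]);
}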
We have also verified that the scaling setting in the preferences does NOT directly correlate to the scaling factor behavior. We find that a particular scaling setting can lead to a fail case for one resolution and a success case for another resolution. This shows that the scaling setting alone does not determine whether the problem will appear or not.
We have also verified on another Mac with 10.14 that the scaling factor is always non-zero and we always have the success case.
I have no idea how to interpret this initial-zero behavior and haven't found any usable information on the screen backing scale factor. It seems as if 10.15 does some of this setup more asynchronously than before, and maybe the problem is caused by unfortunate timing. I would be very interested to hear your opinion on that.
...
Finally we found the cause of all our problems: it's the origin hack in Cocoa_SetWindowFullscreen:
/* Hack to fix origin on Mac OS X 10.4 */
NSRect screenRect = [[nswindow screen] frame];
if (screenRect.size.height >= 1.0f) {
    rect.origin.y += (screenRect.size.height - rect.size.height);
}
If we comment this out, our game and testdraw2 behave correctly.
It turns out that if a window is not fully contained in the screen, its screen property becomes zero, and that is why we saw a zero when printing the backing scale factor (although it's not clear why it became non-zero later).
We suggest adding a runtime check that skips this code on 10.15 (or possibly earlier, if you happen to know that the hack is not needed for certain older versions).
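A minimal sketch of such a guard, assuming @available is acceptable in the Cocoa backend (the 10.15 cutoff is only the suggestion above, not a verified minimum):

if (@available(macOS 10.15, *)) {
    /* Skip the origin hack: at this point [nswindow screen] may still describe
       the whole desktop, so the correction would move the window incorrectly. */
} else {
    /* Hack to fix origin on Mac OS X 10.4 */
    NSRect screenRect = [[nswindow screen] frame];
    if (screenRect.size.height >= 1.0f) {
        rect.origin.y += (screenRect.size.height - rect.size.height);
    }
}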
More info: consider the line
NSRect screenRect = [[nswindow screen] frame];
in Cocoa_SetWindowFullscreen. We found that this rect has the dimensions of the desktop
on our OS X 10.15 setup. This is true both for the success case and the fail case. It seems as if the success case is actually a fail case in disguise.
On the other Mac with OS X 10.14 the same rect has the dimensions of the newly created screen. This is what I would expect, because at that time the window has already been created successfully and there should be a newly created screen associated with the window.
What are the cases in which the whole origin conversion code for the fullscreen case is supposed to have a non-trivial result?
Today we found that if we print the dimensions of [nswindow screen] later, they are correct. So the conclusion seems to be that OS X 10.15 does indeed do the window/screen setup more asynchronously than before, and that the origin correction code uses [nswindow screen] at a time when the window/screen setup isn't finalized yet.
It was done to allow hotkey resizing of borderless windows, but Windows will sometimes draw it, regardless of our WM_* message handling. See bug 4466 for more details.
Alex Denisov
When using Win10 on-screen keyboard (tooltip.exe), the left and right cursor keys in it do not produce SDLK_LEFT and SDLK_RIGHT events.
Windows messages generated by the on-screen keyboard, for some reason, have their scancodes set to zeroes. Here is the log from Spy++:
WM_KEYDOWN nVirtKey:VK_LEFT cRepeat:1 ScanCode:00 fExtended:0 fAltDown:0 fRepeat:0 fUp:0
WM_KEYUP nVirtKey:VK_LEFT cRepeat:1 ScanCode:00 fExtended:0 fAltDown:0 fRepeat:1 fUp:1
A regular physical keyboard produces VK_LEFT (ScanCode:4B) and VK_RIGHT (ScanCode:4D), which are interpreted correctly.
With the on-screen keyboard, the switch statement in VKeytoScancode() does not check for VK_LEFT and VK_RIGHT, so it returns SDL_SCANCODE_UNKNOWN, which in turn does not get mapped to anything (because the scancodes are zero).
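A sketch of the obvious direction for a fix, mapping the affected virtual keys directly to SDL scancodes when the hardware scancode is zero (the helper name is made up for illustration):

/* Sketch only: fall back to the virtual-key code for keys that the on-screen
   keyboard reports with ScanCode:00. */
static SDL_Scancode VKeyFallbackToScancode(WPARAM vkey)
{
    switch (vkey) {
    case VK_LEFT:  return SDL_SCANCODE_LEFT;
    case VK_RIGHT: return SDL_SCANCODE_RIGHT;
    case VK_UP:    return SDL_SCANCODE_UP;    /* presumably affected the same way */
    case VK_DOWN:  return SDL_SCANCODE_DOWN;
    default:       return SDL_SCANCODE_UNKNOWN;
    }
}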
Anthony Pesch
* Remove triple buffering support. As far as I can tell, this goes against the libdrm API; the EGL implementations themselves control the buffering. Removing it isn't absolutely necessary as it seemingly works on the Pi at least, but I noticed this while doing my work and explained my reasoning in the commit.
* Replace the crtc_ready logic, which allocates an extra bo to perform the initial CRTC configuration (required before calling drmModePageFlip), with a call to drmModeSetCrtc after the front and back buffers are allocated, avoiding that extra allocation (see the sketch after this list).
* Standardize the SDL_*Data variable names and null checks to improve readability. Given that there were duplicate fields in each SDL_*Data structure, having generic names such as "data" was at times very confusing.
* Remove unused fields from the SDL_*Data structures and move all display-related fields out of SDL_VideoData and into SDL_DisplayData. Not strictly required, since the code only supports a single display right now, but this was helpful in reading and understanding the code initially.
* Implement KMSDRM_GetDisplayModes / KMSDRM_SetDisplayMode to provide dynamic modeset support.
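A rough sketch of the initial CRTC setup mentioned in the second item (field and variable names are illustrative, not the patch's exact code):

/* Configure the CRTC once with the freshly created front buffer's framebuffer,
   so later flips can use drmModePageFlip without a throwaway bo. */
if (KMSDRM_drmModeSetCrtc(viddata->drm_fd, dispdata->crtc_id, front_fb_id,
                          0, 0,                    /* x, y offset into the fb */
                          &dispdata->connector_id, /* connectors driven by this CRTC */
                          1, &dispdata->mode) != 0) {
    return SDL_SetError("Could not set initial CRTC configuration");
}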
These changes have been tested on a Raspberry Pi 4 and a Dell XPS laptop with an HD 520.
As an update, I went back over the triple buffer changes and left them in. I didn't entirely understand the code originally; I had just seen it calling KMSDRM_gbm_surface_lock_front_buffer twice for a single swap and removed it because I was paranoid about bugs stemming from it while working on the modeset changes.
I've made a few small changes to the logic that had thrown me off originally and rebased the changes:
* The condition wrapping the call to release buffer was incorrect.
* The first call to KMSDRM_gbm_surface_lock_front_buffer has been removed. I don't understand why it existed.
* Added additional comments describing what is going on in the code (since it does fix the buffer release pattern of the original code that preceded it).
Martin Fiedler
To be precise, this is about *desktop OpenGL* on X11. For OpenGL ES, EGL is already used (as it's the only way to get an OpenGL ES context), as Sylvain noted above.
To shine some light on why this is needed:
In 99% of all cases, using GLX on X11 is fine, even though it's effectively deprecated in favor of EGL [1]. However, there's at least one use case that *requires* the OpenGL context to be created with EGL instead of GLX, and that's DRM_PRIME interoperability: the function glEGLImageTargetTexture2DOES simply doesn't work with GLX. (Currently, Mesa actually crashes when trying that.)
Some example code:
https://gist.github.com/kajott/d1b29c613be30893c855621edd1f212e
Runs on Intel and open-source AMD drivers just fine (others unconfirmed), but with #define USE_EGL 0 (i.e. forcing it to GLX), it crashes. The same happens when using SDL for window and context creation.
The good news is that most of the pieces for EGL support on X11 are already in place: SDL_egl.c is pretty complete (and used for desktop OpenGL on Wayland, for example), and SDL_x11opengl.c has the aforementioned OpenGL-ES-on-EGL support. However, when it comes to desktop OpenGL, it's hardcoded to fall back to GLX.
I'm not advocating making EGL the default for desktop OpenGL on X11; don't fix what ain't broken. But something like an SDL_HINT_VIDEO_X11_FORCE_EGL would be much appreciated to make use cases like the above work with SDL.
[1] source: Eric Anholt, major Linux graphics stack developer, 7 years ago already - see last paragraph of https://www.phoronix.com/scan.php?page=news_item&px=MTE3MTI
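Usage would presumably be a single hint set before video initialization (the hint name is the proposal above, not something SDL currently defines):

/* Hypothetical: ask SDL to create desktop GL contexts through EGL on X11. */
SDL_SetHint("SDL_VIDEO_X11_FORCE_EGL", "1");
SDL_Init(SDL_INIT_VIDEO);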
Luis Caceres
The current handling of Wayland mouse pointer events only handles wl_pointer.axis events, which, according to the Wayland documentation, deal with mouse wheel scroll events on a continuous scale. While this is reasonable for some input sources (e.g. touchpad two-finger scrolling), it is not for mouse wheel clicks, which generate wl_pointer.axis events with large deltas.
This patch adds handling for wl_pointer.axis_discrete and wl_pointer.frame events and prefers to report SDL_MouseWheelEvent in discrete units if they are available. This means that for mouse wheel scrolling we count in clicks, but for touchpad two-finger scrolling we still use whatever units Wayland uses. This behaviour is closer to that of the X11 backend.
Since these events are only available since version 5 of the wl_seat interface, this patch also checks for that and falls back to the previous behaviour if it's not available. I also had to add definitions for some of the pointer and keyboard events specified in versions 2-5, but these are just stubs and do nothing.
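Schematically, the new handling looks something like the following (a sketch, not the patch itself; the per-seat state is illustrative, and the internal SDL_SendMouseWheel call mirrors how the existing axis handler reports wheel events):

struct pointer_scroll {          /* illustrative per-seat accumulator */
    SDL_Window *window;
    int32_t dy_discrete;         /* wheel clicks this frame */
    float dy_continuous;         /* wl_pointer.axis value this frame */
};

static void pointer_handle_axis_discrete(void *data, struct wl_pointer *pointer,
                                         uint32_t axis, int32_t discrete)
{
    struct pointer_scroll *s = data;
    if (axis == WL_POINTER_AXIS_VERTICAL_SCROLL) {
        s->dy_discrete += discrete;
    }
}

static void pointer_handle_frame(void *data, struct wl_pointer *pointer)
{
    struct pointer_scroll *s = data;
    /* Prefer discrete units (wheel clicks) when the compositor sent them;
       the sign is flipped to match SDL's scroll direction convention. */
    if (s->dy_discrete != 0) {
        SDL_SendMouseWheel(s->window, 0, 0, (float) -s->dy_discrete, SDL_MOUSEWHEEL_NORMAL);
    } else if (s->dy_continuous != 0.0f) {
        SDL_SendMouseWheel(s->window, 0, 0, -s->dy_continuous, SDL_MOUSEWHEEL_NORMAL);
    }
    s->dy_discrete = 0;
    s->dy_continuous = 0.0f;
}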
Make sure the thread is actually paused, and the context backed up, before
the SurfaceView is destroyed (i.e. before surfaceDestroyed() actually returns).
Add a timeout when surfaceDestroyed() is called, and check the 'backup_done' variable.
It prevents crashes like:
#00 pc 000000000000c0d0 /system/lib64/libutils.so (android::RefBase::incStrong(void const*) const+8)
#01 pc 000000000000c7f4 /vendor/lib64/egl/eglSubDriverAndroid.so (EglAndroidWindowSurface::UpdateBufferList(ANativeWindowBuffer*)+284)
#02 pc 000000000000c390 /vendor/lib64/egl/eglSubDriverAndroid.so (EglAndroidWindowSurface::DequeueBuffer()+240)
#03 pc 000000000000bb10 /vendor/lib64/egl/eglSubDriverAndroid.so (EglAndroidWindowSurface::GetBuffer(EglSubResource*, EglMemoryDesc*)+64)
#04 pc 000000000032732c /vendor/lib64/egl/libGLESv2_adreno.so (EglWindowSurface::UpdateResource(EsxContext*)+116)
#05 pc 0000000000326dd0 /vendor/lib64/egl/libGLESv2_adreno.so (EglWindowSurface::GetResource(EsxContext*, EsxResource**, EsxResource**, int)+56)
#06 pc 00000000002ae484 /vendor/lib64/egl/libGLESv2_adreno.so (EsxContext::AcquireBackBuffer(int)+364)
#07 pc 0000000000249680 /vendor/lib64/egl/libGLESv2_adreno.so (EsxContext::Clear(unsigned int, unsigned int, unsigned int, EsxClearValues*)+1800)
#08 pc 00000000002cb52c /vendor/lib64/egl/libGLESv2_adreno.so (EsxGlApiParamValidate::GlClear(EsxDispatch*, unsigned int)+132)
Improve handling of landscape/portrait orientation. Promote to SCREEN_ORIENTATION_SENSOR_* when needed.
The Android window can, to some extent, be resizable.
If SDL_WINDOW_RESIZABLE is set, window size changes are allowed, for instance when the orientation changes (provided the hint allows it).
Konrad
I took the liberty of rewriting this function a bit, as it seemed to be unnecessarily extended with ifs regarding flags (we can check everything in one pass; those ifs also seem to be the thing that confuses Visual C++ 2019).
Also, I have made the CPU features an int instead of a uint, because if we check it against flags that are all ints it might as well just be an int (no signed/unsigned bitwise comparison).
Konrad
This kind of blending is quite useful and in my opinion should be available for all renderers. I need it myself, but since I didn't want to use a custom blend mode that is supported only by certain renderers (e.g. not the software renderer, which is quite important for me), I wrote an implementation of SDL_BLENDMODE_MUL for all renderers.
SDL_BLENDMODE_MUL implements the following equation:
dstRGB = (srcRGB * dstRGB) + (dstRGB * (1-srcA))
dstA = (srcA * dstA) + (dstA * (1-srcA))
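For renderers that already support custom blend modes, the same equation can be expressed with the existing API, which is a useful cross-check of the factors (the point of the patch is to make the mode work everywhere, including the software renderer):

/* dstRGB = (srcRGB * dstRGB) + (dstRGB * (1 - srcA))
   dstA   = (srcA * dstA) + (dstA * (1 - srcA)) */
SDL_BlendMode mul = SDL_ComposeCustomBlendMode(
    SDL_BLENDFACTOR_DST_COLOR, SDL_BLENDFACTOR_ONE_MINUS_SRC_ALPHA, SDL_BLENDOPERATION_ADD,
    SDL_BLENDFACTOR_DST_ALPHA, SDL_BLENDFACTOR_ONE_MINUS_SRC_ALPHA, SDL_BLENDOPERATION_ADD);
SDL_SetTextureBlendMode(texture, mul);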
Background:
https://i.imgur.com/UsYhydP.png
Blended texture:
https://i.imgur.com/0juXQcV.png
Result for SDL_BLENDMODE_MOD:
https://i.imgur.com/wgNSgUl.png
Result for SDL_BLENDMODE_MUL:
https://i.imgur.com/Veokzim.png
I think I covered all possibilities in the included patch, but I didn't write any tests for SDL_BLENDMODE_MUL, so it would be lovely if someone could do that.
Use XGetKeyboardControl to initialize the current XKeyboardState, and
skip XAutoRepeatOn invocation if global_auto_repeat is AutoRepeatModeOn.
This fixes SDL2 when the X11 client is untrusted.
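Schematically (standard Xlib calls; not necessarily the commit's exact code):

/* Only enable auto-repeat if the server doesn't already have it on, so an
   untrusted X11 client never has to change the keyboard control at all. */
XKeyboardState values;
XGetKeyboardControl(display, &values);
if (values.global_auto_repeat != AutoRepeatModeOn) {
    XAutoRepeatOn(display);
}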
Do not try to guess MIT-SHM extension availability from the string
returned by XDisplayName; use the appropriate API instead.
This fixes SDL2 inside hasher.
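The appropriate API is presumably XShmQueryExtension; a sketch:

#include <X11/extensions/XShm.h>

/* Ask the server directly whether MIT-SHM is usable instead of guessing from
   the display name string. */
SDL_bool have_mitshm = XShmQueryExtension(display) ? SDL_TRUE : SDL_FALSE;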
Konrad
This was something rather trivial to add, but it has been asked for several times before (I also googled it).
It should be possible to dynamically change the scaling mode of a texture. It is actually a trivial task, but until now it was only possible via a hint set before creating the texture.
I needed it for my game as well, so I took the liberty of writing it myself.
This patch adds the following functions:
SDL_SetTextureScaleMode(SDL_Texture * texture, SDL_ScaleMode scaleMode);
SDL_GetTextureScaleMode(SDL_Texture * texture, SDL_ScaleMode *scaleMode);
That way you can change texture scaling on the fly.
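Usage is then straightforward (the SDL_ScaleMode value names here are assumptions based on the new type):

/* Create a texture and switch its filtering at runtime, no hint or texture
   re-creation needed. */
SDL_Texture *tex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                     SDL_TEXTUREACCESS_STATIC, 256, 256);
SDL_SetTextureScaleMode(tex, SDL_ScaleModeNearest);  /* crisp pixels while zoomed in */
SDL_SetTextureScaleMode(tex, SDL_ScaleModeLinear);   /* smooth filtering later on */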
Message in the log, when going to background:
"call to OpenGL ES API with no current context (logged once per thread)"
This happens because SDL_WINDOWEVENT_MINIMIZED is sent from the Java Activity thread,
which calls SDL_RendererEventWatch(), _WindowEvent() and glFinish() without a current context.
The solution is to move the sending of SDL_WINDOWEVENT_MINIMIZED to the SDL thread.
For some obscure reason, the order in which the libdrm/libgbm libraries
are loaded matters.
Without this fix, the first call to check_modesetting() will work and
load then unload all symbols properly, but the second call to this
function will lock up as soon as dlopen() is called on libdrm.
Swapping the order in which the libdrm and libgbm libraries are loaded
is enough to fix (or work around?) this issue.
Fixes #4891:
https://bugzilla.libsdl.org/show_bug.cgi?id=4891
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Aaron Barany
I realized I made a minor mistake in my patch: I changed the constructor prototype for SDL_DisplayData, but didn't update the declaration in the .h file. The compiler and linker don't complain, but it would probably be best to fix it in case a later change runs into a problem from the mismatch. I have attached a patch to fix this.
Aaron Barany
There appears to be no way to directly access the display DPI on iOS, so as an approximation the DPI for the iPhone 1 is used as a base value and is multiplied by the screen's scale. This should at least give a ballpark number for the various screen scales. (based on https://stackoverflow.com/questions/25756087/detecting-iphone-6-6-screen-sizes-in-point-values it appears that both 2x and 3x are used)
I have updated the patch to use a table of current devices and use a computation as a fallback. I have also updated the fallback computation to be more accurate.
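The original fallback computation is essentially the following (163 ppi is the original iPhone's pixel density; the device table and the later, more accurate fallback are not shown here):

/* Approximate DPI as the original iPhone's 163 ppi times the point-to-pixel
   scale of the screen (2x or 3x on current devices). */
UIScreen *screen = [UIScreen mainScreen];
float ddpi = 163.0f * (float) screen.scale;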
Aaron Barany
Add an SDL_HINT_VIDEO_EXTERNAL_CONTEXT hint to notify SDL that the graphics context is external. This disables the automatic context save/restore behavior on Android and avoids using OpenGL by default when SDL_WINDOW_VULKAN isn't set.
When the application wishes to manage the OpenGL contexts on Android, this avoids cases where SDL unbinds the context and creates new contexts, which can interfere with the application's operation.
When using Vulkan and Metal renderer implementations, this avoids SDL forcing OpenGL to be enabled on certain platforms. While the SDL_WINDOW_VULKAN flag can be used to achieve the same thing, it also causes Vulkan to be loaded; if the application uses Vulkan directly this is unnecessary, and it fails window creation when using Metal because Vulkan is not present (assuming MoltenVK isn't installed).
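Intended usage is presumably just setting the hint before creating the window:

/* Tell SDL the application manages its own graphics context (e.g. its own
   Vulkan or Metal setup), so SDL won't create or restore GL contexts itself. */
SDL_SetHint(SDL_HINT_VIDEO_EXTERNAL_CONTEXT, "1");
SDL_Window *window = SDL_CreateWindow("app", SDL_WINDOWPOS_UNDEFINED,
                                      SDL_WINDOWPOS_UNDEFINED, 1280, 720, 0);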
Aaron Barany
Since OpenGL is deprecated on iOS, it is advantageous to be able to remove all OpenGL related code when building SDL for iOS. This patch adds the necessary #if checks to compile in this case.
SDL_SendWindowEvent will only send a RESTORED event if the window has
the minimized or maximized flag set. However, for a SHOWN event, it
will clear the minimized flag. Since the SHOWN event was being sent
first for a MapNotify event, the RESTORED event would never be sent.
Swapping the SendWindowEvent calls around fixes this.
https://bugzilla.libsdl.org/show_bug.cgi?id=4821
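Schematically, the MapNotify handler ends up doing this (a sketch of the ordering, not the exact diff):

/* Send RESTORED while the minimized flag is still set; SHOWN afterwards,
   since SHOWN clears the minimized flag and would otherwise suppress RESTORED. */
SDL_SendWindowEvent(data->window, SDL_WINDOWEVENT_RESTORED, 0, 0);
SDL_SendWindowEvent(data->window, SDL_WINDOWEVENT_SHOWN, 0, 0);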
Eric Shepherd
Currently, SDL on Cocoa macOS creates a rudimentary menu bar programmatically if none is already present when the app is registered during setup.
SDL could be used much more easily and flexibly on macOS if, upon finding that no menus are currently in place, it first looked for the application's main menu nib or xib file and, if found, loaded that instead of programmatically building the menus.
This would then let developers simply drop in a nib file with a menu bar defined in it and it would be installed and used automatically.
Attached is a patch that does just this. It changes the SDL_cocoaevents.m file to:
* In Cocoa_RegisterApp(), before calling CreateApplicationMenus(), it calls a new function, LoadMainMenuNibIfAvailable(), which attempts to load and install the main menu nib file, using the nib name fetched from the Info.plist file. If that succeeds, LoadMainMenuNibIfAvailable() returns true; otherwise false.
* If LMMNIA() returns false, CreateApplicationMenus() is called to programmatically build the menus as before.
* Otherwise, we're done, and using the menus from the nib/xib file!
I made these changes to support a project I'm working on, and felt they were useful enough to be worth offering for uplift. They should have zero impact on existing projects' behavior, but make Cocoa SDL development miles easier.
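The nib-loading step is roughly the following (a sketch of the approach; the attached patch may differ in details):

/* Try to load the main menu nib named in Info.plist; return NO so the caller
   can fall back to CreateApplicationMenus() if no nib is declared or loading fails. */
static BOOL LoadMainMenuNibIfAvailable(void)
{
    NSString *mainNibName = [[[NSBundle mainBundle] infoDictionary]
                                objectForKey:@"NSMainNibFile"];
    if (!mainNibName) {
        return NO;
    }
    return [[NSBundle mainBundle] loadNibNamed:mainNibName
                                         owner:[NSApplication sharedApplication]
                               topLevelObjects:nil];
}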
(note from PulkoMandy on Bugzilla #4442 about why this is a desirable patch:
"The event mask: note that the window and GL view run in their own thread
which I don't expect to be too much CPU bound, and will quickly pop these
messages and forward them to the main thread in our SDL code. Therefore the
B_NO_POINTER_HISTORY should be no problem, and is the default on Haiku
anyway (it was not in BeOS, but we changed that and added a
B_FULL_POINTER_HISTORY flag to request the old behavior explicitly). So, this
seems fine.")
Partially fixes Bugzilla #4442.
Solra Bizna
I have written a program that, in the event that the user requests more MSAA samples than their hardware supports, attempts to gracefully fall back to the best MSAA available. This code works with my conventional OpenGL renderer, but if I change nothing about the code except to make it request an OpenGL ES profile instead, Xlib kills the program with an error that looks like:
X Error of failed request: BadWindow (invalid Window parameter)
Major opcode of failed request: 4 (X_DestroyWindow)
Resource id in failed request: 0x5c00008
Serial number of failed request: 188
Current serial number in output stream: 193
To trigger the bug, attempt to create a window with the SDL_WINDOW_OPENGL flag, with SDL_GL_CONTEXT_PROFILE_MASK set to SDL_GL_CONTEXT_PROFILE_ES, and with SDL_GL_MULTISAMPLESAMPLES set to any unsupported value. SDL_CreateWindow properly returns NULL, but at this point the program is already doomed. Xlib will shortly terminate the program with an error. Calling SDL_CreateWindow again will immediately trigger this termination.
I have attached a skeletal program that reproduces this bug for me. Replacing SDL_GL_CONTEXT_PROFILE_ES with SDL_GL_CONTEXT_PROFILE_COMPATIBILITY avoids the bug (but, obviously, doesn't create an OpenGL ES context).
As I suspected, the problem was with XDestroyWindow being called twice on the same window. The X11_CreateWindow function in src/video/x11/SDL_x11window.c calls SetupWindowData. If initialization fails after that point, XDestroyWindow gets called on the window by a subsequent call to X11_DestroyWindow. But, later in the same function, iff a GLES context is requested and initializing it fails, X11_XDestroyWindow (which wraps XDestroyWindow) is manually called. Shortly after, the intended call to X11_DestroyWindow occurs, which attempts to destroy the same window again. Boom.
(The above confusing summary involves three separate, similarly-named functions: XDestroyWindow, X11_DestroyWindow, X11_XDestroyWindow)
I have attached a simple patch that removes the redundant X11_XDestroyWindow calls. I've tested that XDestroyWindow still gets called for the windows in question, and that it only gets called once.
- _num_clips was not set in the constructor, so a NULL _clips could be
mistakenly dereferenced.
- As _clips is accessible outside the class, it is not a good idea to
free/reallocate it. Try to limit this by reallocating only when it needs to
grow.
Partially fixes Bugzilla #4442.
This can happen if a window is still grabbed when we try to move it, or if
the X11 ecosystem is just in a bad mood, I guess.
This makes sure that SDL will report the correct position for a window;
otherwise, SDL_GetWindowPosition will just report whatever the last
SDL_SetWindowPosition call requested, even if the window didn't actually move.
Fixes Bugzilla #4646.
Much of the heavy lifting of this optimization is lifted from the Pixman
project, which is distributed under an MIT-style license. As far as possible,
these elements have been relicensed to the zlib license.
Fixes an issue in macOS 10.15 where the displayed content would move up after entering, exiting and re-entering exclusive fullscreen when certain display modes were used (bug #4822).
Bug #3949 is also related to this change.
Use eglGetProcAddress for everything on EGL >= 1.5. For EGL <= 1.4, try
SDL_LoadFunction first in case it's a core symbol (and also as a fallback if
eglGetProcAddress fails), and finally fall back to eglGetProcAddress to catch
extensions not exported from the shared library.
(Maybe) Fixes Bugzilla #4794.
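A sketch of that lookup order (helper and variable names are illustrative, not SDL's actual code):

static void *GetEGLProc(void *egl_dll_handle, int egl_major, int egl_minor, const char *proc)
{
    void *fn = NULL;
    if (egl_major > 1 || (egl_major == 1 && egl_minor >= 5)) {
        /* EGL 1.5+: eglGetProcAddress may be used for core and extension symbols alike. */
        return (void *) eglGetProcAddress(proc);
    }
    /* EGL <= 1.4: core symbols come from the library itself... */
    fn = SDL_LoadFunction(egl_dll_handle, proc);
    if (!fn) {
        /* ...and extensions (not exported from the shared library) via eglGetProcAddress. */
        fn = (void *) eglGetProcAddress(proc);
    }
    return fn;
}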
Sylvain
Seems to be a regression in this commit: https://hg.libsdl.org/SDL/rev/7fdbffd47c0e
SDL_CalculatePitch() was using format->BytesPerPixel; now it uses SDL_BYTESPERPIXEL().
The underlying issue is that "surface->format->BytesPerPixel" is *not* always the same as SDL_BYTESPERPIXEL(format):
BytesPerPixel is defined as format->BytesPerPixel = (bpp + 7) / 8;
vs
#define SDL_BYTESPERPIXEL(format) ... (format & 0xff)
Because of the SDL_pixels.h format definitions, the two can disagree; for a 1-bit-per-pixel format, for example, the former gives (1 + 7) / 8 == 1 while the packed bytes field gives 0.
"This patch does the following:
* Instead of SDL_FillRects calling SDL_FillRect in a loop the opposite
happens -- SDL_FillRect (a specific case) calls SDL_FillRects (a general case)
with a count of 1
* The switch/case block is moved out of the loop -- it modifies the color
once and stores the fill routine in a pointer which is then used throughout
the loop"
Fixes Bugzilla #4674.
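A sketch of the new relationship between the two functions described in the quote (argument validation omitted):

/* The specific case now calls the general case with a count of 1. */
int SDL_FillRect(SDL_Surface *dst, const SDL_Rect *rect, Uint32 color)
{
    SDL_Rect whole = { 0, 0, dst->w, dst->h };   /* NULL rect means the whole surface */
    return SDL_FillRects(dst, rect ? rect : &whole, 1, color);
}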