New functions get and set the YUV colorspace conversion mode:
SDL_SetYUVConversionMode()
SDL_GetYUVConversionMode()
SDL_GetYUVConversionModeForResolution()
SDL_ConvertPixels() converts between all supported RGB and YUV formats, with SSE acceleration for converting from planar YUV formats (YV12, NV12, etc) to common RGB/RGBA formats.
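As a brief illustration (a minimal sketch, not from the original notes; w, h and the pixel buffers are assumed to be set up already), selecting BT.601 conversion and decoding one YV12 frame to ARGB8888:

/* Pick the BT.601 coefficients explicitly, then convert one frame. */
SDL_SetYUVConversionMode(SDL_YUV_CONVERSION_BT601);
/* YV12 pitch is the luma plane width; ARGB8888 pitch is 4 bytes per pixel. */
SDL_ConvertPixels(w, h,
                  SDL_PIXELFORMAT_YV12, yv12_pixels, w,
                  SDL_PIXELFORMAT_ARGB8888, argb_pixels, w * 4);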
Added a new test program, testyuv, to verify correctness and speed of YUV conversion functionality.
Sylvain
There are various YUV-RGB conversion coefficients, according to https://www.fourcc.org/fccyvrgb.php
I chose the first (from Video Demystified, with integer multiplication),
but the current SDL2 Dither functions in fact use the next one, which follows a specification called CCIR 601.
Here's a patch to use the second set of coefficients, along with the previous warning corrections.
There are fewer multiplications involved, because the chroma coefficient is 1.
Also, float multiplication is just as efficient once vectorized.
In the end, YUV decoding is faster: ~165 ms vs. my previous 195 ms.
Moreover, if SDL2 is compiled with -march=native, the YUV decoding time drops to ~130 ms, while the older code remains around ~220 ms.
For information, from jpeg-9 source code:
jpeg-9/jccolor.c
* YCbCr is defined per CCIR 601-1, except that Cb and Cr are
* normalized to the range 0..MAXJSAMPLE rather than -0.5 .. 0.5.
* The conversion equations to be implemented are therefore
* Y = 0.29900 * R + 0.58700 * G + 0.11400 * B
* Cb = -0.16874 * R - 0.33126 * G + 0.50000 * B + CENTERJSAMPLE
* Cr = 0.50000 * R - 0.41869 * G - 0.08131 * B + CENTERJSAMPLE
jpeg-9/jdcolor.c
* YCbCr is defined per CCIR 601-1, except that Cb and Cr are
* normalized to the range 0..MAXJSAMPLE rather than -0.5 .. 0.5.
* The conversion equations to be implemented are therefore
*
* R = Y + 1.40200 * Cr
* G = Y - 0.34414 * Cb - 0.71414 * Cr
* B = Y + 1.77200 * Cb
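For reference, a minimal C sketch of the jdcolor.c decode equations above (illustrative helper names, with 128 standing in for CENTERJSAMPLE at 8 bits per sample):

static unsigned char clamp_u8(float v)
{
    return (unsigned char)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
}

static void ycbcr_to_rgb(unsigned char y, unsigned char cb, unsigned char cr,
                         unsigned char *r, unsigned char *g, unsigned char *b)
{
    float fcb = (float)cb - 128.0f; /* re-center Cb around zero */
    float fcr = (float)cr - 128.0f; /* re-center Cr around zero */

    *r = clamp_u8((float)y + 1.40200f * fcr);
    *g = clamp_u8((float)y - 0.34414f * fcb - 0.71414f * fcr);
    *b = clamp_u8((float)y + 1.77200f * fcb);
}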
Sylvain
A few issues with YUV in SDL2 when using odd dimensions, and missing conversions to/from YUV formats.
1) The big part is that SDL_ConvertPixels() did not convert to/from YUV in most cases. This now works with any format, and also with odd dimensions,
by adding two internal functions, SDL_ConvertPixels_YUV_to_ARGB8888 and SDL_ConvertPixels_ARGB8888_to_YUV (could it be XRGB8888?).
The target format is hard-coded to ARGB8888 (which is the default in the internals of the software renderer).
For a conversion between two different YUV formats, it does an intermediate conversion through an ARGB8888 buffer.
SDL_ConvertPixels_YUV_to_ARGB8888 is somewhat redundant with all the "Color*Dither*Mod*" functions,
but it gives SDL_ConvertPixels() the completeness to handle every YUV format.
It also works with odd dimensions.
Moreover, I did some benchmarks (SDL_ConvertPixels() vs Color32DitherYV12Mod1X() and Color32DitherYUY2Mod1X()),
with gcc-6.3 and clang-4.0. gcc performs better than clang, and with gcc, SDL_ConvertPixels() performs about 20% better than the two C Color32Dither*() functions.
For instance, converting a 3888x2592 image 10 times takes ~195 ms with SDL_ConvertPixels() and ~235 ms with Color32Dither*(),
largely because gcc's vectorizer (-ftree-loop-vectorize) optimizes all the conversion loops.
NB: I added no image pitch parameter for the YUV buffers, because it would complicate the code and the API a little:
there would be some ambiguity when setting the pitch exactly to the image width.
Would it be a pitch of the image width (for both luma and chroma planes), or just contiguous data? (pitch=0 could be used for the latter).
2) Small issues with odd dimensions:
If the width "w" is odd, the luma plane width is still "w", whereas the chroma planes are "(w + 1)/2" wide. Almost the same applies for an odd "h".
The solution is to strategically substitute "w" with "(w+1)/2" in the right places (see the size sketch after the list of fixes below) ...
- In the repository, SDL_ConvertPixels() handles YUV only if the YUV source format is exactly the same as the YUV destination format.
It basically does a memcpy of the pixels, but this was done incorrectly when the width or height is odd (wrong size of the chroma planes). This is fixed.
- SDL renderers don't support odd width/height for YUV textures.
This is fixed for software, opengl, and opengles2 (opengles 1 does not support it and falls back to software rendering).
This is *not* fixed for D3D and D3D11 ... (and others, psp?)
Only *two* Dither functions are fixed ... not sure if the others are really used.
- It was not possible to create an NV12/NV21 texture with the software renderer, whereas the other renderers allow it.
This is fixed by using SDL_ConvertPixels() underneath.
- It was not possible to SDL_UpdateTexture() a texture of format NV12/NV21 with the software renderer. This is fixed.
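As a sketch of that "(w+1)/2" rule (an illustrative helper, not part of the patch), the size of a YV12 buffer with odd dimensions would be computed like this:

static size_t yv12_buffer_size(int w, int h)
{
    size_t luma   = (size_t)w * h;                         /* full-resolution Y plane */
    size_t chroma = (size_t)((w + 1) / 2) * ((h + 1) / 2); /* 2x2-subsampled plane, rounded up */
    return luma + 2 * chroma;                              /* Y plane + V and U planes */
}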
Here are also two test cases:
- one that does all combinations of conversions.
- one to test partial SDL_UpdateTexture().
Evgeny Kapun
Commit 490bb5b49f11 [1], which was a fix for bug #3790, introduced a new bug: now, calling SDL_FreeSurface(surface) deallocates surface->map even if there are other references to the surface. This is bad, because some functions (such as SDL_ConvertSurface) assume that surface->map is not NULL.
bastien.bouclet
When creating two surfaces and blitting each onto the other, SDL's internal reference counting fails, and one of the surfaces is not freed when calling SDL_FreeSurface.
Example code:
SDL_Surface *s1 = SDL_CreateRGBSurfaceWithFormat(0, 640, 480, 32, SDL_PIXELFORMAT_ARGB8888);
SDL_Surface *s2 = SDL_CreateRGBSurfaceWithFormat(0, 640, 480, 32, SDL_PIXELFORMAT_ARGB8888);
SDL_BlitSurface(s1, NULL, s2, NULL);
SDL_BlitSurface(s2, NULL, s1, NULL);
SDL_FreeSurface(s2);
SDL_FreeSurface(s1);
With this example, s1 is not freed after calling SDL_FreeSurface; its refcount attribute is still positive.
Rainer Deyke
I've written a small patch that adds an SDL_DuplicateSurface() function to SDL. I wrote the function as part of a larger (as yet unfinished) patch, but I think this function is useful enough that it merits inclusion in SDL on its own.
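A minimal usage sketch, assuming the function takes the source surface and returns a newly allocated copy (as in the patch):

SDL_Surface *copy = SDL_DuplicateSurface(original);
if (copy) {
    /* ... use the independent copy ... */
    SDL_FreeSurface(copy);
}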
Edmund Horner
When a 16-bit "565 format" surface has a colour key set, it blits with correct transparency. If, however, it has its colour key set and is then converted to a 32-bit ARGB format surface, the colour key in the converted image will not necessarily have the same pixel value as the transparent pixels. The converted image may not blit correctly, because the colour key no longer matches the right pixels.
In my case, with an image using 0xB54A for transparency, the colour key was converted to (180,170,82), but the corresponding pixels (with the same original value) were converted to (180,169,82). Blitting the converted image did not use transparency where expected.
I have attached a test case. The bug has been replicated on both x86_64 Linux (SDL 2.0.2), and 32-bit MS C++ 2010 on Windows (SDL 2.0.0).
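A minimal sketch reproducing the steps above (surface names are illustrative; the 0xB54A key is from the report):

SDL_SetColorKey(src565, SDL_TRUE, 0xB54A);
SDL_Surface *argb = SDL_ConvertSurfaceFormat(src565, SDL_PIXELFORMAT_ARGB8888, 0);

Uint32 key;
SDL_GetColorKey(argb, &key);
/* In the failing case, "key" decodes to (180,170,82), while the pixels that
   were 0xB54A in the source decode to (180,169,82), so they no longer match. */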
Sylvain
Let's say you have an SDL_Surface that has a ColorKey, but no Alpha Modulation.
When this surface is duplicated with the SDL_ConvertSurface function, the result has a ColorKey and Alpha Modulation (BLEND, and opaque 255).
I think SDL_ConvertSurface should strictly keep the input format.
example
=======
SDL_Surface *input; // ... Set up a surface with ColorKey and no AlphaMod
SDL_Surface *output = SDL_ConvertSurface(input, input->format, input->flags);
// "output" surface has a ColorKey but *also* AlphaMod (BLEND, and Opaque 255).
Simon Hug
The SDL_BlitScaled function runs into an access violation for specific blit coordinates and surface sizes. The attached test case blits an 800x600 surface onto a 1280x720 surface at the coordinates -640,-345, scaled to 1280x720. The blit function that moves the data then runs over and reads past the end of the src surface's pixel data, causing an access violation.
I can't say where exactly it goes wrong, but I think it could have something to do with the rounding in SDL_UpperBlitScaled. final_src.y is 288 and final_src.h is 313. Together that's 601, which I believe is one too many, but I don't know the code well enough to be sure that's the problem.
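A minimal sketch reproducing the report's numbers:

SDL_Surface *src = SDL_CreateRGBSurfaceWithFormat(0, 800, 600, 32, SDL_PIXELFORMAT_ARGB8888);
SDL_Surface *dst = SDL_CreateRGBSurfaceWithFormat(0, 1280, 720, 32, SDL_PIXELFORMAT_ARGB8888);
SDL_Rect dstrect = { -640, -345, 1280, 720 };
SDL_BlitScaled(src, NULL, dst, &dstrect); /* read past the src pixels before the fix */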
Sylvain
I think this patch fixes the issue, but maybe it's worth rewriting "SDL_UpperBlitScaled" using SDL_FRect.
Daniel Gibson
Currently, SDL_CreateRGBSurface() and SDL_CreateRGBSurfaceFrom() take Uint32 masks for RGBA to "describe" the pixel format of the surface.
Internally, those values are only used to map to one of the SDL_PIXELFORMAT_* enum values that are used for further processing.
I think it would be both handy and more efficient to be able to specify SDL_PIXELFORMAT_* yourself, without using SDL_PixelFormatEnumToMasks() to create masks first, so I implemented functions that do that:
SDL_CreateRGBSurfaceWithFormat() and SDL_CreateRGBSurfaceWithFormatFrom(), which are like the versions without "WithFormat", but instead of taking four Uint32s for the R/G/B/A masks, they take one Uint32 for an SDL_PIXELFORMAT_* enum value.
Together with https://bugzilla.libsdl.org/show_bug.cgi?id=2923, creating an SDL_Surface* from RGBA data (e.g. from stb_image) is as easy as
surf = SDL_CreateRGBSurfaceWithFormat(0, w, h, bppToUse*8, SDL_PIXELFORMAT_RGBA32);
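Similarly, a minimal sketch (not from the report) of wrapping existing RGBA pixel data, e.g. as returned by stbi_load(), without copying it:

SDL_Surface *surf = SDL_CreateRGBSurfaceWithFormatFrom(pixels, w, h, 32, w * 4, SDL_PIXELFORMAT_RGBA32);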
The internal function SDL_EGL_LoadLibrary() did not delete and remove a mostly
uninitialized data structure if loading the library failed on the first attempt.
A later attempt to use EGL then skipped initialization and assumed it had
previously succeeded, because the data structure already existed. This led to at
least one crash in the internal function SDL_EGL_ChooseConfig(), because a NULL
pointer was dereferenced to make a call to eglBindAPI().