CRM32Pro SDK
v5.22
We have used the glSDL v0.8 wrapper (http://olofson.net/mixed.html), converting it into an SDL video backend, adding support for some nice effects, and fixing some memory leaks.
http://icps.u-strasbg.fr/~marchesin/sdl/glsdl.html

Guidelines for using the glSDL video backend for SDL

glSDL tries hard to be fully SDL-compliant. That means the following advice is only guidelines, not absolute rules, and your program will work and give correct results even if you don't follow it. However, you will gain a lot, performance-wise, from following it (read: if you don't, rendering will be slower than any software mode, but if you do, you may very well get the best possible performance for your platform). Moreover, if you follow these guidelines, the speed of most other video backends will probably improve too.

Basic glSDL internals

glSDL is a video backend for SDL that uses OpenGL. To achieve hardware acceleration, the surfaces it is given are converted to OpenGL textures before being blitted to the screen. This way of working has some implications for performance, as the cost of creating and/or modifying an OpenGL texture is usually quite high. Similarly, reading from video memory with OpenGL is slow and should be avoided. The surface format you decide to use also impacts the application's speed. This document tries to explain what to do and what not to do when using glSDL.

Conventions

In this document, the following conventions are used:
A backend is, in SDL parlance, the underlying driver SDL uses on the current platform (examples of drivers: X11, DirectX, framebuffer, ASCII art...). To choose which backend you want to use, set the SDL_VIDEODRIVER environment variable. For example, to use glSDL, do the following (under a Linux/bash-style shell):
export SDL_VIDEODRIVER=glSDL
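
If you prefer to select the backend from code, SDL 1.2 provides SDL_putenv(); a minimal sketch (the call must happen before SDL_Init()):

#include "SDL.h"

int main(int argc, char *argv[])
{
    /* Request the glSDL backend; this must be set before SDL_Init(). */
    SDL_putenv("SDL_VIDEODRIVER=glSDL");

    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    /* ... */

    SDL_Quit();
    return 0;
}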
Video Initialization

You should let SDL choose the bpp, either by requesting a bpp of 0 or by using the SDL_ANYFORMAT flag during SDL_SetVideoMode. If you want to make use of the OpenGL acceleration, you should also request a hardware surface (explicitly, by asking for an SDL_HWSURFACE, or implicitly, by asking for an SDL_DOUBLEBUF surface). Also keep in mind that a hardware-accelerated single-buffered video surface will cause a lot of tearing/blinking, so the best solution is probably to use a double buffer all the time:

SDL_SetVideoMode(640, 480, 0, SDL_DOUBLEBUF);

As creating a shadow surface would disable any OpenGL acceleration, glSDL always satisfies the requested bpp. glSDL is tested at 8, 15, 16, 24 and 32 bpp; other bpps are untested, but you are welcome to report results and we will be happy to fix any problem you encounter. Also keep in mind that SDL_SetVideoMode will fail if your system or your current video mode doesn't support OpenGL (for example, XFree86 running at 8 bpp).
If you don't request 0 bpp, glSDL will have to convert your hardware surfaces to 32 bpp before being able to create a texture (newer glSDL versions running on OpenGL 1.2+ might remove this limitation for some pixel formats, but that is not done at the moment).
If you don't request a hardware surface, a shadow buffer will be
created and OpenGL acceleration won't be used at all.
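
Putting this together, a minimal initialization sketch with error handling (the 0 bpp request lets SDL pick the current display depth):

#include <stdio.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    SDL_Surface *screen;

    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    /* 0 bpp = let SDL choose; SDL_DOUBLEBUF implies a hardware surface. */
    screen = SDL_SetVideoMode(640, 480, 0, SDL_DOUBLEBUF);
    if (screen == NULL) {
        /* Fails e.g. when the system or video mode has no OpenGL support. */
        fprintf(stderr, "SDL_SetVideoMode failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    /* ... */

    SDL_Quit();
    return 0;
}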
Note: ideally, if you want to be user-friendly, you will want to provide the user with a means of setting the bpp, such as a menu option or a command-line switch.

Use hardware surfaces

Use hardware surfaces when the pixel contents are static and the main purpose is on-screen blitting. To create a hardware surface, set the SDL_HWSURFACE flag during surface creation:
surface = SDL_CreateRGBSurface(SDL_HWSURFACE, width, height, bpp, rmask, gmask, bmask, amask);
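
For completeness, a sketch with concrete arguments; the masks shown are the usual 32-bit RGBA masks, which depend on the machine's byte order:

Uint32 rmask, gmask, bmask, amask;

/* Pick RGBA masks matching the machine's byte order. */
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
rmask = 0xff000000; gmask = 0x00ff0000; bmask = 0x0000ff00; amask = 0x000000ff;
#else
rmask = 0x000000ff; gmask = 0x0000ff00; bmask = 0x00ff0000; amask = 0xff000000;
#endif

surface = SDL_CreateRGBSurface(SDL_HWSURFACE, 256, 256, 32,
                               rmask, gmask, bmask, amask);
if (surface == NULL)
    fprintf(stderr, "SDL_CreateRGBSurface failed: %s\n", SDL_GetError());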
If you have to use procedural surfaces (a.k.a. dynamic surfaces, surfaces that are modified all the time), you should consider using software surfaces instead. However, blitting software surfaces is slow, so the sections below describe ways to avoid using software surfaces.
Surface format

Use SDL_DisplayFormat()/SDL_DisplayFormatAlpha() on surfaces that will be blitted to the screen. Using it has an impact not only on glSDL (where it will set the SDL_HWSURFACE flag), but also on other video backends:
SDL_Surface * tmp;
tmp = SDL_DisplayFormat(surface);
SDL_FreeSurface(surface);
surface = tmp;

and for surfaces with alpha:

SDL_Surface * tmp;
tmp = SDL_DisplayFormatAlpha(surface);
SDL_FreeSurface(surface);
surface = tmp;
Note: SDL_DisplayFormat*() calls might have an impact with procedural surfaces, i.e. surfaces that are constantly modified. The simplest way to go is to use 24 or 32 bpp surfaces (which glSDL is able to handle directly), although that might not be friendly with other video backends.
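
The convert-and-free pattern above is easy to get wrong, so it can be wrapped in a small helper; convert_for_display() is just an illustrative name, not part of SDL or glSDL:

/* Convert `surface` to the display format and free the original.
   Pass want_alpha != 0 for surfaces that need an alpha channel.
   On failure the original surface is returned unchanged. */
static SDL_Surface *convert_for_display(SDL_Surface *surface, int want_alpha)
{
    SDL_Surface *tmp = want_alpha ? SDL_DisplayFormatAlpha(surface)
                                  : SDL_DisplayFormat(surface);
    if (tmp == NULL)
        return surface;   /* conversion failed; keep the original */
    SDL_FreeSurface(surface);
    return tmp;
}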
Surface properties

The following guidelines apply when choosing your surface properties:
Notes:
Locking/unlocking

Locking/unlocking has a very high cost with glSDL: locking a hardware surface may force its pixels to be read back from video memory, and modifying it may force the corresponding texture to be uploaded again.
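
When direct pixel access is unavoidable, batch as much work as possible into a single lock/unlock pair; a minimal sketch of the standard SDL locking discipline:

static void touch_pixels(SDL_Surface *surface)
{
    if (SDL_MUSTLOCK(surface) && SDL_LockSurface(surface) < 0)
        return;   /* lock failed; don't touch the pixels */

    /* ... read/write surface->pixels here, doing as much
       work as possible per lock ... */

    if (SDL_MUSTLOCK(surface))
        SDL_UnlockSurface(surface);
}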
Blitting

To avoid uploading surfaces to video memory every frame, and thus a visible slowdown in your program, you should keep the surfaces you blit to the screen static and convert them to the display format ahead of time, as sketched below.
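
A sketch of a typical frame under these rules; screen and sprite are assumed to be the double-buffered display surface and a display-format surface created as described above:

static void draw_frame(SDL_Surface *screen, SDL_Surface *sprite)
{
    SDL_Rect dst = { 100, 100, 0, 0 };   /* w/h are ignored by SDL_BlitSurface */

    SDL_BlitSurface(sprite, NULL, screen, &dst);
    SDL_Flip(screen);   /* swap the double buffer */
}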
Also note that glSDL uses a lazy uploading scheme, i.e. the surface is uploaded to video memory only when it is needed, so the first blit is slower than subsequent ones. As a rule of thumb, mixing software and hardware surfaces in blits is not the way to go. Hardware-to-hardware blits will probably happen in video memory and, likewise, software-to-software blits will happen in system memory. But hardware-to-software blits will most likely end up moving the hardware surface to system memory first, which is a very slow operation.

Known problems
Small FAQ
Contact

In case you want to contact the authors: david@olofson.net, stephane.marchesin@wanadoo.fr