What I hate from OpenGL API


The OpenGL API sucks. It has gotten better over time, but it is still miles away from what it should be.

The API probably needs a redesign from scratch, something that was promised a long time ago but never happened. There is often talk of "API wars" or "DirectX vs OpenGL", but those debates tend to be biased, stuck in the 90's, and, most importantly, carried out by people who know little or nothing about graphics programming.

The best reasons to use OpenGL are its cross-platform support, its openness, and, in very rare cases, its extensions. But extensions are often more of a pain in the neck to support than anything else, because we rarely see the same extension supported by all major vendors.

The problems with OpenGL run deep in its design. It does not fit modern standards; it surely did back in 1990, but it is aging. To understand why I say there is a design problem with OpenGL, we need to understand its past.

A history lesson

You’ve probably read “a history lesson about OpenGL“. If not, I highly recommend it. I won’t repeat what’s already been said there. I will, however, focus on another side of OpenGL’s history: its intention.

Back in 1990, the idea was that OpenGL would be a Graphics Library (it's in the name, d'oh) that would run on all platforms, and the same code would produce exactly the same image everywhere.

In other words, OpenGL was meant to be the PDF of the 3D world. If, for example, OpenOffice used OpenGL to render its charts (by the way, I've heard it does), the charts would render exactly the same on all platforms, so the document would look the same.

At the time, OpenGL didn't worry about GPUs not supporting feature "X". OpenGL had to look the same everywhere (give or take a few implementation bugs introduced by driver vendors, or places where the standard wasn't clear enough, much like how not all HTML pages render exactly the same in Chrome and Firefox even though they should).

Thy shall support all features!

What this meant is that if a particular GPU didn't support a feature, the call couldn't fail. The driver had to switch to software rendering to guarantee that the result showed up on screen. OpenGL was made to render 3D graphics; GPUs just happened to be "accelerators" that could do it faster and were built around GL. By contrast, Direct3D was made to access the GPU. That's a deep architectural difference.

This gives us our first problem: by design, OpenGL has no notion of features being "unsupported" by the GPU. You couldn't ask OpenGL which texture formats are supported, because theoretically all of them are.

Back in the 90's, there were no shaders and emulating a rasterizer was quite easy (but slow). Many of you will remember that if a GL application used a single unsupported blending mode or a rare texture format: BAM! Software emulation kicked in and there was no GPU acceleration for you.

Over time, the standard worked around this problem using extensions: For example, check if ARB_texture_non_power_of_two is present. If so, non-power-of-two textures are supported.
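
For reference, this is roughly what that extension check looks like in code. A minimal sketch, assuming a GL 3.0+ core context and a loader such as GLEW or glad (core profiles dropped the old space-separated glGetString(GL_EXTENSIONS) string, so we enumerate with glGetStringi instead):

    #include <string.h>
    #include <GL/glew.h>   /* or any loader exposing GL 3.0+ entry points */

    /* Return non-zero if the driver advertises the given extension. */
    int has_extension(const char *name)
    {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; ++i) {
            const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
            if (ext && strcmp(ext, name) == 0)
                return 1;
        }
        return 0;
    }

    /* Usage: non-power-of-two textures are safe if this returns non-zero
     * (or if the context version is 2.0 or higher, where NPOT became core). */
    /* int npot_ok = has_extension("GL_ARB_texture_non_power_of_two"); */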

Another, more recent way to work around this issue has been to bump the API version (GL2, GL3, GL4). Fortunately, since GL3 the model has become more DX10-like, and with each GL level and revision a "bare minimum" of features is guaranteed to be supported.

But the core problem remains, and worse, it has led to very bad patterns in OpenGL's evolution: to date, as far as I know, the only way to check whether a particular texture format is supported is to create it, bind it, and check glGetError. Seriously? Then repeat with the next texture format. This is inefficient and clunky. It also puts a strain on driver developers, because they have to anticipate every possible combination of settings in order to return a meaningful error status.
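
To make the trial-and-error pattern concrete, here is a hedged sketch of what the probing dance looks like (the dummy 4x4 size and the GL_RGBA16F example in the usage comment are my own choices; a real engine would also have to probe filtering, render-target support, and every other combination it cares about):

    #include <GL/glew.h>   /* or any loader exposing the needed entry points */

    /* Probe a texture format the only way classic GL allows: create a texture,
     * upload a dummy level, and see whether glGetError() complains. */
    int probe_texture_format(GLenum internal_format, GLenum format, GLenum type)
    {
        GLuint tex = 0;
        while (glGetError() != GL_NO_ERROR) { }   /* drain stale errors first */

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, internal_format, 4, 4, 0, format, type, NULL);

        GLenum err = glGetError();
        glBindTexture(GL_TEXTURE_2D, 0);
        glDeleteTextures(1, &tex);
        return err == GL_NO_ERROR;                /* "probably supported"... maybe */
    }

    /* Usage, repeated per candidate format at load time (slow and clunky): */
    /* int rgba16f_ok = probe_texture_format(GL_RGBA16F, GL_RGBA, GL_HALF_FLOAT); */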

NVIDIA, for example, used to publish a spreadsheet with the GL texture formats supported by its cards. While very informative and useful, it's embarrassing that such a document also has to serve a technical purpose.

Vendor-specific Extensions

Extensions aren't a problem per se. But I often read arguments in favour of OpenGL that list extensions as a benefit (usually as a benefit over Direct3D): you can harness power from a particular GPU that you couldn't tap with plain OpenGL or with Direct3D. So, of course, extensions "have to be" a good thing.

Let me be clear on this: THEY'RE NOT.

Vendor-specific extensions are a pain in the arse. We want our game or simulation to run on as many computers as possible, yet 99% of the time an extension is only supported by one vendor. Or worse, the same feature is exposed through three different extensions: one for each vendor.

This means the code "that harnesses the full power of NVIDIA" won't work on AMD or Intel. But if we use AMD's extension instead, that code won't work on NVIDIA or Intel either. It's like a rock-paper-scissors problem.

The solution is to implement the code three times (not like code duplication is an anti-pattern or anything). We also need to test the code on all three vendors' GPUs to make sure it actually works.
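
As an illustration of what that triplication looks like inside a renderer (the extension names below are made-up placeholders, not real extensions, and has_extension() is the kind of helper sketched earlier):

    /* Hypothetical per-vendor dispatch for one feature exposed through three
     * different vendor extensions. */
    typedef enum { PATH_NV, PATH_AMD, PATH_INTEL, PATH_PLAIN_GL } FeaturePath;

    extern int has_extension(const char *name);

    FeaturePath select_feature_path(void)
    {
        if (has_extension("GL_NV_fancy_feature"))    return PATH_NV;     /* placeholder name */
        if (has_extension("GL_AMD_fancy_feature"))   return PATH_AMD;    /* placeholder name */
        if (has_extension("GL_INTEL_fancy_feature")) return PATH_INTEL;  /* placeholder name */
        return PATH_PLAIN_GL;   /* fallback path: slower, but runs everywhere */
    }

    /* Each path then needs its own implementation and its own testing on real
     * hardware from that vendor: three times the code, three times the QA. */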

Vendor-specific extensions made sense in the 90’s

Like I said, the problem with OpenGL is its aging design. Back when OpenGL was being developed, accelerated graphics rendering was really expensive. SGI (the creator of OpenGL) had a whole industry built on it.

They sold big, expensive graphics rendering equipment for performance-critical 3D applications (e.g. render farms).

If a company like Squaresoft wanted to buy several rendering workstations, that was a major investment in its own right.

Each vendor would use extensions to harness the full power of its hardware, and the better its extensions were compared to the competition, the more attractive its products became. A company would decide which hardware to buy and fully commit to that vendor. If it actually used the extensions, it was then forced to keep buying the same brand to keep its software compatible without modifications (and to avoid accidentally triggering the software emulation path).

It wouldn’t make sense to buy half of your (very expensive) equipment from one vendor, and half from another one.

Nowadays it's not rare to see one PC with an NVIDIA GPU, another with AMD, and another with integrated Intel graphics in the same living room, let alone the same office. Vendor-specific extensions were never really designed for the mass market (unless the vendor holds a monopoly).

There are present-day cases where extensions are good. For example, there are some Mali GLES 3 extensions on Android for reading data back from the framebuffer directly. I'm not up to speed on the details, but by taking advantage of how tile-based GPUs work they can drastically reduce bandwidth usage in deferred renderers (something very important for battery life). It's a very specific case, only useful on a particular architecture in a particular market (mobile), but with enough to gain that the extension is worth the trouble.

Thread contexts

A major complaint recently: OpenGL's lack of proper multithreaded rendering support. Back in the 90's threading wasn't an issue. Most programmers didn't even know good multithreading practices, because hardware with multiple cores was very rare or very heterogeneous.

GL contexts also looked good in theory: a context was like a "snapshot" of OpenGL's state that could be saved and restored. Back when rendering was a state machine driving a fixed-function pipeline, it looked like a good idea.

But the state machine got bigger and bigger, making the overhead of saving and restoring a context very expensive; not to mention that at some point we moved to a programmable pipeline, which makes things even harder.

Then multicore machines reached the mass market, the CPU free lunch was over, and the solution now lies in using multiple cores. But nine years after that article was written, OpenGL hasn't caught up at all.

A look at OpenGL now, 20 years later

This summarizes my big rants about the current state of OpenGL:

  • OpenGL has no well-defined, structured way of querying all of its supported features. The API is now a mix of the old mentality ("there is no GPU, the rendering result is guaranteed") and the Direct3D mentality ("the API wraps around the GPU"). The presence of extensions is not enough. Luckily, the GPU industry has matured enough that hardware vendors are being forced to support a lot of features (e.g. DX10, GL3), but the core issue hasn't been addressed directly.
  • Extensions have always been controversial, but they are being used to fill the gaps between the industry's state of the art and the OpenGL standard. The ARB takes a very long time to incorporate them into core: bindless graphics took 4 years to get promoted. If the time gap were shorter, perhaps extensions would be more useful.
  • OpenGL lacks multithreaded support.
  • OpenGL still has a very annoying pattern: BIND TO EDIT. In order to change or modify a resource, it has to be bound first. This is not only clunky, it also hinders multithreading opportunities (state-based systems are *much* harder to parallelize than stateless ones) and introduces obscure errors whose real culprit is hard to track down (a routine suddenly fails because the programmer forgot to unbind a resource). See the sketch after this list.
  • There are multiple GLSL compilers out there. Some GL drivers refuse to compile perfectly valid GLSL code, while others happily accept invalid GLSL syntax. Some drivers are better at optimizing certain patterns and styles, other drivers are better at optimizing different styles, and some drivers just aren't good at optimizing GLSL at all. DX got that right, having just one HLSL compiler to rule them all.
  • Lack of resource synchronization flags to correctly hint the driver. The glMapBuffer problems and their follow-up were already discussed in 2007. Alright, so I guess it should've been fixed by now. Except it was not. Seriously? C'mon! How can we even begin to discuss threading with OpenGL when we can't even get reliable lock-free synchronization mechanisms between the GPU and CPU? We're miles behind D3D here.
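
As promised above, here is a minimal sketch of the bind-to-edit problem and of the newer alternatives (Direct State Access from GL 4.5 and the explicit mapping flags from GL 3.0). The buffer sizes and the update pattern are illustrative only, and the snippet assumes a current GL context plus a loader such as GLEW or glad:

    #include <GL/glew.h>   /* any loader exposing GL 3.0+/4.5 entry points will do */

    static void buffer_update_examples(void)
    {
        /* Classic bind-to-edit: modifying a buffer means touching a global
         * binding point, i.e. hidden state shared by the whole context. */
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);                         /* bind just to edit */
        glBufferData(GL_ARRAY_BUFFER, 1024, NULL, GL_DYNAMIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);                           /* forget this and some unrelated routine breaks later */

        /* Direct State Access (GL 4.5): edit the object by name, no binding
         * point involved, so no hidden state gets disturbed. */
        GLuint vbo2 = 0;
        glCreateBuffers(1, &vbo2);
        glNamedBufferData(vbo2, 1024, NULL, GL_DYNAMIC_DRAW);

        /* Explicit synchronization hints (GL 3.0+): these flags tell the
         * driver it does not have to stall waiting for the GPU to finish
         * with the previous contents of the mapped range. */
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, 1024,
                                     GL_MAP_WRITE_BIT |
                                     GL_MAP_INVALIDATE_RANGE_BIT |
                                     GL_MAP_UNSYNCHRONIZED_BIT);
        if (ptr) {
            /* ... write new vertex data into ptr ... */
            glUnmapBuffer(GL_ARRAY_BUFFER);
        }
    }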

Hey! At least Christophe Riccio is pushing bindless graphics into GL. And like him, I too view the future as "MultiDraw all the way": resource indexing, shader code indexing, at most 10 API draw calls per frame submitting 1M draws. Which is why I'm so excited about Mantle (though Riccio doesn't feel that way about Mantle). Nonetheless, if Mantle isn't supported by the other major vendors (NVIDIA, Intel, PowerVR), it's pretty pointless. We know what happened to Glide. And I don't know how open Mantle is going to be. If it's not, I don't want to trade a wonderful API that depends on Microsoft's whims for one that depends on AMD's whims.

I don’t hate OpenGL as a whole!

What I hate are the issues I mentioned (which are major), but I'm not pro-DX and anti-GL (nor vice versa). I'd prefer an API that doesn't depend on Microsoft's whims (like locking D3D 11.1 to Windows 8.1 with no technical reason to do so) and isn't locked to a single platform.

Heck, even Linux developers agree that Direct3D 10’s API design is a role model while OpenGL’s state sucks. Too bad Gallium3D doesn’t attract more attention and seems to be dying.

It was probably the best shot at replacing OpenGL and moving towards a better API. Maybe Mantle will be? Maybe a better version of OpenGL?

This is yet another big opportunity for Khronos to put OpenGL back on track, and I hope they don't miss it (again). GL is the only way to go on mobile (and the mobile market is too big to ignore), and Microsoft is starting to fall asleep, with no DX12 apparently in the near future. Current hardware already works quite differently from what DX11 exposes.

A big change should be coming, but I don't see it happening yet, perhaps because it's coming too slowly.


7 thoughts on “What I hate from OpenGL API”

  • Jesse

    I generally agree on every point, though I have reservations about a thread-safe state machine. Until better OpenGL debugging tools are available, I think fewer headaches will be had if we restrict all API calls to a single thread. Most DirectX users I know avoid multithreaded rendering for this reason alone, though there are also performance implications such as threads stalling for the state machine. On the other hand, from a practical getting-things-done point of view I would love to have a fast, reasonably debuggable multithreaded rendering system.

    A problem you did not mention is core API implementation differences among vendors, which in my opinion is a bigger deal than faults in the API design or GLSL compilers. This especially makes debugging incredibly tedious since vendors differ on if, when, and what error messages are given. Since the state machine is partially a black box, it is essential that its error reporting be rock solid or much time will be wasted just trying to get things running consistently across vendors. About 70% of the time I spent on my GL3+ GSoC project went to trying to find correct, clear, cross-platform OpenGL API examples. Deciding whether any specific example I found qualified for all three was often a shot in the dark, even when the sample came directly from the spec! (See the transform feedback specs for how bad this can get, with samples that differ radically from the final API.)
    This state of affairs is simply unacceptable when cross-vendor conformance could be easily enforced. The conformance tests Khronos introduced alongside OpenGL 4.4 will hopefully partially remedy this. Conformance testing is voluntary for OpenGL 3.3 and up drivers, and required for drivers using OpenGL 4.4 and up. If ARB makes the tests open source, it will have another important bonus of giving solid practical examples of how OpenGL features were DESIGNED to be used. This is often ambiguous in the specs, which leads to the users and vendors implementing their ‘interpretations’ of the spec. ‘Interpretation’ is a word that should be reserved for abstract art, not industry standard engineering.

    If every core API spec has nearly exhaustive unit tests associated with it, and the vendors conform to these unit tests before releasing drivers, OpenGL will have a chance of retaining its status as the de facto standard cross-platform graphics API. If not, I am all for a regime change so long as the API governance is at least as open as Khronos, preferably more so.

  • Oliver Knoll

    Sorry, but I don’t get it: first you complain (in “Thy shall support all features”) that you actually want to worry whether a specific feature is GPU-supported or not (and I agree that this is not (easily) possible, but as you said, that is by design), and then you go on to complain that it is a PITA that you now have to worry (in “Vendor-specific Extensions”) whether a specific extension is supported or not.

    So what shall it be then?! Either you want or don’t want to worry whether a specific feature is present, and OpenGL provides you both in forms of Core features and vendor extensions (and yes, they are called vendor extensions for a reason, because one vendor might decide to expose a given hardware feature, whereas another vendor might not).

    One can argue about the time it takes until an extension gets “approved” to become part of Core, but nowadays it’s probably more a question whether/when all the big players (namely nVidia and AMD) both support the feature in hardware.

    So your complaint contradicts itself when you say on the one hand you want to worry whether feature X is supported in hardware, and on the other hand you complain about exactly this when it comes to extensions.

    I just cherry-picked another statement of yours: “OpenGL lacks multithreaded support” – now “multithreaded support” can be quite a sketchy term. Fact is that it is perfectly possible to call OpenGL functionality from different (CPU) threads! As long as each thread has its own GL context, and by using shared contexts all draw calls end up in the same scene – perfectly “thread-safe” (or whatever you want to call it). Since OpenGL 1.1, by the way. May I refer you to http://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-AsynchronousBufferTransfers.pdf, chapter 28.6 (“Multithreading and Shared Contexts”).

    Also note that by nature OpenGL is very much asynchronous, so most GL calls return way before the actual work is done, so there is no actual need for the OpenGL API itself to provide “thread support” (such as “execute this GL code anytime in the future” worker-thread style functions).

  • Matias Post author

    Hi Oliver!

    It's not contradictory. The problem with "by design OpenGL is supposed to support all features" is that IN THEORY OpenGL must support the core features. But IN PRACTICE, when a feature is not supported, it silently falls back to software emulation or, more commonly nowadays, just flags glGetError and refuses to work (at least that's something, but it has a performance cost and introduces issues into the engine's design that we have to account for).
    GL tells me I don't have to worry about it, but it turns out I do, and I have a bad tool for it.

    The paradigm of "in theory we must support everything, but in practice we do whatever we want and don't tell you" is the real problem and my complaint.
    Of course it would be nice not having to worry about it, but if I do have to worry about it, then **give me a proper way of querying support**. The current practice forces us into trial and error at load time, which introduces a big lag when booting a GL program.

    This paradigm also puts an unreasonable burden on the GL driver maker: if we feed it a combination of parameters the driver maker didn't anticipate (whether because of a simple mistake, a major bug, an oversight, or a rare combination), the driver is likely to crash or render with glitches. Not to mention the CPU overhead, since the driver has to check that everything is OK instead of just assuming we fed it the right parameters because it told us which parameters are supported.

    Direct3D instead evolved around GPU support, and hence exposed GPU features through "capability flags". Although they were widely hated back then (due to how heterogeneous cap bit support was), the capability flags were a major improvement over trial and error.

    As GPUs evolved and different vendors started to converge, Direct3D 10 evolved in the same way and forced a guaranteed minimum of features and texture formats, while providing a query interface for those features that were optional. Those GPUs that don’t support a guaranteed feature cannot legally claim they support D3D10.

    It's GL 4.3 by now, and to my knowledge there is still no way to query supported texture formats without trial and error (correct me if I'm wrong); and the way we know which texture formats are "probably guaranteed" is by looking at which GL version maps to which D3D version (GL3 -> D3D10; GL4 -> D3D11) and checking D3D's guaranteed texture format support (since the GPU ought to have it).
    When you have to look at the D3D specs to infer which GL features are guaranteed, it is pretty lame.
    Some features are guaranteed starting with GL3, and that is indeed very nice, but others (like texture formats) have been left unattended.

    As for extensions, my problem is not that I don't want to worry about supporting specific features. It's one thing to have an "optional feature" whose interface and behavior are exactly the same (or very similar) on every device that supports it; it's another thing to have "an extension" where, in order to achieve exactly the same result, the interface and behavior are completely different depending on the vendor.

    As for multithreading, thanks for that link. I will read it in more detail later, but it appears I was wrong on that front; it sounds pretty much like what I was looking for.

      • Matias Post author

        ARB_internalformat_query was so useless they had to release a new extension, ARB_internalformat_query2, in 2012. Just one year before this article was written (and thus it wasn't available on all machines, and wasn't nearly as well known as it is now).
        Not to mention this extension is not available on most Android devices, oops.
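
        For reference, with ARB_internalformat_query2 the check finally becomes a direct query instead of a glGetError probe; a minimal sketch, assuming a GL 4.3 driver (or one exposing the extension), with GL_RGBA16F as an arbitrary example format:

            /* Direct capability query via ARB_internalformat_query2 (core in GL 4.3). */
            GLint supported = GL_FALSE;
            glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA16F,
                                  GL_INTERNALFORMAT_SUPPORTED, 1, &supported);
            if (supported == GL_TRUE) {
                /* GL_RGBA16F textures can be created without trial and error. */
            }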

        It took the Board nearly 20 years to include functionality that should have been there since version 1.1 or 1.4.
        Not to mention there are still bizarre things, like not being able to query what format an existing texture is; such functionality wasn't added until OpenGL 4.5, and that's only half a year ago.
        If you want to know the format of an existing texture on a pre-GL 4.5 driver, you have to bind it and check glGetError.

        Things that have been commonly available in Direct3D for **decades** were only integrated less than a year ago.
        I like modern GL 4.4/4.5 because it is finally not only usable, but also very powerful and flexible. But the bad reputation didn't come from nowhere.

        Fortunately, Vulkan will wash away all the sins of the past.

  • TT

    While I generally agree with your reasoning, I don't agree with your description of OpenGL and multithreading. AFAIK, no API supports true multithreaded rendering (after all, the render operations must be performed in order, so it's synchronised somewhere). The most useful application of multithreading here is resource preparation. And OpenGL has supported that for at least a decade with context sharing. You can create your textures and buffers, compile shaders, etc. in another context on a different thread and share them with your main context. You can even render your shadow maps etc. on a different thread at the same time. I am quite surprised that this feature is not better known. E.g. Apple always points this out in their developer documents; they even give you a very convenient API for asynchronous texture loads (via the GLKTextureLoader utility class).

  • TT

    P.S. Just saw that Oliver wrote exactly the same thing above. Sorry for not paying attention 🙂
