The OpenGL API sucks. It has gotten better over time, but it is still miles away from what it should be.
The API probably needs a redesign from scratch, something that was promised a long time ago but never happened. There’s often talk of “API wars” or “DirectX vs OpenGL”, but those debates are usually biased, stuck in the 90’s, and, most importantly, often carried on by people who know little or nothing about graphics programming.
The best reasons to use OpenGL are its cross-platform ability, its openness, and, in very rare cases, its extensions. But extensions are often more a pain in the neck to support than anything else, because we rarely see the same extension supported by all major vendors.
The problems with OpenGL are deep in its design. It does not fit modern standards. It surely did back in 1990, but it’s aging. To understand why I say there is a design problem with OpenGL, we need to understand its past.
A history lesson
You’ve probably read “a history lesson about OpenGL“. If not, I highly recommend it. I won’t repeat what’s already been said there. I will, however, focus on another side of OpenGL’s history: its intention.
Back in 1990, the idea was that OpenGL would be a Graphics Library (it’s in the name, d’oh) that would run on all platforms, and the same code would produce exactly the same image.
In other words, OpenGL should’ve been the PDF of the 3D world. If, for example, OpenOffice used OpenGL to render its charts (by the way, I’ve heard it does), the charts would render exactly the same on all platforms, so the document would look the same everywhere.
At the time, OpenGL didn’t worry about GPUs not supporting feature “X”. OpenGL had to look the same everywhere (give or take a few implementation bugs introduced by driver vendors, or places where the standard wasn’t clear enough; just like not all HTML pages render exactly the same in Chrome & Firefox even though they should).
Thou shalt support all features!
What this meant is that if a particular GPU didn’t support a feature, the call couldn’t fail. The driver had to switch to software rendering to guarantee that the result showed up on screen. OpenGL was made to render 3D graphics; GPUs just happened to be “accelerators” that could do it faster and were built around GL. By contrast, Direct3D was made to access the GPU. That’s a deep architectural difference.
This gives us our first problem: by design, OpenGL has no notion of features being “unsupported” by the GPU. You couldn’t ask OpenGL which texture formats are supported, because theoretically all of them are.
Back in the 90’s, there were no shaders and emulating a rasterizer was quite easy (but slow). Many of you will remember that if a GL application used a single unsupported blending mode, or a rare texture format, BAM! Software emulation kicked in and no GPU acceleration for you.
Over time, the standard worked around this problem using extensions: for example, check if ARB_texture_non_power_of_two is present; if so, non-power-of-two textures are supported.
Another, more recent workaround was incrementing the API version (GL2, GL3, GL4). Fortunately, since GL3 it’s been more DX10-like: with each GL level and revision, a “bare minimum” of features is guaranteed to be supported.
But the core problem remains, and it has led to very bad patterns in OpenGL’s evolution. To date, as far as I know, the only way to check whether a particular texture format is supported is to create it, bind it, and check glGetError. Seriously? Then repeat with the next texture format. This is inefficient and clunky. It also puts a burden on driver developers, because they have to anticipate every possible combination of settings in order to return a meaningful error status.
NVIDIA, for example, used to publish a spreadsheet with the GL texture formats supported by their cards. While very informative and useful, it’s embarrassing that it also has to serve a technical purpose.
Extensions aren’t a problem per se. But I often read arguments in favour of OpenGL that list extensions as a benefit (usually as a benefit over Direct3D): you can harness more power out of a particular GPU than you could with regular OpenGL or with Direct3D. So, of course, extensions “have to be” a good thing.
Let me be clear on this: IT’S NOT.
Vendor-specific extensions are a pain in the arse. We want our game/simulation to run on as many computers as possible. 99% of the time, an extension is only supported by one vendor. Or worse, the same feature is exposed through three different extensions: one for each vendor.
This means that the code that “harnesses the full power of NVIDIA” won’t work with AMD, nor with Intel. But if we use AMD’s extension, that code won’t work with NVIDIA (nor Intel). Like a rock-paper-scissors problem.
The solution is to implement the code three times (not like code duplication is an anti-pattern or anything). We also need to test the code on all three vendors’ GPUs to ensure it actually works.
Vendor-specific extensions made sense in the 90’s
Like I said, the problem with OpenGL is its aging design. Back when OpenGL was being developed, accelerated graphics rendering was really expensive. SGI (the creator of OpenGL) had a whole industry based on this.
They sold big, expensive graphics rendering equipment for performance-critical 3D applications (e.g. render farms).
If a company wanted to buy several rendering workstations (like Squaresoft did), it was a major investment on its own.
Each vendor used extensions to harness the full power of its hardware, and the better a vendor’s extensions were, the more attractive it looked compared to its competitors. So a company would decide which equipment to buy and fully commit to that vendor: if it actually used the extensions, it was forced to keep buying the same brand to keep its software compatible without modifications (and to avoid accidentally triggering the software emulation path).
It wouldn’t make sense to buy half of your (very expensive) equipment from one vendor, and half from another one.
Nowadays it’s not rare to see one PC with NVIDIA, another with AMD, and another with an integrated Intel GPU in the same living room, let alone an office. Vendor-specific extensions weren’t really designed for the mass market (unless the vendor holds a monopoly).
There are present-day cases where extensions are good. For example, there are some Android Mali GLES 3 extensions for reading data from the framebuffer directly. I’m not keen on the details, but by taking advantage of how tile-based GPUs work, they can reduce bandwidth usage drastically when using deferred renderers (something very important for saving battery life). A very specific case, only useful for a particular architecture in a particular market (mobile), but with a lot to gain from using it (it’s worth the trouble): that’s where extensions are useful.
A major complaint lately: OpenGL’s lack of proper multithreaded rendering support. Back in the 90’s, threading wasn’t an issue. Most programmers didn’t even know good multithreading practices, because hardware with multiple cores was very rare or very heterogeneous.
GL contexts also looked good in theory: a context was like a “snapshot” of OpenGL’s state that could be saved and restored. Back when rendering was a state machine driving a fixed-function pipeline, it looked like a good idea.
But the state machine got bigger and bigger, making the overhead of saving/restoring a context very expensive; not to mention that at some point we moved to a programmable pipeline, which makes things even harder.
Then multicore machines appeared in the mass market, the CPU free lunch was over, and the solution now lies in using multiple cores. But nine years after that article was written, OpenGL hasn’t caught up at all.
A look at OpenGL now, 20 years later
This summarizes my big rants about the current state of OpenGL:
- OpenGL has no well-defined, structured way of querying all of its supported features. The API is now a mix of the old mentality (“there is no GPU, the rendering result is guaranteed”) and the Direct3D mentality (“the API wraps around the GPU”). The presence of extensions is not enough. Luckily, the GPU industry has matured enough that hardware vendors are being forced to support a lot of features (i.e. DX10, GL3), but the core issue hasn’t been addressed directly.
- Extensions have always been controversial, but they are being used to fill the gaps between the industry’s state of the art and the OpenGL standard, and the ARB is taking a very long time to incorporate them into core. Bindless graphics took 4 years to get promoted to core. If that gap were shorter, perhaps extensions would be more useful.
- OpenGL lacks multithreaded support.
- OpenGL still has a very annoying pattern: BIND TO EDIT. In order to change or modify a resource, it has to be bound first. This is not only clunky, it also hinders multithreading opportunities (state-based systems are *much* harder to parallelize than stateless ones) and introduces obscure errors where the real culprit is hard to find (a routine suddenly fails because the programmer forgot to unbind a resource).
- There are multiple GLSL compilers out there. There are GL drivers that refuse to compile perfectly valid GLSL code, and GL drivers that happily accept invalid GLSL syntax. Some drivers are better at optimizing certain patterns and styles, while other drivers are better at optimizing others. And some drivers just aren’t good at optimizing GLSL at all. DX got that right: one HLSL compiler to rule them all.
- Lack of resource synchronization flags to correctly hint the driver. The glMapBuffer problems and their follow-up were mentioned in 2007. Alright, I guess it should’ve been fixed by now. Except it was not. Seriously? C’mon! How can we even begin to discuss threading with OpenGL when we can’t even get reliable lock-free synchronization mechanisms between the GPU & CPU? We’re miles behind D3D here.
Hey! At least Christophe Riccio is pushing bindless graphics into GL. And like him, I too see the future as “MultiDraw all the way, resource indexing, shader code indexing, 10 draw calls per frame max, 1M draws“. Which is why I’m so excited about Mantle (though Riccio doesn’t feel that way about Mantle). Nonetheless, if Mantle isn’t supported by the other major vendors (NVIDIA, Intel, PowerVR), it’s pretty pointless. We know what happened to Glide. And I don’t know how open Mantle is going to be. If it’s not, I don’t want to trade an API that depends on Microsoft’s whims for one that depends on AMD’s.
I don’t hate OpenGL as a whole!
What I hate are the issues I mention (which are major), but I’m not pro-DX and anti-GL (nor vice versa). I’d prefer an API that doesn’t depend on Microsoft’s whims (like locking D3D 11.1 to Windows 8.1 with no technical reason to do so) and isn’t locked to a single platform.
Heck, even Linux developers agree that Direct3D 10’s API design is a role model while OpenGL’s current state sucks. Too bad Gallium3D doesn’t attract more attention and seems to be dying.
It was probably the best shot at replacing OpenGL and moving towards a better API. Maybe Mantle is? Maybe a better version of OpenGL?
This age is yet another big opportunity for Khronos to put OpenGL back on track, and I hope they don’t miss it (again). GL is the only way on mobile (and the mobile market is too big to ignore), and Microsoft is starting to fall asleep, with no DX12 apparently in the near future. Current hardware already works quite differently from what DX11 exposes.
A big change should be coming, but I don’t see it happening; perhaps because it’s coming too slowly.