Is everything ok, Nvidia?


I don’t normally post personal business opinions, but I’m going to make an exception here.

Yesterday NVIDIA announced they’ve initiated a patent lawsuit against major mobile vendors, claiming these vendors are profiting from their decade-long R&D.

Christophe Riccio tweeted that this move is a confirmation the Tegra K1 is a commercial failure, so they took the lawsuit route. Maybe he’s just joking, or maybe he’s serious.
Personally, I see it as confirmation of something I was fearing: NVIDIA is in trouble.

 

Disconnection from graphics devs

A distance has been growing between NVIDIA and their developers over the past 6 years. I used to log in to their website, the “developer zone” as they called it, at least once a month to see their new hotness.
They created Cg, FX Composer and PerfHUD. And oh dear, those were useful back then.
Not only that: their demos were very good, and their SIGGRAPH and GDC slides were always worth studying. Anyone remember the Human Skin demo? Mesmerizing.

We would often see those colour-coded slides with optimization tips: green for NVIDIA-only cards, red for AMD-only cards. But it’s been a long time since I visited their dev website and found something useful, or felt they’d contributed anything meaningful (except for the Tegra Android Development Pack, which is an awesome tool). Meanwhile, I keep visiting AMD’s website much more often: useful slides, useful tips, technical documentation, their demos (oh, the Leo demo is still banging around in the heads of many devs), and their tools (CodeXL, GPU PerfStudio).

NVIDIA kept moving away from its devs, while AMD kept moving closer. I know so much about the GCN architecture that I can even predict the next AMD-specific GL extensions (like GL_AMD_vertex_shader_viewport_index) or the lifting of useless restrictions (like the 64 KB UBO limit). Why? Because AMD keeps being open, with lots of documentation, GCN performance tips, and more. Not to mention their full spec docs are open for Open Source driver implementations.

What do I know about the Kepler architecture? Nothing. Zero. Zip. Nada. It’s like it’s all a secret, as if I’d better not find out how it works. I know that when targeting GCN I should optimize for register pressure. I know I can use random access into UBOs without worrying about shader constant waterfalling. I know how much divergence costs. I know that 32-bit integer multiplication is expensive, but that I can use bitshifting tricks or 24-bit multiplication instead.
I know that packing attributes doesn’t make much difference. I know the export costs of each render target format. I know the sampling speed of each texture format for each filtering type.
And I can optimize accordingly. Does any of this apply to Kepler? I have no idea. NVIDIA doesn’t tell (I’ve been informed you get some docs after signing an NDA, though they’re written with CUDA development in mind).
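
As an illustration of what that knowledge buys you, here is a minimal GLSL sketch of the last two points (the per-instance data layout is entirely made up): a power-of-two stride multiplication written as a shift, and non-uniform indexing into a UBO done without fear of constant waterfalling.

```glsl
#version 430

// Hypothetical per-instance data packed as raw vec4 rows in a UBO.
layout( std140, binding = 0 ) uniform InstanceData
{
    vec4 instanceRows[1024];
};

flat in int instanceId;
out vec4 fragColour;

void main()
{
    // Each instance occupies 4 vec4 rows. A full 32-bit integer multiply
    // is comparatively expensive on GCN, so 'instanceId * 4' is written
    // as a shift instead (or could be left to the 24-bit multiply path).
    int base = instanceId << 2;

    // Non-uniform indexing into the UBO: fine on GCN, no constant
    // waterfalling to worry about.
    vec4 row0 = instanceRows[base + 0];
    vec4 row1 = instanceRows[base + 1];

    fragColour = row0 + row1;
}
```

None of this is clever on its own; the point is that AMD documents the cost model well enough that these choices become obvious, and I have no equivalent picture for Kepler.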

 

AMD is everywhere

What’s even more astonishing is that AMD has put its GCN arch in the Xbox One, the PS4 and desktop PCs. The Wii U is not GCN, but it’s a somewhat-similar AMD chip. Waaaaaay to cover platforms.

NVIDIA may play the marketing move of “we don’t care about those markets, AMD can have them”, but I prefer targeting the GCN architecture because it’s going to be there for at least 5 years, and I friggin’ know how to code for it.

Mantle is clearly part of the strategy, easing porting from consoles to desktop GCN, with low-level access for console-like performance. That was brilliant.
NVIDIA hasn’t always been this way. From 2004 through 2008, their GPU Programming Guide was amazing. I kept using it as a reference when targeting NVIDIA hardware. The G80 was the king for a long time, and I knew how to take the most advantage of it. Where’s the Kepler guide? Why is it so secret all of a sudden?

 

The patents

Let’s talk about the patents. I haven’t read the patents themselves, but I can comment on their blog post.

Those patents include our foundational invention, the GPU, which puts onto a single chip all the functions necessary to process graphics and light up screens

That is sooo vague! First, they didn’t invent the GPU; they merely coined the term. Dedicated systems built exclusively for graphics rendering were invented by SGI.

Trident, Matrox, Hercules, 3DLabs and ATI were all graphics rendering chip makers that came before Nvidia.

Yes, they were the first to move vertex transform and lighting processing out of the CPU into the GPU with the original GeForce. But moving pre-existing algorithms written in C and x86 assembly into the same silicon as the rasterizer can hardly qualify as an enforceable patent.

our invention of programmable shading, which allows non-experts to program sophisticated graphic

First, as far as I know programmable shading was a joint effort with Microsoft. I would like to know how they feel about this. We could also argue the RSP (Reality Signal Processor) present in the Nintendo 64 was the first programmable shading chip. And guess who invented it. Oh yes, yes! It was SGI again.

Second, I should change my job title to “non-expert graphics guy”, because apparently you don’t need much expertise to write a programmable shader (sarcasm).

Third, NVIDIA is a member of Khronos, the entity that governs OpenGL (aka “the board”).

The extensions that introduced programmable shading into OpenGL, ARB_vertex_shader & ARB_fragment_shader, say “IP Status: As described in the Contributor License, which can be found at http://www.3dlabs.com/support/developer/ogl2/specs/3dlabs_contributor.pdf”.

Too bad the link is dead. But it says 3DLabs there. Not Nvidia.

The extension that introduced GLSL into OpenGL, ARB_shading_language_100, says “IP Status: Microsoft claims to own unspecified intellectual property related to programmable shading.”

It says Microsoft. Not Nvidia.

Where am I going with this? Nvidia is a Khronos member. They had 10+ years to speak up. They didn’t. This is an extreme demonstration of bad faith: being in a position to influence the OpenGL specs, which describe how programmable shading must work and which other (competing) vendors are allowed to implement, without ever telling them that Nvidia holds patents over the very specs it helped write.

Really, really bad faith. Build a trap, set the bait, hide the trap, and wait for the lawsuit.

There are many other sections of the OpenGL and OpenGL ES specifications that may come into conflict with these patents.

our invention of unified shaders, which allow every processing unit in the GPU to be used for different purposes

They might be onto something there. It’s true they were the first with the GeForce 8, but there are two definitions of “unified shaders”:

  1. Microsoft, for simplicity, unified the instruction set for vertex and pixel shaders in Shader Model 4.0, used in Direct3D 10. Thus the instructions used in vertex shaders would be the same as in pixel shaders; their only differences would be their inputs and outputs, plus perhaps a few stage-specific instructions not available in the other stages (see the GLSL sketch after this list).
  2. Nvidia followed the same path with their hardware to match Direct3D 10. Since the instructions were the same, it’s logical to deduce you could design the chip so the same units process both vertex and pixel shaders. This came with the added bonus that the GPU could now do load balancing: pixel-shader-heavy and vertex-shader-heavy applications could both make use of the whole chip, whereas previous generations had idle units whenever one stage became a bottleneck. In other words, higher performance.
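
To make point 1 concrete, here’s a trivial GLSL sketch (both shaders are made up): the vertex and fragment stages run the same kind of ALU instructions, and only their inputs and outputs differ.

```glsl
// Vertex shader: ordinary ALU work, plus the stage-specific output.
#version 330
uniform mat4 worldViewProj;
in  vec3 position;
in  vec3 colour;
out vec3 vColour;
void main()
{
    vColour     = colour * 0.5 + 0.5;                 // same kind of math...
    gl_Position = worldViewProj * vec4( position, 1.0 );
}

// Fragment shader: the same instructions again; only the inputs
// (interpolants) and the output (a colour) are different.
#version 330
in  vec3 vColour;
out vec4 fragColour;
void main()
{
    fragColour = vec4( vColour * vColour + vColour * 0.25, 1.0 );
}
```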

Whether Nvidia came up with the idea and Microsoft made the API, or Microsoft came up with the idea and Nvidia merely adapted the hardware to match the API, I don’t know.

I always thought unified shaders were a natural evolution of hardware design. The first time I saw VTF (Vertex Texture Fetch) in Shader Model 3.0, I thought “wow, that’s awesome; with this feature I can do in the vertex shader almost the same things I can do in a pixel shader. I hope some day this becomes really true”.
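
For reference, this is the kind of thing VTF enabled: a texture fetch, classic “pixel-shader work”, running in the vertex stage. The heightmap displacement below is just a made-up example.

```glsl
#version 330

// Hypothetical heightmap displacement done in the vertex shader via VTF.
uniform sampler2D heightMap;
uniform mat4      worldViewProj;
in  vec3 position;
in  vec2 uv;

void main()
{
    // textureLod is used because there are no derivatives in the vertex
    // stage, so the LOD has to be given explicitly.
    float height    = textureLod( heightMap, uv, 0.0 ).r;
    vec3  displaced = position + vec3( 0.0, height * 4.0, 0.0 );
    gl_Position     = worldViewProj * vec4( displaced, 1.0 );
}
```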

ATI had introduced their R2VB (Render To Vertex Buffer) hack, which moved vertex shading work into the pixel shader units, making the pixel shaders some sort of “unified shader units” as they would end up doing both vertex and pixel shading work. It was not exactly the same, and setting up R2VB was too troublesome for practical use (not to mention it would only work on ATI cards).

But the direction was clearly heading towards a unified architecture.

Again, like I said, Nvidia might be onto something here. But I’m not sure.

Update: it’s been pointed out to me that the Xenos GPU (which was in the Xbox 360, developed by ATI) had a unified shader architecture, and came earlier than the GeForce 8 (NV’s first unified shader architecture GPU).

our invention of multithreaded parallel processing in GPUs, which enables processing to occur concurrently on separate threads while accessing the same memory and other resources.

I would have to see the patent itself, because this brief description fits any SMP system in the world.

The patents put gaming and 3D graphics in jeopardy

What’s interesting is that these very same patents apply to AMD’s Radeon and Intel’s HD GPUs as well. So, are they next? Or were they paying a fee and we never knew?

I can understand that lawyers have to pile up a high number of patents in the hope that a judge recognizes just one of them as an actual infringement, even if it’s the wrong one (i.e. one that any engineer with some degree of common sense would see as blatantly unenforceable); but this could have an impact on the industry. They’re basically saying they have a monopoly on GPUs. If you want to manufacture GPUs, you have to pay them.

 

Final words

All I can deduce from this is that Nvidia’s future is uncertain and not looking bright. They’re failing in mobile land. It’s a shame. We had hopes of seeing OpenGL (not ES) on mobile, and decent drivers. And TADP is just great. It even allows debugging Android NDK applications through Visual Studio almost the same way I debug a C++ Windows application; that is an amazing feat.

On the other fronts, AMD is clearly cornering NV with their GCN-everywhere + Mantle + openness strategy. In the desktop market NVIDIA still has brand recognition as the #1 GPU maker (and NV drivers are certainly better than AMD’s), but if they don’t act fast, they’ll eventually lose that comfortable position.
What I can deduce from this lawsuit is that they’re panicking.