Where do I start graphics programming?


Sometimes I get this question. And since it happens frequently enough, I decided I should write about it publicly so that others can benefit, and also so I don’t have to repeat myself every time 🙂

 

In case you don’t know me, I’m self-taught. I started doing graphics programming at age 14. My first experience was with Visual C++ 6.0; I dived straight into C, C++ and assembly and came out just fine. I learnt the basics by reading a random PDF I found back then, “Aprenda C++ como si estuviera en primero” (“Learn C++ as if you were a freshman”, in Spanish), which was a very simple 87-page introduction to the language: what a variable is, what an enum is, how to write a basic hello world.

Then I learnt a lot by trial and error. I started with several open source Glide Wrappers which were popular at the time. I used the debugger a lot to see line by line how the variables evolved. I would often remove some snippet of code to see what would happen if I took it out. I also played a lot with XviD’s source code.

Back then I had the DirectX SDK 7.0 samples and documentation to play with, and I learnt a lot from them. In particular, the Mathematics of Direct3D Lighting fascinated me. I ended up writing my own TnL C library in SSE assembly. It wasn’t very useful and I haven’t really updated it in more than a decade, but it helped a lot in paving my foundations for when I came into contact with Vertex & Pixel Shaders. I was shocked that what had taken me an entire summer vacation and lots of assembly instructions (e.g. a matrix multiplication) could be done in one line of vertex shader code.

 

Writing in assembly was a very enlightening experience. For example, I discovered on my own what aliasing means, and why the __restrict modifier is so important. In particular, one of my “SSE optimized” implementations of matrix multiplication looked something like this:

void multiply_matrix( float *out_result, float *matrix_a, float *matrix_b );

In my first implementation, I quickly discovered that if I called multiply_matrix( a, a, b ); the result would be wrong. This is because the result would be written into “a” while some values from it were still required for reading.

I then tried to fix it to work properly in this scenario. But it turns out a lot more instructions were needed! I had to first make a copy of “a”, or use temporary row copies, or be very clever with the register pressure, etc., etc.

Now it worked correctly if I called multiply_matrix( a, a, b ); but I was unnecessarily penalizing the performance of calls such as multiply_matrix( c, a, b )! Another solution was to document the function to clarify that you couldn’t call multiply_matrix( a, a, b ). i.e. “Don’t do that!”

Without ever reading or hearing anything about memory aliasing or the __restrict keyword, I was instinctively learning about them. So the day I finally found out about memory aliasing and the __restrict keyword, it all felt natural to me.
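To make the trade-off concrete, here is a minimal C sketch of that function (plain C rather than SSE assembly; the row-major 4×4 layout and the const qualifiers are my assumptions, the original code was different):

```c
#include <string.h>

/* Naive 4x4 multiply, assuming row-major storage. Results are written
 * straight into out_result, so multiply_matrix( a, a, b ) overwrites
 * values of "a" that are still needed for reading: the aliasing bug. */
void multiply_matrix( float *out_result, const float *matrix_a, const float *matrix_b )
{
    for( int r = 0; r < 4; ++r )
        for( int c = 0; c < 4; ++c )
        {
            float acc = 0.0f;
            for( int k = 0; k < 4; ++k )
                acc += matrix_a[r * 4 + k] * matrix_b[k * 4 + c];
            out_result[r * 4 + c] = acc; /* clobbers matrix_a's row r when they alias */
        }
}

/* Safe variant: compute into a temporary, then copy. In-place calls now
 * work, but every caller pays for the copy, aliasing or not. */
void multiply_matrix_safe( float *out_result, const float *matrix_a, const float *matrix_b )
{
    float tmp[16];
    multiply_matrix( tmp, matrix_a, matrix_b );
    memcpy( out_result, tmp, sizeof( tmp ) );
}
```

The third option is the one __restrict expresses: declaring the pointers as float *__restrict promises the compiler the arguments never overlap, so it can keep values in registers freely, and the documented contract simply forbids multiply_matrix( a, a, b ).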

 

And that’s how I’ve learnt to this day. I tinker with low level stuff all the time, whenever I can, to find out how things actually work. And by the time I get my hands on some higher level literature about it, it all feels natural and obvious.

 

Literature dump

Oh, so you came here for resources, right? How do you get your hands on that higher level literature?

First, always scout for GDC & SIGGRAPH papers. GDC has a free vault section. SIGGRAPH doesn’t, but many authors who publish at SIGGRAPH will have drafts publicly available on their websites. Oh, and one more thing: DOWNLOAD EVERYTHING YOU FIND. The academic or personal website you’re visiting today may be down tomorrow. A university decides to migrate their servers, an author passes away, a company goes bankrupt, and all those juicy links go down.

Second, many graphics devs are on Twitter and often tweet about this stuff with links to papers. Just take a look at the people I follow (note: it seems you need to be logged in to Twitter to see it). Many of them announce themselves as Graphics Engineers of X company, or as working for AMD/NVIDIA/Intel, etc. You don’t need to be a genius to spot them. If in doubt, take a quick glance at their timeline.

Also, many whitepapers and presentations include the author’s Twitter handle at the end. And don’t forget to check out the bibliography at the end. One thing leads to another.

Third, aside from learning by trial and error, my main sources of learning material have been:

 

Very technical about GPUs.

AMD has made their docs public. So public that there are Open Source drivers for Linux. Intel does the same, and even pours money into Linux driver development. Even better, now with Vulkan several vendors seem to be publishing PDFs about how to better optimize for their hardware, which contain juicy stuff.

 

I’m missing a lot of links. I can’t include them all. But bear in mind that some of these links contain massive lists of links to more blogs and resources. And I’m sorry if I didn’t include your blog in this list; it probably deserves to be here.

By now, you have a lot of resources to start digging. I can’t do all the work for you. You have to do it on your own. I’m only giving you a bootstrap.

If you don’t like this post, then go visit Stephanie Hurlburt’s posts. She has a different approach to learning that bores me (sorry, Steph!). I’m more of the learn-on-your-own kind of guy; she prefers mentoring. But not everyone is the same. She also keeps a Twitter feed full of links to more learning resources.

 

Cheers!


5 thoughts on “Where do I start graphics programming?”

  • Gabriel Konat

    I think this post is a horrible answer to the question “Where do I start graphics programming?”. While trial and error is a very powerful way to learn things, and in the early days of graphics programming was probably required due to lack of literature and learning sources, it is also very inefficient. Many people have done the trial and error for you, and have written books or other learning sources, and this is where you should _start_. After getting started, I think it is fine to either dive into more advanced literature or do some trial and error if you prefer that.

    If I had to point to a single resource as an answer to “Where do I start graphics programming?”, it would be https://learnopengl.com/. Actually doing graphics programming is a great way to learn (as opposed to just reading), and this site keeps the ‘error’ part of trial and error significantly lower than doing random stuff or reading random blog posts. While it teaches you graphics programming with OpenGL in C++, you can easily apply the learned techniques to other graphics APIs or programming languages as well. Also, if you’re not comfortable with programming (in C++) yet, I suggest you try to learn at least the basics of that first.

    • Matias Post author

      I just want to say that back then I had a lot of resource material to read from. I had the Glide2x & Glide3x documentation manuals (which were like 1000 pages each), and the DirectX SDK docs have always been phenomenal, with great samples.

      Yeah, there was no Youtube back then, and there was no free Unity & UE4. Today it is even easier to get started making games.

      But my point is that knowing about something is one thing, and really being aware of that something is another. Back to the __restrict example: one thing is to read about it (which may not make a lot of sense; or even if you understand it very well, you may just think “ok, I’ll keep it in mind” and then forget), another is to have it engraved into your mind because of a past experience. I take aliasing very seriously.
      Experimenting and trial and error give you the understanding you need to be more efficient. For example, back when I made a mistake (http://www.ogre3d.org/forums/viewtopic.php?f=25&t=85844#p536250) I knew exactly what went wrong within 30 seconds, something that could’ve taken many other programmers a lot of time. The problem? GCC was removing all of my code because, due to a mistake I made, it thought the code had no side effects.
      Granted, I screwed up; but it takes many years of trial and error to reach the point where you just look at the code and know what’s wrong right away. That’s not something acquired by just reading.
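      A hypothetical C reduction of that kind of mistake (the names are my own invention; the actual code was different):

```c
/* At -O2, the compiler is free to delete the call in benchmark_broken()
 * entirely: the result is never observed, so the whole computation
 * counts as having no side effects. */
static float sum_squares( const float *data, int count )
{
    float acc = 0.0f;
    for( int i = 0; i < count; ++i )
        acc += data[i] * data[i];
    return acc;
}

void benchmark_broken( const float *data, int count )
{
    sum_squares( data, count ); /* return value ignored: dead code */
}

/* Storing the result into a volatile sink is an observable side effect,
 * so the optimizer must keep the computation. */
volatile float g_sink;

void benchmark_fixed( const float *data, int count )
{
    g_sink = sum_squares( data, count );
}
```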

      Also I don’t want to give the impression I don’t read. I do *tons* of reading. Whenever I’m not programming or designing, I’m getting up to speed with the latest documentation, latest specs, newest GDC & SIGGRAPH presentations, etc.

      learnopengl.com looks really good, but my main concern about it is that it doesn’t teach AZDO way of doing things, or how to write an efficient graphics engine.

  • Luciano

    Hi, I found this article very “juicy”! I am self-taught too.
    I always try to understand how things work by going low level. I can’t believe that the fastest way to loop is still the “for”, even today! Anyway, thanks for sharing.

Comments are closed.