The Strange Evolution of OpenGL Part 2

By Shamus Posted Sunday Apr 5, 2015

Filed under: Programming 38 comments

In case you missed the first entry: We’re here to talk about how OpenGL has changed and why that’s important[1]. The odd colorful screenshots are from one of my many half-baked OpenGL-based engine prototypes, presented here simply to break up the monotony of the words.

Before we can talk about where OpenGL went, we have to talk about where it started. So let’s talk about how rendering works on a fundamental level.

The Basics

It’s a good thing Pythagoras is dead, or he’d be insufferably smug right now.

Our videogames are based on triangles. Everything is triangles. Even cube-based Minecraft is made by creating rectangles from pairs of triangles. Even text and icons on-screen are made by putting pictures of words and symbols onto triangle pairs.

I suppose there are a few other ways our graphics technology might have developed if history had played out just a little differently. We might have wound up with voxels, for example. But triangles were always a likely path for us to take.

Why triangles and not rectangles? Because triangles are mathematically more fundamental than rectangles. You can make rectangles[2] from triangles, but you can’t make triangles from rectangles. Computers hate ambiguity, and there’s a certain ambiguity to rendering with rectangles. Like your geometry teacher was busy telling you while you were drawing Power Rangers in your notebook, “3 points form a plane.” More informally, a 3-legged stool is inherently stable but a 4-legged stool might wobble. That wobble introduces a certain ambiguity. If you try to draw a rectangle and all 4 points aren’t on the same plane, that wobble needs to be resolved one way or another before the computer can begin drawing. And it turns out that the solution to that problem involves breaking the rectangle… into triangles.
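To make that concrete, here’s a rough sketch in plain C of the usual split, with made-up names and coordinates purely for illustration:

/* Four corners of a rectangle, listed in order around its edge. */
float quad[4][3] = {
    {0.0f, 0.0f, 0.0f},   /* corner 0 */
    {1.0f, 0.0f, 0.0f},   /* corner 1 */
    {1.0f, 1.0f, 0.0f},   /* corner 2 */
    {0.0f, 1.0f, 0.0f},   /* corner 3 */
};

/* The standard split along one diagonal: two triangles that share
   corners 0 and 2. Even if corner 3 drifts off the plane of the other
   three, each triangle stays perfectly flat; the "wobble" just turns
   into a visible crease along that shared diagonal. */
int triangle_a[3] = {0, 1, 2};
int triangle_b[3] = {0, 2, 3};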

The point is: It’s triangles all the way down.

3D rendering today consists of taking a bunch of 3D data and mushing it down to the 2D plane of your screen. From the Minecraft image above, you can see a cube is made from rectangles and rectangles are made from triangles. So while your brain uses its magical perspective detection to see the 3D world, to the computer it’s just a big pile of triangles sitting next to each other, with no more meaning than this lone triangle:
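If you want a feel for what that “mushing” actually amounts to, here’s a bare-bones sketch of a perspective projection. This isn’t what OpenGL literally does internally (it runs everything through 4x4 matrices), but it’s the core idea. The function name and the focal length constant are made up for illustration, and it assumes the camera sits at the origin looking down the Z axis:

/* Squash a 3D point onto a 2D screen plane by dividing by distance.
   Points that are farther away (bigger z) land closer to the center
   of the screen, which is what makes far-away things look smaller. */
void project_point (float x, float y, float z,
                    float *screen_x, float *screen_y)
{
    float focal_length = 1.0f;  /* fiddle with this to change the field of view */
    *screen_x = (x / z) * focal_length;
    *screen_y = (y / z) * focal_length;
}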

For some reason nobody is interested in my design for a graphics card that’s based on rendering with tetradecagons.

So whenever we draw polygons, we need to specify points in groups of three. If you’re a hardcore trigonometry badass[3] then I guess you can do it all yourself. Just draw nothing but 2D triangles. But over the years we’ve invented a bunch of techniques to do this for you, and the graphics hardware has been specially designed to do that sort of work really efficiently.

So you give OpenGL 3 points. Assuming those points wind up on the screen[4] then we get a triangle. Once the triangle is calculated, the graphics hardware fills in the space with pixels. This is called rasterization.
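If you’re curious what “fills in the space with pixels” means under the hood, here’s a toy version of the test a rasterizer can use to decide whether a given pixel lands inside a triangle. Real hardware does this massively in parallel with a pile of clever shortcuts; this is just the idea, with made-up names:

/* Which side of the edge running from (ax,ay) to (bx,by) is the point
   (px,py) on? Positive on one side, negative on the other, zero when
   the point sits exactly on the edge. */
float edge_side (float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/* A pixel belongs to the triangle if it's on the inside of all three
   edges. (This assumes the corners are listed counter-clockwise as
   seen on screen.) */
int pixel_in_triangle (float px, float py,
                       float x0, float y0,
                       float x1, float y1,
                       float x2, float y2)
{
    return edge_side (x0, y0, x1, y1, px, py) >= 0.0f &&
           edge_side (x1, y1, x2, y2, px, py) >= 0.0f &&
           edge_side (x2, y2, x0, y0, px, py) >= 0.0f;
}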

We don’t generally want to fill those pixels in with a solid color. I mean, you can, but you wind up with something like this:

I don’t want to come off like some kind of graphics snob, but this probably isn’t good enough to ship.

So while you’re defining the positions of your vertices, you can also give each one a color.

Here is how to meet the public’s insatiable demand for red/green/blue triangles. You’re welcome.

That’s nice. Gradient colors are much better than flat colors. It’s enough to give you something that looks like Race the Sun. But if you’re not going for a minimalist style like that then you probably want to use a texture map.

I imagine most people understand what a texture map is, even if they don’t get how it works. But for the sake of completeness: A texture map is when you take an image and use it to color your triangles like so:

I wish I’d drawn this diagram a bit differently. Imagine the Mona Lisa not once, but as an infinite plane of that face repeating over and over like endless tiling wallpaper. Now picture putting the A, B, and C points anywhere you like on that plane. This will, of course, form a triangle. The image within that triangle will be mapped to the shape of the 3D triangle we’re drawing on screen. (Even if they’re wildly different proportions.)

The math to do this is actually pretty straightforward, thanks to the use of triangles.
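For the curious: “the math” mostly boils down to weighted averages. Once the rasterizer knows how strongly a pixel is pulled toward each of the three corners (three weights that add up to 1), every per-vertex value, whether it’s a texture coordinate or a color, gets blended with those same weights. Here’s a rough sketch with made-up names, glossing over the perspective correction a real renderer also applies:

/* Blend one per-vertex value across the face of a triangle.
   w0, w1, w2 are the pixel's weights toward each corner (they sum to 1),
   and v0, v1, v2 are the value being blended at each corner. */
float interpolate (float w0, float w1, float w2,
                   float v0, float v1, float v2)
{
    return w0 * v0 + w1 * v1 + w2 * v2;
}

/* So the spot to sample in the texture for one pixel is just:
     u = interpolate (w0, w1, w2, u0, u1, u2);
     v = interpolate (w0, w1, w2, v0, v1, v2);                  */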

It was common to combine the vertex coloring with texture mapping. Make some corners of the triangle light and some dark, and it will fade between the two. Thus you get “lighting” on your textured walls.

Using nothing but these tools you could easily[5] make an AAA game in the 1990s. I’m pretty sure this is everything you need to make Quake work. (Although Quake did shadows by storing the shadows as texture maps. So it would draw the whole world at full brightness, and then draw over the same polygons again with the shadows, thus making the appropriate areas dark. That’s why the shadows often looked kind of rough and jagged. The shadow textures were very low resolution.)

To be clear: The original Quake was actually a bit too early to ship with OpenGL support, and it wasn’t hardware accelerated at first. It’s complicated. But ignoring the strange way OpenGL was added to the game later, it serves as a really interesting snapshot of the technology of the day.

The big blobby shadows in that corner are probably a couple of black pixels stretched over a couple of meters worth of wall. This image was taken using the much newer GL Quake. Modern graphics cards work pretty hard to smooth these edges out as much as possible. I remember the effect being a lot more obvious and ugly back in the day.

And that’s it. That’s 90% of everything you need to know about how “classic” OpenGL worked.

The code to do this was pretty simple:

//  This will make a triangle shaped like so:
//  3---2
//  |  /
//  | /
//  |/
//  1
glBegin (GL_TRIANGLES);
glVertex3f (0.0, 0.0, 0.0);
glVertex3f (1.0, 1.0, 0.0);
glVertex3f (0.0, 1.0, 0.0);
glEnd ();

That code makes three vertices. Which makes one triangle. All I did was define three positions. Let’s add some color data:

1  glBegin (GL_TRIANGLES);
2  glColor3f (1, 0, 0); //red
3  glVertex3f (0.0, 0.0, 0.0);
4  glColor3f (0, 1, 0); //green
5  glVertex3f (1.0, 1.0, 0.0);
6  glVertex3f (0.0, 1.0, 0.0);
7  glEnd ();

In line 2 I tell OpenGL that I’m setting the color to red. Then in line 3 I give it a vertex. When it’s finally rendered, that vertex will be red. In line 4 I change the color to green. You’ll notice I didn’t set a color for the third vertex. When you set a color, it applies to every vertex from that point on, until you change it to something else.
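Texture coordinates, from the texture mapping discussion earlier, work exactly the same way as colors: they’re just one more piece of state you set before each vertex. Here’s a sketch of a single textured, vertex-lit triangle. It assumes a texture has already been created and bound with glBindTexture, and it isn’t code from any particular game, just an illustration:

glEnable (GL_TEXTURE_2D);
glTexEnvi (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); //texture color times vertex color
glBegin (GL_TRIANGLES);
glColor3f (1.0, 1.0, 1.0); //full brightness
glTexCoord2f (0.0, 0.0);   //which spot on the image this corner gets
glVertex3f (0.0, 0.0, 0.0);
glColor3f (0.3, 0.3, 0.3); //this corner sits in "shadow"
glTexCoord2f (1.0, 1.0);
glVertex3f (1.0, 1.0, 0.0);
glColor3f (1.0, 1.0, 1.0); //back to full brightness
glTexCoord2f (0.0, 1.0);
glVertex3f (0.0, 1.0, 0.0);
glEnd ();

With GL_MODULATE, the per-vertex colors multiply into the texture, which is exactly the “light corners, dark corners” lighting trick described above.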

This probably looks like a very “raw” way of making graphics. Obviously we wouldn’t want to try and construct Lara Croft’s face like this, by manually defining thousands and thousands of triangle positions in code.
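The obvious next step, and a hint at where this series is headed, is to stop typing coordinates by hand and keep them in a big array that a loop (or, eventually, the graphics card itself) can chew through. Something like this sketch, with made-up names, still using the classic calls:

/* Draw a mesh stored as a flat array of vertex positions:
   three floats per vertex, three vertices per triangle. */
void draw_mesh (const float *positions, int vertex_count)
{
    int i;
    glBegin (GL_TRIANGLES);
    for (i = 0; i < vertex_count; i++)
        glVertex3fv (&positions[i * 3]); /* hand over one vertex at a time */
    glEnd ();
}

This still feeds the card one vertex per function call, which turns out to matter a lot.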

Next time we’ll talk about how we got from this raw triangle access in the 90’s to the way we do things now.

 

Footnotes:

[1] To me, anyway.

[2] Or any other 2D polygon, for that matter.

[3] For the purposes of this discussion, this is not an oxymoron.

[4] And not off to one side or behind the camera.

[5] By “easily” I mean: Assuming you’ve got the budget, a skilled team, and you’re one of the few people at the time who understood all this stuff. And assuming you didn’t have anything else to do, I guess.




38 thoughts on “The Strange Evolution of OpenGL Part 2”

  1. Da Mage says:

From my understanding, the whole reason behind moving to OpenGL’s current architecture is that it is much better for massive parallelisation. The old immediate mode drawing like that means the GPU has no idea what is going to happen next, whereas with the new architecture (I personally like the improvements 3.x provided), after the setup, the entire buffer of vertex data can be distributed. It’s really an ongoing battle between ease of use and speed of rendering….and performance normally wins.

    For a research project I had to apply multi-threading to a software renderer and ran into the same problems and ended up with a similar architecture to OpenGL. Basically ended up with a layer on top of the existing code that divided all the rendering up between the 4 or so threads that the CPU could provide.

    EDIT: I also love these graphics based blogs, helped me figure out a few things when I was just getting started. Looking forward to the next one.

    1. bloodsquirrel says:

      I’m pretty sure it has more to do with reducing function calls. Making a function call every time you need to add a vertex, then another for each point of data (color, texture, etc), is horribly inefficient compared to just passing off an array to the graphics card and telling it how to interpret the data.

      The old way was also much less flexible. With the new way, you can send arbitrary data along with your vertices that your shaders can do arbitrary things with.

      1. lethal_guitar says:

        Yes, that is indeed a large factor. The overhead of a function call might seem small, but if you render thousands of polygons, it adds up to a lot. You don’t want to be spending all your CPU time just telling the GPU what to draw.

        Even with the “new” API, you always try to keep the number of draw calls to a minimum, e.g. using techniques like H/W instancing.

        It’s quite old by now, but this presentation has some interesting infos on the problem: http://www.nvidia.com/docs/IO/8228/BatchBatchBatch.pdf

        1. Bryan says:

          I think it’s less “function calls” and more what those function calls actually have to do. Just calling glColor3f isn’t that bad: it’ll set some variable in the context somewhere to the color you give it. Copy three floats; shrug.

Calling glVertex3f, though, on a system with hardware acceleration, is going to either involve sending those same three floats plus the three floats for the vertex coordinates, through the user/kernel interface, and from there to the graphics card. Or it’s going to copy the three floats for the vertex coordinates (plus the current color) to another buffer, waiting for the glEnd call, which will pass them across the user/kernel interface.

          But that mode switch, from user to kernel, is *slow* compared to copying memory around. Modern CPUs make it somewhat faster, but the CPU still has to reload its entire address translation table, fix up the current stack, then run a bunch of kernel-mode code to pull the arguments from wherever they got put, and interpret them. (And in 32-bit mode, it has to do a bunch of steps with segment selectors and interrupt descriptor tables — the way to transition from user to kernel is often via a software interrupt instruction — and whatnot else, too.)

          On the other hand, even that isn’t too bad compared to the performance problems introduced by having enormous chunks of the GPU sit idle. If it can do the required math on thousands of vertices at once (to get them translated, rotated, scaled, and perspective-projected onto the screen), then spoonfeeding it one vertex at a time (or if it happens on glEnd, three vertices at a time; either way) is going to make huge chunks of it sit idle most of the time.

          Instancing is one pretty-good way to fix this if you have access to that extension.

          Note also that the in-OpenGL matrix stack stuff is all done on the CPU. The GPU is really good at doing lots of parallel vector/matrix math, but because the immediate-mode calls have so much state in the matrix stack, it’s really really hard to parallelize them well. This is, I think, the biggest reason that whole stack is gone: it’s extremely convenient, but it’s *very very* hard to make it run stupidly-parallel.

          (The slowness of immediate mode drawing is also actually why I think — though a lot of people don’t seem to agree — that when your scene has mostly static points, it makes a lot more sense to send them down once, loaded into a buffer once and reused for every render call. If objects are moving around a lot then that’s pretty much impossible, but for something like Frontier I think it makes a lot of sense. …As long as the buffer fits in GPU memory of course.)

          1. Bloodsquirrel says:

            Don’t underestimate function calls. Remember, you’re making six of them for every polygon you draw. In an age where you’re drawing 100,000 poly models on the screen that’s a hell of a lot of calls to be used up just to define a character model. Whether you can do it in parallel or not really doesn’t matter much- the calls are still going to be made from CPU space (since it’s your code, not the card’s firmware making them), and what you really want is to push the entire thing off to the GPU in one go.

            1. Bryan says:

              …Yes, but six function calls is *nothing* compared to either three or one user->kernel->user transitions.

              Function calls that only touch userspace memory are *extremely* cheap compared to all the processing that needs to be done to allow the kernel code to run, and to return to a user context after…

              And those 6 calls are even less when compared to the cost of leaving hundreds or thousands of GPU workers idle because the CPU can’t keep it fed with data. Which is the big problem with immediate mode in my opinion (the CPU is sequential by default, as you said).

              1. Alex says:

                Even though you are writing immediate mode code, your OGL driver isn’t doing KM calls or GPU work for each IM call. Instead it’s copying the data you pass to the IM functions into a vertex buffer. Eventually you’ll have given it enough data that it’s worth setting up the HW pipe or you’ll call GL_FINISH or do a present (or a lock). At that point it will submit the work you have queued up to the KM scheduler and the GPU.

The main drawback of IM really is the function call overhead. You spend more time on the CPU building up the data to submit for the draw than the GPU spends executing the draw. There are some other drawbacks, like not being able to use the vertex cache, but you are almost certainly going to be CPU limited in this case, so that stuff won’t matter.

  2. Nicholas Hayes says:

    I think you mean strange, not strage.

    This series is getting interesting

  3. Shamus, the tooltip/hover text for the images are kind of messed up, the ‘ character is shown as & # 8 2 1 7 ; instead.

    1. CJ Kerr says:

      Yeah, WordPress is helpfully html-encoding the html-encoded special characters.

      I wrote a GreaseMonkey / TamperMonkey / Userscript thing to “fix” it on the client side – http://pastebin.com/ZAqUuA2R

      This only addresses the most common 3 characters, because I’m too lazy to work out if there’s some generalisable system for converting html entities to unicode.

      The proper fix would be for Shamus to work out why WordPress is trying to html-encode the title attribute on his images.

  4. CJ Kerr says:

    In case Shamus just missed it last time: I rewrote the footnote Javascript using JQuery. The code is at http://pastebin.com/XyZLWgSK

    Advantages: This version will close the footnote if you click the reference link again. That’s about it, from a user-facing perspective.

  5. V8_Ninja says:

Thanks for making these blog posts, Shamus. They’re making a lot more sense than any of the other stuff I’ve read on OpenGL.

  6. kikito says:

    An elegant design, for a more civilized age.

    I suggest making a 20-sided die in OpenGL at some point during this series.

    1. Asimech says:

      When did Shamus mention programming in Lisp?

  7. Cuthalion says:

    And this is just about as far as I’ve gone in my own game. I also have a fragment shader to do normal-based lighting, and I’ve made some shortcut functions to draw rectangles, circles, clipped stuff, etc. But I have yet to move to the New Ways.

    1. Richard says:

      The New Ways are simultaneously easier to understand, faster, clearer and harder to learn.

      The first thing you must learn is that you do not have one computer on your desk/lap/phone.
      You have two computers, and they work in completely different ways.

      1. AileTheAlien says:

        I haven’t written any of this engine-level stuff since college, and I’m saner for it. I’m only doing basic stuff in 2D/3D, and I still feel the complication. So, I stick with PyGame for 2D, and Panda3D for the…3D. :P

        Trying to make even simple games on my days off is about all I can handle. I can only imagine the budgets needed to try and do anything from scratch. :S

  8. WILL says:

    It had to change though – I wonder if DirectX is more strict with what rendering functions or pipeline you’re allowed to use. OpenGL still supports this primitive way of sending vertices while it really should not.

EDIT: That said, I found a good tutorial for modern OpenGL and managed to make a pretty decent looking and robust deferred rendering engine. When you get your head around Framebuffers and how the new pipeline works and you do a good C++ setup to handle all the objects, you can do some really great stuff.

    1. Nixitur says:

      I disagree.
      Even if you shouldn’t use it in any “serious”, big project, you should still be able to do it precisely for things like this series is doing. For learning.
      If you’re gonna go “Now we’re gonna learn how to draw triangles.” and you first have to overcome the hurdle of framebuffers, pipelines and whatnot before you even see the first triangle, you’re going to put off and annoy a lot of people.
      It’s clear that it’s a bad idea to use it in any large-ish projects, but the new and improved way is so much harder to learn and even understand that I’m pretty sure most people would rather not bother.
      Once you get people drawing triangles, then you can introduce them to the more complicated, but ultimately better way.

      1. Zukhramm says:

        I really think the simplicity of the old pipeline is overestimated by those who already know it. Taking a detour through another hard way of rendering just to get some triangles out, just to throw that knowledge away when moving on seems a waste of time.

        It is true that the overhead before getting to your first triangles is larger in the modern pipeline but in the long run that is fairly minor and a better way around that would be to give learners a little library that they can gradually replace with their own code rather than first teaching them something completely irrelevant.

      2. Bloodsquirrel says:

        A great deal of the difficulty I had in learning OpenGL came from trying to sort out the old way from the new way.

        The new way is more difficult than the old way to learn, but it would still be much easier if it was the only way and you didn’t have to sort out which was the old way and which was the new way.

        The poster above me said you were overestimating the simplicity of the old way, but the bigger problem is that you’re overestimating how easy it is for people who are just starting to learn this stuff to even recognize that there’s an old way and a new way in the first place, and to properly separate the two.

        1. Zukhramm says:

          This too. There are multiple cases that are really hard to distinguish between. You find two pieces of tutorial explaining the same thing and doing it differently. Is this a case of multiple valid ways? Are they different because they’re going to be used differently later in the tutorial? Is one of them outdated? Or is one just plain wrong?

  9. Kian says:

I never worked this way. My first serious foray into OpenGL was with OpenGL 3.2, for which I bought a copy of the OpenGL Super Bible 5th Edition. It is basically an OpenGL tutorial that focuses on the new ways. It’s awesome for learning, it comes with example exercises and shaders and stuff. Internet tutorials are fine most of the time, but it sometimes pays to invest in your learning.

  10. RCN says:

    You know, to this day I’m still scratching my head about what are Voxels.

The Wikipedia entry didn’t help much. It was one of those articles written by coders, for coders, with no intention of letting anyone outside the club understand what was said.

And it is not like voxels really died, they just didn’t survive the requirements of gaming, but other visual programs seem to have thrived with voxels. Any chance of a more in-depth article about them? Because from what I could glean from your other article about voxels, they do seem to have their own advantages, and I’m not entirely sure why gaming couldn’t use both voxels and polygons at the same time.

    1. Da Mage says:

      Here is a brief description of voxels from my point of view.

Voxels are a representation that is more like the real world, each voxel represents an atom and objects are made up of these voxels arranged into a shape. Since we cannot draw voxels which are as small as atoms (since there would be too many) we make them fairly large. Often cubes are used instead of spheres as cubes can fit together nicely without gaps.

      1. RCN says:

        Is the way the water physics is rendered on NVidia’s FleX tech demo a representation of Voxels? (thousands of small-ish spheres being simulated in real time and the engine kinda filling the gaps) Or am I completely off the mark here?

        1. Da Mage says:

Yes, particle effects like water simulation have much more in common with voxels than traditional triangle rendering. They aren’t exactly voxels, but it’s very similar.

          In triangle rendering we define each point of the triangle and then fill in the flat space between those points with colour. In voxel rendering we define points with properties (such as what colour it is) and when all the points are put on the screen it looks like an object.

          1. RCN says:

            It does seem like an awfully large amount of points to keep track of.

            Wait… so, voxels are actually volumetric… does that mean that voxel graphics are actually “filled”? Instead of the way polygons work, where if you look past the surface plane it is just empty and the texture is just one-way?

Because it seems voxels would be awesome for destructible or deformable terrain and buildings. But how do you make a texture with voxels?

            1. Ventus says:

I’m not exactly an expert on all this but people do use voxels in that way for storing the data. It then gets turned into polygons to render as we do normally, but I’m not sure if there’s a difference between “rendering voxels” natively or taking your voxels, turning that data into polygons and then rendering.

              I also don’t know how accurate this is but I always imagined it like this; 2D images are either bitmaps which store the data as points or dots or vectors which are defined mathematically. So voxels are like 3D bitmaps, storing things in points whereas polygons are like vectors which are stored mathematically (I know both use maths but… eugh, I’m dumb okay, I have no idea if this is right or not :P)

As for texturing voxels… I suppose, theoretically, you could have high enough density that if you colour each one differently it will be textured… applying a conventional “texturing” approach would most likely require “polygonalization” as an intermediary step but I dunno.

      2. Jeff R says:

Anyone actually tried splitting the difference between cubes and spheres and going with Weaire–Phelan bubbles?

    2. AileTheAlien says:

      Voxels are 3D pixels.
      That’s it.

The main benefit that I can see, is that you can have data in 3D instead of 2D. So, you can easily map from your 3D object to the 3D pixels, without having to do a bunch of trigonometry. The effort saved on this math isn’t a whole lot, however.* On the other hand, there’s an order of magnitude more voxels in a scene than pixels, so you instantly have much harder requirements for displaying voxels rather than pixels. Hence, they don’t have many practical uses in gaming, where computation budgets are already tight.

      They’re more useful in science, where you often have volumetric data, and lesser requirements for the use/display of said data. i.e. You’re not running AI, or pathfinding, or trying to do shiny particle effects. You’re just dumping the data to the screen.

      * I mean, how often do you think of a game in terms of raw pixels? It’s much more useful to think of objects that have images associated with them.

      1. Paul Spooner says:

        And, of course, you can always write a geometry shader to convert your triangles into voxels.

        1. Groboclown says:

          Back in the bad old days, Voxels were simplified down to make some really fast fill rates. It was essentially just a very smoothed out height map, and the rendering engine specialized in drawing vertical lines of changing colors.

  11. Groboclown says:

The rendering with Quake was much more difficult than this. Way, way, way back in 1996, Michael Abrash published an article in Dr. Dobb’s Journal which was an amazing peek into what kind of issues they were running into.

    I found a copy of that article archived off here:

    http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/ramblings-in-realtime-chapter-1-inside-quake-r981

    Of course, that article isn’t talking about pushing triangles to the screen; it’s about pushing virtual walls to the part that’s looking to push triangles to the screen.

  12. Tim Keating says:

    “Why triangles and not rectangles? Because triangles are mathematically more fundamental than rectangles.”

    I don’t know this for certain, but I always thought triangles were used because any three points in space comprise a valid triangle, and that isn’t true of quads.

    Edit: which is essentially the same thing you said in the latter half of that paragraph. That will teach me not to read all the way through before commenting.

    1. guy says:

      I’m pretty sure the primary reason is because you can make any regular polygon out of triangles. The plane problem could be resolved somewhere in the graphics library, but making a hexagon out of rectangles would be a very involved process.

    2. WJS says:

      Just want to correct that to “Any three distinct points in space form a valid triangle”. If you accidentally have two points at the same place, your triangle is degenerate, and not useful for much.

      1. Daemian Lucifer says:

Well if you want to be super pedantic, then it’s “any three points that aren’t on the same line”. Because if they were on the same line, you would have a line, not a triangle. That stipulation also makes sure that they are distinct, because if two of them were to overlap, then all three would be on the same line.
