Project Unearth Part 3: Relief Picture

By Shamus Posted Monday Jun 30, 2014

Filed under: Programming 40 comments

Normal mapping is the next step. It’s been a mainstay of AAA graphics since 2004[1] and is one of the rare effects that I think justifies the horsepower that goes into it. I’ve never been stunned by depth of field effects, fullscreen anti-aliasing, or a lot of the other fancy-pants effects that required a new graphics card generation just for a bit of “Hm. That’s cool I guess.” visual flair. But normal mapping? Normal mapping is an honestly clever technique that solves all kinds of problems.

In the past I’ve sloppily used the terms “bump map” and “normal map” interchangeably. I’ve always disliked talking about “normal maps” when doing these non-technical writeups because I didn’t want to have to stop every time and explain what a “surface normal” was. Without clarification, the reader is likely to assume a “normal map” has something to do with making things appear normal. Perhaps there are abnormal maps? The term “bump map” is just easier for the reader to grasp[2].

But now we’re working directly with the concept, so after committing years of sloppy terminology abuse we’re going to make an effort to get things right.

I’ve explained normals recently, so go read that if you want the long explanation.

unearth_normal1.jpg

So the problem is that we want worlds with lots of detail. If Gordon Freeman walks up to a wall, we expect the bricks on that wall to look 3D. If a light is shining down the wall, it should strike the tops of the bricks and not the underside. But we don’t want to have our artists build thousands and thousands of bricks just to create a simple room. Even if the graphics hardware can handle drawing them, that’s still not a great use of artist time. And even if we had unlimited artists, it would be incredibly difficult to have each and every room in the game contain Pixar-levels of extreme detail[3] for every element in the scene. Even if development costs and rendering power are infinite, you still have to worry about distributing the game, load times, physics systems, memory usage, and a dozen other things that prevent us from solving every problem with MOAR POLYGONZ!

Here Half-Life 2 shows us what the normal map looks like instead of using it to light the scene. I was surprised at how many objects in the scene aren’t normal mapped, here. (Everything that’s not ghostly blue.) Curious that it seems to be the objects with the most detail that lack normal maps. I suppose this was a limitation of the day.

So what we do is use a special texture called a normal map. It lines up with the texture (the picture of the bricks we want to draw) and describes the shape of our fake bricks. Instead of using the shape of the (perfectly flat) wall to light the texture, we use the normal map.

This should give you an idea of just how big a difference normal maps can make. Check out the keyboard, face, and hands:

On the right is the normal-mapped view like we get in the completed game. Left shows us the “real” polygon structure. Click for Enlargified version.

This is from Doom 3. You can see that in terms of polygon density, we’re not that far ahead of games like Deus Ex. Mitten hands. Triangle noses. Boxy scenery. But because of the normal maps this 2004 game looks closer to 2014 than it does to its quasi-contemporaries of the preceding years.

So we’re using color values to store spatial data. Because of this, normal maps look kind of odd. Each color channel represents an axis. If we think of the normal map as a tile on the floor, then red is west-to-east. So if a pixel is facing west, there’s no red and if it’s facing to the east it has maximum red. The green channel does the same for north and south. The blue axis points out of the texture map, going “up”. This is why normal maps tend to look so blue. A perfectly flat tile would be solid blue, since all the normals would point up.
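
To make that concrete, here’s a stripped-down pixel shader sketch in GLSL. This isn’t my actual shader, and the names (u_normalmap, u_light_dir and so on) are invented for illustration, but the core of the trick is only a couple of lines: unpack the color back into a direction, then do the usual diffuse lighting with that direction instead of the polygon’s real (flat) normal.

    // Minimal sketch, not the real shader from the project.
    uniform sampler2D u_texture;    // the picture of the bricks
    uniform sampler2D u_normalmap;  // the shape of the fake bricks
    uniform vec3 u_light_dir;       // normalized direction toward the light

    varying vec2 v_uv;

    void main ()
    {
      // Colors run 0..1 but normals run -1..1, so unpack. A pixel of
      // (0.5, 0.5, 1.0) in the map means "straight up".
      vec3 normal = normalize (texture2D (u_normalmap, v_uv).rgb * 2.0 - 1.0);

      // Standard diffuse term: how directly is this pixel facing the light?
      float diffuse = max (dot (normal, u_light_dir), 0.0);

      vec4 color = texture2D (u_texture, v_uv);
      gl_FragColor = vec4 (color.rgb * diffuse, color.a);
    }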

unearth_normal3.jpg

Another example, just to beat this point to death. On the left we see the real 3D geometry we’re trying to represent. Our “wall of bricks”, as it were. The middle is the resulting normal map. And on the right is what you get in the game: A flat surface that’s shaded as if it was 3D. I’ll add that I love this particular normal map. Its extreme surfaces, large shapes, and lack of symmetry make it ideal for testing.

One last note before I get started is that Michael Abrash has said that “normal maps don’t look good” in VR. The whole thrust of the effect is to make a surface look bumpy without having to make the actual bumps, but apparently the illusion is shattered if you’re wearing a headset that gives you stereoscopic vision. Your depth perception kicks in and notices that the surface is actually flat. It goes back to looking like wallpaper, like in the days before 2004. It’s entirely possible that when VR comes we may have to brute-force render those bricks after all. We’ll see what happens as VR matures.

(I’ve ordered an Oculus Dev kit 2. They start shipping in August. I have no idea when I should expect mine.)

But that’s a worry for another day. For now we’re just getting normal mapping working.

The first problem we run into is that we don’t just want to use normal maps on the floor. Above I mentioned that the normal map behaves like a tile on the floor. Unfortunately, this is true even when it’s applied to the walls and ceiling.

Obviously the lighting on the walls makes no sense, but also note how our outward-facing “bumps” have been inverted to depressions on the ceiling. The normal map malfunctions everywhere that isn’t a perfectly flat floor.

The bumps continue pointing up, even when they should be pointing sideways. Now, the obvious thing to do would be to rotate the normals. If we’re doing a wall, then before we do our lighting calculations we can just rotate the normal value 90 degrees and the values will then behave like a wall. This works just fine, but is slow. It means we have to do complex re-orientation of the normal values for every single pixel we draw.

You might remember this diagram from a few weeks ago:

freboot_shaders.jpg

For any given rectangle[4] we have 4 vertices and usually several hundred pixels to draw. If we re-orient the normals, it means doing the same rotation on all those hundreds of pixels. The faster (although somewhat more convoluted) system is to rotate the light itself in the vertex shader. If we’re dealing with a wall, then instead of turning the normals 90 degrees we turn the light 90 degrees in the opposite direction. Then the pixel shader is set up to just treat every single polygon as if it was the floor.
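
Here’s a rough GLSL sketch of that vertex-shader trick. Again, this isn’t lifted from my code and the attribute names are made up. It’s written generically with a per-vertex tangent; in an axis-aligned block world the normal and tangent are just the fixed world axes of whichever of the six directions the face points.

    // Vertex shader sketch: rotate the light into the surface's own
    // "floor" space once per vertex, instead of rotating hundreds of
    // per-pixel normals the other way. Names are invented.
    attribute vec3 a_position;
    attribute vec2 a_uv;
    attribute vec3 a_normal;    // out of the surface (the "blue" axis)
    attribute vec3 a_tangent;   // the texture's west-to-east (the "red" axis)

    uniform mat4 u_mvp;
    uniform vec3 u_light_dir;   // world-space direction toward the light

    varying vec2 v_uv;
    varying vec3 v_light_dir;   // light direction in the face's local space

    void main ()
    {
      // Build the face's local axes. For a floor tile this is the identity;
      // for a wall it's the 90-degree turn described above.
      vec3 bitangent = cross (a_normal, a_tangent);
      mat3 to_world = mat3 (a_tangent, bitangent, a_normal);

      // Multiplying on the left is the transpose, which (for these
      // perpendicular unit axes) is the inverse: the "opposite direction"
      // rotation, applied to the light instead of the normals.
      v_light_dir = u_light_dir * to_world;

      v_uv = a_uv;
      gl_Position = u_mvp * vec4 (a_position, 1.0);
    }

The pixel shader from the earlier sketch then uses normalize(v_light_dir) in place of u_light_dir, and never has to know which way the polygon actually faces.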

The result:

unearth_normal5.jpg

It actually wasn’t quite this easy. Since I’m used to working with low-tech rendering systems, I’m usually lazy about my texture mapping. In an old game you might have a regular old brick texture. If you mirrored it, it was no big deal. You could even flip the image vertically. It didn’t matter. It’s just bricks. The only time you had to really care about orientation was when you had textures with words on them.

But orientation becomes very important when you’re using normal maps. If the normal map ends up facing the wrong way, then it can end up lit the wrong way. Details that should poke out (like bricks) become indentations.

I had an embarrassing comedy of errors because I forgot about this. I thought I was done, turned around, and saw one wall was inverted horizontally so that the protruding bubbles you see in these screenshots were catching light from the wrong edge. So I messed around with the math and fixed it. Then I turned around again and saw the walls which were previously fine were now inverted. I went around in circles like this for much longer than I care to admit. It finally dawned on me that the normal mapping had been working just fine the first time. I just had the texture mapped the wrong way around on one of my walls.
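
For completeness: if a surface really does need its texture mirrored, the usual shader-side fix is to flip the normal map’s left-right axis for that surface instead of re-mapping it. A tiny sketch that slots into the pixel shader from earlier; u_uv_mirrored is a hypothetical flag, not something from my code:

    // New per-surface flag, declared alongside the other uniforms
    // in the earlier sketch (hypothetical, for illustration only).
    uniform bool u_uv_mirrored;

    // ...and the unpack step from that sketch becomes:
    vec3 normal = texture2D (u_normalmap, v_uv).rgb * 2.0 - 1.0;
    if (u_uv_mirrored)
      normal.x = -normal.x;   // the map's "east" is backwards on mirrored faces
    normal = normalize (normal);

In my case, of course, the right fix was just to un-mirror the texture.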

unearth_normal6.jpg

I felt like an Olympic athlete that won the gold in jumping hurdles and then tripped six times trying to ascend the podium for my medal. At any rate, let’s celebrate by lighting the scene like we’re game developers in 1997!

unearth_normal7.jpg

So it works. We’ve got a nice system that can handle arbitrary scenery and arbitrary lights and it will light, normal-map, and shade everything correctly. It doesn’t need to be a block world. This ought to work on any kind of scenery I throw at it.

The downside is that it’s shockingly slow. Doom 3 ran just fine on 2004 machines, yet the stuff I’m doing here drops all the way down to 30FPS when we should be hitting twice that with lots of time to spare. We’ll take a look at that next time.

 

Footnotes:

[1] A banner year for technology. Both Half-Life 2 and Doom 3 happened that year. While they weren’t the first games to do it, both were really great showcases for normal-mapping.

[2] Also because I used to get them mixed up all the time. Nothing is better at helping you nail down concepts than having to explain them to someone else.

[3] Actually, a full-poly scene in a videogame would be WORSE than the same scene in a Pixar-type movie. In a movie, the author controls the camera and you can cut corners on the stuff that isn’t viewed up close. In a game, the audience controls the camera so EVERYTHING has to be high detail.

[4] It’s actually a pair of triangles. The GPU can only think in triangles. It has no idea what a rectangle is. Poor thing.




40 thoughts on “Project Unearth Part 3: Relief Picture”

  1. DaMage says:

    hmmmm… there must be some big bottleneck if you are only getting 30FPS. Normal maps don’t use too much processing power.

    Is the bottleneck within the normal map section… or is it with the shadow code, and it has only dropped low enough to notice now that you are doing normal maps?

    Looking really good now though, I’ll be interested to see what you are going to add next.

    1. Shamus says:

      For the record, it’s been low. I doubt normal maps are a part of it. (Spoiler: Actually, I KNOW normal maps weren’t the problem.)

      1. Tom says:

        The real bottleneck here is the pipe where the content slowly drips out :D

  2. Cybron says:

    I love your graphics articles. I’ve never done any graphics stuff and every time you do one I learn so much new stuff. The idea of rotating the light source instead of the normals blew my mind.

  3. AdmiralCheez says:

    I think this article just solved a long-running mystery I’ve had, where I encountered a weird glitch in Skyrim (not surprising). I woke up from sleeping one night at the Bannered Mare, and had this strange symbol ingrained in my retinas. I had no idea what it was at the time but now I think it might have been an escaped normal map. No clue what it goes to, or why it happened, but I know for certain that it was not part of some divine quest giving me visions.

    1. Felblood says:

      It’s a folded shirt; tilt your head to the left, if you don’t see it…

      This map is used on a number of the worthless changes of clothes you can find, when they are dropped on the ground.

      1. Tuck says:

        Or apparently when they’re left lying over your head while you sleep.

  4. Neko says:

    Glad you sorted out that normal map problem in the end – I’m sure that was a relief.

    1. Paul Spooner says:

      Yeah, and don’t be embosrassed by the setbacks.

      1. bucaneer says:

        Not much of a setback really, just a bump on the road.

        1. Alexi says:

          More like a bump in the wall if you will.

          I guess these goofs are expected and pretty much a given as we learn and practice new techniques and technologies. Given its inevitability, you could say that this was a normal bump in the road…

          [/ducks]

        2. AtomF says:

          It’s a normal aspect of game development.

  5. Daniel says:

    Since this project is ultimately about learning, you should look into parallax mapping. The technique uses a height map to warp the texture depending on the viewing angle. Even though the effect stops working at acute angles, it would probably work in VR without resorting to the “MOAR POLYGONZ!” approach.

    1. Chris says:

      I’d imagine that parallax mapping is subject to the same “stereoscopic vision ruins the illusion” issue that plagues normal maps. The only difference between them is that instead of being an illusion of depth achieved by faking light, it’s an illusion of depth achieved by faking the movement of three dimensional objects. Maybe it would work for small holes and craters, since they can be black on the inside and looking at the wall askew wouldn’t highlight the lack of true 3D? But for bumps I think they’d be generally right out. The last time I even remember this technique being used in games was in the piles of bricks or rocks in Fallout (though perhaps some game devs have gotten proficient enough at using it that I don’t notice it anymore).

      1. Geebs says:

        Given the fact that even good implementations of parallax mapping suffer from “swim” when you get close to them, I’d imagine parallax mapping would be a one way ticket to Pukesville in VR.

        BTW with modern hardware you don’t really pay too much of your budget for doing normal mapping in tangent space. I’ve been using this technique and my current graphics card is so delighted about the savings on bandwidth from not having to pass all of those tangents and bitangents that it barely notices the extra work.

      2. Wikipedia shows two red spheres here http://en.wikipedia.org/wiki/Bump_mapping
        The one on the right is called an isosurface http://en.wikipedia.org/wiki/Isosurface
        It actually modifies the object (rather than using “trick” lighting, etc.)
        More info http://www.imm.dtu.dk/~janba/gallery/polygonization.html

        I guess this becomes a form of tessellation (although I suspect that tessellation could be applied in addition to this).

        No idea how processing-intensive isosurfaces would be.

        Perhaps the key is to calculate things not exactly in realtime, aka JIT (Just In Time). *scratches head* Not sure how that would affect framerate stability though (which is important for VR).

      3. Zak McKracken says:

        I’d say that normal mapping plus parallax mapping should hold up longer than just normal mapping, though as soon as some bump would affect the silhouette of the object it’s on, it will be over.
        That said, it still seems like a nice way of handling complicated geometry at a distance. With large spaces, you probably always need a LoD system of sorts, so why not use it for intermediate distances and replace with actual geometry if the camera comes too close?

      4. Zukhramm says:

        What is “true 3D” though? Parallax mapping shows different parts of an object depending on the angle you look at it from, as does using a polygon model. Outside of the edges of the surface they should produce the same result at most angles.

        1. The Snide Sniper says:

          It depends on whether you do plain parallax mapping (a cheap effect, but which doesn’t properly handle complicated geometry), or parallax occlusion mapping, which is a full raycast onto a heightmap (and thus is truly 3D).

  6. Paul Spooner says:

    That “result” picture looks pretty swanky.

    So, you talk a lot about how it would take artists forever to make scenes with actual geometry… Any chance you can add geometry parametrically? As a bonus, bake the geometry into the texture on the fly? Then, have the geometry sink into the surface like the grass example you did a few articles ago. That way, you can get the “actual bricks” effect close up with polys, and far away with bump maps. Should work for VR as well, since parallax doesn’t work a long ways off.

    Here’s some parametric brick generating code, if you need some inspiration. http://www.peripheralarbor.com/gallery/v/CG+Art/scripts/AutoMason/

    1. So, there are two sets of normal maps used in real games:

      The first kind is used for adding detail to textures, like the aforementioned brick texture. These have to be created by artists (there are tools like “ShaderMap” which can help with this, by “extracting depth clues” from the image). This is generally all fakery, but when you’re viewing the image in 2D, it works and can be really convincing.

      The second kind is used for adding detail to models. This is what you can see happening with the keyboard and hand in the Doom 3 model. The artist creates a highly detailed model of the person, and a simplified version derived from the same model. They then use a function of the modeling software to extract a normal map – it compares the differences between the detailed model and the simplified one, and generates a normal map texture which “corrects” for this (to a degree).

      Now, “adding additional geometry” – you want displacement maps (which are generally black and white, and are basically just “add height here”). You can derive the normal map from a displacement map (or, you can generate the same effect from a displacement map, it just requires more computation), but the benefit of displacement mapping is that you can also use it to generate additional geometry. You use a tessellation shader to dynamically tessellate the surface, and then the displacement map to “push and pull” the freshly generated polygons – and all of this is done /by the graphics card as it’s generating the frame/. Which is cool, because you can dynamically vary the number of polygons rendered precisely based upon how far away the object is from the camera.

  7. Erik says:

    Small bug, I guess:

    The annotations (the little [1], [2] etc.) don’t work when the post is not fully expanded.

    1. McNutcase says:

      Known issue with WordPress. Hover the pointer over them on the front page, and the text will pop up. On individual post pages, the click-to-read is fine.

  8. Mephane says:

    Regarding the issue with VR, shouldn’t it be possible to approximate the original geometry again from just the normal map, and then create the surface geometry of, as in your initial example, a brick wall during the rendering process? Or is the performance cost of such an approach still too prohibitive?

      1. Ah, micropolygons, yeah, those might be a better alternative to isosurfaces.

        Both displacement maps (micropolygons) and isosurfaces (voxels?) cause a form of tessellation or object manipulation, and thus will look correct for both eyes in stereoscopic 3D/VR.

        For those curious, the object in the world is modified, as opposed to just changing how the camera sees it (normal maps and bump maps, certain types of shadows, etc.).

        I’m not sure if ambient occlusion is also a stereoscopic issue or not.
        I know anti-aliasing “might” be depending on the actual method used.

  9. Piflik says:

    “A perfectly flat tile would be solid blue, since all the normals would point up.”

    Not really blue, since the ‘neutral’ value for red and green is 128. The result is the nice lavender color you see in most normal maps (#8080FF). Also, since the blue channel (or z-coordinate of a normalized vector) is really easy to calculate (at least with Tangent-Space Normalmaps, since there ‘up’ is unambiguous), especially on a GPU, this usually doesn’t contain information about the direction of the normal. It either is completely omitted or used as a way to manipulate the intensity of a normal map (since it only stores direction, this can be used to have variable depth in the normal map).

    Regarding the orientation of the UV-Layout: if you are doing Tangent-Space Normal Maps correctly, this doesn’t matter. You can flip or rotate the UVs and the lighting will be correct, since the derivative of the Texture-Coordinates is used to calculate your tangent-space. When you flip or rotate the UVs, the tangent-space will also be flipped or rotated (you can do this in the fragment or vertex shader or even in the geometry shader, if you are using one).

    Your way of using the normal map as an Object-Space normal map and turning the normals (or light direction… same thing, really) before passing a vertex to the rasterizer seems a bit convoluted, albeit easier than wrapping your head around tangent-space for the first time.

  10. Ilseroth says:

    As always, love the tech articles. I recently began messing with normal maps on a project of mine and the difference was staggering. Granted, it is in Unity so it’s not like I wrote the graphical code for it, just built a model and textured it, but I did end up with the orientation issue in places and it turned out that in some places on my original models the textures were mapped upside down.

    That being said, I decided I kinda wanted a more “toony” art style, and the normal mapping actually seemed to hurt that style, but then my art abilities are… less than staggering.

  11. Steve C says:

    While I was reading I assumed I was about to see a .gif. I find it interesting that the two images that look like this appear to shimmer and move when I’m not looking directly at them. Optical illusion I’m guessing. It was only mildly interesting but omg that would be awful in VR. Imagine everything in your peripheral vision shimmering and moving. puke

  12. Simplex says:

    “I've never been stunned by depth of field effects, fullscreen anti-aliasing”

    For me good FSAA makes a huge difference in image quality, especially in movement. You can see how FSAA works in this video:
    http://www.iryoku.com/smaa/downloads/SMAA-Enhanced-Subpixel-Morphological-Antialiasing.mp4

    Source: http://www.iryoku.com/smaa/#downloads

  13. Derektheviking says:

    I remember on the Half-Life 2 Ep 2 commentary someone talked about how they made a shader to generate shadows from the normal map, which was why the caves looked so good by torchlight.

    Anybody got any ideas how such black magic is possible?

    1. Paul Spooner says:

      Sure! Do a spatial integral on the normal to get surface offset, and then do a per-pixel displacement onto the shadow map. Probably pretty intensive, but after optimization it wouldn’t be much worse than any other pixel shader. I suspect they used a true displacement map to start with though, and then baked the normal map from that. Working backwards with the integral is risky because of the dead-reckoning involved, and could easily produce non-zero loops which would result in discontinuities.
      Not that discontinuities would stand out too bad on a lumpy cave wall.

      1. Derektheviking says:

        Yeah, I would definitely work from the displacement rather than run numerics, especially with the limited resolution of RGB888 values. I guess I was just hoping there might have been a neater trick than yet another texture. But, given how Source is built…

        Thanks for the reply.

        1. Geebs says:

          You can also get steep parallax or relief maps to cast shadows on themselves by ray marching in tangent space from the light direction as well as from the view direction, which is expensive but pretty convincing.

          For examples: http://www.inf.ufrgs.br/~oliveira/RTM.html

  14. TSi says:

    I’m sorry but I didn’t really understand the trick about rotating the light source 90°.
    Isn’t it an omni? Thus no matter the orientation, it will still “shine” in every direction, right?

    It actually looked like you added an ambient (global?) light on the fixed screenshot, or at least increased the ambient light value.

    1. Piflik says:

      Basic shading is done via dot(n,l), where n is the normal and l is the direction to the light source (both must be normalized for correct results). He doesn’t rotate the light source, but the normalized vector from the surface to the light source (at least that’s how I understood it).

      1. WJS says:

        In other words, imagine rotating the light not about itself, but about the point on the surface we’re calculating?

  15. shahar says:

    In later Valve games they introduced a slightly more advanced method of normal mapping – http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_EfficientSelfShadowedRadiosityNormalMapping.pdf

    It requires slightly more pre-processing in the texture than just determining the normal angle, but gave the surfaces a much more realistic feel, with minor surface bumps actually casting believable shadows.

  16. Neil Roy says:

    Very nice. I loved your 1997 comment, gave me a chuckle. It did seem like they went out of their way to use the primary colours for lighting scenes all at once, didn’t it? ;)

  17. Neil Roy says:

    I am also wondering, if it could be possible to have two normal maps for stereo vision, one for the left and one for the right view? It would take more memory, but that doesn’t seem to be as much of a problem these days (nVidia will have 8gigs of memory on their next 880 card).
