Frontier Rebooted Part 2: Welcome to Orientation

By Shamus Posted Sunday May 11, 2014

Filed under: Programming 77 comments

This first part isn’t important to the project. But we’re talking about it anyway basically because I want to.

Obviously in 3D space, the concept of which way is up or forward is completely arbitrary. We’ve got three axes, one for each spatial dimension, universally named X, Y, and Z. You can arrange these any way you like. If you want, X can be down, Y can be forward, and Z left. If we’re looking to assign an axis to each of the directions left-right, back-forward, and up-down, then we can do it six different ways: XYZ, XZY, YZX, YXZ, ZXY, or ZYX.

Furthermore, we can change the orientation of any of these axes, so if we chose XYZ, we could have positive X values go east and negative values go west, or we can flip that around and have the axis point the other way. So there are six ways to organize our axes, and within each of those there are eight different combinations of which way they point.
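
If you want to sanity-check that count (6 orderings × 8 sign choices = 48 possible systems), a few lines of Python will enumerate them:

```python
from itertools import permutations, product

# Enumerate every way to assign X/Y/Z to (left-right, back-forward, up-down)
# and pick a direction (+ or -) for each axis.
systems = [
    (order, signs)
    for order in permutations("XYZ")          # 6 axis orderings
    for signs in product((+1, -1), repeat=3)  # 8 sign choices per ordering
]
print(len(systems))  # 48 distinct coordinate systems
```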

In a pure mathematical sense, none of this matters. It’s all arbitrary. Instead of XYZ you can name your axes HUMPERDINK, SNAGGLETOOTH, and CARROTJUICE. They can be in any order and point any way you like. The math will all work out. But from a practical standpoint, we’ve basically settled on some conventions, and you shouldn’t break from those unless your plan is to drive people crazy.

In the end, any of these coordinate systems will fall into one of two groups: right-handed or left-handed systems:

NERD GANG SIGNS. From <a href="http://en.wikipedia.org/wiki/File:3D_Cartesian_Coodinate_Handedness.jpg">Wikipedia</a>. Relevant article here: <a href="http://en.wikipedia.org/wiki/Cartesian_coordinate_system#Orientation_and_handedness" title="Cartesian coordinate system">Cartesian coordinate system</a>.

Take a right-handed coordinate system and flip one axis, and you have a left-handed system. Flip another, and you’re back to a right-handed one.
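
One way to check handedness (and see that flipping rule in action) is the scalar triple product: x · (y × z) is positive for a right-handed basis and negative for a left-handed one. A small Python sketch, with made-up helper names:

```python
# Handedness check via the scalar triple product x . (y x z).
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(p*q for p, q in zip(a, b))

def handedness(x, y, z):
    return "right" if dot(x, cross(y, z)) > 0 else "left"

x, y, z = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(handedness(x, y, z))                     # right
print(handedness((-1, 0, 0), y, z))            # flip one axis: left
print(handedness((-1, 0, 0), (0, -1, 0), z))   # flip another: right again
```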

Fine so far? No? Sorry. I tried.

By far the most popular coordinate system (or at least, the one I’ve encountered the most) is one where X points east, Y points up, and Z points south. This is a right-handed system. This is used by Oculus, all id Software games, and my former employer, and I’m sure a lot of other games out there. We’ll call this system Doom-space. Both Unity and Unreal Engine use left-handed systems, although I can’t remember how they arrange their axes off the top of my head. Personally, I’ve favored a system where X points east, Y points north, and Z points up. This works out to be a left-handed system. We’ll call this system Shamus-space, since in Euclidean geometry my ego is unbounded.

For the past few years I’ve favored Shamus-space because to move from 3D to 2D (let’s say we want to depict where the player is on an overhead world map) all you need to do is throw away the Z value. If you’re using Doom-space, then to make the transition you throw away Y, invert Z, and re-assign it to Y[1]. And that’s a lot more cumbersome and prone to mistakes. On the other hand[2], OpenGL defaults to a right-handed system, so to use a left-handed system you’ve gotta flip an axis. So it always feels like you’re at odds with the underlying system.
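
For concreteness, here is a tiny Python sketch of both projections. The function names are made up, and it assumes Doom-space as described above (north along -Z) with the overhead map drawn with X running right and Y running up:

```python
# Project a 3D position onto an overhead 2D map (X right, Y up on the page).

# In "Shamus-space" (X east, Y north, Z up) you just drop Z.
def shamus_to_map(x, y, z):
    return (x, y)

# In "Doom-space" (X east, Y up, Z south) you drop Y and negate Z
# (north is -Z) to recover the map's northward Y.
def doom_to_map(x, y, z):
    return (x, -z)

# The same point expressed in each system: 3 east, 4 north, 5 up.
print(shamus_to_map(3, 4, 5))  # (3, 4)
print(doom_to_map(3, 5, -4))   # (3, 4)
```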

Sticking to my unconventional system has its costs. Whenever I check out code snippets or example programs from other coders, I always have to juggle everything around to make it work in Shamus-space, since Doom-space is so much more common[3]. In the long run, I think this cost is probably more severe than the occasional annoyance of not having it feel intuitive to me. So for this project we’re going to use Doom-space.

Also, I am sort of planning ahead. I don’t have an Oculus yet, but I plan to get one someday and I want to be able to re-use this code when that happens. Oculus provides you with all of the headset position and rotation info in Doom-space, and I do NOT want to have to constantly convert between the two. Eugh.

EDIT: And it looks like I inverted the north/south axis in my above description. I’m not going to fix it, because this is a great example of the kind of confusion I keep running into.

Anyway. Let’s get this started. I don’t want to belabor the first steps of setting up heightmap terrain. I’ve already done three projects that involve heightmaps. Let’s skip the heightmap stuff and get to the shader work.

In the old days, we would begin with a flat grid:

Taken from <a href="?p=141">this ancient post</a>.

And then we would lift the points up to create hills:

terrain2.jpg

That hill-building would be done by the CPU. We’d build all these polygons and then send them off to be rendered. But here we can skip that step and just dump all that work onto the GPU[4]. To do that we use a shader, which is a program that runs on your graphics card instead of on your CPU like all your other software. We create a shader, compile it, and then send it over to the graphics card to be used. That program will control how polygons are rendered, and can do all sorts of nifty things without troubling our poor overworked CPU.

When we’re using shaders we just render the original flat plane, and provide the shader with an extra bit of info: A texture image like this one:

freboot4.jpg

We use the color values as elevation. So, basically we’re looking at a map of the world, and lighter = higher. The white spots will be mountain tops and black spots the low points. This is just to get us going. Eventually we’ll generate our terrain procedurally, but for now this is a quick way to get some polygons to work with. So as the flat plane is being rendered, the shader is looking at this texture, pulling out a color value, and quickly lifting up the vertex before proceeding.
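
Here is a CPU-side Python sketch of that lookup, with a made-up 4×4 “image” and a made-up height scale, just to show the idea:

```python
# A CPU-side sketch of what the vertex shader does: sample a grayscale
# heightmap and lift each vertex of a flat grid by that value.
# The 4x4 "image" and the height scale are placeholder values.
HEIGHTMAP = [
    [0,   32,  64,  32],
    [32,  96, 128,  64],
    [64, 128, 255,  96],
    [32,  64,  96,  32],
]
MAX_HEIGHT = 100.0  # world-space height of a pure white (255) texel

def lift(grid_x, grid_z):
    """Return a vertex position with its elevation pulled from the texture."""
    shade = HEIGHTMAP[grid_z][grid_x]  # 0..255, lighter = higher
    return (grid_x, shade / 255.0 * MAX_HEIGHT, grid_z)  # Y is up (Doom-space)

print(lift(2, 2))  # the white texel becomes the mountain top: (2, 100.0, 2)
```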

Just so I can see what I’m doing, I have it color the terrain based on height. This forms some arbitrary strata. Again, this is just to get us going so we’re not looking at a field of solid color.

freboot2.jpg

You’ll note that the world is rendered without any shading. There’s no lighting, no shadowing, nothing to give us a sense of contour. If not for my half-assed coloring, the terrain would be a single flat color. This is because, for the purposes of lighting, it’s still rendering a perfectly flat plane. We have the information to deform the plane to make hills, but we don’t have the information to know the angle of any particular point on the surface when we’re drawing it. If we don’t know the angle, then we don’t know how light will interact with it, which means we can’t shade it. For that we need a normal map.
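
To make that “angle” idea concrete: the slope at a point depends on its neighbors, which is exactly the information a lone vertex doesn’t have. One common way to build a normal is from the height differences to the east/west and north/south neighbors. This is just an illustrative Python sketch, not necessarily the approach we’ll use next time:

```python
import math

# Build a unit-length surface normal from four neighboring height samples.
# Assumes Doom-space: X east, Y up, Z south (so +Z neighbors are southward).
def normal_from_heights(h_west, h_east, h_north, h_south, spacing=1.0):
    nx = (h_west - h_east) / (2.0 * spacing)    # negative slope along X
    nz = (h_north - h_south) / (2.0 * spacing)  # negative slope along Z
    ny = 1.0
    length = math.sqrt(nx*nx + ny*ny + nz*nz)
    return (nx / length, ny / length, nz / length)

print(normal_from_heights(5, 5, 5, 5))  # flat ground points straight up: (0.0, 1.0, 0.0)
```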

We’ll do that next time.

 

Footnotes:

[1] This is assuming you use the default OpenGL 2D mapping where X runs left-to-right and Y runs bottom-to-top, which is the ACTUAL source of all this chaos.

[2] Literally!

[3] If not more common in practice, then at least more common among the types of hobby-coders who share their work online.

[4] Graphics Processing Unit. Literally: Your graphics card. Don’t let the name mislead you. It’s not a single processor, but many. (Many many.)




77 thoughts on “Frontier Rebooted Part 2: Welcome to Orientation”

  1. sensi277 says:

    Isn’t it funny how all programming eventually loops back and starts from the beginning again? It’s like that whenever you start a new project, having to re-write the simple code that makes the program run. And every new language you learn will have you printing “Hello world!” once more. Life is like that. Old things come back and change into new things, and everything you thought was in the past will always be part of the future.

    Whoa, that’s deep :3

    1. morpork says:

      Your words may be deep z-axis wise but how can we tell if it DOES NOT HAVE A NORMAL MAP? Checkmate.

  2. Rflrob says:

    “Personally, I've favored a system where X points east, Y points north, and Z points up. This works out to be a left-handed system.”

    Isn’t that a right handed system too? It seems to me that it’s the same as doom space rotated 90° about X.

    1. psivamp says:

      That’s what I was thinking — having extensive experience with right-handed rules from being an electrician (apparently there are also heretical electricians who use left-handed rules) — I immediately started mapping it out on my hand, and it seems to be a right-handed rule. Not only a right-handed rule, but the second-most common set I’ve seen after the first right-handed rule mentioned.

      1. Bryan says:

        First arg to the cross product is the first finger, second arg is the second finger, and the result is the thumb. (And z=x-cross-y.)

        So that’s a right handed system, yeah. So is Doom-space. Not sure on Unreal and Unity spaces, though.

      2. NA says:

        You are right, but if you read on you’ll find that the typo must have been that “y points SOUTH”, not north – and that would make it a left-handed system.

        This part here: “because to move from 3D to 2D (let's say we want to depict where the player is on an overhead world map) all you need to do is throw away the Z value”. If you have your origin at the top left corner, then that means that y points south. Of course some systems (e.g. OpenGL) have their origin in the bottom left corner, so who knows!?

    2. Decius says:

      Yeah, my thumb points east, index finger north, and second finger up.

      North-east-up would be left-handed.

      1. SteveDJ says:

        But if I turn my desk around, my thumb now points west… :D

        […runs and hides]

    3. BlackFox says:

      Agreed–the hand position then would be palm up, thumb facing right (pointer forward/away from you, rest of hand up). Admittedly I am not the strongest on east vs west though. Since Shamus seems to have been having to reverse for years, I suspect he just derped up east and west as well, and meant X positive is west.

      1. Paul Spooner says:

        Yep! Sun rises in the west? Sounds about right! Either that, or north and south… or… up and down?
        EDIT: Aaaaaand lampshaded four paragraphs later.

    4. silver Harloe says:

      Map X+ to your thumb
      Map Y+ to your index finger

      bend your middle finger so it’s normal to the plane defined by your thumb and index finger.

      Now, in order for X+ to be your thumb and “pointed east”, your thumb has to point to the right. If you point both your thumbs to the right, then on your right hand, the middle finger is pointed at you; on the left, the middle finger is pointed away (the palm of your right hand faces you, the palm of your left hand faces away) – this is dictated entirely by “X+ = thumb and to the right”

      1. silver Harloe says:

        In math class, I was always taught to think of (0,0) [ (x,y) ] as the center, with + going right on X and up on Y. This is the basis of the XY/left|right debate.

        The contention for the “other side” is that in *screen coordinates*, (0,0) is the top left, so X+ goes right still, but Y+ goes DOWN (from the top to the bottom of the screen)

        This is equivalent to bending your left hand so the thumb still points to your right, but the index finger points down, and the middle finger points towards yourself.
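
        That flip is a one-liner. A quick sketch (hypothetical helper names, assuming integer pixel coordinates and a viewport height in pixels):

```python
# Convert math-style coordinates (origin bottom-left, Y+ up) to
# screen-style coordinates (origin top-left, Y+ down): flip Y
# against the viewport height.
def math_to_screen(x, y, screen_height):
    return (x, screen_height - y)

print(math_to_screen(10, 30, 480))  # (10, 450)
```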

        1. Erik says:

          And that’s exactly the source of the difference. If the initial engine designer was mostly a maps guy, or some types of math guy, you get a Shamus-space left-handed engine. If the initial designer was a graphics or image processing guy, like Carmack of id, you get your Doom-space right-handed engine.

          Since it’s an arbitrary decision, it all ends up at whatever made sense to the guy who wrote the initial prototype. After that, no one will ever take the hassle of rewriting all the code in order to change it. :)

  3. Bryan says:

    HUMPERDINK HUMPERDINK HUMPERDINK!

    Have fun storming the castle!

    Er, uh… yeah.

    1. Paul Spooner says:

      Please don’t label all three axes using the same variable name. This gets even worse when coding in a Unicode compliant environment which doesn’t normalize code handles… where such a thing will actually compile… The nightmare… (shudder)

      1. Zukhramm says:

        To be as general as possible, the axes should clearly be named x with subscripts 1 to n for n-dimensional space.

      2. Trix2000 says:

        Obviously the correct solution would be to change the case – something like HUMPERDINK, HuMpErDiNk, humperDINK. That is, assuming the language differentiates variables by case.

        …I’m a terrible person.

        1. Bryan says:

          No, no. You use Cyrillic letters, which look exactly the same as various Latin letters, but which have different Unicode code points. Obviously. :-P

          (I *might* have been doing too much Go programming lately, which is exactly how Paul Spooner described: identifiers are UTF-8, although I am *pretty* sure they normalize. But Cyrillic letters are different from Latin ones, so they don’t normalize to the same thing, so there are still 3 identifiers. This Cyrillic-alphabet thing, and the impossibility of typing many of these identifiers’ letters on a US-English keyboard, is making me wonder if that UTF-8 identifier thing was a good idea… but nothing I’ve seen yet has exploited it.)

  4. MichaelGC says:

    We'll call this system Shamus-space, since in Euclidean geometry my ego is unbounded.

    Classic! (And relativity means that the same goes for me! Thus, Einstein inadvertently pre-explained approximately 87% of all internet arguments.)

  5. Nick-B says:

    It’s always annoyed me that Minecraft does not appear to follow a sane coordinate system when you display the coords. To me, Z is ALWAYS up (being the third dimension) and numbers should increment based on looking north, where it gets bigger as you go right or away from yourself.

    EDIT: It does create a problem when you try to set up a 2D map with coords based on the system though. For instance, in a map where the coordinate is 5, 10 I’d expect the origin to be the top left, not the bottom left. Doesn’t make sense, of course, when looking at a graph (origin is bottom left for 2 positive coordinates).

    1. Bloodsquirrel says:

      Y being up makes much more sense when you look at it from a graphics perspective:

      If you’ve got a 2D screen, then X is horizontal and Y is vertical. Ultimately, all of your 3D geometry is going to be mapped to a 2D screen where Y is up. Using Z as up would require some additional logic built in to your transformation matrix.

      In 2D, I’d just favor mapping everything to the XY plane as it comes in, but if you’re doing 3D keep Z as up makes sense.

      1. Uristqwerty says:

        In my opinion, Y being up makes the least sense from a player perspective. Generally, controls are set up as a WASD-plane plus a jump-axis, where the WASD-plane is the world’s horizontal plane plus a rotation which varies during gameplay.

        As a result, either the coordinates are not presented in alphabetical order, or the pair relating to the WASD-plane will be listed on either side of the unrelated vertical axis.

        Although, this is more of an issue about games making raw coordinates visible to the player at all, they could simply label them “height”, “north”, and “west”, and continue to use whichever vertical axis they prefer for the actual game engine and art. As a bonus, it would make it a bit easier to remember which direction you are facing, if there is a visible sun to constantly mark the east-west axis. And reduce the difficulty of long-distance coordination between new Minecraft players…

        1. Jack Kucan says:

          Whenever I’m working on my own engines (which never go anywhere) I tend to think of them from the perspective of an RTS, but in 3D, meaning that I always have X and Y map to WASD controls and Z map to jumping controls. I always got confused by Y being up and Z which was equivalent in my view to the “actual” Y going North positively rather than South.

      2. silver Harloe says:

        “Y is up” is how math defines it, with an axis in the “lower left”.

        Graphics, *at the most basic level* (i.e. there may be libraries which handle this transition for you) are oriented with “Y is down” because they put the axis in the Upper Left (which makes sense given how CRTs used to work)

        1. Zukhramm says:

          Math doesn’t really define them as anything related to direction, that’s just a convention of drawing, and in 3D, drawing Z as up is fairly common as well.

          1. Tse says:

            Yep, I’ve used Revit, AutoCad, ArchiCad, 3DSMax, Unreal Engine 4, SketchUp… Z is up for each one of ’em.

            1. Richard says:

              Although Sketchup calls up “Blue”, which rather proves the point.

    2. Cuthalion says:

      I always think of X and Y as “along the ground”, with Z meaning “into the air”, in the context of a game.

      Where this gets weird — and the subject of much of the discussion so far — is when you ask what type of game?

      Because “up” means different things.

      In a first-person game, this means X is left-right, Y is forward, Z is jump. On the screen, X is left-right, Y is into the screen, and Z is up.

      (Some heretics (which are apparently the majority?) prefer X being left-right, Y being jump, and Z being forward, making the screen and character-perspective Y be the same thing. X on the screen is left-right, Y is up, and Z is into the screen. This is clearly and objectively wrong, but I guess it would simplify sidescrollers.)

      In a strictly-top-down game, X is left-right, Y is up, and Z is “above” to “below” (or “below” to “above”) in game space and foreground-background (or background-foreground) in drawing terms.

      In a mostly-top-down game, X is left-right, Y is up, and Z is foreground-background (or background-foreground), up (for jumping, adding to Y), or both. But “up” from the character’s perspective shares the Y axis with northward, which is “up” from the player’s perspective.

      In a sidescroller, X is left-right, Y is up, and Z is foreground-background (or background-foreground).

      In an isometric game, all is madness.

  6. Neil Roy says:

    Sitting here trying to figure out how to hold my hand now. You’ll excuse me if “up” and “north” mean the same direction to me ;)

    So which system has X horizontal (right to left on my screen), Y vertical (up and down my screen, top to bottom) and Z into my screen (depth)? That’s how I always looked at it anyhow.

    1. Sleepyfoo says:

      Left Handed.

    2. Halceon says:

      Right Handed

    3. swenson says:

      Well, looks like all of the traditional bases have been covered, so I’ll pitch in with Gripping Handed.

      1. DivFord says:

        Nicely done.
        Since the gripping hand only has three fingers, it’s clearly the right choice…

        1. evileeyore says:

          Except the gripping hand is sinister!

      2. Neil Roy says:

        LMAO… love the responses! :D

        Well, I checked out a link Shamus provided and I think I have it figured out now. The axis used for programming, right handed I take it, X points right, Z points up and Y points into the screen… which seems screwy to me as it feels like 2D co-ordinates with X pointing right, Y pointing up (towards the top of the screen) and Z going into the screen makes more sense, I don’t know who decided to change this, but they should be slapped around a little. ;)

        Anyhow, this should explain some of my confusion when playing around with 3D. I did manage to get terrain rendered with water and trees etc… using OpenGL. This shader language scares me. But such is progress…

        Oh, and yeah, I also figured out normals, which are fairly easy to calculate. You can calculate normals for each vertex for smoother looking hills or for the polygon for flat surfaces.

        My main problem in 3D is with organizing the actual data. I really dislike how most tutorials online show you all this code to create polygons in your program on the fly, and after you use that knowledge to construct a lovely world with terrain and trees etc… you realize, you have to start over because you should have stored the vertex and polygon data in some sort of array so you could use it to optimize, collision etc. Yet I see very little discussion on organizing your data itself, which I feel is most important to start out with, before you render a single polygon.

    4. Retsam says:

      Since all the joke answers have been taken: it would depend on specifically which directions X and Y point. Does X point left or right? Does Y point up or down?

      Though based on your parentheticals (X points left, Y points down) I think that’s left-handed.

  7. I am SERIOUSLY stoked for the next chapter. My 3d animation education was timed so that normal maps were JUST starting to come out AFTER I’d graduate, and we were never taught how they worked.

  8. Paul Spooner says:

    Very interesting! I always assumed you got normals for free when dealing with faces. I guess not? I’m interested to know how much render time you save by not computing normals though… can’t be that much, just one cross-product per triangle. Doesn’t the card do that automatically? Did you turn off normal computation on purpose?

    1. Zukhramm says:

      The card doesn’t compute them for you, and even if it did, they probably wouldn’t be what you wanted, since normally you take the average of multiple normals at each vertex for smooth shading.

      1. Neil Roy says:

        Yeah, you might want smooth shading, in which case you calculate normals for each vertex, or you may want flat shading (say for a 3D model like a car or house) in which case you want to calculate the normal of the polygon, or some combination of the two. It’s not difficult to do though.

    2. Richard says:

      Normals never come for free, because a given vertex could have any normal direction. Normals effectively define the rate-of-change of the surface angle.

      Eg, compare the following 2D ASCII-Art lines:
      “——-” : all the normals pointing straight up.
      “/-\_/-\” : has a different normal at each vertex.

      While both lines contain several apparently-identical “-” segments (triangles), the segments have different normals because it depends on the segment-next-door.

      They can of course be calculated from the heightmap, but the choice of algorithm is what shapes the normals.

    3. Geebs says:

      Yeah, you need to supply your own normals. For added pain, when you’re using a texture to transform a plane into a heightmap, you will find that in some cases it’s cheaper to pre-compute the normals and supply them in the rgb channels of the height texture, and in others, it is cheaper to just compute them directly in the shader.

      1. Bryan says:

        Isn’t this more or less what the first-derivative GLSL extension was for? Computing a normal map from the texture “ought” to be just figuring out the dz/dx and dz/dy (…in this case, z is the normal map value), and … uh… hmm. Averaging them I guess? Something like that.

        I guess it depends on how you want “corners” to look, although with an actual heightmap I’m not sure there is such a thing as a corner anymore.

        1. Geebs says:

          dFdx and dFdy are only available in the fragment shader, so you can’t use them to calculate the normals in the vertex shader. Getting precomputed normals directly from a texture in the fragment shader has actually worked out slower for me in the past than just doing four texture lookups in the vertex shader for -y, +y, -x, +x and getting the cross product of the two vectors.

          Come to think of it, I don’t think I’ve ever tried using dFdx/dFdy on the height map texture in the fragment shader, but that’s partly due to using shader techniques which want a fragment normal interpolated from the vertex normals.

          1. Bryan says:

            Hmm. OK, in that case, I have *no* idea what those extensions are useful for. :-) I naively assumed (without ever having used them, only taken a cursory look at the docs) that they were available in anywhere that a tex2D was available.

            Four texture lookups might well be faster; depends on how well the shader compiles to parallelizable gpu machine code I suppose. Never actually tried it…

            1. Geebs says:

              If you have a dependent texture lookup in the fragment shader (e.g. if you look at the texture conditionally depending on a read from a different texture) which is therefore in nonuniform flow control (basically your fragment might execute a different code path from the next fragment along), then you need to do the lookup as a textureGrad, which will need the results of dFdx and dFdy.

              This is one of those moments in GLSL where, while the results of not doing this the correct way are technically undefined, the actual output on screen usually looks exactly like what you were expecting, except when it doesn’t :-p

    4. Kian says:

      You are partly right. OpenGL computes which side of a triangle is the “front” and which is the “back” depending on the order in which you provide the vertices. If they go counterclockwise, you’re looking at the front. This is essentially doing a right-handed cross product and taking the resulting vector to indicate the front.

      What you are not considering is that the cross product of the two sides of a triangle would provide a vector that is perpendicular to the surface, but not one that is of unit length. This is important, because a normal must be of unit length, otherwise it’s not a normal. This is a minor quibble, though, since the vertex shader could be coded to produce the normal.

      However, while flat shading only uses one normal per triangle, more elaborate shading requires one normal per vertex. These normals are calculated with the vertices from neighboring triangles, but the vertex shader can’t access the information of neighboring triangles.

      So, Shamus could code his shader to produce flat shading with just the heightmap (the kind of shading where each triangle is of a single solid color), but for smooth shading he needs a normal map.
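
      A small illustrative Python sketch of that averaging step (made-up helper names; CPU-side, precisely because the vertex shader can’t see its neighboring triangles):

```python
import math

def normalize(v):
    # Scale a vector to unit length; a normal must have length 1.
    length = math.sqrt(sum(c*c for c in v))
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return tuple(p - q for p, q in zip(a, b))

def vertex_normals(vertices, triangles):
    """Average each face's normal into its three corner vertices."""
    sums = [(0.0, 0.0, 0.0)] * len(vertices)
    for i, j, k in triangles:
        a, b, c = vertices[i], vertices[j], vertices[k]
        face = cross(sub(b, a), sub(c, a))  # perpendicular, not unit length yet
        for idx in (i, j, k):
            sums[idx] = tuple(s + f for s, f in zip(sums[idx], face))
    return [normalize(n) for n in sums]

# Two coplanar triangles forming a flat quad in the XZ plane (Y up),
# wound counterclockwise when viewed from above:
verts = [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)]
tris = [(0, 2, 1), (0, 3, 2)]
print(vertex_normals(verts, tris)[0])  # (0.0, 1.0, 0.0)
```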

  9. urs says:

    Up North What? ;)

    X+ goes to the right. Y+ goes up. Z- breaks through the screen and stabs me in the chest. But, yeah. There’s a workInProgress on my harddrive where the groundwork of flying over perlin-generated landscape was made by someone else. Headache.

    1. Neil Roy says:

      That was always my assumption as well. But apparently all this right handed, left handed stuff has Y going into the screen (depth), X going right (or left) and Z going up, so basically Y and Z are swapped and X goes either right or left depending on the handedness….

      Personally, I always thought it was like you described as well, it makes more sense, and I think that is what Shamus was talking about his own preferred method “Shamus-space” was.

  10. Niriel says:

    Shamus-space has an other advantage: it is the default space that Blender uses.

  11. Nick Powell says:

    Do a barrel roll! (Press CARROTJUICE or HUMPERDINK twice)

  12. D-Frame says:

    Nerd gang signs… so hilarious… can’t breathe…

  13. RandomInternetCommenter says:

    Shamus-space really is the most sensible way to represent this, at least in non-programming contexts. I don’t get the talk about Y up being inspired by math; the way I’ve learned it and always seen it explained, in 2D you draw X as east and Y as north, and when you add the third dimension that original 2D plane becomes a flat sheet of paper at the base and the new axis comes out of it, hence Z is up.

    1. Shamus says:

      But…

      Sigh. I hate English sometimes.

      1. Lalaland says:

        English: why have grammar rules when you can just rob random words from other languages?

        As a young lad from Ireland we had to learn our national language Irish (aka Irish Gaelic as there are multiple variants) and bemoaned how ‘hard’ it was. It was only upon trying to explain how some English words were conjugated that I realised that Irish was the simple language and English the awkward mess made simple only by virtue of having grown up with it. Irish for example has a rule for conjugating verbs with 13 exceptions, that’s it everything else follows the rules. This has made the language almost more popular outside of Ireland than within leading to a lovely little short film called Yu Ming is Ainm Dom (My name is Yu Ming)

        1. Zukhramm says:

          Taking words is fine, what makes English annoying is the tendency to take the grammar as well. The sensible thing would be to take the word and shove the standard English grammar on it, i.e. axis -> axises, but nope, we’re sticking to the Latin forms instead!

        2. Tizzy says:

          My favorite joke is: “English is easy to learn.” The looseness of the rules allow you to *speak* English pretty fast. On the other hand, *understanding* English and its multiple variants takes a lot longer. If you meet an unknown word in a sentence, you will have no clue as to what its grammatical function is. Good luck even understanding the sentence structure in these conditions.

          1. Mephane says:

            What I find difficult about the English language is mostly the inconsistency of pronunciation. For example, why is “enough” spoken roughly as if it were written “eenuff”? That is just plain weird. A lot of people say the various tenses are hard, but once you have become used to it, they usually come just naturally (e.g. simple past vs. past perfect).

            For comparison, I learned French for 3 years in school and forgot almost everything, vocabulary, grammar rules, whatever. But I am still capable of reading aloud random French sentences quite comfortably because French pronunciation, once learned, is very consistent (the grammar however, is not).

            Good thing about English is that articles are always neutral. Coming from German, where we have articles for each male, female and neutral nouns, with none of them mapping to things in any logical way; for example, in German the sun is female and the moon is male. A chair is male and a house is neutral. A lion is male, a rabbit is neutral, a cat female*.

            In English this is simple, articles are always neutral, pronouns always apply according to what the noun actually is, therefore, inanimate objects are almost exclusively neutral.

            (*For a lot of animals, there are also gender-specific forms which at least follow the actual gender. While the cat as a species is female – “die Katze”, there is also the specific noun for a male cat, “der Kater”, therefore the male article.)

      2. Bloodsquirrel says:

        I’d like to axe some questions about about how your code works, if you have time…

      3. Trix2000 says:

        I like how the picture even has three of them.

        Would that make the top one your x-axe, the middle the y-axe, and the bottom z-axe?

      4. Neil Roy says:

        LMAO! :D

      5. Unbeliever says:

        …Are these axes color-coded by electrical resistance?!?

  14. MichaelG says:

    Yeah, it’s X to the right, Y is up (like a graph from high school). Then Z is the distance, so it increases as you go into the screen.

    But I did once implement a whole system with X coordinates reversed from what I thought they were. My ability to visualize in three dimensions is so poor that it’s all trial and error for me.

  15. MadTinkerer says:

    True fact: in GameMaker, I once tried to do a “3D” (Doomlike 3D, not real 3D) game where you had your main character view in 3D, but you could also look at a 2D Ultima Underworld style map which was the exact same area with 3D functions toggled.

    Turns out that standard 2D GM view is Right Handed (with z usually being used for layering sprites or parallax scrolling), and 3D projection is Left Handed, which means that the default “D3D_” view in GM is upside-down compared to the 2D “overhead” view. Doesn’t matter for 99.99% of things, but it ruined my clever idea.

    There’s almost certainly several ways to fix it in GM Studio, but I wasn’t using Studio at the time. I might go back to it someday.

  16. Maybe whoever did those other systems was thinking more in terms of 2D as side-scroll-ish things rather than top-down map views. Then to make that kind of 2D, their systems would just have to throw away Z, whereas Shamus-space would have to do other stuff, no?

    1. TSi says:

      I wonder why Carmack decided to use this orientation. Maybe it’s like you say, they were used to side scrollers and simply added a “depth axis”.

  17. Retsam says:

    Would this be a candidate for some abstraction layer? Some #define SHAMUS_SPACE that allows you to code in the coordinate system that makes sense to you, but converts it to the more standard coordinate space?
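Retsam's abstraction-layer idea can be sketched as a tiny conversion function. A minimal, hypothetical version, assuming "Shamus-space" means a Z-up world (x east, y north, z up) and the target is OpenGL's usual Y-up space; the axis assignments and names here are my own illustration, not Shamus's actual convention:

```c
#include <assert.h>

typedef struct { float x, y, z; } Vec3;

/* Z-up world (x east, y north, z up) -> Y-up GL (x right, y up, z toward viewer).
   The swap has determinant +1, so handedness is preserved. */
Vec3 shamus_to_gl(Vec3 v)
{
    Vec3 out = { v.x, v.z, -v.y };
    return out;
}
```

Done once at the boundary between game logic and rendering, the rest of the code never has to think in the "wrong" coordinate system.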

    Also, obligatory xkcd link: http://xkcd.com/199/
    I actually like the book rule, because my problem with the right-hand rule is that I can’t remember whether thumb or index finger points towards the X, whereas there’s an inherent ordering to the covers of a book.

    1. Zukhramm says:

      It should really be just one additional matrix multiplication when calculating the projection matrix, which you’d only need to do once unless you’re changing FOV. And some sign flips in the vector library if you’re using a left handed system, which you shouldn’t do because you’ll need some sign flips in the vector library.

      Oh, and the only rule for determining handedness that ever seemed easier than just memorizing the directions is the screwdriver rule.
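A minimal sketch of the "one additional matrix multiplication" above: flipping handedness is just a scale matrix with a single -1 on the diagonal, here negating Z, in the column-major 4x4 layout OpenGL expects.

```c
#include <string.h>

/* Build a matrix that negates Z, turning right-handed coordinates
   into left-handed ones (or back again). */
void handedness_flip(float m[16])
{
    memset(m, 0, 16 * sizeof(float));
    m[0]  = 1.0f;  /* X unchanged */
    m[5]  = 1.0f;  /* Y unchanged */
    m[10] = -1.0f; /* Z negated   */
    m[15] = 1.0f;
}

/* Column-major matrix * column vector, element (r,c) at m[c*4 + r]. */
void mat_vec(const float m[16], const float v[4], float out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[r]*v[0] + m[r+4]*v[1] + m[r+8]*v[2] + m[r+12]*v[3];
}
```

Fold this into the projection matrix once and nothing downstream needs to care which hand you started with.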

    2. Trix2000 says:

      But the book rule misses out on being able to watch people make weird gestures with their hands!

  18. Tse says:

    I’m confused. Don’t you throw out y and keep x and z when making a side-scroller?

    1. Felblood says:

      Nope!

      Z buffering is drawing sprites that are closer to the camera last, so they overwrite the sprites behind them. (Yes, very oversimplified.)

      If we try to write a Z Buffer using the Y axis, it will cause head explosion in anyone who tries to read or debug our code.
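A toy version of the back-to-front drawing Felblood describes: sort sprites so farther ones come first, then draw in order and let nearer sprites overwrite what's behind them. `Sprite` and its fields are hypothetical names, not from any real engine.

```c
#include <stdlib.h>

typedef struct { int id; float depth; } Sprite; /* larger depth = farther away */

static int by_depth_desc(const void *a, const void *b)
{
    float da = ((const Sprite *)a)->depth;
    float db = ((const Sprite *)b)->depth;
    return (da < db) - (da > db); /* farthest sorts first */
}

/* After this, drawing the array front-to-back-of-array paints
   far sprites first and near sprites last. */
void sort_back_to_front(Sprite *sprites, size_t count)
{
    qsort(sprites, count, sizeof(Sprite), by_depth_desc);
}
```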

      1. Tse says:

        Oh, so it’s just what is expected and what people find familiar? Yeah, I understand how looking for help online and having to relabel an axis or two every time could lead to a few errors in the code.

  19. allfreight says:

    It is a common misconception that humans cannot handle 4 dimensional space (thus making relativity difficult). The truth is that the puny human mind cannot even handle 3D space. I am willing to concede that there might be some people out there that understand 2 dimensions. On a good day, if they are really smart.

  20. Cuthalion says:

    To make this worse, you can make an isometric game. I am doing this. (Fortunately, the coordinate system shenanigans are basically done.)

    Tiles are diamond-shaped, twice as wide as they are tall, faking 3d without faking perspective… much.

    X is horizontal relative to the screen
    Y is vertical relative to the screen
    Z is added to Y

    Unless you’re talking tile coordinates, instead of pixel coordinates:
    X goes down-right relative to the screen at a slope of -0.5 if I did my math right.
    Y goes down-left relative to the screen at a slope of 0.5 if I did my math right, but you’re traveling backwards along the slope if… er… did I mention the slope is screen-relative here?
    Z indicates altitude, which means it ends up going upward relative to the screen and thus getting added to Y when you convert it to pixel space for rendering.

    Unless you’d rather have pixel-Y point down on the screen, drawing-on-a-CRT style, instead of up on the screen, drawing-on-graph-paper style? I think I had to flip the projection in OpenGL to make Y+ point up, and… um…

    Burn it all to the ground.
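Before burning it down: the tile-to-pixel conversion described above can be sketched in a few lines, assuming 2:1 diamond tiles and Y-up pixel space. `TILE_W` and the function name are my assumptions for illustration, not Cuthalion's actual code.

```c
#define TILE_W 64
#define TILE_H (TILE_W / 2)

/* Tile coords (tx, ty) plus altitude tz -> screen-relative pixel coords.
   +tx drifts down-right (slope -0.5), +ty drifts down-left (slope 0.5),
   and altitude gets added straight to pixel-y. */
void tile_to_pixel(float tx, float ty, float tz, float *px, float *py)
{
    *px = (tx - ty) * (TILE_W / 2);
    *py = -(tx + ty) * (TILE_H / 2) + tz;
}
```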

  21. Well, iD has always(?) used a coordinate system where Z is up, so the maps use X and Y as the floor and Z as the height above the floor; this is the world coordinate system. If you are an entity, say a player, you would look in the X direction, the Y direction would be your left, and Z, once again, would be your up.

    The old engine at Starbreeze which was heavily influenced by iD thinking and coding as well as the engine I use at Machine games (which IS iD Tech5) both used this coordinate system.

  22. Roxor says:

    You can’t always get away from coordinate conversions. If one system needs one set of axes and another needs a different one, you’ll have to do conversions.

    You want a confusing coordinate system? Try Ambisonics, where X points forward, Y points left, and Z points up.

    Sounds like madness, right? Well, there is a method to the madness: it was originally formulated to use azimuth and elevation angles, and by making 0 degrees azimuth point directly ahead, the odd-sounding coordinate system resulted simply from the mathematics.
