Coding Style Part 3

By Shamus Posted Monday Jan 21, 2013

Filed under: Programming

And so we continue discussing the Office document that describes the internal coding conventions of id Software. Why? Because I hate having readers and I’m trying to bore you into going away. So far this plan has backfired spectacularly. You people are just as strange as I am. Let us revel in it.

As before, the style guide is in bold, and everything else is me blathering.

Use precision specification for floating point values unless there is an explicit need for a double.

//use
float f = 0.5f;

//Instead of

float f = 0.5;

Computers are finite in their ability to manage precision. You can’t just give a program a number like (say) 10^50 + 5. (A one with 49 zeroes after it, then a five.) There are limits to how large your numbers can get or how tiny your decimals can be. And if you’re trying to store large numbers and keep track of really fine decimals, then you may find yourself running into these limits quickly. Of course, you can always use more memory to store numbers, but these, uh… take more memory. Also, it’s much slower to perform arithmetic on more precise numbers.

In C / C++, we have two built-in types: float and double. A float has (I think) something like seven digits of significance. (It’s complicated, but you probably guessed that already.) A double uses twice as much memory, but can store larger and more precise numbers. If you need astronomically large or precise numbers beyond what double can give you, then you probably have a task that’s better suited to some other language.
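
If you want to see that limit for yourself, here’s a minimal sketch. (The exact cutoff assumes a standard IEEE-754 float, which has a 24-bit significand; that’s where the “about seven digits” figure comes from.)

#include <cstdio>

int main ()
{
  float  f = 16777217.0f; //2^24 + 1: the first whole number a float CAN'T store exactly
  double d = 16777217.0;  //a double holds this with room to spare

  printf ("float:  %.1f\n", f); //prints 16777216.0 (the trailing 1 is lost)
  printf ("double: %.1f\n", d); //prints 16777217.0
  return 0;
}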

When you specify a number like “10.2”, that’s called a literal. When you type a literal, you have to tell the compiler if you’re giving it a float or double value. If you stick an f on the end, it’s a float. If not, it’ll be treated as a double.

So the rule above means basically, “put f on the end of all your literal floats”. The thing is, every compiler I’ve ever used will give you a warning if you’re not rigorous about keeping your floats and doubles straight. This code:

float number = 10.0;

…should ALWAYS trigger a warning. The compiler will feel the need to remind you, “Hey man, I’m just assuming that the 10.0 is a double value. You know, since you didn’t put an f on the end. But see, if it’s a double, then storing it in a float might lose some of the precision. Well, not in this case. But in some other case it might and that seems bad. Anyway, I’m just letting you know. You don’t have to change it. But I’ll bring this up every time you compile if you don’t.”

(Note that your compiler might give slightly different messages if you don’t use the -passiveaggressive switch.)

This particular rule is odd because it’s not just about personal style, readability, or aesthetics. It needs to be followed if you don’t want a bunch of annoying warnings every time you compile. The compiler enforces it so reliably that I’m surprised it needed to be listed at all.
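
For the record, the fix is a single character. Exactly when the warning fires varies by compiler and settings (a commenter below digs into the C vs. C++ differences), but the intent of the rule looks like this:

float a = 10.0;   //double literal, silently narrowed to float. This is what draws the warning.
float b = 10.0f;  //float literal. No conversion, nothing to complain about.
double c = 10.0;  //double literal stored in a double. Also fine.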

Function names start with an upper case, and in multi-word function names each word starts with an upper case:

void ThisFunctionDoesSomething( void );

This has been the most popular style for a long, long time. Although, at my former day job our internal style was a bit of a renegade in that we still used lowercase mixed with underscores. So the above function would be called this_function_does_something (). Outlandish!

I don’t really use lowercase function names in my own work. Well, sort of. Okay, see…

Programmer talk:

In the past, I’ve worked on projects where it was considered very important to differentiate between module functions and local functions. If I’m working in Render.cpp, then all of the functions declared in Render.h follow the format of Render_____ (). If the function is NOT in the header file, then it’s lowercase & underscore style, and declared as static. So…

static void draw_hud ()
{
}
 
void RenderScene ()
{
}

…are fine, but…

//This doesn't appear in the header file, and thus shouldn't begin with "render".
void render_hud ()
{
}
 
//This is in the header file, but the name is illegal
//because it should be "RenderSpeed", not "RenderingSpeed".
void RenderingSpeed ()
{
 
}

…would both earn you either a grumpy email or a polite admonishment, depending on who found it.

The thing is: I still use this system today, and it wasn’t until I sat down to write this that I began to question why. I mean, I follow it because it’s familiar, but why was this rule made in the first place? I suppose declaring it as static lets you make new functions without worrying about name collisions. And I can see the value in clearly distinguishing between externally available functions and local ones. But this doesn’t seem to come up in other style guides and I wonder if this style was borrowed from elsewhere.
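
For what it’s worth, here’s a sketch of the name-collision part. (Terrain.cpp is a hypothetical second module I made up for this example; it’s not from the original guide.)

//Render.cpp
static void draw_hud ()  //internal linkage: this name never leaves Render.cpp
{
}

void RenderScene ()      //declared in Render.h, visible to the whole program
{
  draw_hud ();
}

//Terrain.cpp
static void draw_hud ()  //a completely different function. No link-time collision.
{
}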

Anyway, it’s generally a good idea to capitalize your function names. The compiler doesn’t care, but using lowercase names is just really anachronistic and makes people think you’re a wrinkly old greybeard.

The standard header for functions is:

/*
====================
FunctionName
 
  Description
====================
*/

Gah! Why do you squander two precious lines of vertical space this way!?! Repent unbeliever!

I kid.

The thing we’re looking at here is a comment. In C, a slash followed by an asterisk /* begins a comment. Everything after that is ignored until an asterisk followed by a slash, like so: */ Between those two bookends you can put any dang thing you like. Draw diagrams in ascii art, type profane curses to third-party developers, snark at your fellow coders, or do whatever else is needed to explain why the following code works the way it does.

In my own projects, I use:

/*====================
  Description
====================*/

So now we come to the thorny topic of comments.

The point of these headers is so that another coder can see the breaks between sections of code quickly as they scroll down, looking for whatever the problem of the day is. In any reasonable work environment the comments will be shown in a different color. If you make them reach across the screen, then they can do double duty: They serve as stark visual breaks, and if you stop to read them you can see a synopsis of what this code is for.

I’m noticing that there’s this trend among the younger set to eschew comments, perhaps even going so far as to suggest that using comments is indicative of some sort of design flaw. While I usually only complain about style in a half-joking, I-know-this-doesn’t-really-matter kind of way, this one is really important and I would encourage you to try to view your code through the eyes of a newcomer. Is it obvious what’s going on? No? ADD COMMENTS.

I’m moving away from generic example code for this. The following is a real block of code that’s used to calculate surface normals on a mesh with seams. Say you’ve got a sphere. It’s actually two hemispheres put together. You want the lighting to shade smoothly over the surface, even where the two halves join.

[Image: codingstyle2.jpg, a sphere made of two hemispheres, shaded smoothly across the seam]

Here is an excerpt of the function to do this. It’s long. Sorry, but it’s part of the point I’m making. Also note that double slashes are used for single-line comments. This is different from above, where you can make block comments using /* and */ as bookends.

 1  //For each triangle...
 2  for (i = 0; i < Triangles (); i++) {
 3    index = i * 3;
 4    i0 = merge_index[_index[index]];
 5    i1 = merge_index[_index[index + 1]];
 6    i2 = merge_index[_index[index + 2]];
 7    // Convert the 3 edges of the polygon into vectors
 8    edge[0] = verts_merged[i0] - verts_merged[i1];
 9    edge[1] = verts_merged[i1] - verts_merged[i2];
10    edge[2] = verts_merged[i2] - verts_merged[i0];
11    // normalize the vectors
12    edge[0].normalize ();
13    edge[1].normalize ();
14    edge[2].normalize ();
15    // now get the normal from the cross product of any two of the edge vectors
16    normal = GLcross (edge[2], edge[0] * -1);
17    normal.normalize ();
18    //calculate the 3 internal angles of this triangle.
19    dot = GLdot (edge[2], edge[0]);
20    angle[0] = acos(-dot);
21    if (isnan (angle[0]))
22      continue;
23    angle[1] = acos(-GLdot (edge[0], edge[1]));
24    if (isnan (angle[1]))
25      continue;
26    angle[2] = (float)PI - (angle[0] + angle[1]);
27    //Now weight each normal by the size of the angle so that the triangle
28    //with the largest angle at that vertex has the most influence over the
29    //direction of the normal.
30    normals_merged[i0] += normal * angle[0];
31    normals_merged[i1] += normal * angle[1];
32    normals_merged[i2] += normal * angle[2];
33  }
34  //Re-normalize. Done.
35  for (i = 0; i < _normal.size (); i++) {
36    _normal[i] = normals_merged[merge_index[i]];
37    //_normal[i].z *= NORMAL_SCALING;
38    _normal[i].normalize ();
39  }

We actually have a really good example of dumb, useless comments here. Line 11 is a hilariously pointless comment, and the worst thing about it is that I’m pretty sure it’s mine. (The one on line 1 is pretty dumb too, although that one should be expanded & clarified rather than removed.)

But the rest of these comments are crucial. Without those, this would be a wall of math. Sure, someone who already knows how to create surface normals will be able to follow the steps, but for anyone else this will be an exercise in reverse-engineering. That takes time.

The “comments are a design flaw” school of thought suggests that if you feel the need to add a comment, you should instead lift out the code and make it into a function with a descriptive name. But here that would mean adding several new functions and passing a lot of variables around. Each new function would need to be given the original list of vertexes, the new condensed list, the list of triangles, and the normals it’s building. That’s a lot of crap to pass around, and a lot of jumping in and out of functions. For a large model, this could introduce a very real and perceptible performance hit.

This would add many lines of code, and someone trying to read this thing would need to jump around the source to follow the flow. I’m not sure how you could turn the comment on lines 27-29 into a descriptive name without throwing away a lot of information. It would probably be called WeightNormalsByAngle () or somesuch.
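
Just to make the tradeoff concrete, here’s roughly what that extraction might look like. (The signature is my guess; I’m assuming something like std::vector and the GLvector type implied by the excerpt above.)

//Hypothetical extraction of lines 27-29. Note how much state has to tag along.
static void WeightNormalsByAngle (std::vector<GLvector> &normals_merged,
                                  const GLvector &normal, const float angle[3],
                                  int i0, int i1, int i2)
{
  normals_merged[i0] += normal * angle[0];
  normals_merged[i1] += normal * angle[1];
  normals_merged[i2] += normal * angle[2];
}

Six parameters to relocate three lines of math, and that’s just one of the steps.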

Later, the Other Programmer will be digging through the source, looking for some problem related to specular highlights not working right. They will find WeightNormalsByAngle () in the source file and mistake it for something related. “Is this what I’m looking for? What is this? What calls it? I guess I’d better do a search, see where this is called, and then figure out what it does.”

So now you’ve got a larger more confusing source file and you’ve removed some crucial information. But hey: You saved someone having to scroll past three whole lines of comments! You big, damn hero, you.

In a broader sense, we’ve got two philosophies:

  1. Write code in such a way that it can’t possibly be misunderstood.
  2. Assume misunderstandings are inevitable and work to mitigate them.

I’m much more inclined to favor the system that allows for mistakes than the system that encourages and requires perfection.

The drawback here is that coders are stereotypically rubbish at writing comments. If you tell them to write comments, you’ll get a lot of crud like my comment in line #11 above: needless screen-consuming little post-it notes that obscure the crucial by announcing the obvious. Still, if a coder is bad at writing comments, I imagine they will be even worse at self-documenting variables and function names. Brevity is the soul of wit. If you can’t say it in five lines then you’ll never fit it into twenty characters.

I realize this is probably less of an issue if you’re doing things that are self-explanatory. But sooner or later you’re going to do something tricky or cryptic. I don’t care what you name your variables or how you lay out your function names, if you don’t leave a note for the Other Programmer then you are sowing ruin. The cost of deciphering something tricky is only slightly less than the cost of writing it in the first place. The comments in the above function have saved me a ton of time over the years.

If you’ll indulge me in a little argumentum ad verecundiam, I’ll point out that John Carmack, the lead programmer at id Software, one of the giants of the industry, certified genius, software inventor, and author of the guide we’re discussing, is apparently generous with his comments. I haven’t looked at the Doom 3 source myself, but it’s rumored that almost a third of the lines are comments. If a guy that smart needs comments, how much more will us mere mortals need them?

Seriously. Don’t listen to the haters. Comment your freaking source.

To be continued…

 


123 thoughts on “Coding Style Part 3”

  1. Drew says:

    When I end up with “dumb” comments like your line 11 example above, it’s usually an artifact of me laying out the process in comments, then writing in the code to get it done. I find that quickly laying out the relevant logic in comments prevents me from jumping back and forth between code and reference material (e.g., wherever you got the procedure for shading) and also puts me in a position where if I’m torn away in the middle of writing some code, I know what I’m doing and can pick it up from where I left off.

    Sometimes I get rid of those self-instructional comments later, but generally I feel they do no harm, and in your example, while it’s not particularly necessary, it allows someone to read through your entire procedure by looking only at the comments, instead of jumping back and forth between comments and code, and I think there’s value in that anyway, so even the “dumb” comment isn’t totally useless.

    There are those who might accuse me of overcommenting, but I think there’s a lot of value in comments that allow someone who’s inexperienced (either overall, or with the specific language used) to read through the code and understand it.

    1. nmichaels says:

      The danger with that is that the code will change over time and the comments will drift out of sync with the code. One of the worst things to stumble across when fixing bugs is code like this:

      // Add 1 to the pointer.
      next = *(p += 2);

      For people who don’t crunch C, that code adds 3 to the thing pointed to by the pointer.

      You’re probably right about how that comment came to be. I like to outline my algorithms in comments too.

      1. Khizan says:

        Was the “adds 3” thing a typo, or is there some arcane bit of C magic that I am missing?

        1. nmichaels says:

          It was a typo. I changed it in an edit and unintentionally reinforced my point. Duplicating information is bad.

          1. Felblood says:

            This is hilarious.

            Also for those who missed my rant in the other day’s comments.

            UPDATE YOUR COMMENTS WHEN YOU EDIT YOUR CODE!

            Otherwise your comments will be as useful as those above.

            A little pseudo-code outline can be a lifesaver, connecting a flash of inspiration to hours of precise coding, but remember to replace it with something with long term value before you close the file to go home.

    2. Neko says:

      Yes! I find that when I’m writing any non-trivial code, it’s so much better to write the comments first, then proceed to flesh out the function. Doing it that way forces you to consider “what should it be doing?” before you get into implementation detail.

      Of course comments like “add 1 to x” aren’t helpful, but the idea is you won’t have comments like that if you haven’t written the implementation yet! You’d have no ‘x’ yet, and would need to write something along the lines of “move the character one cell to the right”.

    3. LintMan says:

      Yes, sometimes “self evident” comments make sense if they are part of a greater set of documentation. Let’s say you’re calculating something that is fairly complex and has many steps. Documenting ALL those steps with a comment can be very useful even if a particular step seems self evident, like in the normalize function above. If you don’t feel it’s worth dedicating a line for it, you can always tack it on to a previous comment:

      // Convert the 3 edges of the polygon into vectors then normalize them

      As far as “comments indicate a design flaw” – wow, that’s a level of douchebaggery I’ve never encountered. People who think that must not have ever worked on complex systems, or they are working in an environment with tightly controlled external documentation. Yes, intelligent, helpful variable and function names are great, but they are not always sufficient. NoOneLikesTypingInNineWordFunctionNames() and so they UseFourWordNames(), which aren’t really enough to fully cover what’s going on. (Neither is nine words, sometimes.) Even when those descriptive names capture the gist of what’s being done, they don’t always give the “big picture” or “WHY” of what’s being done.

      And knowing the “WHY” can be critical to the person coming in later to debug or update or enhance the existing code. Things like “we’re doing it this way because we had performance issues and this should be faster” or “State 4 of this state machine must always be entered before we can move to State 5”. The latter might be implicitly enforced in the code, but a later developer might not know that it MUST be that way unless it is documented in the comments.

  2. Adalore says:

    Esp for newbie level programming, just giving yourself comments to keep what you are doing straight is handy.

    At least that’s what I found with the Udacity class I took.

  3. Thom says:

    I know this was coded a long time ago, but the comment on line 11 **COULD** be a side-effect of the number of comments in the rest of the sample. If that comment wasn’t there, those three lines would be the only undocumented logic in the sample…. well, except for lines 3 thru 6. Nevermind…

    1. Lanthanide says:

      Yip, that’s my interpretation of it.

      I don’t have any problem with the comment on line 11. Sure it is redundant, but it’s also not doing any harm – my alternative would be a blank line, rather than cramming it directly under the block above. Judging from the rest of the code, it appears blank lines are discouraged, so this seems like the alternative – put in a null comment to serve as a blank line. Also, having each logical step commented is helpful, because if you see one that isn’t commented it can make you start to wonder if perhaps someone has deleted it, and if so they might have deleted important code along with the comment.

  4. nmichaels says:

    Comment your freaking source, but only where necessary. The comments on lines 1 and 11 are bad because they take extra time to parse, then verify. The argument about function calls is specious for 2 reasons. First, compilers know how to inline functions. Second, really long functions take multiple cache lines and can actually be slower to run because of all the cache thrashing involved.

    I have simple rules for comments: Comment the what and why, not the how and if the comments and code disagree, both are wrong.

    Since the compiler and my tests verify the code, it’s more likely to be correct even after being changed. There aren’t any automated tools that verify the correctness of the comments, though. So as long as I keep their scope limited to why the code does what it does, I don’t have to deal with out of date comments, which are worse than useless.

    1. Esteis says:

      All comments take time to parse and verify. The ‘normalize the vectors’ comment is near-instantaneous to verify, because the piece of code it describes is so expressive. I think this is a merit of the code, rather than a shortcoming of the comment.

      “But what,” I hear you cry, “will stop people from making “p = p + 1 // increment p” comments?!” A valid concern, but even when code is (or seems) self-explanatory a comment can be helpful. Firstly, like Wedge says, this comment can help you pick out (or skip over) the lines that deal with normalization. Secondly, like Scott M says and shows, this comment is part of a greater body of comments: together, these comments outline the algorithm in this block of code, and the normalization step is part of it. Drew says much the same thing: “it allows someone to read through your entire procedure by looking only at the comments, instead of jumping back and forth between comments and code.”

      As for your concern about comments going out of sync: I understand the theoretical concern, but in all the code I’ve written and read I’ve never encountered the problem. I think it’s because most comments are about ‘what and why’, as you say, rather than ‘how’, and so they don’t fall out of sync when the implementation changes.

      Too long; didn’t read: comments that look like they’re repeating the code may simply be ‘what’ comments with particularly expressive code. These comments may look redundant, but usually still have value as section markers and as part of the greater scheme of comments.

      1. nmichaels says:

        You have been luckier than I in your codebases. I have definitely run into out of date comments, and the ones I can remember have been exclusively “how” comments.

        The most frustrating thing, for me, about bad comments is that they’re so frequently in places where I really want to see good comments. I don’t really think people should write fewer comments, despite what I said. I think people should write better comments.

  5. Magdain says:

    The entire comment debate is like one horrible game of Telephone: neither side seems to particularly understand the other. The pro-comment people think the other side wants all comments dead. The anti-comment people think the pro-comment people are teaching everybody to comment variable assignments with “// set variable to value”. I’ve found both of these to be false.

    I’m more on the minimal comment side of things, and my general rule of thumb is that comments should explain WHY you’re doing something rather than WHAT you’re doing — The code already explains the what. There will of course be exceptions (and in particular large blocks of math are a good example).

    And no matter where you fall, the most important thing is to write code that’s maintainable and consistent. If you’re the only person maintaining your code and you understand things better with lots of comments then the answer to the debate is pretty clear.

    1. Shamus says:

      It’s probably the result of extremists on either side.

      Alice: Over-commenting code is actually damaging to readability.

      Bob: Yes! Eliminate the comments!

      Carl: Actually, comments are an important aspect of code and leaving them out is dangerous to productivity.

      Dorothy: I agree! The more comments the better!

      Alice and Carl might differ slightly with regards to how & when to comment, but they’re both reasonable people who probably agree on the essentials. Bob and Dorothy are prone to orthodoxy and absolutism, and think that the discipline can be perfected if we just devise the right set of rules and make everyone adhere to them.

      This is probably true of a lot of debates.

      1. nmichaels says:

        Stupid extremists. We should have them all shot.

        1. What are you, some kind of bleeding-heart nancy-pansy lib’ral? Shooting only takes a second or two; it’s much too good for extremists. Crucifixion is the way to go.

      2. Wedge says:

        One of the things that I think contributes to this is lack of good education. Note that I don’t mean education in the formal sense of “everyone should get a CS degree if they want to program”, I mean that so many instructors/books/tutorials teach you THAT you should comment your code, but they rarely teach you HOW you should comment your code. So you end up with a lot of newbies (and even experienced programmers) who think that commenting your code means writing “var++; // increment ‘var'”.

        These people end up being the kinds of people who feel like they “should” be commenting their code but don’t do it, like commenting your code is the programmer’s equivalent of going to the gym. If that’s where you are and someone comes along with a “never comment anything! code should be self-documenting!” philosophy it can be really tempting to jump on board, because you realize that what you think of as commenting code is completely useless and actually counter-productive–and it is! The problem is that you’re throwing the baby out with the bathwater–GOOD comments (like your example above) are not only not very onerous to write, they assist IMMENSELY in understanding your code.

        On that point, I will say that the code you gave above is an EXCELLENT example of both good commenting style and why that commenting style is so helpful–without comments that large function just looks like a wall of code and can be hard to decipher even if you’re the one who wrote it. Adding comments to explain each of the logical sections of code makes it easy to understand at a glance what’s going on and makes it a lot easier to find your way around when you’re trying to debug. “// Normalize the vectors” above may LOOK useless, but imagine you have a bug that’s caused by improper normalization–for example, say you forgot the line “edge[2].normalize ();”. You’ll see bad program behavior, think “oh, this seems like the normals aren’t being normalized correctly” and go to this function, and when you do having that comment makes it easy for your eyes to scan down to the part where that’s happening and realize your mistake quickly.

        1. WJS says:

          “If you teach people something wrong, they’ll rebel against doing it.” That reminds me of Hungarian.

      3. Brandon says:

        This is a tangent, but it is absolutely true of most debates, especially big ones that entire communities get in on.

        For example, look at the gun control debate. Both sides are suffering from this. One side says “we need to have better gun control and regulation” and the other side has people saying “They’re trying to take away all our guns!”

        The other side says “We don’t need better gun control, it’s our right to bear arms, the constitution protects it” and the first side says “They are a bunch of gun nuts, they want to be able to privately own missiles and grenades”

        It’s no wonder it takes so long for society to agree on anything, we always seem to talk past each other.

        1. Asimech says:

          A lot of arguments are semantic arguments where neither side has realised that it’s a semantic argument.

        2. It may also often be the case that sterile long-running arguments which appear to have two sides actually represent neither side dealing with underlying issues. I feel the gun control debate is like that, but I realized my argument about why is intensely political so I’m leaving it out.

      4. HiEv says:

        I like this comment.

        +1.

        Would buy again.

        I hate it when I say something moderate and someone interprets it as the extreme version of what I’m saying, and (worse yet) then argues against that straw man version of what I’m saying.

    2. Khizan says:

      Part of the problem comes from intro classes, imo.

      Teachers want to drive home the “WRITE COMMENTS DAMMIT” point early, before bad habits have formed. Unfortunately, the code the early classes are writing is so simple that it doesn’t need comments. There’s nothing unclear there to comment, but the teachers are demanding comments.

      This leads to a lot of crap comments like having a function named “findCircleArea” followed by “// This function finds the area of a circle”, or “avg = total / count; // finds the average”.

      And so fledgling programmers tend to fall into one of two schools early, in my experience. Some of them decide that comments are useless because your code should be clear without it, because that’s absolutely true of intro crap assignments. And some of them decide that you must COMMENT ALL THE THINGS, because the teachers require it and so it must be the right practice. And these people are, of course, the loudest ones.

      Myself, I’m just starting the ‘serious’ CS classes, but I’ve been playing MUDs for years and writing systems and scripts and such for them. And so, quite early, my allegiance went to the school of “If you think the code will take more than 5 seconds to understand when you’re looking at it 3 years later, after not having glanced at it since you finished it, you should probably comment the damn code so people know what that brick of math does without having to go reverse engineer it from variable names and operations.”

      1. nmichaels says:

        Some of it is a relic of olden times. I had a professor for an assembly language course who demanded that every single line have a comment on it. I didn’t object because assembly language (we used olde timey Intel instructions, which are particularly horrible) is really hard to read and follow. There are no variable names to make clear and the closest thing to control flow that wasn’t goto was “if this bit in this register is not 0, goto label foo; else goto the next line” in a single 3 character instruction.

        That said, I’ve since written code in much nicer assembly languages and used the C preprocessor to get nice(r) names for things. That code doesn’t need as many comments, but it’s still at least 5 times as heavily commented as my C code.

    3. Stephen says:

      I think the anti-comment brigade are coming from higher-level languages that need fewer comments because the code is more expressive. Then they come down the foodchain to the guys working in C-type languages and naturally assume their wishy-washy expressive code patterns will work down in the trenches where everyone is just pushing bytes around.

      1. kmc says:

        I imagine you’re right. I know I don’t put nearly as many comments in my Python code, because first of all, it’s for me, and second of all, I can read it like a sentence. I still put higher-level comments, to describe the why of the section, but I don’t bother to write in things like, “Loop through the inputs and change the parameter values.” when it’s followed up by:
        def changeParams(*args):
            for each_arg in args:
                ModuleLib.changeParam(each_arg)

        But in C#, when I don’t have that kind of freedom, or in MATLAB, where it’s often one big script and we’re doing complicated, clever things with math, comments are so, so important, and even if you can only think of a bad one, it’s at least as good as not having one.

  6. Scott M says:

    As an experienced programmer, I like the comments in Shamus’s example, including the “dumb” ones. The fact is, when I’m reading through unfamiliar code (and that includes anything I wrote earlier than yesterday) I want to quickly and easily understand what it’s doing, and I’ll always be able to parse (concise, intelligible) English faster than C++. That’s the beauty of comments showing up in a different color; I look at the above code in an IDE, and if I’m scanning it for a basic understanding all I see is:

    For each triangle…

    Convert the 3 edges of the polygon into vectors
    normalize the vectors
    now get the normal from the cross product of any two of the edge vectors
    calculate the 3 internal angles of this triangle.
    etc…

    Which makes it very easy to understand the algorithm at hand without being worried about code at all.

    1. X2-Eliah says:

      I tend to agree with this. That ‘code should be self-explanatory’ camp still is not selling me on the idea that code.. any code.. is *more* legible than plain english. And I do like when the intended functionality is also described/mentioned along with why etc., because in case of a bug, you can at least tell if the issue is faulty implementation of a valid idea, or a faulty idea applied as the solution. Going through code logic, even if it is all single-content functions up the wazoo (which seems to be implied as the best and greatest solution on these comment parts.. wtf. How can a hundred-function super-split code be simple and instantly legible..), is still going to take longer than glancing at just the comments to see what basically happens. Once you know that, then you can decide whether to bother wasting time on checking the code or not.

      1. Zukhramm says:

        I have a hard time imagining there not being situations where code is more legible than English.

        1. X2-Eliah says:

          Then I wonder if you are a human.

          1. Esteis says:

            You may have misread Zukhramm’s post (I know I did at first, it was the double negation that misled me): he’s not saying “anything may be expressed more clearly in code than in English”, but “there must be *some* things that are clearer to explain in code than in English”. Which was in response to X2-Eliah, who said “[I don’t believe that] code.. any code.. is *more* legible than plain English”.

            The assumption that they are both reasonable people leads me to conclude that the ‘information to express’ they are each thinking of when they say ‘English is [always/sometimes not] more expressive than code’ is different. Take, for example, the quadratic formula (a.k.a. the abc-formula): I’d want its purpose written in words, the formula itself written as maths, and its derivation written as a mixture of both. (This example is from mathematics instead of from programming, but the principle is the same.)

            1. X2-Eliah says:

              Good point. Though. See, we sort of need to compare the two things – both code and English-comments – at the same level, so to say. Zuk’s assertion may work in cases where [clear, near-perfect code] is mixed with [messy, unclear comment language]. I admittedly tended more to the opposite case, where [clear, well-defined language comment] is together with [complex, convoluted / large code] (simply because I’ve seen a lot more instances of that situation than Zuk’s).

              But, let’s compare both being on the same level. A [clear, near-perfect code] and [clear, well-defined unambiguous language comment]. I would definitely find the written comment to be more readable and parsable for meaning than code.

              In fact, I really should have added a qualifier to my original statement: “I can’t imagine a case where code.. any code.. is more legible than a comment explaining what it does in plain English, as long as the language is clear, legible and unambiguous. At similar levels of conciseness/neatness, I would always prefer the comment over the code.”

              1. Zukhramm says:

                My thinking was that mathematical expressions are much more legible in mathematical notation than in English, especially when things get complicated. Sure, that’s not the exact same thing as programming, but it seems to me reasonable that a system made to write a specific subset of all possible descriptions can be more clear for that subset than such a general purpose language that English is.

                1. Cineris says:

                  I’m not going to claim this is true in all cases, but in the vast majority of cases any programming task is articulated as a series of steps in natural language (e.g. English) before any code is written. Natural language is almost always the superior method for communicating with other human beings.

    2. MelTorefas says:

      I heartily agree with this. Most of my “programming” experience comes from SC2’s trigger editor, to be fair. But you can do some pretty complex stuff there, bordering on actual programming, and in such cases clear comments (even on the stupid obvious things) essentially give you an English language version of your algorithm. Which proves very useful when trying to figure out how the trigger works months later.

  7. Ross Smith says:

    The MixedCase vs lower_case names argument is really a Windows vs Unix thing. Early Unix code used lower case with underscores, early Windows code used mixed case; presumably these just reflected the personal preferences of their respective developers, and most later programmers naturally followed the conventions established by their operating system’s documentation. The conventions ended up baked into later languages; the C++ standard library uses lower case names everywhere (because C++ was invented at AT&T, the home of Unix), while Microsoft’s C#/.NET uses mixed case.

  8. somebodys_kid says:

    I would definitely err on the side of too many comments rather than too few. Though that doesn’t account for the issue of incorrect comments or out of date ones.

  9. Brandon says:

    IIRC the main points of the “eschew comments” recommendations I’ve heard are:

    1) Don’t use comments as a crutch to lean on when you write horrible unreadable code.
    2) Comments aren’t checked by the compiler or at runtime, so they tend to get out of sync with the code.

    Arguably both of these are non-issues if everyone on a team is diligent. (but are they?)

    1. Felblood says:

      bool YourAnswer(You, teamSize){

      if (teamSize > 1){
      Return = false;
      }
      else{
      Return = You.diligent;
      }
      }

      1. Felblood says:

        I woke up last night to the realization that I hadn’t declared types for You and teamsize.

        Then i realized that this code was being read by humans and not a compiler, so it wouldn’t matter.

        This is why it’s a good idea to not code right before bed, kids. Unless this is a desired effect, of course.

  10. Factoid says:

    I don’t think your comment on Line 11 is bad at all. Sure it’s plainly obvious what is going on, but if you eliminated that comment you’d end up merging the step where you normalize into the step where you convert your edges to vectors. It makes that section longer.

    Now certainly you could just amend the comment on line 7 to include “and normalize” at the end, and it would still be clear, but I think there’s something to be said for breaking a function into bite sized pieces even if you end up with inane and obvious comments.

    I had a professor in college who used to remind us “Comments are free. Actually comments are better than free. It’s like giving your future self the gift of 5 extra minutes.”

    1. Esteis says:

      I like your professor.

  11. Rodyle says:

    About the “if code is unclear, make it a function with a self-explanatory name” thing: I got in a row with a teacher about this the other day. It was a system development course, focusing on things like GRASP and design patterns. I pointed out that sometimes a bit of code is unclear simply because it’s an arcane process, and that in my bachelor’s project, for example, turning everything that’s hard to understand without comments into its own function would slow my code down way too much.

    He did not agree. Apparently readability is preferred over usability by some coders…

    for those interested:

    I’m doing a thesis about thermodynamic models of transcription. For each of the genes in an agent, I have to call the update method until it has approached an equilibrium (or a cyclic attractor is found). This means that my update function gets called quite a few times for each gene. And for each gene, I have to go over a bit of code which has to do calculations equal to the length of the gene’s promotor (500+ bases) squared. If I had used all separate functions, it’d probably give me around 1.250.000 to 5.000.000 extra function calls (or even more, depending on the length of the promotor). Per update. Per gene… And since I have an evolutionary process which has to run over a few million generations, it’d have taken me a lot longer to run them.

    1. Rodyle says:

      I could use inline though. However: seeing how inline is seen as a helpful suggestion rather than an order by most compilers, this might not work.

      It might be worth a try though…

      1. Rick C says:

        “seeing how inline is seen as a helpful suggestion rather than an order by most compilers”

        Absolutely worth testing.

    2. EmmEnnEff says:

      Have you benchmarked your code with and without the optimization? You should absolutely do so before dismissing function calls as too heavy. It’s not like you’re copying the world – you’re just passing references around.

      Write-only code for the sake of premature, unbenchmarked optimization is (in the eyes of most engineers) a pretty serious sin.

      1. Rodyle says:

        I’m not really ready to go into optimisation mode just yet. I still have to generate some useful output and do some other minor stuff before I’m really ready to do that.

        And no, it’s not really write only. I’ve done my commenting; it’s clear what happens. I’m not doing bullshit stuff like using Duff’s device without saying what’s happening (actually saw that happen in an old piece of code lying around here).

      2. Anonim says:

        The performance impact of function calls can be much more damaging than simply copying the arguments around. CPUs have these little things called cache memories and pipelined execution units. Every time your code does not behave nicely (i.e. every time you access non-contiguous memory) you risk invalidating the data in the cache and flushing the pipeline. These have dramatic effects on performance. If you are worried about performance, you have to also consider what happens under the hood at the hardware level. In general, minimizing function calls is good for performance (it might be bad for other things though, namely readability or code maintenance).

    3. Alan says:

      Part of the problem is that for every 20 programmers who are certain their code is speed sensitive, only 2 are correct. And one of the two will fuck up the optimization because modern optimization is hard. (Profile before and after optimizing!) Erring against optimization is good advice.

      1. Rodyle says:

        Well, seeing how early runs with my as of yet unoptimised code took over a week for a simulation, I’d say that I’m part of that 10%. I will run valgrind over it when I think the code’s ready for it, but early tests indicate some optimisation may be feasible.

        1. Alan says:

          General advice to people writing software that takes days, weeks, or longer to run: can you solve the problem by throwing more computers (or their equivalent: money) at it? My experience working with researchers is that a lot of them write software on the assumption that it will run on one computer (and in many cases on one core), when their problem can be broken up into smaller bits and run in parallel. So instead of running on a cluster, they spend weeks of their lives optimizing code, creating code that is harder to work on and more likely to be buggy, all to, for example, get their processing time down from a year to 6 months. They frequently believe that cluster computing is too expensive, but you can get a year of CPU time (over the course of a single day!) from Amazon EC2 as cheaply as $600; and I suspect a few weeks of your life are worth more than $600. (It may run more depending on your needs, it may run less if you’re willing to wait for cheap time.)

          I am glossing over the cost of learning how make use of such systems, and it’s not trivial. Hopefully your job/institution has people who can help, and the knowledge gained will likely continue to pay off for future work.

          Now, problems like Rodyle’s, iterative ones, range from hard to impossible to parallelize. Although that said: do you need to run your process repeatedly? Maybe each individual run will take a week, but if you need to run it 52 times? That’s starting to look parallelizable again.

          1. Anonim says:

            From his description, it appears each gene is treated independently. How many genes will be processed? 2? 4? 10? 100? That looks like an easy place to start parallelizing. Also, inside each gene he has to do a number of computations equal to the square of the size of the promoter (which is a big number). These computations probably only depend on the previous state and not from each other, so, they would also be a good candidate for parallelization.

            1. Rodyle says:

              I was thinking about parallelizing genes, but in the end I didn’t, because I don’t really want to touch that part of C++ yet; not enough knowledge.
              And I’m afraid the calculation within a gene cannot really be parallelized. It’s a recursive algorithm.

              If you’re very interested in the specifics of it all: it’s basically a version of the GEMSTAT software by Sinha et al in this article, but more geared towards evolution rather than data fitting.

              1. Anonim says:

                About the computations inside the gene:
                You said you need to do a number of computations equal to the square of the size of a promoter (whatever the heck that is). Is this the number of times the recursive function is called? Or do you call the recursive function for each of the (pair of) elements that make up the promoter? If the answer is the former, then you are screwed (unless you change the algorithm==hard). If the answer is the latter, then each (pair of) elements of the promoter will have to execute the recursive code. These computations can be done in parallel (assuming they have no other dependencies).

                But always remember, the decision on where to parallelize the algorithm depends on both the algorithm AND the hardware you have available. There is no point in splitting your algorithm into 1000 perfectly parallel pieces if you only have four cores available.

                Alas, I have no time to delve into the fascinating world of DNA analysis and its multiple and annoying acronyms and naming schemes.

                1. Rodyle says:

                  What I do:

                  I walk through the promotor (it’s basically a sequence consisting of four possible characters). For each character, I have to check for a couple of proteins how well they can bind there. Then I take the sequence up to that point, and sum the same thing of all possible sequences up to that point which do not overlap with it and do a few calculations with that. I don’t think it’s really possible to do this in parallel.

          2. Rodyle says:

            The problem with parallelization is that I need multiple runs for each parameter setting to reach any form of statistical rigour in my data analysis. Therefore, it’s just as good, no, even better, for me not to spend time parallelizing my code, but instead spend that time optimizing bottlenecks and filling up 12 cores of one of the fast computers on our network (bachelor thesis, so I’m afraid I don’t have any budget) with multiple runs.

            1. Alan says:

              As may have become obvious, my job is helping researchers across a university campus get compute resources. (More specifically, helping researchers get research done by ensuring that “lack of compute power” isn’t the bottleneck.)

              “The problem with parallelization is that I need multiple runs for each parameter setting to reach any form of statistical rigour in my data analysis.”

              If you’re filling up 12 cores on one computer, you’ve already got some parallelization! Parallelization doesn’t require tight binding between the process, and indeed I recommend against it if possible; the complexity frequently isn’t worth the effort. Wild guess: your program is single threaded, and you’re going to harness 12 cores by running it 12 times simultaneously? That is, the end result isn’t that instead of a single run that takes a week now finishing in half a day, instead you’ll still be waiting a week, but at the end of the week you’ll get 12 results instead of 1. (If not, think about it. Tightly coupled parallel programs are far harder to write and debug, and more fragile if you try to scale past a single computer.)

              Once you’re that far, scaling up is relatively simple, at least until you run out of combinations of parameters and repeated runs that you want. If you’re Linux savvy (again, recommended), this can be as simple as firing up some EC2 instances and manually SSHing into them to start up your jobs. (If you want something a bit more automated, there are a variety of tools, many free, that can help. I have thoughts, if you care.) Sure, it’ll run $11 per week of computing time you save (assuming EC2’s “small” instances), but how much of your time will it cost to save that computing time through optimization?

              Finally, I recommend nosing around your university and seeing if there are free compute resources available. Some universities run HTCondor (aka simply Condor) on their workstations and make the cycles available for free. You might be able to get access to the Open Science Grid. And some particularly enlightened universities provide free-of-charge cluster computing to any campus researcher.

              1. Rodyle says:

                True about the parallelisation. I’m not too sure about nosing around the campus though; I think our campus-wide IT is run by a bunch of monkeys; computers take about 5 minutes to log into windows, 5 more if you’re going from startup and don’t save your personal settings (every time you open anything, it thinks you do it for the first time). The Linux partitions are a bit better, but we’re still dealing with highly outdated versions of almost everything.
                We have a pretty good guy within our bioinformatics, and I think we have ten or so 12+ core computers lying around on which to do the heavy calculations. However, since they’re for the entire department, I don’t know how long I’d be there if I used more than 12, maybe 15 cores.

                I love your advice though. I’d happily check out EC2 as soon as I start on a bigger project in my master’s and get some form of budget.

    4. nmichaels says:

      That’s what the inline keyword is for.

  12. Jabrwock says:

    “This particular rule needs to be followed if you don’t want a bunch of annoying warning flags every time you compile.”

    So it’s a compiler-enforced coding style. ;)

    1. McNutcase says:

      The thing is, compile-time warnings aren’t fatal unless you tell the compiler to make them fatal, so it’s by no means uncommon for sloppy coders to get a fairly reliable binary that threw a LOT of warnings on compile. A depressing proportion of coders actually suppress compiler warnings. So the “explicitly mark floats” convention does actually have a purpose, since it’s by no means certain that a given coder will see the warnings the compiler throws.

      And as for warnings in shipping products, well, start a game of Half-Life 2, look at the console, and notice just how many warnings the game threw as it loaded the map. Then think about the fact that Valve has a reputation for shipping low-bug products. Warnings are warnings, not errors…

      1. MikalSaltveit says:

        Warnings hide errors. Check the following bit of code

        public void PrimaryGameLoop ( ) {
          bool SUCCESS = MoveCharacterForward();
        }

        WARNING: SUCCESS was declared but never used.

        Either SUCCESS is something you need to check for, and handle when it returns false. Or it has no meaning, in which case it should be removed, lest another programmer try using it.

        1. Felblood says:

          –Or it was used by a debugging function that didn’t go into the end user product, but is valuable to leave in the engine codebase for future use, or not worth paying someone to strip out of the market version.

          Assuming you have seen the Entire Elephant is Unsafe.

          1. MikalSaltveit says:

            Then the proper way to write it would be
            if ( !SUCCESS ) {
              Debug.Log ("Character unable to move!");
            }

            Except in my example case, when the character is unable to move it dies. Not handling that does not throw an error, but a warning. The point being warnings can hide errors in programmer logic, which are far more insidious than just unused variables.

      2. I believe this document was written well before Carmack saw the light when it comes to SCA (Static Code Analysis), so I wouldn’t be at all surprised if they compiled with warnings ignored or even silenced. (If you’ve ever tried to compile a middleware product in the gaming space with warnings-as-errors set, or gone hardcore and used something like -Weverything to really ask the compiler to be vocal about anything possibly broken at compile time, you’ll know why. Maybe this has all changed in the last few years as gamers have stopped accepting random crashes with quite so much good grace.) The existence of warnings about implicit casts from doubles to floats might have been completely masked by hundreds (or literally thousands) of other warnings, so this was needed as a rule for coding style.

        Also (far more importantly), this was a C document before it was a C++ document, and C is far more weakly typed than C++ (or C is weak vs C++’s strong, depending on whether you define those two words as a continuous scale or as two points with an arbitrary split), and the compiler typically is just as permissive, to support this.

        A simple test program with “float c = 1.0; return c;” inside main will compile with clang without warnings by default (GCC results should be close to identical; Windows compilers may vary), and even with -Weverything the only warning given is a -Wconversion on the float-to-int conversion in the return. There is no warning generated in C for a double literal converted to a float, so in C this is a style guide mandate to make code clearer (the comments in the document indicate it’s so you can do double maths and cast to a float, or float maths and assign to a float; those are clearly different, and the literals signal coder intent), not a compiler-warning remover (as it is in C++) that everyone should be doing to write clean code.

      3. nmichaels says:

        I think the key here is that sloppy coders ignore warnings. Don’t hire sloppy coders.

  13. Brandon says:

    I find people who argue that everything should be broken up into easy to understand code bites that need no commenting tend to be the sorts who use the really high level coding languages that obscure a lot of the guts of the language to begin with.

    As soon as you are working in the lower level languages, you have to do more commenting. Someone else mentioned how assembly language almost requires a comment on every line to be even remotely understood, and it’s true. The “write self-explanatory code” paradigm only works if the coding language is sufficiently high level to support it.

    C++ is a bit tricky here because you can write it like a high level, heavily object oriented language, but arguably it isn’t one even though it has those features.

    1. Kdansky says:

      Good point! When you write functional code in Scala or Haskell, breaking your stuff up into three-liners without comments makes sense, because those three lines do a ton of work by means of sophisticated tools like mapping higher order functions to data structures, but can be explained easily on a high level (“This returns the closest Locations on the map to a specified point”).

      On the other hand, when you actually implement a low-level algorithm in C++, your only tools are low-level, and you need to explain why you copy seven bytes from here to there.

      1. Brandon says:

        Yes, exactly! Someone who argues that comments are unnecessary has never encountered pointer stew in C/C++.

  14. Mike C says:

    I’m not sure about C++ in Visual Studio, but in C# and VB.NET you can turn on an option to automatically generate an XML documentation file for your code. Then when you type in three comment slashes in C# (///) or the VB.NET equivalent (''') in front of a class or method, it will automagically generate a documentation template for you, and all you need to do is fill in the blanks to tell what the class / method does, and what the parameters do, and what to expect as a result, and so forth.

    The code documentation file that gets created at compile time can then be fed into something else to generate a document (Word, HTML, whatever) for other programmers. This is particularly handy for API developers, but it’s also good for other forms of code.

    For example:

    ''' <summary>
    ''' Restores the Value that was present when the control had received focus.
    ''' </summary>
    ''' <remarks>This method is typically called in the control's Validating() event when you set e.Cancel = True.</remarks>
    Public Sub RestoreValueOnFocus()
        Value = _valueOnFocus
    End Sub

    1. Kronopath says:

      Unfortunately it’s not available in Visual C++. Oh how I wish it was though.

      1. Alan says:

        There are other options. I’ve used Doxygen in the past and quite liked it.
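
        The markup is similar in spirit. A rough, hypothetical example for C++ (the Control type and function are invented):

        /**
         * @brief Restores the value the control held when it received focus.
         *
         * @param control The control whose value should be rolled back.
         * @return true if a saved value existed and was restored.
         */
        bool RestoreValueOnFocus( Control &control );

        Run Doxygen over the sources and it can spit out HTML (or LaTeX, or man pages) from those blocks, much like feeding the compiler-generated XML file into a document generator as described above.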

  15. Abnaxis says:

    I think it’s because I learned OO programming in Java (and because I use Java a lot in work) but I always do a double take whenever I jump over to C for a bit and see functions starting with capital letters. I’m used to those being reserved for classes/constructors. It’s weird how little things like that can throw you off.

    1. Ithilanor says:

      Agreeing with this; most of my experience is with Java and C#, so I’m used to functions starting with lower-case letters. What languages besides C++ usually have upper-case CamelCase function names?

    2. Neko says:

      All the C++ I’ve ever done, it’s been the same; LeadingCapitalsForClassesAndStructs, underscored_names_for_methods_and_members. (Unless you’re using library code that does camelCaseMethods, naturally)

      When I tried C# out, it was really off-putting to see all the CapsCase method names everywhere.

    3. silver Harloe says:

      The Sun standard for Java is UpperCamelCase for classes and interfaces; lowerCamelCase for methods, attributes, and variables. I take it the MS Standard for C++/C#/C is UpperCamelCase for classes, interfaces, and methods; lowerCamelCase for attributes and variables?

      Funny how that one shift in a column can throw everything off. Of course, it’s all painful to me who prefers lower_spaced_with_underscores :)

    4. Cuthalion says:

      This is my experience as well. I’ve dabbled in C++, but most of what I’ve done has been Java, so I’m really accustomed to lowerCamelCase for methods (which I still call functions because of the order I learned things in).

  16. Noah Gibbs says:

    One reason for the “Render” versus “unexported_” style was certain weird custom linker tools. We did some of this at Palm where “exported” was kind of a big deal, partly because it meant you couldn’t do dead code elimination even if the function deserved it. We counted bytes pretty aggressively at the time.

    Basically, you can change the default C linker and have your linker enforce that automatically, which is actually pretty cool. Unused unexported funcs go away, exported funcs are automatically picked up and linked into a shared library. The latter is extra-cool on Palm where building the files yourself was kind of a pain at the time.

    Down-side: you have to write custom linking tools and add flags to replace the default C linker.

  17. silver Harloe says:

    [code]
    for ( key in actions['replace_html'] ) {
        $( key ).replaceWith( actions['replace_html'][key] );
    }
    for ( key in actions['timeout_html'] ) {
        // notice outer anonymous function is immediately called, so timeout_fn actually contains the inner anonymous function.
        // this closure preserves the current value of key for later, and timeout_fn has to be a function to make setTimeout() happy
        timeout_fn = function( key ) {
            return function() { $( key ).html( '' ); };
        }( key );
        setTimeout( timeout_fn, actions['timeout_html'][key] );
    }
    [/code]

    pulled from one of my functions I was just editing today. I’m a “comment minimalist” but hard to read is hard to read.

    1. silver Harloe says:

      awww, I hit “request deletion” and it got unmoderated into a post. I was more playing with formatting than having anything useful to contribute, sadly.

  18. Amarsir says:

    Although using descriptive function names is a form of documentation that no one could really argue with, it’s also easy to goof up unless you step back for a larger view every so often.

    For example, just this morning I was working on a function that displays financial data for a user, from local memory if it exists or via a remote fetch if not. The function I called “viewStatement”. (Oh yeah, I lowercase the first letter of functions. I’m not sure why I started, but Variables and Classes I don’t.)

    Anyway, viewStatement starts by checking what’s stored, and if it’s not there it calls fetchStatement. Either way, when the data is available it needs to be put on the screen. Which, for logical organization, I put in another function. Which I called showStatement. And this seems perfectly reasonable in the moment, until I realize that I’ve now given different meanings to synonyms, and in so doing have guaranteed myself future confusion.
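
    In code terms, I’d wound up with something like this (sketching the signatures; the parameter is invented):

    // All three names are near-synonyms for "display", yet they do three
    // different jobs; a future reader has to open each one to find out which.
    void viewStatement( int accountId );   // entry point: checks local memory
    void fetchStatement( int accountId );  // remote fetch on a cache miss
    void showStatement( int accountId );   // the one that actually draws it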

    So I probably don’t comment as much as I should, but I make a point of bookending my code sessions with it. Initially I comment to lay out the plan, then fill in code between the comments, then go back and sum up with explanations of decisions I’ve made in the process. That way even when I do something that later seems idiotic, there’s an easy record of what I was thinking at the time.

  19. Piflik says:

    Since the number of errors in any piece of code is always constant, you can only provide a bug-free product by pushing the errors into the comments…another reason for comments…

    1. silver Harloe says:

      I once heard that, “every C program of sufficient complexity contains at least one buggy line, and every C program of sufficient complexity can be reduced in size by one line. So all C programs of sufficient complexity can be reduced to one line that doesn’t work.”

  20. TheZoobler says:

    So I’m someone who realized an interest in programming too late, just about to get my BA in Psychology and far too late to switch over to a Computer Science major.

    I’m naturally inclined towards math but have only done up to Pre-calculus/trigonometry thus far, and that years ago. Similarly, I took high school courses which taught basic Python and Java, but that was yeaaars ago and I don’t remember much of it now. Just, oh, a conditional loop, I recognize that!

    So basically, I’m starting off at the bottom of the barrel. But I do plan on using the few elective hours I have left to take Calculus and introductory programming courses before I graduate. So…

    TL;DR: What would be the best way to teach myself C++? A specific book? A good online tutorial?

    Is it not even worth bothering until I at least work my way through calculus and an introductory college course?

    1. Julian says:

      You don’t need calculus for programming.

      Go for the intro programming courses, though. Don’t worry specifically about learning C++ until you have the concepts under your belt. Understanding data structures and algorithms is far more important than knowing a particular language. (And C++, which is fairly complex as languages go, is probably not the best choice to learn with.)

    2. Felblood says:

      Calc is more essential to physics or really intense optimization, like building a search engine or something. You don’t need anything fancier than that Pre-Calc to get started.

      That Trig will serve you well if you start doing anything with a modern graphics card though. They have triangles for blood, doncha’ know.

      Brush up a little on that Matrix Math they taught you in Pre-Calc, since it will suddenly stop being the most useless thing you ever learned.

      The best way to learn to write code is the best way to learn to write books:

      1. Find a work by an author you respect in the genre you want to write, and read it until you understand the how and *why* of how it works.

      2. Write a short program/story that uses what you learned.

      3. Repeat 1 and 2 until your definition of a short program/story has grown to include something you’re truly proud of.

      4. Write code until you are awesome.

      This can be tricky, since good code is harder to get your hands on than a good book.

    3. Kdansky says:

      I would advise against starting with C++. It’s full of arcane trickery that makes little sense to a beginner. Learn a high-level language first (for example Java, because it has a huge library), then delve into the machine level second. Otherwise you start your learning with linker errors that make no sense, hunting down libraries you don’t need, and doing memory management before you even know how to do the most basic things.

      1. Amarsir says:

        I agree, don’t learn on C++. You’ll be reinventing the wheel way too much, and burdening yourself with syntax details instead of actually learning how to program. I bet Shamus wouldn’t even be using it now except that he and I are clearly in the “old dog” stage of life.

        I recommend learning on “Python.” http://www.python.org/ Like C++ it’s free and cross-platform, and there are great online resources. But unlike C++ there’s a lot less baggage.

        Compare it to taking a writing class. C++ starts out by saying that your reports must be typed in a certain font, with certain spacing, how many paragraphs you need, how your cover sheet lays out, where the staple must sit, etc. Python skips all that and says “here’s how you express yourself effectively.” Learn with that and then later if you decide you want C++, it will be much easier to go backwards and add any formality you may have missed.

        And take it from someone with a B.S. in Comp Sci: being motivated to teach yourself is much more fruitful than taking proper CS classes.

        1. nmichaels says:

          Seconded. Python is a great language to learn with.

          Math is very unimportant if you’re not making games or simulations of the real world. If you are, trig and linear algebra will get you an awful lot further than calculus. I’ve only used calc on one programming project, and it was a toy that models orbits. Everything I’ve ever done with graphics has used some trig. Nothing I’ve ever done for work has used anything more advanced than algebra.

          Well, except for the electrical engineering (not programming) bits. Those use some pretty heavy calculus. If you ever want to know how to apply calculus and imaginary numbers and trigonometry to the real world, all at once, read an intermediate EE textbook.

    4. Alan says:

      Don’t sweat calculus; basic algebra will get you through most programming, add in some trigonometry for some simulations and graphics. There is programming that calls for calculus, but if you’re heading in that direction you’ll know well in advance.

      If you’re considering your first language, C++ is… a challenge. It’s a language with a pile of non-obvious behavior, largely a result of needing to retain a high level of compatibility with C. As others have suggested, consider another language, especially a “managed memory” language. Python is a reasonably good entry point. Java’s not entirely terrible.

      But.

      If the arc you envision for yourself does eventually lead to C, C++, or some other non-managed-memory language, there might be value in starting with C. Not C++, C. It turns out that thinking about managing your own memory is really, really non-obvious, and many people never really get it. I don’t know if it’s that managed memory languages drew in people who otherwise wouldn’t have been programmers, or if learning in a managed memory language creates a mindset that’s hard to break out of, but I’ve met too many programmers who simply can’t manage their own memory. And if you’re working in languages like C or C++, that’s a dealbreaker. Either way, there is a lot of value in finding out sooner rather than later.

      So why C over C++, given that C++ has a great many features that I love? Because C is a far, far simpler language. With C++ it’s easy to get distracted by the complexity. My first programming class was in C++. I didn’t really get why you’d want object oriented programming. So I said screw it and wrote C for the next year. I found it extremely valuable; I learned to manage memory properly, I learned a lot about program organization, and I gained an appreciation for what C++ offers.
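
      To make “managing your own memory” concrete, the whole discipline in C boils down to rules like the one in this toy example:

      #include <stdlib.h>
      #include <string.h>

      /* Caller owns the returned pointer and must free() it, or it leaks.
         That ownership rule lives only in this comment and in your head;
         nothing in the language enforces it. */
      char *CopyString( const char *src ) {
          char *dst = (char *)malloc( strlen( src ) + 1 );
          if ( dst ) {
              strcpy( dst, src );
          }
          return dst;
      }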

      1. Wedge says:

        ESR, in his guide to becoming a hacker, suggests that any aspiring hacker learn the following languages: Python, C, LISP, Java, and Perl, and I generally agree with his reasoning.
        * Python first, because it’s an excellent language for beginners, but is still a powerful, fully-featured language that will still be useful when you graduate to writing large, complex programs
        * C, to learn pointers and memory management–it’s important to understand what’s going on under the hood even if you don’t have to deal with it yourself most of the time
        * LISP, because functional programming
        * Java and Perl, for practical reasons.
        (I suggest reading the whole article I linked above, it has a lot of very good advice for an aspiring programmer even if you’re not interested in the larger “hacker” culture)

        I wouldn’t go near C++ until you’re comfortable with your skills as a programmer. It’s a very bloated language, with a lot of gotchas that can be extremely confusing even for experienced programmers, and the payoff you get for the extra effort is better performance, which is almost never worth it at that stage. You’re not going to be programming AAA games in your first week as a programmer, so save yourself the headache. I’m saying this as someone who learned C++ as his first language, and it was a painful few years before I realized how much dealing with C++’s flaws was holding my learning process back.

        1. nmichaels says:

          Replace Lisp with Scheme (which SICP calls Lisp) in ESR’s list of suggestions, and feel free to ignore Perl. It hasn’t got any semantic language features that aren’t in Python and it’s ugly as sin.

          1. Wedge says:

            Scheme *is* technically a dialect of Lisp–the other major dialect being Common Lisp. I’ve only used Scheme so I can’t comment on whether CLisp is better or worse.

            I would agree on not sweating Perl much. It’s useful for simple tasks, especially those that require heavy text processing, but it scales poorly and can be very difficult to maintain. You won’t learn much doing Perl that you won’t learn doing Python, and Perl’s primary draw, regexes, are available in every language under the sun these days.

            Which sort of brings me to my point about learning the languages suggested above–learning new languages will teach you different ways to think about programming that are invaluable no matter what language you end up working in. That’s why I suggest Lisp, even though you will likely never write a single line of Lisp professionally in your entire career. So I recommend you learn new languages regularly just for the fun of it!

            Some languages I would suggest in this vein:
            Lua: prototypal inheritance is pretty cool, and very different from how most OOP works
            Prolog: logical programming is even weirder than functional programming, but you can do some really cool stuff easily once you figure it out
            Forth: for imperative, stack-based programming
            Brainfuck: because lol

            1. Alan says:

              “[Perl] scales poorly”

              Define “scales.” In terms of being able to run large systems? It works just fine. If it’s good enough for Slashdot, Bugzilla, Amazon, Craigslist, IMDb and others, it’s probably good enough for you.

              “[Perl] can be very difficult to maintain.”

              So can any program in any language. But in practical terms, it’s akin to C++. Take that how you will. :-)

              “Perl's primary draw, regexes, are available in every language under the sun these days.”

              For me, Perl’s primary draw is the “there’s more than one way to do it” philosophy. It has its downsides, but it means that a Perl solution can more easily map to a problem.

              And saying that regular expressions are available in other languages glosses over part of Perl’s appeal in this area: Perl’s support is a deep part of the language, allowing you to write succinct and clear code without a pile of boilerplate just to deal with a regex API. It increases the initial learning time, with the tradeoff of speeding up reading and writing for someone with experience.

              1. nmichaels says:

                I think “scales poorly” was intended to say that as the amount of code written in Perl increases, the ease with which it’s wrangled decreases at a rate faster than for alternative languages.

                1. Peter H. Coffin says:

                  Though, this is the case with any language that structurally promotes putting code into a single chunk of file. Which is pretty much any language that runs primarily by means of a parser/interpreter/JIT compiler instead of a formal build mechanism. BASIC is reputed to be a hot mess. Perl is reputed to be a hot mess. PHP is reputed to be a hot mess.

                  This serves both to make such a language a draw for n00b programmers (meaning both new and unsophisticated) who aren’t used to writing rigorous code, AND to make incremental changes that seem like nothing at the time a standard development tactic. Since the test/change/test cycle is so short, someone can burn through literally a hundred changes in an hour and completely lose track of what was actually different from version to version, and there’s typically no segregation between “developing”, “testing”, and “production”, and zero change control/versioning. You can APPLY those things to “hot mess” languages, but there’s no point at which it happens naturally, because there’s no point at which you say “This portion is fixed”.

          2. Alan says:

            I’m with you on Scheme over Lisp. Although practically you can get away without learning a functional language for a long, long time. Knowing a functional language will absolutely make you a better programmer, even if you never use it again, but you can delay it.

            As for Python over Perl. *sigh* I love Perl. It’s practical, powerful, beautiful, and it genuinely makes me happy to work in, in a way that Python never has. But while Python never fills me with joy, I must yield that it’s a rock-solid language. And Perl is, ever so slowly, on its way out. Perl is nowhere near as dead (or as ugly) as its detractors claim, but Python is the safer bet. (At least I can console myself that Python survived the Ruby fad.)

            That said, take a look at The Second State of the Onion, a sort of keynote by the original creator of Perl. Stop around the point where he starts talking about Chinese (unless you’re enjoying yourself, in which case by all means carry on). If the idea that Perl is messy because human languages are messy, and that both map well to messy real-world problems, appeals to you, you might want to take some time to look into Perl; you might be the sort of person for whom Perl will add happiness to their life. Learning Perl, from O’Reilly and Associates, is a good place to start, and is the best technical book I’ve ever read (although it may assume you’re a programmer to start…).

            1. nmichaels says:

              Alright, Perl has its supporters and I won’t say that your opinions on it as a language are invalid because I never learned enough Perl to make anything significant in it. The better (less flamy) argument is what you said about it being on the way out.

    5. TheZoobler says:

      Just wanted to say: Thanks for all the tips :) I think I will look into all of this and probably start with Python, then. I would love to code games one day, but I’m sure that even if I adopted the most intense self-taught practice and research regimen, that’s still years ahead of me. Still, no time like the present to start.

      1. nmichaels says:

        Don’t wait years! Once you have the basics down (a week? two?) you can start hacking up horrible abominations that look like games. Take a look at pygame for some basic graphics-y things if you want to go that route. If you’re okay with text interfaces, there are tons of gamey opportunities. If you get stuck with programming problems, look at Stack Overflow. It’s got a very helpful community and lots of questions already answered.

  21. Paul Spooner says:

    “curface” is supposed to be “surface” perhaps? Or is that a special kind of geometry?

    Ditto on the comments. I tend to over-comment stuff myself, and it has occasionally bitten me later as the code mutated, but that’s just poor practice in general. If the code is complex enough to merit a comment, it’s not likely to mutate a huge amount (in my experience, anyhow).

  22. Chris says:

    The comment on line 11 is necessary because of the conversational nature of your overall comment strategy for the entire method. If you commented the critical stuff only and left out that comment you’d ruin the step-by-step recipe that makes this method palatable. If the rest of the function didn’t look that way the comment would be unnecessary.

    In other words, each comment does not exist in a vacuum. Developers have to be thoughtful about how their code will be perceived as a whole.

  23. ShadowAgent says:

    If I’m working in Render.cpp, then all of the functions declared in Render.h would follow the format of Render_____ ()

    Why not use namespaces?

    1. Neko says:

      I wondered about that too. At my old work, we put helper fns like that in the anonymous namespace of the cc file.

      #include "RenderingThing.h"

      namespace
      {
          void
          render_hud(…)
          {

          }
      }

      void
      RenderingThing::render_scene(…)
      {

      }

  24. Hitchmeister says:

    Do most compilers offer -passiveagressive switch functionality?

    1. Zukhramm says:

      Don’t think so but there is -pedantic.

  25. Steve C says:

    Shouldn’t line 1 be:
    //This calculates surface normals on a mesh with seams.

    1. Shamus says:

      The part I posted is just an excerpt. There’s another whole section that precedes what you see that passes over the model and builds a list of verts with all co-incident points merged.

  26. Winter says:

    “Argument from authority” is, technically, “argument from false authority”. It’s just that, in our modern world, we don’t really recognize “authority”… however, if there is an authority on programming, I would say John Carmack qualifies. You could do a lot worse than emulating him…

    1. Dave B. says:

      In English, the word “authority” can be used in more than one way. In one sense, you might say “Because John Carmack has a position of authority at Id Software, we should listen to his views on programming.” This is a very weak argument, because his position is only an indirect measure of his status as a programming expert.

      Or, you could say, “John Carmack is an authority on programming, as he has demonstrated tremendous skill and knowledge in that subject.” This argument is valid, as long as Carmack is an expert in the relevant field. In this case, I think he is.

      The “argument from authority” fallacy comes from people mistaking the former situation for the latter.

      1. WJS says:

        No it isn’t. It’s a fallacy because it presumes that Carmack isn’t just really good, or even a genius, but literally infallible, so anything he says must be true. I’m sure that you can see the problem with that.
        (Hint: he’s the guy who said “I’m sure a thousand characters will always be enough for the GL extension string”, which he has seriously regretted since. Geniuses can fuck up just like regular people)

  27. Goggalor says:

    Your public/private Render____() functions look like a holdover from C to me.

    The ‘Render’ prefix on public functions is working like a namespace and the ‘static’ on local functions seems to me a natural way to make private methods. I think it’s better even for C++ as it means you don’t clutter up your class definitions with private methods the caller shouldn’t need to see.
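
    In sketch form (a hypothetical Render.cpp; the function names are invented):

    // File-local helper: 'static' gives it internal linkage, so it's
    // effectively private to Render.cpp and never appears in Render.h.
    static void DrawCrosshair( void ) {
        /* ... */
    }

    // Public function: the Render prefix does the job of a namespace.
    void RenderFrame( void ) {
        DrawCrosshair();
    }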

    Also, the Visual C++ class browser won’t show local static methods, so it simplifies the interface when browsing the code: you see what the caller has access to.

    I’ve just started learning Objective-C, which doesn’t have public/private methods or namespaces, and it’s surprising how quickly my coding style evolved to match the version you describe here.

    1. Shamus says:

      This sounds very likely. The codebase began as C back in 1994-ish, and we started migrating to C++ in… 1998? Something like that.

  28. Kdansky says:

    I disagree with a significant number of your programming arguments in most of your posts. This one, on the other hand, is absolutely correct. Small functions are nice but not always worth it; pointless comments should be deleted, and useful comments are a must.

  29. Mersadeon says:

    I agree on the “comment it!” thing. I’m part of a three-man group that has to do its homework together, and commenting your stuff just makes it SO much easier for everyone else to follow your train of thought. And in two weeks, when the tutor asks you to explain your program to everyone else, you yourself will be thankful for your own comments.
    Edit: Also, we are learning Haskell right now, and with code that brief, a comment really doesn’t hurt anyone. Seriously, depending on what you use, you don’t even have to mark comments with a sign; instead you mark every line of code (literate Haskell works this way). (Although I personally still prefer marking the comments instead of the code.)

  30. Atle says:

    I like #11:

    1. It makes it possible to read the whole flow in human language just by reading the comments. If some parts have comments and others do not, I have to switch back and forth between code and comments. In short: It’s consistent.

    2. It makes it possible to see whether a block of code and the intention of the code match. You can’t see that by reading the code alone.

    I like the style. I put separate parts of functions in small blocks. I don’t like functions for every small code block, for the reasons already described in the post. Too much variable passing and jumping around in the code. (Variables passed around can be put in a struct, though.)
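
    A contrived example of what I mean, with one function split into small commented blocks:

    #include <math.h>

    void NormalizeAndScale( float *values, int count, float scale ) {
        // Find the largest magnitude so we can normalize against it.
        float max_value = 0.0f;
        for ( int i = 0; i < count; i++ ) {
            if ( fabsf( values[i] ) > max_value ) {
                max_value = fabsf( values[i] );
            }
        }

        // Normalize everything to [-1, 1], then apply the caller's scale.
        if ( max_value > 0.0f ) {
            for ( int i = 0; i < count; i++ ) {
                values[i] = values[i] / max_value * scale;
            }
        }
    }

    Reading just the comments gives you the flow in human language; reading a block under its comment tells you whether the code matches the intention.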

  31. Loonyyy says:

    I find it interesting that the original analysis came down as decidedly anti-comment, and felt that lots of comments were a younger-person thing. While he had a good example of how not to comment (three comments telling you something that was already obvious about two lines of code), he comes down as quite anti-comment, with this delightful bit of stupidity:

    “For example, what would happen if someone changed the function and removed the const at the end? Then the surface *COULD* be changed from within the function and now the comment is out of sync with the code.”

    *Gasp*. You mean you have to update documentation when you update code? SHOCKING! Whenever you update code, you have to do that. If you keep only the commenting you had originally, of course it’s going to go wrong. Similarly, if you change the purpose of a section of code and then keep the same names for variables and functions, it’s going to be bad. This seems to indicate precisely which school of programming thought he lies in: he doesn’t care about the comments he writes, and he doesn’t think about them or about making them better. Ironically, these types tend to write exactly the comments he so decried when they’re made to comment.

    “Extraneous comments hurt the readability and accuracy of code thus making the code uglier.”

    They don’t hurt the accuracy.

    As for the readability, some environments will let you fold /* … */-style paragraph comments open and closed. If you can’t read commented code, that’s what you should do. But reading a two-line description of a wall of math obfuscated by code, before you try to break the code down, only improves readability.

    It makes it infinitely easier to understand someone else’s work, and that someone could be you in a few months. It also serves as a more detailed version of pseudocode. Read the comments in the headers, and you’ll have an idea of how the whole thing fits together.

  32. BlackBloc says:

    >>>Function names start with an upper case, and in multi-word function names each word starts with an upper case

    The second part is awesome. The first part clearly shows that Carmack is a C programmer. In most companies I’ve worked at where we do object-oriented programming, starting with an upper case is for classes, not functions.

    ClassName
    functionName()

    1. Shamus says:

      You’re right. Carmack did C for a LONG time. He didn’t move to C++ until long after a lot of other people had made the jump.

      http://www.phoronix.com/scan.php?page=news_item&px=MTI3NDQ

      Looks like Doom 3 was his first C++ game. That’s 2004.

  33. WJS says:

    I’d be interested to hear the opinions of anyone else coming through here on comment indentation. Should comments be at the same level as the code (binding them more tightly to the code they belong to), or not (so they stand out more if you’re just skimming the comments)?
