A new programming language for games, Annotated: Part 2

By Shamus Posted Friday Feb 27, 2015

Filed under: Programming 110 comments

The annotation continues…


Link (YouTube)

6:00 Garbage collection.

In C++, you have manual memory management. You need to request enough memory to hold all your space marines. If you need more marines[1], you have to allocate more memory. When you’re done with a marine, you have to tell the system you don’t need that chunk of memory anymore. You can’t ever make a mistake, because using memory you’ve freed causes a crash. Using memory you haven’t yet obtained causes a crash. Trying to free the same bit of memory more than once causes a crash. And forgetting to free memory causes a memory leak where your program will consume more and more until it crashes.

And by “crash” I mean, “It might crash or it might malfunction badly somewhere later and you’ll have NO IDEA where it all went wrong.”
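
To make that concrete, here’s a minimal C++ sketch (the Marine type and the numbers are invented for illustration):

struct Marine { int health = 100; };

int main() {
    // Ask for enough memory to hold a squad of marines.
    Marine* squad = new Marine[20];

    // ... the game runs, marines fight ...

    // When we're done we must hand the memory back, exactly once.
    delete[] squad;

    // Each commented-out line below is one of the mistakes described above:
    // squad[0].health = 50;    // use-after-free: maybe a crash, maybe silent corruption later
    // delete[] squad;          // double free: undefined behavior
    // squad = new Marine[40];  // allocated but never freed: a leak that grows until something gives
    return 0;
}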

Garbage collection is where the language itself manages your memory for you. When you’re done with a space marine, you just forget about him. At some unknown point in the future, the garbage collector will run. It will eat some processor cycles and figure out what bits of memory you’ve allocated but are no longer using, and it will free them for you. But you don’t control when GC runs, and it has some performance cost. So, right in the middle of your high-performance game you might have this random process kick off at an inopportune moment.

This is the stuff of flame wars. I’ve seen people claim that garbage collection is terrible and slow, and I’ve seen people claim that it’s super-fast and complaining about GC performance problems is merely ignorance and superstition. I strongly suspect it depends on what you’re trying to do and how your program uses memory.

I don’t know personally. The only GC language I’ve used is Java, and I wasn’t doing anything resource-heavy at the time.

I’ve often wondered how feasible it would be to offer a language with garbage collection, but have the GC only run when you allow it. Something like, “Here’s five milliseconds, do as much as you can in that time.”

I don’t know enough about how garbage collection works under the hood to know if that even makes sense, but I think it would be a really attractive compromise between “manage memory yourself” and “have the program devour an unknown number of CPU cycles at an unknown point in time”.
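
As a purely hypothetical sketch of that idea (this isn’t any real collector’s API; CollectSomeGarbage is an invented stand-in for “do one small, resumable slice of collection work”):

#include <chrono>

// Invented stand-in: does a small, resumable slice of collection work and
// returns true while there is still more garbage left to find.
bool CollectSomeGarbage() { return false; /* stub, for the sketch */ }

// The proposal above: hand the collector a fixed time budget and let it run
// until either the work or the budget runs out.
void RunGarbageCollection(std::chrono::milliseconds budget) {
    const auto deadline = std::chrono::steady_clock::now() + budget;
    while (std::chrono::steady_clock::now() < deadline) {
        if (!CollectSomeGarbage())
            break;  // nothing left to collect right now
    }
}

int main() {
    RunGarbageCollection(std::chrono::milliseconds(5));  // "Here's five milliseconds."
}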

The game Continue?9876543210 is all about a videogame character who is no longer in use and is trying to escape the garbage collector. I didn’t play it, but Chris did:


Link (YouTube)

13:00 I think making a compiler is easier than making a AAA game.

A compiler is a program that will take your code…

100 REM BASIC WAS A FANTASTIC EARLY LANGUAGE.
110 PRINT "Shamus Young's BASIC program of text-spewing."
120 PRINT "For best results, direct the output to a printer"
130 PRINT "that belongs to someone else. Screw trees!"
140 GOTO 100

…and turn it into an executable that anyone can run. It’s a program that takes source code as input and produces other programs as output. Blow is suggesting that making a AAA game is much harder than making a program that makes other programs.

On one hand, this sounds sort of audacious. On the other hand: Given the amazing number of programmer-hours that go into these games, it stands to reason that they would have to be more time-consuming than a compiler. In the old days, compilers were often one-person projects.

Having said that, I suspect (with no experience to back it up) that the trick of making a good compiler isn’t in getting it to work, but getting it to work well. It’s not that you need a lot of hours, it’s that you need lots of iteration. You release it, people use it, find problems or ambiguities, you make improvements. Repeat. Forever.

I don’t feel any big need to make my own language, but I’ve often thought it would be a very good self-educational project to make a compiler.

17:00 Joy of programming.

Blow makes the case that a good language will be fun to use and that we’ll be more productive simply because there will be fewer days when we get demoralized by confusion. Writers get writer’s block. Programmers get daunted by complexity and the dread of untangling something incomprehensible.

And I think he’s basically right: There really are “quality of life” issues when it comes to dealing with difficult code and unsatisfying solutions. Can a new language fix this? Making a new language to make programming more comfortable is like a writer that stops working on his book to remodel the room where he does his writing, in hopes that the new setting will make him feel “more motivated”. It’s probably not actually worth the trade-off in terms of time, but it can sound really appealing if you find yourself stuck[2]. And I often find myself stuck.

He says this new language should be “designed for good programmers”, which touches on a long-standing debate we have in language design. Programmers make mistakes. Most of our mistakes fall into a very small number of different categories. The same mistakes pop up again and again, and so sometimes languages are designed to protect us from making those blunders. But protection is often restrictive in some way. I don’t care if you’re talking about body armor, training wheels, airport security, speed limits, or bounds checking on arrays: when you’re being protected you’re usually also being hindered in some way. This doesn’t mean that protection is bad. It just means that in some cases you might prefer to live dangerously and go without the protection. We wouldn’t want to replace all scissors with safety scissors, and in the world of videogame programming, we might not want to hinder our good coders to help our bad (or inexperienced) ones.

24:00 Productivity

There’s a bit of an interruption in the presentation here when Blow realizes he labeled the graph wrong and he feels the need to fix it. (Which I totally understand.) But I don’t want that to distract from the point he’s making, which is something too many young programmers don’t grasp. Heck, I think old programmers have trouble with this concept. The problem he’s talking about is that program complexity takes a massive toll on productivity. In the first two days of a project you’ll feel like a miracle worker who can do anything, and in the last two days of the project, as the codebase sails over the event horizon of human understanding, you’ll struggle to change even simple things without creating a dozen bugs. It’s the reason so many projects ship late[3]. We tend to make our time estimates in those early days before things get out of hand.

This is based entirely on my own gut feeling, but I have the sense that if you could actually plot the complexity / cost dynamic on a graph for real, you’d see the cost rise on an exponential curve. As you approach the limits of the programmer’s ability, the time cost shoots up drastically. If this is true (and not just my own bitterness over too many projects that got away from me) then any change that can bring the complexity down a couple of notches could have a huge payoff.

Again, I’m not endorsing his language[4]. I’m just saying that runaway complexity is a real problem and that the payoff might be bigger than it seems. A 5% reduction in complexity might result in more than a 5% reduction in cost.

 

Footnotes:

[1] Who doesn’t?

[2] And if you ARE stuck, it’s probably a good way to fill the time.

[3] This is why programmers have the adage, “The first 90% of the project takes 90% of the time, and the last 10% of the project takes the other 90% of the time.”

[4] I haven’t even caught up on his later presentations to know what his language is like.




110 thoughts on “A new programming language for games, Annotated: Part 2”

  1. EwgB says:

    Oh boy, oh boy! Another programming post! Yay, we all get to be insufferably pedantic again! :-D

    1. Daemian Lucifer says:

      Some of us get to be insufferably pedantic

      ;)

      1. Cybron says:

        ACKTSHUALLY, it would be more accurate to say we all have the opportunity to be horribly pedantic, and that only some of us will take advantage of this opportunity.

        1. Daemian Lucifer says:

          Allow me to be pedantic once more and point out that it would be even more correct to say that we will all have the opportunity to be horribly pedantic,and only some of us will attempt to take advantage of this opportunity,but some of those will not succeed.

          1. Purple Library Guy says:

            For instance, when it comes to on-topic posting about programming, I for one do not have the opportunity to be pedantic because I know very little about the subject.

            On the other hand, technically the comments on this article, as on all the articles, give nearly all of us the opportunity to be horribly and/or insufferably pedantic, because Shamus is pretty good about tolerating off-topic posts and nearly everybody has the knowledge to be successfully pedantic about something. Certainly nearly everybody who reads twentysided.

            1. MichaelGC says:

              twentysidedtale, strictly.

              1. WJS says:

                The URI says twentysidedtale. The banner at the top of the page and the page title both say Twenty Sided.

    2. silver Harloe says:

      BE pedantic? I believe it’s more correct to say that we get to attempt to express our pedantry.

  2. Ingvar M says:

    In some GC languages, you can have “non-GC sections”, where the GC doesn’t happen. Rather than having sections where they can. And that’s because you’re probably better at saying “this tiny section here really needs to be fast” than you are at saying “these pieces here can be slow”. Not better at being able to predict, just more likely to stick those annotations into the source code.

    1. The Snide Sniper says:

      For a video game, it’s not a matter of “this tiny section needs to be fast”. A video game has some minimum frame rate that it needs to maintain. If the game logic is decoupled from the graphics, there’s a minimum game-logic update rate that needs to be maintained. What this means, in practical terms, is that the game has a small amount of time (33ms for 30 FPS, 16ms for 60 FPS, 8ms for 120FPS) in which it must perform all graphics code (and all gameplay code, unless decoupled).

      Additionally, there’s only so far the game can “think ahead”, as it were, because the game needs to know what the player’s input is. The amount of decoupling between graphics and gameplay (along with other things, such as triple-buffering) determines exactly how much thinking ahead can be done.

      This leads to a situation where you have a tiny amount of time in which you must do something, but then have nothing to do afterward. If you know the maximum permitted framerate, have a sufficiently high-resolution timer, and finish work early, you can even figure out exactly how much time remains for less time-critical tasks such as garbage collection.

      To summarize, a video game will tend to have a set of massive sections that need to be fast, and a few easily-determined periods of time when it can give up control to the garbage collector.
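
      A rough C++ sketch of that arithmetic, assuming a 60 FPS target and an incremental collector you can hand a budget to (both function names are invented for illustration):

      #include <chrono>

      void UpdateAndRender() { /* stand-in for one frame of gameplay and graphics code */ }
      void CollectGarbageFor(std::chrono::milliseconds) { /* stand-in for an incremental GC step */ }

      void Frame() {
          using namespace std::chrono;
          const auto frameBudget = milliseconds(16);  // roughly 60 FPS
          const auto start = steady_clock::now();

          UpdateAndRender();  // the work that must happen every frame

          // Whatever is left over goes to low-priority work such as garbage collection.
          const auto spent = duration_cast<milliseconds>(steady_clock::now() - start);
          if (spent < frameBudget)
              CollectGarbageFor(frameBudget - spent);
      }

      int main() { Frame(); }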

  3. Zak McKracken says:

    Me, as a Python convert, I’m thinking that at least half of all garbage collection could be done during compiling, using passive code analysis:

    Here’s a variable, it was defined in the header of this function, and this function is done running so we’ll insert something here to free the memory.

    Really, it should by default throw everything away that was defined inside the function but is not returned by it. If you are using something in one place that was defined in (but not returned by) a function that has finished … I think there’s already a flaw in your program.

    But then, if you’re juggling big things (textures, geometry, videos, finite element matrices) in and out of your storage (which I’m never doing in Python, there are C libraries to handle that sort of thing), I think it does make sense to do it manually. Which simply means not waiting for the end of the routine but doing it earlier by hand.

    … so… are there still scenarios left where that sort of thing would leave memory leaks that would need to be garbage-collected?
    (seriously, I’m not sure, I just know my own small corner of the big programming world…)

    1. psivamp says:

      Your C-derivative languages automatically allocate and free local variables — anything not featuring a new, basically. As a python programmer who came into it after learning Pascal and 3 C-derivative languages, I can say that python’s scope-less architecture is some sort of weird black magic to me.

      What you can accomplish with local variables that get destroyed and freed at the end of their scope is somewhat limited. If you want a large number of space marines without dynamic allocation, then at compile time you have to specify how big that number can get at max. The compiler will make a binary that, when run, will hog enough space for the maximum amount of everything that you might want to have at a time. And you’re still left essentially doing manual memory management, because you have to keep track of which indices in the array are active, or you have to pack the array when you remove a guy and keep track of how many total guys there are.
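
      Something like this hypothetical fixed-size pool, just to illustrate (all names made up):

      #include <cstddef>

      struct Marine { int health; };

      constexpr std::size_t kMaxMarines = 256;  // decided at compile time, reserved for the whole run

      struct MarinePool {
          Marine marines[kMaxMarines];
          std::size_t count = 0;  // our own bookkeeping: how many slots are live

          Marine* spawn() {
              if (count == kMaxMarines) return nullptr;  // the cap we had to pick up front
              marines[count] = Marine{100};
              return &marines[count++];
          }

          void kill(std::size_t i) {
              // Pack the array: move the last live marine into the freed slot.
              marines[i] = marines[--count];
          }
      };

      int main() {
          MarinePool pool;
          Marine* grunt = pool.spawn();
          if (grunt) pool.kill(0);  // manual memory management in all but name
      }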

      1. Zak McKracken says:

        I started with C64 BASIC, Pascal, then a very little plain C, Fortran, so my C-like knowledge is thin (and I only once ever used dynamic memory allocation)…

        Hmm… the way I’d imagined this to work was to allocate memory for new space marines as they pop in and remove them when they get killed (and replaced by corpses…)

        … on third thought, you wouldn’t want to have objects SpaceMarine1, SpaceMarine2 and so on, you’d want to have a list/array SpaceMarines that holds all of them, so if one was removed the whole thing would still stay in scope, so … yeah, the thing I was talking about would work for the type of stuff I’m doing, but in a game it’d be complicated.

        Re Python’s method: The interpreter does have a “scope” of some sort, and my guess is that it only removes what is safe to remove (i.e. local variables after the local context is done, that sort of thing), and accepts the risk of keeping some data long after it’s become redundant. Also, I guess being able to look at the source even while the code is running does give you an advantage in these things.

        Still, after something has finished running, there’ll usually be some variables around in a Python shell even though they shouldn’t. I’ve had a case where you could not run the same script twice in the same console because the residue from the last run would cause some strange error. (Using a tk GUI)

        1. WJS says:

          You can allocate memory for your new marines as they are spawned, but that’s a recipe for cache misses, which murder performance.

    2. Ingvar M says:

      You have something like:

      def foo(*some_args):
          blah = compute(some_args)
          return bar(blah)

      def bar(stuffs):
          handle = RegisterSomething(stuffs)
          return handle.InterestingValues()

      In this case, you cannot actually (safely) free blah when foo returns, without a whole-program analysis. Because we don’t know (in foo) if blah (let’s assume it’s a dict or an array here) is copied or referenced in bar.

      In bar, we also don’t know if stuffs (which is, in this case, blah) is referenced, or kept, without looking in the RegisterSomething function (or class creator or whatever it is).

      If that’s, in turn, in an externally-compiled module (unlikely in Python, but bear with me here…), we fundamentally cannot say if we can or can’t release blah once foo has returned.

      1. Zak McKracken says:

        I think what happens in Python is that it somehow keeps track of pointers pointing to objects, and at the end of a routine it deletes only the pointer, not the object; the object only goes away once there are no pointers left pointing to it.

        This, of course, would likely require a little more background stuff than what you would like to have in high-performance code.

        Then again, I think that it should be possible to design the syntax of a programming language in a way that makes it obvious what’s still in scope and what isn’t. The allocation thing in many languages works nicely via the variable definition, and I wouldn’t be surprised if somebody came up with a way of doing a similar thing for the inverse that is human-readable and implicitly does the same things you need to do these days to free memory in C++.

        1. Ingvar M says:

          Yes, Python uses (I think) reference counting GC, with occasional cycle-checking GC (it’s been many years since I last checked and I would not be surprised if it’s now always-full-checking all the time).

          The problem is that there’s quite a few things you want to be able to do, which effectively results in an indeterminate life-time for a newly-created object, in a non-local part of the code.

          C++ deals with this through explicit allocations, indirection layers (unique_ptr et al) and lots of manual reasoning about ownership and lifetime.

          In a “fully GC:ed” language, those things may still be useful (the GC equivalent of a “memory leak” is “unintentional liveness”), but they’re much less necessary and they make a few cases much easier to implement safely.

    3. Bloodsquirrel says:

      C and C++ already do this if you’re allocating things on the stack. But sometimes objects need to live on past the function that created them, so you create them on the heap, and afterwards you don’t know how many pointers point to that same chunk of memory without doing actual reference counting.
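
      A small illustration of that difference (names invented):

      struct Marine { int health = 100; };

      Marine* makeMarine() {
          Marine local;                   // on the stack: freed automatically when the function returns
          local.health = 80;              // fine to use here, but returning &local would be a bug
          Marine* survivor = new Marine;  // on the heap: outlives the function...
          return survivor;                // ...so somebody, somewhere, must delete it exactly once
      }

      int main() {
          Marine* m = makeMarine();
          // Without reference counting, nothing tracks how many copies of this pointer exist;
          // we just have to *know* this is the last one.
          delete m;
      }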

      1. Ingvar M says:

        And even so, reference counting doesn’t (fully) help. Sometimes, circularity in a data structure is useful.

        Imagine, for example, that you have a data structure representing a building, with each room an object and each door from one room to another as a reference. You can’t sensibly do that without introducing loops. You could introduce the doors as objects in their own right and have strong references from (say) rooms to doors and only weak (uncounted, essentially) references from doors to rooms. But that requires having another thing that keeps “rooms with a single door” alive, since they would otherwise be de-allocated for having “no references”. And, well, pain.

        1. Mephane says:

          Well, I would design such a data structure hierarchically, so that the house has the references to all the things and knowledge about which room is connected to which other rooms, and the rooms themselves would be relatively dumb objects without any knowledge about the other rooms or the building itself.

          1. guy says:

            That makes your runtime complexity for going to adjacent rooms higher, I would expect. It’d certainly complicate writing standard graph traversal algorithms and might up the total memory requirements.

            1. Veylon says:

              Nah. The House holds all the strong pointers so that when it goes, it takes all the rooms (and doors) with it. The weak pointers would be used for the links.

              Granted, there would still be some additional cost, as weak pointers need to be checked for validity before they can be dereferenced.

        2. Veylon says:

          The newer versions of C++ have “strong” and “weak” pointers for situations like this. The only references counted are those of the “strong” pointers.
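
          A small sketch of the rooms-and-doors case using those pointer types (illustrative only):

          #include <memory>
          #include <vector>

          struct Room {
              // Weak back-links: a door does not keep the neighbouring room alive.
              std::vector<std::weak_ptr<Room>> doors;
          };

          int main() {
              // The "house" (here, just main) holds the only strong references.
              auto kitchen = std::make_shared<Room>();
              auto hallway = std::make_shared<Room>();

              kitchen->doors.push_back(hallway);
              hallway->doors.push_back(kitchen);

              // The validity check mentioned above: lock() before use.
              if (auto neighbour = kitchen->doors[0].lock()) {
                  // neighbour is a temporary strong reference, valid in this scope
              }
          }   // both rooms are freed here; with strong (shared_ptr) doors the cycle would keep them alive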

    4. Retsam says:

      This is, as I understand, essentially how Rust works. It uses an “ownership” system to determine who can access memory and when that memory can be cleaned up. It has rules for what part of the program “owns” a piece of memory, and for other parts of the program to mutate or access that memory, they need to be transferred ownership or need to “borrow” the memory.

      So beyond the simple case where a function allocates memory then deallocates it when it’s done, you can have scenarios like: “foo” allocates memory, then passes ownership of that memory to “bar”, which loans the memory to “baz”; when “baz” finishes the ownership reverts to “bar”, and then “bar” finishes and the memory is freed.

      Any accesses of the memory that violate the ownership system will cause the program to fail to compile.

    5. Noah says:

      This would absolutely leave memory leaks needing to be garbage collected. You’re basically talking about using stack allocation or something like it for function-local storage. Which is a great idea, but anything that leaves the function (or *might* leave the function) must be tracked as normal, and figuring out where it goes ahead of time is an undecidable problem. Reduces immediately to the halting problem, since you can quickly make a halting problem out of “if this memory is unreferenced, then…”

    6. nm says:

      ohno = []
      def bar(fun):
          global ohno
          ohno.append(fun)

      def foo():
          lst = [12]
          bar(lst)

      if __name__ == '__main__':
          foo()
          print(ohno)

      Dynamic languages are messy. Assume I indented properly, this comment thing doesn’t like whitespace. (disclaimer: I heart Python)

      That doesn’t even scratch the surface. What if I rebind bar mid-program?

      1. Zak McKracken says:

        Yes, global variables would definitely be a problem for this …
        (and likely are to Python as well)

        Essentially, a hypothetical language that neither spends lots of resources on garbage collection nor requires handling everything manually could not allow global variables — or those globals would not be cleaned up.

        That would have to go for object instances, too … so, really, this whole thing could only work for local variables/objects which are defined within a function that terminates within a reasonable time of the object ceasing to be useful, i.e. very much not the game’s mainloop… that suddenly looks a lot less trivial.

  4. psivamp says:

    I’ve been reading Game Programming Design Patterns recently. It’s a pretty good read.

    He says that writing a simple bytecode compiler and run-time for a task inside your game (AI or a spell system were his examples) isn’t as hard as it might seem at first. Fascinating idea.

    I definitely think that Blow is underestimating the extent of the effort that goes into writing a compiler. Writing a simple compiler that turns code verbatim into instructions might not be hard, but writing one that can unroll recursion into iteration to avoid unnecessary stack allocations or reorder statements to take advantage of hyperthreading optimizations is a different matter. One of the reasons that people stay with C++ is that we have really mature compilers for it.

    Hyperthreading (as explained to me) allows a processor to read the next instruction into the pipeline while performing the computation for the current operation. The catch is that you can’t use the same variables in concurrent ops (this is where I might be wrong). So, those nice closely related statements we write where we work with a variable until it’s done and it all makes logical sense to group together have to get pulled apart so the processor can use its time more effectively.

    Another point about writing compilers is that you have to write one for each processor architecture. Right now, most people run on the same basic architecture (x86). But ARM is making strides forward. I’m writing this on a Tegra K1-powered Chromebook running linux. I have a graphics chip with the same soft/firm/hard-ware interface as the desktop chips. ARM is fairly different from x86. You can do conditional instructions (instead of doing a comparison, then jumping if it failed or executing the block and jumping at the end; you can do a comparison, perform an add op if it failed then do a subtract op if it succeeded). Different register layout, ARM supposedly lacks an integer division operation.

    1. Ingvar M says:

      You can’t read from a variable that has been written to “recently” without stalling the pipeline. Or, worse, getting a non-deterministic result. You can read from the same variable in multiple subsequent instructions as long as none of them changes it.

    2. Abnaxis says:

      Aren’t you also writing for different operating systems, as well as processor architectures? Does OS process scheduling or hardware API factor into compiler design? Or am I completely off base, since the entire point of an OS is to just handle all this crap so the individual compilers don’t have to?

      It seems like the real challenge comes from a lot of small factors that multiply complexity. Kind of like those “Thousands of possible characters!” claims that come from a player making three or four choices that have 10 options per choice.

      1. Ingvar M says:

        The compiler produces (in the extreme case) runnable machine code. So it definitely needs to adapt to the actual hardware architecture it runs on. OS interfaces are commonly (but not always) isolated to pre-written libraries.

        It also needs to know how to call the OS, unless that’s been isolated into a library. And unless said libraries are written in assembler, you still need to worry about it, to some extent, in the compiler.

        Unless you compile to a virtual machine of some sort (C#, Python, Java, …), where you’re (potentially) sacrificing speed for compiler convenience. But it also allows you to do things like usage-based machine code generation, which is hard to do from first principles, but easy to do once you’ve observed the running code.

    3. Kian says:

      I wanted to remark on this too. Making a straightforward compiler is easy. Making an optimizing compiler is a never-ending task. Consider Microsoft’s Visual C compiler, which is I think over ten years old. Or gcc. Or clang, the youngest of the bunch. These compilers have more work put into them than any AAA game, short of WOW or some similarly long-lived game.

      His best bet might be to make a front-end for LLVM (the back-end that powers clang, if I’m not mistaken), so that he just needs to make a parser that converts source into an abstract syntax tree (AST, the representation in memory of the meaning of your program’s source), and let all the work already poured into LLVM optimize his program for him.

      1. psivamp says:

        LLVM looks neat.

        There’s actually a tutorial for writing a language that uses LLVM as a back-end:
        http://llvm.org/docs/tutorial/

      2. Peter H. Coffin says:

        Consider Microsoft's Visual C compiler, which is I think over ten years old. Or gcc. Or clang, the youngest of the bunch. These compilers have more work put into them than any AAA game, short of WOW or some similarly long-lived game.

        The first release named Visual C++ was in early 1993, and it was 16-bit. That compiler’s old enough to drink now. The stuff it was based on from MS stretched out like a decade prior too, pretty much right to the beginning of Windows, and c++ for that matter. Straight-up c is, I think, a decade older than that.

        1. Purple Library Guy says:

          1993, “Old enough to drink now”. Funny how that sounds so different depending on whether you’re thinking about whiskey or orange juice.

      3. Groboclown says:

        Doesn’t Blow even recommend using LLVM?

        On the point of making a good optimizing compiler, there’s been many a Ph.D thesis built around adding a new optimization to gcc.

        1. lethal_guitar says:

          Yeah he is planning to convert his compiler to LLVM at some point. He started looking into it, but then decided to work more on his language’s features first.

          At the moment, his compiler compiles to C, which is then compiled with a normal C compiler – including all the optimizations.

          Also, you shouldn’t forget that clang, gcc etc. are also complex because C++ is a hella complex language. Undecidable grammar, turing-complete template type system.. you get the idea.

    4. nm says:

      Who needs to write new optimizers these days? Just write front ends for llvm and gcc and you’re good to go.

    5. kdansky says:

      > He says that writing a simple bytecode compiler and run-time for a task inside your game (AI or a spell system were his examples) isn't as hard as it might seem at first. Fascinating idea.

      I’ve done that recently. It was an afternoon-project, much to my own astonishment. The reason for that is that you really don’t need many complex features, if any at all, because you can limit your bytecode language to your problem domain very drastically. My own bytecode doesn’t support loops, and only very specific “if” clauses, and only has one type (int), and it gets parsed during game load from the description strings of items. I just have to be pedantic when writing item descriptions.
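
      For flavour, here’s a tiny stack-machine sketch along those lines: one type (int), no loops, invented opcodes; the real parsing of item-description strings would sit in front of this.

      #include <cstddef>
      #include <iostream>
      #include <vector>

      enum Op : int { PUSH, ADD, MUL, PRINT };

      // Bytecode is just a flat list of ints: opcodes, with PUSH followed by its operand.
      void run(const std::vector<int>& code) {
          std::vector<int> stack;
          for (std::size_t pc = 0; pc < code.size(); ++pc) {
              switch (code[pc]) {
                  case PUSH:  stack.push_back(code[++pc]); break;
                  case ADD:   { int b = stack.back(); stack.pop_back(); stack.back() += b; break; }
                  case MUL:   { int b = stack.back(); stack.pop_back(); stack.back() *= b; break; }
                  case PRINT: std::cout << stack.back() << "\n"; break;
              }
          }
      }

      int main() {
          // "(2 + 3) * 10", as a parser might emit it from an item description:
          run({PUSH, 2, PUSH, 3, ADD, PUSH, 10, MUL, PRINT});
      }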

    6. “Hyperthreading” is Intel’s marketing speak for “Simultaneous Multithreading”. Basically, the processor runs two threads at once (it pretends to be two cores to the OS) and when one of them can’t progress (for example, it’s waiting for memory) it switches over to the other (it also does this periodically so neither thread starves because the other thread isn’t getting stuck).

      The first processors executed one instruction to completion, and then executed the next instruction to completion, etc. (~8088-386, 68000)

      Next processors became pipelined. They fetch one instruction while decoding the one before it, while executing the one before that (etc, etc). Sometimes dependencies between instructions mean that it can’t actually overlap the execution of a pair of instructions, so they hold back the instruction for a while until the one it depends upon is done. This is where you need to separate instructions which use a value from the ones which generate them where possible. (~486, ARM7, ARM Cortex A5)

      In the further pursuit of performance processors became superscalar. This means that in addition to the overlapping of a pipelined processor, they execute multiple instructions at once, as long as there are no dependencies between them. (~Pentium, ARM9, ARM Cortex A7 & A53)

      Finally, the big one is out of order execution. In this the processor re-orders your instructions, so that it can see that e.g. 5 instructions after a memory access that is having to go all the way out to RAM (easily ~100+ cycles) doesn’t depend upon the memory access at all, so it can go ahead and execute it. Essentially, the processor is dynamically re-ordering your program’s code in order to react to the runtime, dynamic state of the system that a compiler can’t predict (especially if it doesn’t know your system!). (~Pentium Pro, ARM Cortex A9/A15/A17/A57/A72)

      All of this in the pursuit of higher instructions per cycle. Of course, the more complicated your processor… the more power it burns.

      1. Shamus says:

        That was really educational. Thanks!

    7. J. Random Lurker says:

      I happen to know that some AAA studios invent their own languages and write their own compilers; but the output of these bespoke compilers is C++ code, which is then fed to the existing C++ pipeline.

      So one can get extra language features as desired by M. Blow and still benefit from the mature C++ compilers and tools. Best of both worlds, fraction of the cost.

      It is also really useful to generate “boilerplate” code for your classes/data types. For example, the code that saves an object to disk and the code that reads it back is really tedious to write, but easy to automatically generate if you have your own compiler.

  5. Daemian Lucifer says:

    Making a new language to make programming more comfortable is like a writer that stops working on his book to remodel the room where he does his writing

    Actually,it would be like a writer that stops writing his book on a typewriter in order to build himself a computer.

    1. Ingvar M says:

      Or like a writer who stops writing computer science textbooks in order to spend a decade or two writing a better typesetting system.

      1. Zak McKracken says:

        +1
        …and becomes famous for it, and has tens of thousands of academic writers forever in his debt.

        Very much that

        1. nm says:

          I really wish he’d finish those books though. Volume 7 is on compiler techniques!

    2. Alexander The 1st says:

      Actually, it would be like a writer who stops writing his book on his typewriter in order to build a conlang to adjust his typewriter to produce the required accents or similar if needed, and then writes in that.

  6. Daemian Lucifer says:

    I think making a compiler is easier than making a AAA game.

    Technically this is true.With a compiler,you dont need to bother with stuff like graphics,sound,actors,story….And you arent restricted by the compiler since that is what you are making,so instead of workarounds distracting you from your main objective,you implement changes that actually are your main objective.

    1. Purple Library Guy says:

      Certainly I think we’d be safe in saying that you need fewer voice actors for a compiler than for an AAA game, and the art assets are much easier to manage.

      1. Trix2000 says:

        What, your compiler doesn’t shout at you when you have errors in your code? :)

      2. Alexander The 1st says:

        This is only a cost thing, though. It’d be cheaper, but not necessarily easier.

        Though as Daemian mentioned above, not having to work around the arbitrary requirements of your previous compiler would be beneficial.

      3. Zak McKracken says:

        …though that may be exactly what modern compilers are missing!

        Cinematic error messages!

        Expensive pre-rendered cutscenes for compiler warnings!

        “DIAS”-type debugging environment — bet you’ll pay more attention if you have only three attempts at finding the bug, or else the bug will eat your code and you have to start over (don’t worry, though, there are checkpoint saves).

        And of course the grand finale if the compiler succeeds; you get different endings depending on the number and type of warnings, and the amount of code.

  7. arron says:

    The problem with garbage collection is that you can’t usually stop it happening during a critical piece of the code, especially on certain mobile devices.

    I was reminded of this Google I/O presentation by Chris Pruett about some of the issues he found programming Replica Island. Having any allocation or deallocation happen in the main game loop caused serious performance issues, as the GC would start up to reclaim that memory. To avoid this, he put a piece of code in his object base class (AllocationGuard) that immediately warned him if any object changed its memory requirements. It would have been easier if you could have turned off the GC when the main game was running and have it work between levels, when objects are unloaded from memory and the new level is being loaded.

    https://www.youtube.com/watch?v=U4Bk5rmIpic
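
    The same trick can be approximated in C++ by counting heap allocations and asserting that the count stays at zero during a frame. This is only a sketch of the idea, not the Replica Island code (which is Java):

    #include <cassert>
    #include <cstdlib>
    #include <new>

    static int g_allocationsThisFrame = 0;

    // Count every heap allocation made anywhere in the program.
    void* operator new(std::size_t size) {
        ++g_allocationsThisFrame;
        if (void* p = std::malloc(size)) return p;
        throw std::bad_alloc();
    }
    void operator delete(void* p) noexcept { std::free(p); }

    int main() {
        g_allocationsThisFrame = 0;
        // ... run one frame of gameplay and rendering here ...
        assert(g_allocationsThisFrame == 0 && "something allocated during the frame");
    }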

  8. James Schend says:

    I watched this presentation, and I feel like the language he wants already exists… it’s C#. He rejected it out-of-hand because it has garbage collection 100% of the time and, I honestly think, he’s probably never used it and is ignorant of how pleasant it is to work in. Because goddamned, compared to C and C++ it’s like heaven. (Or as my college buddy puts it, after working in C, C++ and Java, using C# feels like cheating because it’s too easy.)

    It’s performant, it’s cross-platform, it’s already supported (in its current garbage-collected form) on a ton of different gaming hardware and mobile devices, it already has really good graphics/physics/sound libraries.

    I think you could easily meet most/all of his requirements by adding a garbage collection mode to C# where you can do what Shamus proposed: “hey you got 4 milliseconds, do all the garbage collection you want, but don’t take any longer than that.” I have no idea how hard it would be to develop a memory mode like that.

    BTW the grumpy stop-sign icon this forum automatically picked for me probably makes me look like an asshole every time I post. :)

    1. CJ Kerr says:

      If you didn’t already know, you can change the avatar by setting up a Gravatar associated with the email address you use when posting here.

      1. James Schend says:

        I know, but Gravatar broadcasts your identity all over the web and I’d rather not do that if I can avoid it.

    2. houser2112 says:

      I haven’t watched the video, so I don’t know if “because Microsoft” is the reason he rejected it out-of-hand, but that would be a good reason if he did.

      1. Retsam says:

        Eugh, no. Can we stop pretending that Microsoft is this universally terrible company and actually take what it does on its own merits? Yes, yes, the anti-Microsoft/IE bandwagon was maybe somewhat reasonable when there was an actual lack of alternatives… but in a world of OSX, Ubuntu, Chrome, and Firefox, the “I hate X because Microsoft” is just a little bit silly.

        I only used C# for a couple months, but it is a pretty slick language. It’s essentially “Java done right”, where it feels like the type system is actually working for you, not against you.

        1. Purple Library Guy says:

          Well, we could stop pretending that, but since they kind of are it would represent a dangerous deliberate unawareness of the world.

          1. Kian says:

            The thing about Microsoft is that they’re a HUGE company, and not all of the company acts the same. Yes, on the business side they do try to leverage their position, sometimes reaching into illegal territory (and getting fined when they get caught). They’re not “nice”. On the technical aspect, they’re pretty solid. They do refuse to conform to standards, and try to push their own proprietary solutions. Also, they often get blamed for mistakes done by others. If a poorly developed driver crashes your system, Windows gets the blame. Meanwhile, that piece of hardware might not even have a driver for linux.

            When it comes to developer support, however, they’re one of the best. They might not respect the C++ standard very closely, which makes porting between linux and windows kind of a pain sometimes (since code that passes msvc might not pass gcc), but their tooling for Windows is very good. I don’t personally use C#, but everyone I know that has used it and Java agrees that C# is superior.

            So while one might disagree with the company’s decisions, looking down on their tech is a mistake.

            1. Abnaxis says:

              The thing is, I don’t see the “nice” and “not nice” as being so disconnected. Sure, Microsoft has good developer support–otherwise, how else could they make it so end-users have fewer choices, because developers’ time is better spent only developing for Windows?

              Every time I get the bug to start my own project I ask myself: do I want to do this the easy way and, in my own small capacity, foist the burden onto my prospective end-users, who have no choice but shackle themselves to Microsoft if they want my software? Or do I do it the hard way, and try to make my software platform agnostic?

              This dilemma has killed my passion for at least as many side projects as any other issue I can think of.

              1. James Schend says:

                C#/.net already is 100% platform agnostic. Nobody using C# is “shackled” to a particular OS.

                The only namespaces that are specific to Windows are the ones that start with Microsoft.X instead of System.X, and there are very few of those.

        2. nm says:

          As long as C# is controlled by Microsoft, platforms other than Windows will be second class. Sure, they open sourced aspects of the CLI and it’s been ported to (or reimplemented on) all sorts of platforms, but they still use it to lock games vendors to their platform.

      2. James Schend says:

        I don’t understand the irrational Microsoft hate. Maybe a decade ago, but it’s 2015.

        C#/.net is entirely open-specification (thus the Mono implementation on Linux and OS X) and soon the compiler will be entirely open source. Visual Studio Community Edition is completely free, without even the (minor) limitations of the Express edition. Everybody using it gets completely free source control (using TFS *or* Git).

        Exactly what *does* Microsoft have to do before you’ll consider the language on its own merits? What more could they possibly do?

        1. Daemian Lucifer says:

          I don't understand the irrational Microsoft hate. Maybe a decade ago, but it's 2015.

          Windows vista,windows 8 and xbox with all of its related crap.

          1. Shamus says:

            Also Internet Explorer, which is an ongoing source of torment for web developers. The “default” browser for non-technical types is the most quirky, standards-breaking, slow, shockingly insecure, and unpredictable of the bunch. It’s a browser supported entirely by its status as “already installed on new computers”. If not for that, it would have died over a decade ago.

    3. Groboclown says:

      I haven’t looked at C# in a long while. However, back then it supported the “unsafe” mode, where you have direct pointer access, so it ends up looking like you’re writing straight C again. So it gives (gave?) you the ability to say, “in this section, screw safety because I know what I’m doing – do this super speedy but questionable code instead.”

      I also don’t know the state of the Mono community anymore. One side effect of going with C# was the .Net bindings – they used to lock you into a Windows platform; I don’t know how well it could be ported to the Steam platform, or the consoles.

    4. kdansky says:

      D also covers all his issues, including optional garbage collection.

      He discards it out of pure ignorance.

    5. Bloodsquirrel says:

      No, he very specifically notes that he doesn’t want a managed language. C# is nice. I like it a lot more than Java. But not having low-level control over memory is a dealbreaker when you’re trying to write a high-performance graphics engine.

      C# is fast enough for most general applications, but games need all the performance they can get, and one of his explicit design goals is not to sacrifice any performance from C/C++.

      1. James Schend says:

        I know, but I think his insistence against a managed language is more based on ignorant opinion than reality, frankly.

        The bottleneck in games isn’t in the code anymore, and hasn’t been in a long time. Skyrim’s huge bloated crap engine only uses maybe 30% of my Intel I5, and I5 isn’t even close to Intel’s top-of-the-line. Even if C#’s overhead was an entire additional 30% (and it’s nowhere close), it wouldn’t affect performance of the game at all.

        Additionally, the hundreds of games actually programmed *in* C# prove him wrong at a practical level. It’s not like this is some crazy untested theory; this has been done before. Many, many, many times.

        1. Blake says:

          I’m going to disagree with you on this.
          Garbage collection is going to cause a lot of cache misses as it jumps a bunch of different places in memory. Your CPU won’t be working, because it spends all its time waiting on your RAM.
          I know in (console) games I’ve worked on we’ve had to do a LOT of work getting our Lua garbage collector to only collect certain amounts of memory per frame, which we do while we’re rendering. Even then we sometimes have frames where our garbage collection takes longer than our rendering (which is a reasonable sized chunk of our 16ms frame), meaning our garbage collection is slowing our frame rate.

          A garbage collected language WILL be slower than a language with manual memory, and the difference in speed might not matter to you but it certainly does to some of us.

          Besides, memory leaks are pretty easy to find (either using custom allocators or something like the win32 heap) and trivial to fix (unless you’ve done something very silly). In our games if we have even a single allocation that isn’t freed up when entering/exiting a level (and running a full GC cycle) we will hit an assert, and get output that shows us both the C and Lua callstack at the exact moment we allocated that memory. We’ll only run into memory leaks a couple of times per project, and when we do they’re very quickly dealt with.
          Spending more time on a garbage collector would give us a slower product for a miniscule saving in programming time.

  9. Kian says:

    I think he touches on this in the video (it’s been a while since I watched it), but garbage collection feels like an unnecessary crutch and we already have a better alternative in C++.

    Smart pointers are a kind of object that automatically manage freeing your memory, so you can’t ever do a double free, and prevent you from accessing memory that isn’t in a usable state. The idea is that instead of passing around raw pointers, wondering who owns the memory and if you have to free it yourself or if the object you pass it to or received it from handles it, you use smart pointers that make ownership explicit. So if you receive a raw pointer, the memory is not yours and you should use it but not free it. If you receive a smart pointer, you take ownership of the resource, and as long as you keep the pointer alive the memory won’t be freed.

    You can then choose to pass ownership of the data to another object, or keep it yourself, and the problem of managing memory is reduced to the simpler problem (because you have better tools) of managing ownership, which takes care of lifetime for you.

    Given this, using a GC is redundant. You don’t need a process that looks for orphaned memory, because you never create orphaned memory.
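
    A small sketch of that convention (names invented): a raw pointer means “borrowed, not yours to free”, while passing a std::unique_ptr by value hands ownership over.

    #include <memory>

    struct Marine { int health = 100; };

    // Raw pointer: use it, but it isn't yours to free.
    void heal(Marine* m) { m->health = 100; }

    // unique_ptr by value: ownership moves into this function, and the marine is
    // freed automatically when m goes out of scope here.
    void retire(std::unique_ptr<Marine> m) { m->health = 0; }

    int main() {
        auto marine = std::make_unique<Marine>();  // owned here
        heal(marine.get());                        // lend it out, keep ownership
        retire(std::move(marine));                 // hand ownership over: no delete, no double free
        // marine is now empty; the memory was released exactly once, inside retire().
    }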

    1. Svick says:

      Smart pointers are definitely very useful in C++, but I really don’t think they’re better than GC (assuming you can afford GC).

      That’s because smart pointers are either too complicated or too slow.

      They’re too complicated, because you always need to decide whether you want normal (“stack-allocated”) object, reference, std::unique_ptr, std::shared_ptr or std::weak_ptr. And you need to know about move semantics if you’re going to use std::unique_ptr.

      You can solve that by using std::shared_ptr everywhere (or almost everywhere), but then your code becomes much slower, probably even slower than if you used GC (but also more deterministic, which could be important).

      (Also, the syntax is pretty bad, but that could be solved e.g. by using ^, like MS does in its “C++ with GC” dialects.)

      1. Kian says:

        I agree the syntax is bad. That comes from trying to keep as much compatibility with C as possible while introducing new concepts. A pity, but without it it wouldn’t have become as popular as it did.

        However, I think lack of familiarity is more of an issue than being “too complicated”. If you start following a handful of rules, it becomes fairly simple. std::unique_ptr covers almost every case where you would normally use a pointer.

        Essentially, you should use stack allocated objects unless you can’t. You should then use std::unique_ptr unless you can’t.

        With regards to function arguments, you should not use smart pointer parameters, unless you produce or capture resources. If you produce them, take a reference to the smart pointer (so you can hand it ownership). It’s better if you return a smart pointer, but some shops prefer to use the return parameter for error codes. If the function captures objects, take smart pointers by copy (so you will own them).

        That covers most of the use cases. Shared pointers are more useful for multi-threaded applications (in fact, their reference counting is required by the standard to be atomic), but if you’re writing concurrent code you should be well past the point where you find unique_ptr complicated.

      2. kdansky says:

        The syntax issues disappear as soon as you use “auto”, which you really should, because it makes refactoring a breeze.

        shared_ptr is slow, but it is also generally not needed. For 90% of all data, unique_ptr is correct, because you can reason very easily about who owns it, especially in a game engine. unique_ptr is also just as fast as a raw pointer, and behaves a bit like garbage collection: You can just do variable = nullptr, and it’s gone. You can’t delete it twice, you can’t forget to delete it, and if you overwrite it, that’s fine too.

      3. Blake says:

        Blow mentions all the ugly C++ library unique_ptr syntax and in his language he’s building in some of those features, I think he was going for something like:
        ptr : Foo*; // Normal pointer
        ptr : Foo!*; // Unique pointer

  10. AceCalhoon says:

    I was just reading about a language called Nim (at least I think that was the one). It has garbage collection, but allows it to be enabled/disabled at will. You can even run it for a specific amount of time.

    The example given was game code that drew a frame, and then spent whatever time was left afterwards collecting garbage.

    I guess it’s even binary-compatible with C (and maybe C++?). It sounds cool.

    1. Daemian Lucifer says:

      So that’s the secret of nim.

      1. Purple Library Guy says:

        Aaaagh!
        You rat.

  11. Groboclown says:

    As for the plotting complexity / cost, there was a related graph in the early Agile programming books that tries to depict the cost of fixing bugs as the project advances. The underlying idea was that as a project gets more and more code written, there’s more things that interact, making tracking down the real source of the bug harder, and fixing it without breaking other interactions harder. Note that the easiest bug to fix is one that hasn’t been written yet (i.e. make sure you understand the problem correctly before you write the code).

    Now, I haven’t worked on AAA games, but in terms of very large software projects, this is generally the case. The Agile book in question was saying that correct use of patterns to encourage decoupling of code (keep one bit of code as independent from another piece of code), and advancement in development tools, and increased release frequency helps to flatten that cost line.

  12. Zukhramm says:

    On GC: This might change as I get to the more game related parts, but so far, writing a 3D rendering engine in Java, most of the large pieces of memory are handled by the graphics card, so the way the language handles memory never comes into it. Of course, once you get a larger amount of game objects with actual gameplay code this changes, but I’ll see how that goes when I get there.

    Mostly I use Java specifically because people tell me not to use it for games, just to see how far I get.

    1. Abnaxis says:

      Not trying to tell you what to do, but I spend a lot of my time hacking around in Java, and I am very glad I don’t work in games…

      1. Groboclown says:

        Java is very fun for games, especially when you try the 4k limitations (http://java4k.com), which unfortunately came to an end. Some people would consider this to be a form of self-abuse (yes I know both meanings).

        Modern Java games, like Minecraft and the Puppy Games (Titan Attacks, Droid Assault, etc) use a native OpenGL library for the rendering like lwjgl and LibGDX.

      2. Zukhramm says:

        Well I’m the weird kind who actually likes Java. I thought it was just a phase, that I’d find some nice, functional language and grow out of it, but after learning about languages like SML and OCaml, Haskell, Clojure and Scala, and liking them all, I still find that I still like Java.

        1. Cuthalion says:

          I actually like Java, too. There are things I like about Python (day job language) better, but I still enjoy Java and am using it for my 2d game project.

          It’s entirely possible to make a good game in Java. Whether my game would run faster without automatic GC or not I do not know. Maybe it would. But it’s doing fine so far.

          1. Richard says:

            The real problem with GC is that it is unpredictable. (It’s also clearly slower on modern predictive architectures)

            In a program without a GC, you can point to the various places within the program where the process of destroying an object and reclaiming its memory happens.
            – The programmer either writes or implies “Destroy this thing here”.
            Each one takes a small amount of time and reclaims a small amount of memory.

            If your program has any annoying ‘pauses’, you can examine the code around the pause to determine what you can do to eliminate that pause – perhaps by re-ordering stuff to avoid a cache miss, moving some of the work to a later point in time or whatever.

            When there is a GC, once each GC cycle a large number of those object destructions and memory reclaims get lumped together into one big blob of memory reclaiming action.
            – The programmer writes or implies “I don’t need that anymore”, and the GC destroys it later.

            In Java and C#, this also means that the destructor doesn’t run at destruction, instead it’s run when the GC asks the object to kill itself. (Other languages may operate differently.)
            – Very scarily, the destructor may never run at all, which helps performance but also means there is very little one can safely do there.
            You have to manually invoke a dispose() method instead if you need to be sure it happens, e.g. writing a file footer/checksum.

            This process of object dismantling and reclaiming memory can happen at any moment, and it will practically always be a cache miss because the CPU is off doing something unrelated when the GC comes around.

            You can of course either ‘trick’ or explicitly tell the GC not to collect during critical sections, but it still needs to do that work sometime, and the less work it can do in one go the higher the total overhead of doing it.

    2. nm says:

      Isn’t Minecraft implemented in Java?

      1. Zukhramm says:

        Yeah, but it runs like garbage so it’s not really proof of anything.

        1. guy says:

          Stuff in Java is pretty much guaranteed to run like garbage compared to the same thing written in C++. Because it will ultimately run the program and also an interpreter. On the plus side it will generally run on any operating system and is highly unlikely to have attempting to modify one variable write garbage into a completely unrelated variable.

  13. Doug O says:

    …and the final paragraphs of the article capture the most critical point of real-world projects. Complexity is the enemy, maintainability is king. Or at least, it certainly should be for anything you know will last longer than a few months. (Hint: it will. Nothing is so permanent as a slapped-together prototype that Business decides is ‘good enough’. Oh wait, can you add tint control to that?)

    That’s not a problem easily solved by just writing a new language…because any language powerful enough to be useful is one that can be abused to create write-only code. Using the “right” language *helps* to be sure, but the actual tool is much less important than how it gets used.

  14. Ilseroth says:

    Regarding the Productivity section:

    I am currently dealing with exactly this. Granted I am still early in my project (fiddling with Unity and seeing if I can make something fun with it, since it is free to mess with), but the first few days I managed to knock out a ton of code and made something interesting…

    Yesterday I spent 6 hours trying to make the camera not spaz out in a certain set of circumstances. (Unsuccessfully.)
    (For someone who actually cares: it is a third person game and I want the camera to reset as far as it can without clipping through things. Currently it does what I want, but since the collision checker for some reason throws false negatives, I have to think of a means of checking to see if there has been a collision recently and if so prevent the camera from pushing out to max… But every time I have tried that it simply deactivates the pushing out completely.)

    Because of that I have been delaying working on the project today… It doesn’t help that, since I ran fast and loose with the coding to start, I have to shuffle a lot of it around and restructure my camera system >.<

    1. guy says:

      I’m guessing there’s a reason AAA games often let the camera drift into a wall, probably because collision detection algorithms are either slightly dodgy or slow. You would probably be best served fiddling with your visual culling algorithms so the camera can see out of the wall if it does drift in. I think the camera object will have the near and far planes associated with it, and if you move the near plane out a bit further it’ll hide things right in front of the camera.

      Alternately, IIRC Unity’s collision detection mostly acts up when moving an object quickly through thin things, because objects move in discrete jumps and the moving object jumps from one side to the other without ever actually overlapping. If you make your walls thicker the issue should go away.

      1. Ilseroth says:

        I actually fixed it… and it feels… so damn good…

        I wrote and rewrote line after line yesterday….

        I fixed it in 5 lines of code.

  15. guy says:

    Re: garbage collection.

    There’s a bunch of things up with it. First, it does take memory and CPU cycles to attempt to garbage-collect, because it basically has to look at all the program memory to decide what can be thrown out. Also, if you allocate a new object and there isn’t presently space for it, the garbage collector runs. So if you’re using nearly all available memory and adding and removing things frequently, you’ll take a big performance hit compared to manually deallocating one thing when you’re done using it and then allocating a new one.

    The limited invocation thing may or may not work depending on architecture and blind luck. In the typical Java version, the garbage collector first goes and checks what bits of memory can still be reached and then deallocates anything that can’t. So if it doesn’t have enough time to finish marking everything it’s not going to remove anything. There are versions where it can run in segments or concurrently, but it still takes at least as much total time, just more spread out.

    Other methods avoid this in exchange for having weird issues with how good they are at actually freeing up memory. For instance, reference counting tracks how many pointers to a piece of memory currently exist and deallocates memory when that number is zero, but in a doubly-linked list the nodes will have references to each other, and if you delete the pointers to the head/tail nodes without cleaning things up, the other nodes will still have references to them and so won’t be deleted. There are fixes to that and problems with the fixes.
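
    A minimal C++ sketch of that cycle problem, using std::shared_ptr (which is reference counted; the Node type here is invented for illustration):

    #include <memory>

    struct Node {
        std::shared_ptr<Node> next;
        std::shared_ptr<Node> prev;   // making this a std::weak_ptr<Node> is the usual fix
    };

    int main() {
        auto head = std::make_shared<Node>();
        auto tail = std::make_shared<Node>();
        head->next = tail;
        tail->prev = head;            // the two nodes now point at each other
        return 0;                     // head/tail go out of scope, but each node still holds
    }                                 // a reference to the other, so neither is ever freed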

    1. fscan says:

      In addition, I think you will spend a lot of time profiling and debugging random game stuttering because of garbage collection. The problem is not the overall performance impact but the unpredictability.
      Also, garbage collection only handles memory, not sockets, files or other resources. This is why you see those cumbersome “using” blocks in languages like C# and others.
      I think C++ has it right. Yes, you *can* do explicit memory management, but in practice you almost never need to (mostly when interfacing with older C libraries… just wrap them).
      The only thing you have to do is think about who *owns* which resource (which is a good thing). In the rare case where it’s not clear, use a shared_ptr.
      In most cases, if you need some memory or a data structure, the correct answer is “use vector” anyway :)
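
      A rough sketch of what that ownership style looks like in practice (the types here are invented for illustration):

      #include <memory>
      #include <vector>

      struct Marine { int health = 100; };

      struct Squad {
          std::vector<Marine> marines;               // the vector owns the marines' memory
          std::unique_ptr<Marine> sergeant =         // unique_ptr: exactly one clear owner
              std::make_unique<Marine>();
      };

      int main() {
          Squad squad;
          squad.marines.resize(40);   // need more marines? just grow the vector
          return 0;                   // everything is freed right here: no delete, no GC pause
      }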

  16. Dungeons and Dragons Online sailed over that complexity horizon like, 15 updates ago, so EVERY update also includes a couple of patches and sometimes half-a-dozen hotfixes. Some things break on almost EVERY update. I feel sorry for their coders, but they kind of brought this on themselves because all the early game content was built by hand instead of under a generalized system, so, of course, NINE YEARS down the road NOBODY knows how that shit works any more.

  17. tmtvl says:

    Funny, I usually spend 60% of the time on the first 10%. Mainly because I plan out very meticulously.

  18. Decius says:

    When you want more marines but don’t have enough room in the array, a voice should boom out:

    You must allocate additional memory.

    1. Daemian Lucifer says:

      Since it’s space marines, it should be:

      Additional memory allocation required

  19. CrazyYarick says:

    I was looking at the forums of a game engine I’m super interested in messing with as soon as I have time, and I read this: “If I remember correctly Godot cleans up once per frame (at least everything that was queue_freed), not like Java or C# where there is no guarantee of when it will clean up.
    The advantage GDScript and Godot’s object system have is that they were designed with a game loop in mind, so the cleanup can be done as soon as possible without interfering with the game’s logic.”
    http://www.godotengine.org/forum/viewtopic.php?f=9&t=1650&start=10

    The Godot engine guys (to many people’s ire) created their own scripting language. If this is accurate then it might be a nice compromise on the GC side of things for games. At least for us poor schmucks who can’t/REALLY don’t want to write C++.

  20. Rick says:

    Whenever I come to a new post and see the tiny draggable part of the scrollbar, I think “woo hoo, huge write-up”… Then, as I finish reading the post, I realise how many comments there are.

  21. MaxEd says:

    I don’t really see how a new language can solve the complexity problem. From my experience in game development, the problem is quite simple: nobody can hold the whole project in his head, and therefore any change could break something else. Of course, it’s a sign of bad architecture, but somehow EVERY project ends up with bad architecture in the end. That clean, beautiful structure you modelled in UML before writing the first line of code usually does not survive first contact with changes in design handed down by the game designer. Suddenly, pieces of game logic that were completely unrelated before need to know about each other, and you can either add a quick hack, or rewrite a large chunk of code just to maintain the architecture. Guess which approach is chosen more often… But of course you don’t usually have a good architecture to begin with. You may think you do, but other people won’t necessarily see it that way, especially if you don’t provide any documentation (in my experience, nobody does, although I guess things are a bit different in large companies).

    So I don’t think we need a new language – with or without garbage collection. What we need are completely new tools, which would take some of the burden off our stupid monkey brains. For the past year, I’ve maybe spent two days tracking down memory leaks. The rest of my bug-fixing time was spent on things like endless corner cases in AI and UI code (the two worst parts of our current game). No language could have prevented the fact that the pathfinding code used a slightly different measure of distance to target than the attack code, so units ended up in situations where they could not attack their target, but wouldn’t move any closer to it, because they thought they were close enough already.

    1. guy says:

      Wait, why did you even have two different kinds of distance calculation? What purpose could that possibly serve? Not doing that is literally the entire point of having function calls.

      Also, document your code.

      1. MaxEd says:

        Here was a comment with some sample code, but WordPress ate it :(

        Things are a bit tricky in our game. The problem is, units can actually move in continuous 2D space, but A* pathfinding works with square cells. To make units’ paths look good, we first find a path via A*, and then do all kinds of post-processing on it. This leads to a lot of problems. I’ve already forgotten some important details, but here’s one example:

        A unit needs to find a path to a place from which it can attack the target building. For ranged units, this covers a lot of ground around the building. The unit can stand in any of those squares and fire on it. So, when we start A*, we mark all those cells as “target” cells. When the unit reaches any one of them, the algorithm stops. However, when the unit finally gets to where our post-processed path led it, it may find it can’t ACTUALLY attack from that part of the cell, because the target building’s point is only reachable from a portion of the cell.

        Let’s start a flamewar :) Document code how? Comments become obsolete faster than they are written. High-level documentation is hard to maintain, and won’t contain all the little but important details, like how setting state A from function B may lead to the whole UI freezing, because it’s later checked in function C (but only if condition D is satisfied) at the wrong time, or something.

        1. Daemian Lucifer says:

          Document code like this.

          My favorite is:

          // Hocus Pocus, grab the focus
          winSetFocus(…)

          Clit index is also funny.

    2. Kian says:

      I think the goal should be to use well understood patterns to decouple as much of your game’s logic as you can.

      For example, instead of having an update method in each entity, break your entities down into components and update one kind of component at a time. Instead of different systems calling each other, have an event system that each system posts events to and reads events from.
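
      A very rough C++ sketch of that kind of event bus (the type and field names here are invented; a real engine would be fancier):

      #include <functional>
      #include <queue>
      #include <string>
      #include <vector>

      struct Event { std::string type; int entity; };

      class EventBus {
      public:
          void post(Event e) { pending_.push(std::move(e)); }
          void subscribe(std::function<void(const Event&)> fn) { handlers_.push_back(std::move(fn)); }
          void dispatch() {                       // systems never call each other directly;
              while (!pending_.empty()) {         // they only ever see events
                  for (auto& h : handlers_) h(pending_.front());
                  pending_.pop();
              }
          }
      private:
          std::queue<Event> pending_;
          std::vector<std::function<void(const Event&)>> handlers_;
      };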

      This requires some work upfront, but it ensures that complexity doesn’t run away from you so quickly. By keeping things separate, you minimize the chance that a fix in one part will break some unrelated thing.

      Also, avoid hard-coding things; dump as much data as you can into scripts. You can iterate faster if you can change something’s behavior while the application is running and see the result immediately. Of course, this has a performance cost, so you should not abuse it. Physics, path-finding, etc. should probably be coded, but maps and weapons could be scripted.

      1. MaxEd says:

        Yeah, sure, it’s all true, but that’s just what I’ve been talking about in my comment: no language can help you with that. Language is not the problem. Oh, sure, C++ has some things that make creating a clean and flexible architecture harder: limitations and some uncomfortable syntax. But I guess any language one could dream up would have those, so yet again, I see no point in creating a new language.

        1. Blake says:

          Reducing boilerplate code is a big one.
          The effort of writing both header and CPP files in C++ doesn’t need to be there with modern systems. C++ requires it because of how the language is built, but it does make everything slightly slower every day.

          He’s designing his language with factorability in mind, making it easy to change things from local functions to class functions to global functions simply by moving them, with no fiddling with other syntax required.
          Then there are things like changing how arrays are laid out in memory from array-of-structs to struct-of-arrays simply by putting SOA at the end of the definition.
          Changing the memory layout in C++ would be a much bigger hassle, slower and more error-prone, so having language support reduces the complexity the programmers need to think about.
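
          For anyone unfamiliar with the distinction, here is roughly what the two layouts look like when written out by hand in C++ (the field names are invented; per the point above, in Blow’s language the switch is just a tag on the definition):

          // Array of structs: each particle's fields sit together in memory.
          struct ParticleAOS { float x, y, vx, vy; };
          ParticleAOS particles_aos[1024];

          // Struct of arrays: all the x's together, all the y's together, and so on.
          // Often friendlier to the cache and SIMD when a pass only touches a few fields.
          struct ParticlesSOA {
              float x[1024], y[1024];
              float vx[1024], vy[1024];
          } particles_soa;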

          There are also other things, like his ‘defer’ statement that runs something at the end of the scope, so that code like:
          Fn() {
              a = new A();
              if (reason) return;  // early return: 'a' leaks
              delete a;
              return;
          }
          doesn’t come up as often.
          In C++ you would have to write some scoped deleter class, or a goto to a cleanup label, or remember to call ‘delete a’ before every return statement in your Fn(). In Blow’s language you just follow ‘a = new A()’ with ‘defer delete a’, and in 6 months’ time when someone adds another return statement deep in the body of your function, they don’t need to think about that little A you created earlier.

          C++ gets the job done, but computing has changed in the past 32 years and we can do better.

          1. Kian says:

            In C++ you’d use std::unique_ptr, no need to write your own scoped pointer class.
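
            Roughly like this, as a minimal sketch of the early-return case from the example above (the types are placeholders):

            #include <memory>

            struct A {};   // stand-in for whatever the original example's A was

            void Fn(bool reason) {
                auto a = std::make_unique<A>();   // the unique_ptr owns the allocation
                if (reason) return;               // 'a' is deleted automatically on this path...
                // ... use *a ...
            }                                     // ...and on every other way out of the scope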

  22. Bropocalypse says:

    I really like the idea of a language that allows both manual AND automatic garbage collection. It makes sense to me; the only reason for not implementing it, as far as I’m aware, if not the lack of the idea, is the effort required to implement it.
    Actually, three modes would be ideal: full automatic, full manual, and allocated-time automatic. Just do automatic garbage collection when there’s not much else going on, for example, or have it kick in when resources are being over-used… There’s a potential swath of options for these modes. I think having more options is better than fewer, but then arguably you get into the “high concept” area of language design. Which is still not a bad thing, necessarily.
