John Carmack 2012 Keynote Annotated:
Part 3

By Shamus Posted Friday Aug 10, 2012

Filed under: Video Games 81 comments


At the risk of repeating myself: Here is the full presentation. My comments with timestamps follow.


Link (YouTube)

“You have paid the price for your lack of vision!”

16:50: “RAGE did feel a little stiff.”

He’s talking about the interactivity of Doom 3 versus how static Rage felt.

In Doom 3, there were moving lights and machinery all over the place. In every room it seemed like there was a set-piece industrial machine doing machine-stuff. If you love watching little assembly-line movies like I do then these were fun to watch. Lights moved. The computers were interactive. The televisions had shows on. There were in-world videogames. There were physics objects and other dynamic items.

In RAGE, very little moves. He said it feels “stiff”, but for me it felt “dead”. Gorgeous, but dead. The thing is: We’ve got this astounding megatexturing system, and a lot of times I think we wound up with it acting as a very expensive form of skybox.

I’m not trying to claim that megatexturing is a failure or anything, but if I was making a game I know I’d pick id Tech 4 (Doom 3 engine) over id Tech 5 (Rage) in a heartbeat, even without giving the Tech 4 any kind of graphical overhaul. Then again, I’m a sort of graphics hipster in a relative sense, so I’m probably not the best person to ask. (I don’t think graphics were better in the past, just more efficient in a strict cost / benefit sense.)

Back when I was messing around with the Doom 3 engine, I was really excited about the real-time lighting. My thought was that it could be used to solve certain problems with procedural generation of indoor spaces. If you want to make a procedurally generated world, I see there being three major challenges to overcome:

  1. Culling – Without doing pre-computation on a level (because you just poofed it into existence when the player began a new game) how do you figure out what you should and shouldn’t be drawing?
  2. Pathing – Without analyzing the level geometry and building a network of waypoints, how can monsters and NPCs understand how to navigate the world?
  3. Lighting – How do you generate shadows?

I’d always assumed #3 was the hardest, but id Tech 4 seemed to solve it. I think #1 can be solved with a few simple tricks, allowing for the fact that we have so much horsepower now that we don’t need to perfectly optimize a level.
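To make “a few simple tricks” slightly more concrete: the brute-force version is just per-frame distance-plus-view-cone culling over world chunks, with no precomputed visibility at all. A minimal sketch of the idea (hypothetical code; every name here is mine, not from any id engine):

```python
import math

def visible_chunks(chunk_centers, cam_pos, cam_dir, fov_deg=90.0, max_dist=200.0):
    """Cheap runtime culling: draw a chunk only if it's close to the
    camera and roughly inside the view cone. cam_dir must be unit length."""
    cos_half_fov = math.cos(math.radians(fov_deg) / 2)
    out = []
    for center in chunk_centers:
        to_chunk = [c - p for c, p in zip(center, cam_pos)]
        dist = math.sqrt(sum(d * d for d in to_chunk))
        if dist > max_dist:
            continue                # too far away: skip it
        if dist < 1e-6:
            out.append(center)      # camera is inside this chunk: always draw
            continue
        # dot(direction-to-chunk, view direction) >= cos(fov/2) means "in cone"
        dot = sum((d / dist) * v for d, v in zip(to_chunk, cam_dir))
        if dot >= cos_half_fov:
            out.append(center)
    return out
```

It overdraws anything the cone can “see” through walls, but that’s exactly the trade: spend horsepower instead of pre-computation.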

Pathing is still a bit dodgy in my mind, although it’s not exactly my area of expertise. (For the record, when I’m talking about “pathing” I’m not talking about the A* search algorithm where it figures out how to get from A to B. That’s “easy”, in a relative sense. I’m talking about figuring out if A and B exist, where they are, and how they relate to each other. This is often done by hand, or through an expensive analysis during development. If we want to make “random” levels then we need to be able to build footpaths through them that an AI can follow.)
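To illustrate the “do A and B exist, and how do they relate” half of the problem: once a level has been poofed into existence, a single flood fill can label its connected walkable regions, so “can this monster reach that spot at all?” becomes one comparison before any A* ever runs. (A hypothetical sketch; nothing here is from a real engine.)

```python
from collections import deque

def label_regions(walkable):
    """Label connected walkable regions of a freshly generated grid.
    Two cells can reach each other iff they share a region label, so
    'can a monster at A reach the player at B?' becomes one lookup."""
    h, w = len(walkable), len(walkable[0])
    label = [[-1] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if not walkable[sy][sx] or label[sy][sx] != -1:
                continue
            # breadth-first flood fill outward from this unlabeled cell
            queue = deque([(sy, sx)])
            label[sy][sx] = next_label
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                       and walkable[ny][nx] and label[ny][nx] == -1:
                        label[ny][nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return label
```

Two cells are mutually reachable exactly when they carry the same label, which also tells a spawner where monsters can legally be placed relative to the player.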

17:40: “We did make the decision to close up our mobile development.”

A few years back id began this experimental thing where they made games for portable devices. At the time – and this was bloody ages ago – I remember commenting that this was a really interesting move because it let Carmack play to his strengths, which have always been efficiency and performance. LOTS of people can make fancy photorealistic lighting these days, if only because the hardware is so ridiculously powerful. NOT many people can get a really tight rendering pipeline running on (say) a system with only 250MB of memory or some other crippling limit.

And now Bethesda has closed it. Even though it was making money. Because…

18:05: “The Bethesda Family really is about swinging for the fences”.

Publishers and their AAA focus are beginning to sound like Orks screaming “MOAR DAKKA!”, except they’re using guns that shoot money and they don’t know what they’re aiming at. This sort of single-minded AAA monomania is self-defeating. Remember when EA bought casual developer Playfish for $275 MILLION? What a massive, massive pile of cash, all to buy a company I’d never even heard of. They paid an extreme premium because they abruptly decided they needed to jump in on this casual gaming thing, and they didn’t want to take the time to grow their own division internally.

And then social games became the big thing, and suddenly everyone wanted in on THAT action.

If you’ve got a small group of people who have some skill at task A, and it’s making money, then don’t kill A so you can have 5% more B. There’s value in diversifying, in keeping a lot of irons hot at once. If A becomes the Next Big Thing in the industry, you’ll have a small team that already knows the ropes and you’ll be ready to take advantage when the moment comes. You can expand your little A team into a whole division and you won’t have to crash-spend to catch up. It’s very likely those people doing A because they love it will be far more productive at A than at B. Human beings are not interchangeable light bulbs that can be moved from one task to the next without penalty.

It’s like a Civilization player: Oh man, science is the most important. So I’ll put everything into science. No expansion. No military units. No growth. Everything into science. Man, how are these other guys ahead of me? I should have the most science!

It would be one thing if AAA games were these sure-fire hits, but they’re not. The AAA realm is looking more and more like Vegas every year. If you win big you get Modern Warfare 2. And you HAVE to win big, because even mild success is failure.

Don’t take people out of “mildly profitable success” so you can cram more money into the AAA slot machine, when you can’t even know the payouts or the odds of winning.

21:00: “We haven’t tapped out the potential of what we’ve got”.

Carmack just spent the last couple of minutes talking about the upheaval of changing the development pipeline, where a major change to the engine means everyone has to re-learn their jobs. (My words, not his.) Now he’s saying that even though we’re seven years into this console generation (give or take) we’re still finding untapped power in the machines. And I was just pointing out how games cost so much to make that you can actually sell five million copies and lose money.

I love to see this industry do well, and I’m grieved when I see losses. Having said that, it will take fantastic effort for me to feel sympathy if Microsoft or Sony are foolish enough to launch a new console in this environment. So far such things have been empty rumor and speculation. Let’s hope it stays that way.

 


 

81 thoughts on “John Carmack 2012 Keynote Annotated: Part 3”

  1. Nick Pitino says:

    If those goons release another console any time soon I’ll rebel a step backwards by buying a used PS2 and only buying games for it off of eBay.

    1. Aanok says:

      On the other hand, though, the fork between console and PC hardware is getting very uncomfortable. One of the reasons RAGE performed so poorly on PCs (that is: it was quite inferior to the average for AAA) is that its founding architecture was constrained by console hardware.

      Although it’s absolutely true that a change to the development environment as radical as a new console generation will very likely be a tremendous blow to software houses, I don’t know how long we can manage with the current state of things either.

      It’s an arguable decision and, as such, somewhat defensible. I wouldn’t be surprised at all if MS especially decided to sell a new generation of Xbox. Even more so if paired with Win8 (God save us all).

      1. psivamp says:

        On the other hand, if we were constantly pushing the graphics envelope on all fronts, Deus Ex: Human Revolution probably wouldn’t have run on my two year old laptop and that would have sucked for me.

        Having the consoles as a common ground target for games keeps developers aiming to have a wide range of supported specs. It’s not all bad.

        1. Eruanno says:

          Exactly this. I like not having to buy a new computer every couple of years, because developers actually have to aim for a large spectrum of hardware. Imagine if every game was The Witcher 2! I’d be buying a new graphics card every other year. Urgh.
          (And yes, I know that they released The Witcher 2 for Xbox. I own it, and it actually runs really well as compared to the PC version that makes my PC weep for mercy.)

          1. stratigo says:

            This so damn much. I cannot afford to buy a computer every few years to keep up with LOL GRAAAAAAAPHICS. It is nuts the emphasis on making graphics better. They are good now. Stop trying to beat the dead horse.

        2. lethal_guitar says:

          Oh yes, this is my opinion too. My PC is from 2008, I only upgraded the graphics card once. Yet, I can play most newer games at acceptable framerates with reasonable settings (sometimes even rather high settings). Especially Unreal Engine 3 based games are pretty much guaranteed to run smoothly. Now with (seemingly) almost every game being based on that engine today, I can play most titles without problems :D

        3. Urthman says:

          People talk about how someday we’ll have machines that do photorealism and then the graphics hardware arms race will stop.

          For me, that day has already come. Half-Life 2: Episode 2 is good enough. If a game looks at least that good, I don’t have any need for it to look any better.

          Better, more imaginative artwork? Of course. But actual technological improvement requiring more hardware? I’ve got all I need.

    2. Trix says:

      Given the wide variety of pretty awesome games in the PS2 library, this doesn’t seem like a bad idea.

  2. jarppi says:

    “1. Culling – Without doing pre-computation on a level (because you just poofed it into existence when the player began a new game) how do you figure out what you should and shouldn’t be drawing?”

    Shamus, one of your ‘major challenges’ has already been overcome. The solution is called Umbra. It is run-time software you can put into your game engine, and it does exactly what you are looking for here.

    See how it works. It has already been used in several games (Alan Wake, Mass Effect 2 & 3, etc)

    More videos:
    http://www.umbrasoftware.com/en/umbra-3/demos/

    1. Shamus says:

      58 seconds into the video it explicitly refers to portal data constructed during pre-computation. That’s not a solution to the problem, sadly.

      1. jarppi says:

        Reading fail.. Didn’t really pay attention to ‘Without doing pre-computation’. My bad.

        Btw. I don’t think it is possible to do that without any kind of pre-computation. Like you said, the level just came into existence from nowhere, so there is no way of knowing immediately what you need to draw. Ofc, there is always the ‘bulldoze approach’, but that is not a good solution. But then again, I’m not an expert in this stuff, so I may be wrong.

      2. Paul Spooner says:

        Why can’t you do pre-computation on a parametrically generated world? I suspect this could be an article in itself.

      3. Leonardo Herrera says:

        I think this could be solved by making the world generation algorithms produce culling information right from the start. This may constrain the world-generation part quite a bit, though.

  3. Cody211282 says:

    I would love to make a comment on how the industry needs to find a better way of making games, but all I can think of is Carmack going “Pew Pew” with his finger guns.

  4. MrPyro says:

    It's like a Civilization player: Oh man, science is the most important. So I'll put everything into science. No expansion. No military units. No growth. Everything into science. Man, how are these other guys ahead of me? I should have the most science!

    When did you start spying on my Civilisation games?

    1. Sumanai says:

      Stop mocking my strategy Shamus! It works well until it doesn’t!

      (Yes, I’ve done that as well. Many, many times.)

  5. CaptainMaybe says:

    Just out of curiosity, how do you think the Ouya is going to fare?

    1. Nikos Saripoulos says:

      +1 to that!

    2. Indy says:

      Given the story by Checkpoint on Penny Arcade, the Ouya looks like it might be a stupid idea. Tying a mobile game to a television doesn’t seem intuitive to me. Giving nothing but free games on the other hand… Nah, it’ll flop.

      1. Johan says:

        “All the power of a phone, all the portability of a console”

        Yeah, the signs don’t look good

        1. Vekni says:

          My friends are very excited for the Ouya. They haven’t bought a console since the 90s. It’s hitting a sweet spot: portable-sized gaming in a comfortable environment, with a familiar, non-gimmicky control scheme, at a cheap price, with open-source multifunction. I don’t know anyone who wants a new video game console of the sort Sony/Micro/N seem to put out.

      2. Aldowyn says:

        Technically every game just needs to have a free component, even if it’s just an extensive demo.

  6. If EA can’t make money on 5 million sales then there is reason for a shareholder revolt and re-evaluation of the entire AAA space. Call each sale $30 for EA and then ask how you can’t turn a profit on $150 million for a series that already has 2-3 million fans invested in it.

    If we go with a (low-ball for 2012?) $10k/month total cost for developers then that money would pay for 1250 man years of work. A decade of a 125 man team of devs working full time on the product to eat through that money.

    But EA are probably thinking they might spend $100 million on advertising to try and roll the dice on a 10+ million sales explosion (a breakout AAA ‘hit’). Spending over 800 man years of developer pay on a risky move (for a series with consistent sales in the low millions) that does not increase the quality of the final product in any way. Because why pay for developers when you can pay for advertising?

    1. Robyrt says:

      Yeah, that’s a low-ball estimate for 2012. Assassin’s Creed (2007) had 100-some developers, Assassin’s Creed 2 (2009) had 450 developers. Now that they’re on a yearly release schedule, I can’t see them staffing back down to 120 again.

      And yes, a huge part of that budget is marketing and production. To use the same example, Assassin’s Creed Revelations made 60% of its sales within the first 24 hours. At that point, the team getting you more pre-orders is generating as much or more revenue than the team making the game.

    2. psivamp says:

      The best thing about this, for me, is that I really liked Dead Space. Then they had that horrible campaign for Dead Space 2, and I didn’t buy it because of their marketing. I was a practically guaranteed sale that they lost with their stupid marketing.

      1. newdarkcloud says:

        Don’t forget the Dante’s Inferno campaign and the Mass Effect 3 ads that completely missed the point of the franchise.

        Although, it appears BioWare also missed its own point, so that might not be their marketers’ fault.

        1. Aldowyn says:

          Those ME3 ads started with ME2, just saying. All the dumb gamestop preorder ads that seem to be all I see with Mass Effect…

    3. Sabrdance (MatthewH) says:

      Hollywood has been doing this for years, though, so irrationality can last a while.

      Of course, one lousy summer can potentially knock some sense into everyone.

  7. Raygereio says:

    Having said that, it will take fantastic effort for me to feel sympathy if Microsoft or Sony are foolish enough to launch a new console in the environment. So far such things have been empty rumor and speculation. Let's hope it stays that way.
    Personally, a new console generation will fill me with rage because it means I’ll have to buy a new computer for no good reason.
    Dammit, I like the fact that system requirements have become easy. Don’t mess that up, industry.

    Here’s a dumb question: is there any technical reason why megatexturing and “regular” texturing methods can’t be combined, utilizing the strengths of both?

    1. Infinitron says:

      Personally, a new console generation will fill me with rage because it means I'll have to buy a new computer for no good reason.

      Doubtful. Even the new generation of consoles will be weaker than a modern PC. Unless your box is from 2005 or something.

      Also, it will take some time before developers learn how to fully exploit the new hardware.

      1. Mephane says:

        But the PC versions of many games made specifically for consoles and for PCs only as an afterthought will probably perform so poorly that you still need new hardware to catch up.

        1. Infinitron says:

          They’re usually not that bad.

          1. Mephane says:

            It only appears so because the performance of newer PC hardware has increased while console performance remains static, and thus, relative to the games’ demands on processor speed etc., PCs have become faster. A new console generation will lead to a sudden jump in hardware requirements, and many computers that are a few years old will have trouble keeping up.

            1. Infinitron says:

              We’ll see. I think the “poor optimization” thing is an exaggerated phenomenon, mostly confined to a few particular infamous titles.

      2. Sumanai says:

        Consoles use the same base technology as PCs, so I don’t see why they couldn’t jump a couple of PC tech generations and catch up enough that the worse PC optimisation does the rest of the work.

        There’s also the fact that for a bunch of people the resolution on the PC has moved past the console versions, assuming it’s adjustable. If it isn’t, it’s most likely a quick port to begin with, and therefore poorly optimised.

        Besides, the people demanding a new generation want it to be really, really powerful, so it would have to be a pretty big jump to appease them.

        Not saying they should do that. The only thing I see as a problem with the current generation is the amount of RAM, but I saw that as a problem when the consoles were released, so I might not be the right person to talk about it.

  8. X2Eliah says:

    Personally, I WANT a new console generation because I am so very much fed up with great games being crippled because their levels / areas are adjusted to suit A GODDAMN 512MB OF SYSTEM MEMORY. Gaming computers have 2, 4, 8, heck – 16 gigs of memory now. Okay, 8 and 16 may be a stretch, but come on – even the most pitiful baseline has FOUR TIMES the system RAM that games are FORCED to be developed for.

    So no, I disagree that we don’t need a new console. Fact is, we don’t need a new graphical potential. Graphics are fine, stop pushing those pixels for godssakes. But consoles need to improve the resource availability, so that gameplay and mechanics are not crippled by the lacking memory capacity (ram) / access (disc).

    The whole megatexture thing, far as I recall, was in very large part developed to deal with console-specific problems. Problems that no reasonable PC has.

    1. William Willing says:

      That’s a fair point, actually.

      One of the strengths of consoles is the unified hardware architecture. Developers know what they are developing for and consumers get the benefit of not having to worry about system specs.

      I wonder, though, would you lose that strength if consoles had expandable RAM? Would that add to development and/or QA costs? Otherwise, put 8GB of RAM in the Xbox 360 and PS3 and we’re good to go for a couple more years.

    2. Infinitron says:

      Amen to this.

    3. Mephane says:

      Aye. I have friggin’ 8 GB of RAM in my machine, which means a single game could make use of the entire 32-bit address space (4 GB) and there would still be more RAM than ever needed for the OS and all the other programs like Steam, a browser, chat, voice chat etc.

      (I remember the days when it was crucial to game performance to close as many background processes as possible, even if they are idle, just to free up memory. I am so glad those times are over.)

      But no, many console ports prefer to hammer the number one bottleneck of many PCs – hard drive latency and bandwidth. In some game forums people are already advising the use of a RAMDISK to speed up loading times by putting parts of, or even the entire, game directory in RAM.

      (Please don’t mention SSDs. They are still too expensive in comparison to traditional hard drives, especially when you don’t just pay the same money for a fraction of the space of a mechanical disk, but have to replace the device more frequently. And even if that were not the case, this is but a band-aid to the problem.)

      1. Felblood says:

        (I remember the days when it was crucial to game performance to close as many background processes as possible, even if they are idle, just to free up memory. I am so glad those times are over.)

        Clearly, you don’t use my wife’s laptop.

    4. Phill says:

      Pretty sure we are going to see a new console generation relatively soon – Sony and Microsoft don’t want to risk getting left behind by the WiiU (and for all the gamer talk about how that won’t happen (which is probably true), execs don’t think like that. Being the guy who launched a new console that flopped 7-8 years after the last one isn’t great, but being the guy who didn’t launch a new console and saw a competitor clean up is fatal – i.e. which decision has more downside if they are wrong…)

      And developers within the games industry ought to be getting hold of dev kits for next-gen consoles in time to start developing launch or near-launch titles. While there are non-disclosure agreements attached to these when they do turn up, word unofficially gets around within the industry, and there are rumours…

    5. meyerkev says:

      16 GB. You piker. 32 GB of RAM is like $250 these days.

      /Shamefully admits to doing this in his new laptop.
      //Of course, given that I’ve seen school projects use 43 GB of combined RAM/swap before, it’s not a complete waste.
      ///In complete and total agreement with this sentiment.

      1. krellen says:

        $250 is a lot of money.

      2. C says:

        Your school projects require 43 GB of RAM or your school projects use 43 GB of RAM? Either way, seems like someone is doing something wrong here.

        1. Alan says:

          Or something very, very right. :-)

          Graduate level work in the right areas (high resolution simulations leap to mind) could easily use that much memory with good reason. Admittedly, most people using 43 GB of RAM are just wasting it.

          1. C says:

            That’s fair. The term “project” made me think of undergraduate work. Actual research is a different matter.

            1. Sabrdance (MatthewH) says:

              I had a stats problem that made my 4 gigs of RAM churn overnight.

              That stats problem involved 24 matrices with 3k cells each and repeated the process 10k times, but it ran overnight.

              Still, yeah, this doesn’t come up much.

              On the other hand, I was talking to a biologist I know who mentioned running some gene sequencing that required 80 parallel processors to crunch it in a reasonable amount of time.

              1. C says:

                Yeah, even basic resequencing (i.e. substitution mutations + errors only) is extremely memory intensive. The basic problem is a genome of length ~3 billion, with 300 million reads of length 30… even using the most compact representation (2 bits per base) that’s 3 GB of RAM right there. I had to do that for my computational genomics class with a 5 GB maximum. Fun stuff.

          2. meyerkev says:

            It was undergrad (and not my code). Doing Google PageRank on a 3 GB input file. I think we had ~200k nodes, and you have to maintain links between most of them plus values on all those nodes. Figure each node averages 10k connections at 4 bytes each (and probably double or triple this number depending on the data structure and its particular memory patterns) plus probably 30-50 bytes of bookkeeping per node, and you end up with these HUGE RAM requirements. Combine it with not sleeping for a few days, and not being particularly optimized because we were both dumb enough to take 4 CS classes in one semester, and hadn’t slept in a bit… yeah. I admit that it was really bad one-off code to generate a file that no one was going to ever see (or grade).

            /Also had some AI stuff that would take 3 days to run because each decision “node” had ~27 child “nodes” and you had to search them ALL. So going 8 moves deep meant you had to search 20 million separate combinations and were storing a good 4-5 million at any given time.
            //And the code that had 4 different data structures holding the same info in 4 different ways depending on access needs, because RAM was cheap, inserts/deletes were rare, and accesses were common and dog slow otherwise.
            ///And then there was the code I wrote for work where I gave it 16, then 24, then 40 and it ran out of memory after 3 days. (Giant DB dump. Rewrote it to get parts of the db over and over and dropped it down to ~8 GB).
            ////And let’s not talk about any code I write after 2-3 days without sleep.
            /////Best part of summer internships: Getting to sleep 6 hours a night.

            1. C says:

              That’s fair. Seems like a ridiculously large input to drop on undergraduates, but then again professors aren’t always reasonable.

    6. Eruanno says:

      What about…

      Skyrim – Huge, open world and pretty darn big indoor areas (okay, they are sometimes chopped into smaller pieces, but that usually just makes for good save points)
      Assassin’s Creed – Large cities, shitloads of crowds of people
      Battlefield 3 – Huge multiplayer maps (okay, so it doesn’t get 64 people per match on console, but the maps are still pretty darn huge)
      Not to mention Saints Row, GTA, etc. with large open worlds and many ‘splosions.

      Just sayin’.

      1. burningdragoon says:

        Skyrim has some pretty huge memory issues after a large amount of hours, especially (only?) on the PS3. So… not the best example.

    7. 4th Dimension says:

    As far as I know, the prime advantage of megatexturing is that you don’t use regular textures that repeat and whose repetition is visible to the human eye. With MT, your artists can literally PAINT the, let’s say, landscape without worrying too much about the costs.

    8. Taellosse says:

    While, at this late date, there’s absolutely some truth to this comparison, it isn’t as bad as it sounds to be comparing the 512MB of the Xbox to the 2-6GB of a typical gaming PC. As a dedicated device, a console can use its RAM MUCH more efficiently than a PC ever does (all those background processes eat up a good deal of capacity). The 512MB of a current-gen console is, I think, vaguely equivalent to ~1.5GB on a PC, performance-wise.

  9. MrGamer says:

    Hmm, I would really like to see more optimization work done in the industry lately; too many CTDs, uneven framerates, internal software problems. It frustrates me to no end when I can’t even play my damn games at a basic level.

  10. MadTinkerer says:

    “Don't take people out of “mildly profitable success” so you can cram more money into the AAA slot machine, when you can't even know the payouts or the odds of winning.”

    On the other hand, if you let people do whatever they want, you end up with two major new franchises (one of which is a hugely popular F2P esport), two major sequels, one major remake/upgrade/re-release, one free release of an experiment that had run its course but was good enough to release to the public, plentiful upgrades to everything, lots of software tools for everybody, wearable computing prototypes, the expensive but really neat Razer Hydra, one billion virtual hats…

    And no Half Life 3.

    Hard to say which I’d choose.

    1. Mephane says:

      In my mind, Portal alone makes up for the absence of Ep3 or HL3. And it is not as if Valve is special in not continuing a highly popular series, so this is not a drawback particular to Valve’s approach.

      1. Sabrdance (MatthewH) says:

        Depends on what your company is geared towards.

        Valve is an innovation company, like the Lockheed Martin Skunk Works. It produces some great novelty items and a bunch of hits, but there’s probably a lot lost in the process.

        EA is geared towards building franchises and keeping their schedules. Get the games out by Christmas every year.

        Depends on what you want from your companies.

        1. MadTinkerer says:

          Well I want to kill some damn space slugs that murdered my sidekick’s father! Is that really too much to ask?

  11. MadTinkerer says:

    Oh hey: super easy fix for pathing, BTW. Do it the Minecraft way.

    Your world is procedurally generated, right? So the computer already knows what is solid and what is space-to-move-through. “Floor” is just space-to-move-through that is on top of Solid. So just have all of the AIs look at the local “floor spaces” when they do their A*, and double-check every so often in case the player alters the terrain, or there are moving platforms.

    Also see Descent the board game (seriously: free rulebook for the game at FFG’s site) or Chess.

    If your game can have terrain that is arbitrarily scaled by hand by level designers, you need something more complicated, like Source’s navigation node meshes. But all of your example ProcGen projects, as well as Minecraft and all Roguelikes, are deliberately simplistic grid-based worlds. Just use the grid!

    1. ferry says:

      The guys over at wolfire.com are also using navigation meshes in their game. Given the amount of discussion they’ve posted regarding their engine, it’s almost open source, without the code. They posted this in regards to pathfinding (it also leads to a more detailed blog post). Their game does not have dynamic generation of levels, but players can easily alter a level in-game and play it in seconds, so I think AI navigation is something that’s almost solved.

      1. Phill says:

        AI navigation is pretty much solved for some kinds of problems. Given a series of lines defining the navigable area (such as your cave / tunnel walls – the lines are the edges of the walkable area) it is a solved problem to a) break that space down into convex polygons and b) find the shortest path from A to B within that set of polygons. As long as your game entity is a mathematical point.

        The complications arise when your AI entity has an actual height and width, can jump, can walk off edges and drop down, has inertia, or is trying to get to a moving destination (the player, who just won’t stand still). Having pathfinding that is situational would also be nice – if I’m out for a stroll (or am a patrolling AI) I will take path A; if I am in a combat situation (guard has spotted you) I’ll be rather more willing to vault over that low obstacle to get where I want to go. And you might want memory too – it’s not hard to find videos of people in e.g. World of Warcraft soloing tough mobs that ought to crush them, by getting the mob to run backwards and forwards along some track to reach melee range while the player pings them to death and keeps shifting position slightly (but enough to make the mob switch between point A and point B as the spot it wants to reach to hit the player).

        So while some classes of pathfinding are indeed a solved problem, there are always more wrinkles that can be added for the sake of greater realism. And there are almost always going to be pathological (pun not intended) corner cases that can be exploited.

  12. rayen says:

    “…beginning to sound like Orks screaming “MOAR DAKKA!”, except they're using guns that shoot money and they don't know what they're aiming at…”

    If those orks are the Bad Moons, the pubs are exactly like orks. Also, sadly, as they say: never enough Dakka.

  13. Paul Spooner says:

    I’m eagerly awaiting the next installment, as 21:00 begins the “this sounds exactly like what Shamus has been harping on for years” section of the keynote.

    The posts to date have been good of course… but folks, it’s about to get real.

  14. Daemian Lucifer says:

    “In Doom 3, there were moving lights and machinery all over the place.”

    Wait, there were lights in Doom 3? I must’ve missed them.

    1. Bryan says:

      Well, there was the one that caused your gun to disappear… :-P

    2. Christian Severin says:

      … said the Light-Bringer.

  15. thebigJ_A says:

    Rumors and speculation? People in the industry already have “Durango” (the next Xbox’s codename); they just aren’t allowed to talk about it yet. It’s why E3 was so barren this year: everyone’s holding off till the console the new games will be on is announced.

    Journalists have been talking about it informally for rather a while now.

  16. Eric says:

    “I'm not trying to claim that megatexturing is a failure or anything, but if I was making a game I know I'd pick id Tech 4 (Doom 3 engine) over id Tech 5 (Rage) in a heartbeat, even without giving the Tech 4 any kind of graphical overhaul. Then again, I'm a sort of graphics hipster in a relative sense, so I'm probably not the best person to ask. (I don't think graphics were better in the past, just more efficient in a strict cost / benefit sense.) ”

    As far as I know, the only reason Tech 5 uses pre-calculated lights is for performance reasons. From a technical perspective I don’t think there’s anything in particular that precludes the use of fully dynamic lighting, other than the fact that it’s much more intensive and not at all appropriate to use on consoles without significant sacrifices.

    The only real downside of Tech 5 and MegaTexture is that you can’t have, say, terrain that deforms… but there are ways around it, for instance, using an interactive actor separate from the terrain for when you need a crater to appear. And you could probably also implement a vegetation system allowing for stuff like grass that blows in the wind, trees that fall over, etc.

    I also want to take the leap and say that Tech 5 might be ideal for something like a strategy game or top-down RPG, where fully static worlds don’t matter. With MegaTexture you could still build levels tile-based like the NWN games, but then could overlay far more detail on top and get rid of the monotonous look that snap-together levels have. Limitations in texture resolution wouldn’t matter from a zoomed-out perspective either.

  17. LintMan says:

    If you've got a small group of people who have some skill at task A, and it's making money, then don't kill A so you can have 5% more B.

    Businesses unfortunately do this all the time. From what I can tell, it’s all about “Return On Investment”. If your resource/monetary investment in project X gives you a return of $Z, while an equivalent investment in project Y gives you a return of $(5*Z), then on the face of it any investment in X is essentially LOSING you a potential $(4*Z). Of course there are more factors than that, but that’s the gist of it.

    Imagine there’s a beach with buried money. In one area, each person can dig up $25 per hour. In another, rockier area, you need 4-person teams, but they can dig up $500 per hour. Now let’s say that, due to other constraints, you can only bring 12 people to the beach. Where do you assign your people?
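    In code, the toy beach problem works out like this (using LintMan’s numbers as given – the rates and headcount are his hypothetical, not real figures):

```python
# LintMan's beach: 12 diggers, $25/person/hour working solo in the easy
# area, versus $500/hour for each 4-person team in the rocky area.
solo_rate = 25      # $/hour per person, easy area
team_rate = 500     # $/hour per 4-person team, rocky area
people = 12

all_solo = people * solo_rate            # 12 * 25 = $300/hour
all_teams = (people // 4) * team_rate    # 3 teams * 500 = $1500/hour

print(all_solo, all_teams)  # → 300 1500
```

    Every person you pull out of a rocky-area team to dig solo costs you far more than they earn, which is the ROI logic in a nutshell.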

    1. Shamus says:

      See, I can respect that. If it was all about ROI it would at least make decent business sense in the short term. But it’s pretty clear that nobody knows or can predict the ROI on a AAA game. Giving up predictable small income for uncertain income slightly sooner just doesn’t make sense, short or long term.

      1. Taellosse says:

        It does in the gambler’s mind, though. And the senior executives of major corporations are all trained to think like gamblers, because they all operate in the environment of the stock market, which is, fundamentally, epic-scale gambling.

        And the unfortunate truth is, if the ROI on a breakout hit is sufficiently massive, it actually can underwrite many expensive failures, and still leave a considerable profit in the bank. Which is the entire model on which Hollywood has been functioning for ~40 years. Particularly if you only actually have huge losses occasionally, and the rest of the time you break even or only make moderate gains, despite spending massive amounts.

        Both Hollywood and the games industry (not to mention Wall Street as a whole) demonstrate that, if you’ve got sufficiently massive numbers, a gambler’s mindset actually can be quite profitable, most of the time. It’s only even semi-sustainable when employing large enough numbers, but when you do, it works, in a drunken-boxing, stumbling-into-success-by-accident kind of way.

        1. stratigo says:

          That, and executives actually don’t care about the company, since they are already swimming in money. It is all about making themselves look good, and a steady return does not look good, but a huge blockbuster does. One successful blockbuster and a dozen failed ones is worth more to the image of the executive than a thousand cheaper, steadily profitable games.

          And this translates to pretty much every industry.

          1. Sabrdance (MatthewH) says:

            At the risk of invoking one of the banned subjects – one of the major lessons from the financial collapse of 2008 is that no one, not even the risk specialists, understands risk. These guys were paid huge amounts of money to model high risk/high yield financial instruments and every single one of them botched it in highly complicated and also different ways.

            It isn’t that they are gambling – they think the risks are calculated. We do six blockbuster AAA games; so long as 1 pays off, we do better than six medium-performers that all pay off. The odds of all six games bombing are less than the odds of all six medium performers paying off. Therefore, AAA blockbuster. This is the Hollywood math, and the financial sector math, too.
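            That “calculated risk” argument is easy to put in numbers. Every figure below is invented for illustration; the point is only the shape of the expected-value comparison, not any real studio’s books:

```python
# Toy version of the "Hollywood math": six $100M blockbusters where a hit
# returns $800M with 1-in-6 odds, versus six $20M mid-budget games that
# each reliably return $30M. All numbers are made up.
blockbuster_cost, blockbuster_hit, hit_odds = 100, 800, 1 / 6
mid_cost, mid_return = 20, 30

# Expected profit across six projects of each kind (in $M).
ev_blockbusters = 6 * (hit_odds * blockbuster_hit - blockbuster_cost)
ev_mid = 6 * (mid_return - mid_cost)

print(ev_blockbusters, ev_mid)  # blockbusters look better on paper
```

            On these invented numbers the blockbuster slate wins (roughly $200M expected versus $60M) – but shade the hit odds from 1-in-6 down to 1-in-10 and the whole slate is underwater, which is exactly the “no one understands the risks” problem.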

            But no one understands the risks, because we haven’t done this before. And then we get something like summer 2012, which killed 3 or 4 blockbusters right out of the gate. The lesson we’re probably going to get is “OK, copy The Avengers” rather than “don’t spend $200 million on John Carter.” The reason we’re even here is because “even Waterworld made money.”

            This has persuaded me of something I read in The Wisdom of Crowds. We can make good decisions by following the crowds, but only so long as not everybody is doing it. Someone has to be the Berkshire Hathaway or the Valve that is trying to make actually better mousetraps.

  18. Amarsir says:

    I’m just thinking out loud here, but … it seems we treat gamer spending as “$ is fixed, maximize play time at minimum entertainment level.” But what if it’s actually “play time is fixed, minimize $ at minimum entertainment level.”?

    The report came out today that retail video game sales in the US are down for the 8th month in a row. July was 20% below July 2011, after June was 29% lower than the previous year. Mobile / social games are blamed (as is digital distribution), and the next gen consoles are heralded as the next best hope. (Yeah.)

    If it’s true that people aren’t buying Shootguy 4 because they’re content with whatever indie Zynga ripped off most recently, then maybe blockbuster development is actually correct. Console releases can’t compete on price with Angry Birds, even on older engines. But creating a higher quality (-looking) game, something unmatched by older releases or mobile platforms, might give a unique advantage. And perhaps customers don’t really care if they pay $2 or $60 but demand high standards of a game they can’t carry around.

    (Probably untrue given that this was already happening before the mobile/social juggernaut, but still something to consider.)

  19. Lalaland says:

    While we’re still having the engine tech arguments Crytek were good enough to release another Cryengine 3 trailer. You have to appreciate the power of TSTTT!!!

    More seriously, this is almost a hit list of what the next generation of console hardware could bring mainstream. While it’s built for a game that in all likelihood will bore me to tears (unless it gets back to the Crysis 1 sandbox play style), all of these features could make for better games. Even something like ‘Pixel Accurate Displacement Mapping’ could be utilised to give a more ‘real’ feel to the various McGuffins adventure games rely on. Even stylised art is currently restricted by technology to flat surfaces with varying levels of shine. With this, why not a game where every character is flat grey but is uniquely identified by the texture imparted by a unique bump map?

    Except “Composite 3D Lens Flares & FX”; that can go die in a fire. Why do engine developers think we want to play games as if through a dirty porthole? Even though I’m trying to play BF3 singleplayer for ‘teh grafix’, the constant ‘dirty lens’ effect is making me grind my teeth. I know that’s a shallow reason, but I’ve just gotten my first new gfx card in 4 years and I have a weakness for shiny.

    1. psivamp says:

      I was a bit annoyed by the lens flare segment as well. I spend most of my waking life looking through dirty lenses; I really don’t need my video games to expensively recreate that for me.

      While on the one hand, this engine allows things to be very pretty and complicated, I don’t necessarily see it as necessary. Crysis doesn’t need photorealistic tree roots — NatureSimulator2013 does, but that’s not real and has almost no market. I think there’s too much emphasis on the shinies. I don’t need it. I can play games at 800×600 on my laptop — I’d like to be able to increase the resolution and push the draw distance further out a bit in Tribes so that I could meaningfully play Sentinel, but I can play most games at 800×600 and not feel like I’m missing out too much.

      1. Lalaland says:

        I would argue that with technology like this the journey can become as interesting as the destination. I’m an absolute sucker for grand vistas in computer games, and the thought of cresting a hill to see a lush jungle carpeting the horizon as gathering storm clouds race across the valley floor excites me. A lot of the reason for Urban Brown in games is that urban/industrial settings are a natural fit for hard, flat, angled surfaces, whereas nature can do almost any shape but hard and flat. With tech like this we might finally move away from boring urban dystopias (of course, as is usually the case, we’ll soon wind up in boring rural dystopias, but hey, at least it’s a change).

        One of the things that consoles have caused to rot is the Settings menu. Once it was a wonderland of obscure settings that you could tweak to your heart’s content to arrive at the framerate/pretties balance that suited you. These days you’re lucky to get 16:10 resolutions, let alone anything as fancy as LOD adjustment tools.

  20. Ish says:

    While I don’t particularly agree with the way it works either, there are some clear misconceptions about the basic business model that large publishers like EA work upon.
    For them, moderately successful games are the absolute worst-case scenario.
    Games that do extremely well bring in a great deal of money and greatly help the company.
    Games that do poorly? They can write them off and pull themselves in some nice, big tax breaks.
    Games that do merely moderately well? Well… those games can’t be written off, and they don’t make the kind of money the company wants; they just add to the taxes and whatnot owed on the big, successful games the company loves.

    This article has some discussion on it that I’d found interesting

  21. foo says:

    Are you going to further transcribe the keynote? You’ve got a lot of responses, and I really enjoy reading instead of watching the video.

    1. Rilias says:

      Good question.
      Is the transcription dead, Shamus? Am I supposed to watch the keynote all alone, without your supervision and tutelage? What if I get confused and scared?
