Experienced Points: With Great Power…

By Shamus Posted Tuesday Jan 21, 2014

Filed under: Column

So the next-gen consoles are out. Let’s talk about what we can do with all of that processing power without going broke making hyper-realistic graphics.

So that’s the article. Now let’s go on a tangent. At one point in the article I said…

If you’ve got even a mid-range computer with a decent graphics card, then your computer has more processing power than every computer that existed before the year I was born. (1971) That’s including the supercomputers built by world governments and all the computers involved in sending humans to the moon.

How I arrived at this conclusion:

There’s Moore’s Law, which colloquially says that “computers get twice as fast every two years”. That’s not exactly what the man said, but it’s close enough for most discussions and is a good rule of thumb when measuring performance. He was actually talking about transistor densities, and while more transistors = more power, the relationship is not 1:1 and there’s plenty of room for haggling over what “performance” means.

In any case: It’s been 42 years since I was born, which means things have “doubled” 21 times. 2^21 is 2,097,152, which means that your computer is supposedly that many times faster than a 1970 computer. Were there 2 million new computers built in 1970? I don’t think so. These were the days of “big iron”, when computers cost millions of dollars and were only owned by large entities. Sales were probably in the thousands or tens of thousands.
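If you want to check that arithmetic, here it is as a quick Python sketch. It assumes nothing beyond the colloquial two-year doubling rule of thumb:

# Back-of-envelope version of the Moore's Law arithmetic above.
# Assumes the colloquial "doubles every two years" rule of thumb,
# not Moore's actual transistor-density claim.

YEARS_ELAPSED = 42        # 1971 to the time of writing, as used above
DOUBLING_PERIOD = 2       # years per doubling (rule of thumb)

doublings = YEARS_ELAPSED // DOUBLING_PERIOD   # 21
speedup = 2 ** doublings                       # 2,097,152

print(f"{doublings} doublings -> roughly {speedup:,}x faster")
# 21 doublings -> roughly 2,097,152x faster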

We can add in all the computers before that point, but we have to keep in mind that as we go backwards computers continue to get slower. So a computer from 1967 would only count as ¼ of a “1971 computer”, a 1965 computer would count as 1/8 and a 1959 computer would only be 1/64. Adding the computers this way, I think my original (admittedly hyperbolic) claim is true enough: The average desktop computer, if magically transported back to 1971, would be the envy of world governments and would be able to out-perform all the others combined.
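Here’s a rough sketch of that “add up all the old machines” idea, weighting each older computer by how it compares to a 1971 machine. The yearly unit counts are placeholders I made up purely for illustration, and even a generous guess falls laughably short of two million:

# Weight each machine from year Y as 2**(-(1971 - Y) / 2) of a
# "1971 computer" and add them all up. The per-year unit counts
# below are hypothetical placeholders, not real sales data.

hypothetical_units_per_year = {year: 20_000 for year in range(1950, 1972)}

total_1971_equivalents = sum(
    units * 2 ** (-(1971 - year) / 2)
    for year, units in hypothetical_units_per_year.items()
)

print(f"~{total_1971_equivalents:,.0f} '1971-computer' equivalents")
# Even 20,000 machines a year for two decades only adds up to
# ~68,000 equivalents -- nowhere near the 2,097,152 that one
# modern desktop represents.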

I tried to work out just how much more “powerful” that computer would be, but there are so many incomparables. Sure, clock speeds have gone up by a factor of 2,097,152, but that’s not the whole story. We also have L1 and L2 cache. Memory isn’t just faster, it’s also larger, increasing how much stuff you can work on before you are obliged to go to the hard drive. Hard drives are orders of magnitude faster, plus they usually have some level of cache of their own. What about the RIDICULOUS processing power of your graphics card? That’s a whole bunch of extra processors that aren’t even included in that 2,097,152 figure.

So the question of “how much faster is it” kind of depends on what you’re trying to do. If you’re doing something completely linear like looking for primes or calculating pi, then your computer will go roughly 2,097,152 times faster. But what if we’re doing something that requires a bunch of memory? What if we’re looking for patterns or repetition in pi? Once the number of digits goes above the memory limit of your typical 1970 computer the speed differential is going to skyrocket as the old computers have to write and read from disk. You could even say it will skyrocket yet again when we hit the limits of the hard drives of the day, since then you’d need to have a bunch of interns running around swapping reels of magnetic tape or whatever.

But even this comparison ignores the GPU power. What would the speed difference be if, instead of calculating pi, we were trying to run Borderlands at 1600×900? We’d have to hand-wave the fact that it’s physically impossible to run the game on those old machines. We could abstract it out and say “how long would it take a 1970 computer to render the typical Borderlands frame?”, which is a bit more comprehensible and lets us ignore stuff like OS, drivers, input devices, etc. Now the problem is simply a matter of reading in a fixed number of polygons from disk, processing them, and saving the resulting image out to disk again. (No point in trying to display the image on the monitors of the day. They don’t have the resolution or color depth.)

This seems like a good thing to produce with millions of dollars of specialized computer equipment.

Going back to our 1:2,097,152 speed differential: If it takes us 1/30 of a second to render a frame of Borderlands then it will take the scientists of the past in the ballpark of 20 hours of raw processing to make it happen.

But wait! It’s worse!

That’s assuming it’s just one modern processor vs. a single 1970 processor, which is VERY much not the case. One modern processor is 2 million times faster than one from 1970, but your graphics card has, what? Dozens of cores? It depends, and I don’t know enough about the core counts in modern cards and how those counts have changed over time. In any case the graphics card is technically a whole pile of computers that have been perfectly optimized for this specific task.

Perusing Wikipedia, it looks like if I treat your typical GPU like 15 regular processors, I can run these numbers without being accused of unfairly stacking the deck against the machines of the past. So we’ll think of your computer as 16 CPUs: your CPU plus your GPU. So your computer isn’t 2 million times faster at this task. It’s 33,554,432 times faster. The processing won’t take the old computer 20 hours, it will take 13 days.
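For the curious, here’s where the 20 hours and 13 days come from, as another small sketch:

# The "20 hours" and "13 days" figures above.

FRAME_TIME_MODERN = 1 / 30     # seconds per Borderlands frame today
CPU_SPEEDUP = 2 ** 21          # 2,097,152: one modern CPU vs. one 1970 CPU
EFFECTIVE_PROCESSORS = 16      # the CPU plus a GPU treated as 15 more CPUs

cpu_only = FRAME_TIME_MODERN * CPU_SPEEDUP      # seconds on the old machine
with_gpu = cpu_only * EFFECTIVE_PROCESSORS      # account for the GPU too

print(f"CPU-only gap: {cpu_only / 3600:.1f} hours")   # ~19.4 hours
print(f"CPU+GPU gap:  {with_gpu / 86400:.1f} days")   # ~12.9 days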

But wait! It’s worse!

The “20 hours / 13 days” figure is only true if they have unlimited memory, which old timers will be happy to tell you was never the case. For the purposes of this comparison, let’s give the people of the past a break and let them use the processor and memory from the 1975 Altair 8800, with whatever industrial-grade hard drives were available at the time, if only because that makes things easier on me. We’ll assume they’re saddled with the throughput and seek times of the day, but their hard drives can be as big as they need to be so we don’t have to run the legs off our interns hauling truckloads of tape drives around.

The high-end Altair had 8k of memory. Your average texture map in Borderlands is probably 512×512 pixels, with each pixel requiring 4 bytes of memory. So when it comes time to render (say) Brick’s face we need to get 1,048,576 bytes of data into our 8,192 bytes of main memory. That’s obviously not going to fit. What we’d have to do is painstakingly read in the first 8,192 bytes of the texture, render as many pixels as we could with it, then completely purge main memory and read the next 8,192 bytes from disk. Repeat that 128 times. Awesome. That’s one polygon.

(We’re ignoring mip maps and antialiasing, which would make this task much harder. We’re also ignoring the fact that we’d have less than 8k to work with, because the rendering program itself would eat it all up.)
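Here’s the chunk math, assuming a 512×512 texture at 4 bytes per pixel:

# How many 8 KB reads it takes to stream one texture through
# the Altair's main memory.

TEX_SIZE = 512 * 512 * 4     # bytes per texture (512x512, 4 bytes/pixel)
ALTAIR_RAM = 8 * 1024        # 8 KB of main memory

chunks = TEX_SIZE // ALTAIR_RAM
print(f"{TEX_SIZE:,} bytes / {ALTAIR_RAM:,} bytes = {chunks} reads per polygon")
# 1,048,576 bytes / 8,192 bytes = 128 reads per polygon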

But wait! It’s worse!

Where are we storing the final image? It’s 4,320,000 bytes, which is more than five hundred times larger than our main memory, which is already filled with input data anyway. Ignoring this (or spotting the old computer another 8k of memory just to be sporting), I suppose we’d have to read in the final image one block at a time, draw the polygons that touch that section of the image, then save it back to disk and load in the next block.
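And the same arithmetic for the output image, assuming 1600×900 at 3 bytes per pixel:

# The final image doesn't fit in memory either: count the 8 KB
# blocks needed to hold a 1600x900 frame at 3 bytes per pixel.
import math

FRAME_BYTES = 1600 * 900 * 3    # 4,320,000 bytes
BLOCK_SIZE = 8 * 1024           # one Altair-sized load of memory

passes = math.ceil(FRAME_BYTES / BLOCK_SIZE)
print(f"{FRAME_BYTES:,} bytes -> {passes} blocks of {BLOCK_SIZE:,} bytes")
# 4,320,000 bytes -> 528 blocks, i.e. the scene gets rendered
# roughly five hundred times over.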

In this scenario, it hardly matters what the processor speeds are. Yes, it will take excruciatingly long for the old computer to transform and light those polygons, and longer still for it to calculate those color values. But that processing is a really trivial part of this project. Since we’re doing the final image a single 8k block at a time, the whole thing would need to be rendered about five hundred times. We’d load in a block of image, process everything, save that block out to disk, then load in the next block and repeat. And keep in mind that each round of processing requires us to process each polygon by drawing it a little bit at a time and then swapping in the next block of texture data. The 13 days of processing would need to be repeated hundreds of times.

We could eliminate bits of the “repeated hundreds of times” figure by throwing away polygons when they fall outside the current block we’re rendering, but it hardly matters. The processing is nothing compared to all that disk I/O. It’s hard to find data on how fast (or rather, how SLOW) drives were in 1970. Even this drive from 1991 looks pretty grim. The closest I could come is this chart, which only goes back to 1979 and suggests that a 1979 drive could move about half a megabyte a second, with a seek time (according to this Wikipedia article) of 100ms or so.

Every single read from disk is going to take ~115 milliseconds even with that 1979 drive, and we need to read 128 times. Which means we’re spending fifteen full seconds per polygon just on disk I/O. Plus the time it takes to do the calculations. Multiplied by the number of polygons in the scene. Multiplied by the number of passes it takes to render it in 8k blocks. Oh, and the polygons themselves don’t fit in memory either, so we’ll need to read those in and out of memory as well.
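Here’s a rough sketch of that I/O math. The drive numbers are the 1979 figures above; the polygons-per-frame count is a placeholder I made up for illustration, since the real number depends on the scene. With these assumptions it lands in the same ballpark as the figure below:

# Per-polygon disk time on the 1979 drive, and a very rough total.
# POLYGONS_PER_FRAME is a hypothetical placeholder, not a measured
# Borderlands figure.

SEEK_TIME = 0.100            # seconds (~100 ms per seek)
TRANSFER_RATE = 0.5e6        # bytes per second (~0.5 MB/s)
CHUNK = 8 * 1024             # bytes per read
READS_PER_POLYGON = 128      # from the texture-streaming math above

read_time = SEEK_TIME + CHUNK / TRANSFER_RATE     # ~0.116 s per read
per_polygon = READS_PER_POLYGON * read_time       # ~14.9 s per polygon

POLYGONS_PER_FRAME = 64_000  # hypothetical
IMAGE_PASSES = 528           # 8 KB blocks of the final image

total_seconds = per_polygon * POLYGONS_PER_FRAME * IMAGE_PASSES
print(f"{per_polygon:.1f} s of disk I/O per polygon")
print(f"~{total_seconds / (365.25 * 86400):.0f} years for one frame")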

Was it worth the wait?

My calculator says the job would take just over 16 years. Computers would double in power eight times while the job was running. If they began the job in 1970, then sometime in March of 1986 the scientists could put down their Rubik’s Cubes and check out the completed image. I suppose they’d have to print it out, since computer displays still wouldn’t be advanced enough to display it.

 


83 thoughts on “Experienced Points: With Great Power…”

  1. Gravebound says:

    “I suppose we'd have to read in the final image one bock at a time”

  2. Axehurdle says:

    I’m glad I read this pointless hypothetical. It really made my day.

  3. Lame Duck says:

    “My calculator says the job would take just over 16 years.”

    Even longer and almost as pointless a task as the development of Duke Nukem Forever.

    1. lethal_guitar says:

Very true! If this was on Facebook, I’d click Like now ;)

  4. Kamica says:

    I love technological development =P.

  5. Irridium says:

    All that power, and I use it to play a game where you’re told to kill “bonerfarts”.

    This is to say nothing of using the internet, a creation that houses all of human history and knowledge, to look at cat pictures and occasionally pictures of naked people.

    1. swenson says:

      Technology is pretty sweet, innit?

    2. Halceon says:

      So unclassy! I use it to watch videos of cats and occasionally of people getting naked.

    3. ET says:

      There was a Ted Talk a few years ago, where a guy explained why you shouldn’t fight this.
      I can’t remember the title, but the argument basically boils down to:
      You cannot have productive uses of technology alone, without “trivial”, or “useless” things, like pictures of cute cats.

      Sure, you’re using this amazing lump of technology on your desk to do silly things, but your productivity on useful stuff is orders of magnitude better than without that technology.
      For example, how long would it take a 6th-grade student to write an essay/book report, spell-check it, and put on the final touches, compared to doing the same thing back before personal computers?
      Just the effort in re-typing or re-hand-writing it after each time you read through it for typos would be a pain in the ass.
      Wait, hold on.
      This sounds exactly like me in that grade, before my dad bought me my first computer (with a printer). :)

      1. Tizzy says:

The downside is that what is considered an acceptable 6th-grade report is a target that keeps moving higher and higher.

        1. ET says:

          Better educated 6th graders means that we all get to have an increased quality of life!
Just think of all the cool gadgets people like my nephew will invent, since the gadgets they use to do their jobs are things which were science fiction when I was his age.
          Heck, some things weren’t even science fiction yet when I was his age!
          (about 28 years ago; he’s almost two :)

      2. Retsam says:

        I don’t know if I’d agree with “we shouldn’t fight the trivial uses of technology”. Of course, I’m not advocating that we try to modify the technology; i.e. produce computers that replace all cat images with spreadsheets.

But it’s how society uses the technology that I really think could use some improvement, and changing our society’s general perception of technology is well within the realm of possibility. The problem isn’t that you can watch stupid videos on YouTube; it’s that the proportion of people who use the internet to watch stupid videos (>95%) is so much greater than the proportion who use YouTube to learn something (not even going to guess a percentage).

        1. Axe Armor says:

          I suspect that even Internet stupid people are smarter than pre-Internet stupid people just from how much smart is constantly whizzing by their heads. Even pointless cat videos sometimes link to clickbaity science in the Youtube recommended videos panel.

          Also consider: “You won’t believe these 5 things celebrities actually urinated on!” is garbage that previously had to be delivered to stupid people by magazine, and which the Internet is capable of delivering at vastly higher speeds and volumes. It’s possible that the brains of stupid people may be improving memory and comprehension in order to keep up with this deluge of saturated shit.

  6. MichaelG says:

    I was a summer student at IBM in 1976, and worked in a graphics group. I know we had a single high-res color display, but I don’t think it was 24 bits per pixel. Maybe 16?

    I think the resolution was either 256 by 256 or 512 by 512. This was in an ad-tech group. That display hardware was incredibly expensive and included a large box (size of a desktop PC now) with several huge memory cards (10″ by 10″) and loud fans.

    A couple of the guys were messing with ray tracing and made a short movie (perhaps 3 minutes?) They presented their results at a big graphics conference (SIGGRAPH.) The best they could manage was a couple of spheres and blocks moving around, at probably 256 by 256. This was produced by running a mainframe computer for days, writing one frame to video tape every few hours.

    So yes, you kids are spoiled rotten. Get off my lawn!

    1. Alexander The 1st says:

      Let me guess; the crowd at the conference gave a standing ovation and said “How did you manage to do that? That’s amazing!”? :p

    2. Shamus says:

Man, 1976? If I’d been there I would have been 5. And I would have thought it was the greatest thing ever. Heck, I went bonkers over stuff that superficially LOOKED like a computer.

      1. MichaelG says:

        I thought it was great. After all, I’d grown up reading science fiction. I was never going to get to play with a spaceship, but this was a Computer I could use myself! Like a dream come to life.

      2. Mechaninja says:

… like the movie Tron, for example. While we were in line I went to great lengths to explain to my brothers how they rendered things on computers.

        I had no idea what I was talking about of course.

    3. Chaz says:

      I love the concept of writing out a frame at a time to video tape… My equivalent story is from when I went on a school tour of Oldbury nuclear power station in the UK. They took us through their computing centre, which was running (I think) some Perq workstations. To impress us, one of the operators/programmers nonchalantly fired up a graphics program that drew a multitude of rapidly changing rectangles on the screen. Raised on ZX81s (and maybe I had my ZX spectrum by then) I was bug-eyed with astonishment at how quickly it ran. And then they showed us the Winchester drives, and I was even more excited. For years and years I lived in awe of the memory of these beasts, and I remember well the astonishment of realising that my current computer (then) was more powerful than one of those beasts. I think a minecraft redstone computer would probably be faster than a Perq now…

      1. Steve C says:

        > a minecraft redstone computer would probably be faster

        Oh man. It’s pretty amazing when you think about it. A computer built inside a computer by a kid is faster than a multi-million dollar computer of yesteryear.

      2. Nova says:

The internet is a WEIRD place and I am going to guess there is a really high probability that you went to the same primary school as me (or one nearby that was very similar) – a C of E one near the Severn? I went on exactly the same trip in the late nineties and they didn’t show us any of the cool stuff :c I just remember squabbling because we were only allowed to spend £1 in the gift shop and someone tattled on me that I spent more :/

  7. swenson says:

    Posts like this remind me why I’m a CS major all over again–because computers are seriously cool machines.

  8. This article reminded me of a Filk-ish song by Frank Hayes.

    “And it’s cheer up me lads, let your hearts never fuss
    When you’re integrating systems for the S-100 bus!”

    (link goes to a YT version of the tune)

    1. ET says:

That bus sounds freaking awesome!
      Numerous, glorious pins; Everywhere!

    2. Angie says:

      Whee, another filk geek! I used to have that one on a cassette tape album-ish sort of thing, although I forget which one. :D

      Remember “The Programmer’s Alphabet?” Either Jordan Kare or Steve Savitsky, I forget which:

      A is for ASCII, our alphabet’s name
      B’s for the bugs for which we get the blame
      C’s the computer which never works right
      And D’s for debugging, all through the night….

      Angie, humming

      1. I’ve got several Leslie Fish tracks on my mp3 playlist as well as Jordin Kare:

        For the engineer sighed as he studied those plans
        And he read the demented designers demands
        Then he called in his techs and he said to his crew
        This guy seems to think that there’s jobs we can’t do
        And parts we can’t build so let’s give him a thrill
        We’ll build his machine and then send him the bill

  9. Shamus, regarding the “better A.I.” concept: Has any game tried integrating the evolutionary methods computers have been using in designing 3D forms/models or solutions to tasks? I’m not 100% sure on the methodology, but as I understand it, the computer will try out various solutions, take the most successful ones and “breed” them by merging or hybridizing their methods/structure/whatever.

    With infinite room for improvement, this could lead to nigh unbeatable ‘bots in an FPS, but there are other things for NPCs to do in a game other than combat, which could lead to intriguing results.

    Coupling that with (optional) feedback to the software company that made the game where they can analyze what all of the players are doing to create new strategies for the AI to follow might be a good next step in improving the basics of game AI. Maybe?

    1. Taellosse says:

      Techniques like that already HAVE led to nigh-unbeatable ‘bots in FPSes. AI researchers can design ‘bots that play better than any human fairly easily now. They’ve even accomplished the somewhat harder task of making ‘bots that can fool other players into thinking they’re human players (without voice chat, of course) – a sort of sharply limited Turing test. The number of variables to track in your typical FPS is not that large, really, and training an AI to master them is, while not trivial, well within current capacity now.

      1. postinternetsyndrome says:

        That reminds me of that thing where a guy claimed to have forgotten a bunch of self-improving bots on a quake 3 server for several years, and when he remembered about it and checked on them, found that they just stood still, spread about the map. He first thought that they had just crashed, but the AI was running. They had just eventually – after years of getting better and better at killing each other – found that the optimal tactic was to not fight. Very poetic, and probably not true, but it’s a great story.

          1. Anorak says:

While it’s an amazing story, I think that what had actually happened is that the AI log files (where all the learning experience was kept) could never be larger than ~512 MB. Once it filled up, the bot would stop being able to make any new decisions.

            I have no sources for this. It might not be true.

            1. LazerBlade says:

But those files’ last-modified date was the date the screenshot was posted, so either the AIs were still updating, or it was an amusing and fairly elaborate hoax.

              1. postinternetsyndrome says:

                I googled on, and it turns out it was just a joke. The files were fakes etc. A good joke at least. :)

                1. ET says:

                  Way to go!
                  Now the picture which I prepared over lunch is useless! :P

                  1. ET says:

                    Gosh dang it!
                    I screwed up the link! :S

                2. Tizzy says:

                  Since you’re busy spoiling Santa Claus stories for everyone, here is another one which was too good to be true, from the Lord of the Rings movie AI. (Which, incidentally, takes us back to the beginnings of Shamus’s internet fame…)

                  Lord of the Rings terror: It was just a software bug

                  Snif! :-(

      2. Kronopath says:

        I’d love to see a source for this where I could read more. I know some people who have tried (mostly unsuccessfully) to do exactly that for research purposes.

    2. ehlijen says:

      Do you even need this to make unbeatable AIs?

Simply the fact that bots can have effortless perfect inputs (where humans have to make precision clicks to hit the enemy FPS targets), are never distracted by anything on the screen (meaning camouflage against a bot is impossible unless it’s specifically programmed in) and have a reaction time advantage (info doesn’t need to go through the eyeball->brain->finger obstacle course) already makes them brutally hard to beat.

      Making a fun AI to play against already involves artificially limiting bots in those regards.

      Now that doesn’t mean it’s good AI (far from it), but it does make it hard to beat.

      1. ET says:

        AI for checkers has been unbeatable since…the 80s?
        Yeah, playing against “proper” AI, which is just programmed to learn/win at any cost, isn’t fun.
        I mean, it’s basically like playing against a more feeble Skynet.
        At best, you win today, and at worst, you’re already losing.
What’s much more productive is making bots which are fun to play with/against! :)

        1. Paul Spooner says:

          Quite so. We should be focusing more on AI as allies than purely as opponents. Of course, this would quickly expose many of the “games” we have these days as trivial time-wasters which are much better handled by computers than humans. Do we really want to know the truth, or instead revel in our ignorance? I guess we’ll find out.

      2. Tizzy says:

        That’s where the OP might be onto something: creating dangerous AI is easy. Creating interesting, lifelike, AI that has a non-murderous purpose, OTOH, might benefit greatly from these.

        Ultimately, enemy AI is trickiest because it has to be dumb but pretend it isn’t. And just the right flavor of challenging too…

      3. I didn’t mean to imply we needed unbeatable AI. I was thinking more about response to player action in general. In fact, it occurs to me that the AI could be concerned with providing a challenge rather than “beating” the player, and the evolutionary behavior is set to “matching” the player’s actions and strategies.

        Maybe our future games can learn how to use or circumvent player exploits. An RTS might be broken if a player uses a kind of zerg rush strategy… the first go-round. In future games, it might not be effective in the least.

        So my point is having AI that learns how to keep the game interesting, not one that learns how to eventually achieve a kind of god mode on its own.

  10. guy says:

    So there are people working on an RTS where you can control 5000 independent units in battle at once. I, uh, cannot imagine that being a good idea.

    1. Halceon says:

      To control that you would need a team of highly pressured genius children raised in a fascist environment and trained as soldiers.

    2. 4th Dimension says:

What you absolutely need are smart subcommanders. Smart AI subcommanders that can be issued simple orders or complex problems, which they will then look into and, taking into account what we know about the enemy and our own forces, try to accomplish. And on top of it all they need some way to intuitively relay reports back to us, so it’s obvious to the commander what his AI subordinate is doing (like he isn’t sitting on his butt confused, but is preparing an ambush). And you might need different levels of AI: let’s say ship captain AI (decides how to defend the ship, use its abilities and such) and commodore-and-higher AI.

      And on top of it all the player needs to be able to best the enemy AI.

    3. Presumably they’re only sort of independent. You’d tell them to be in formations and stuff. But they wouldn’t be just clones of each other’s actions moving precisely in unison, they’d dodge independently, pick local targets to shoot at, interact separately with explosions, debris, obstacles and so forth.

    4. Think of it this way. You “command” 5,000 soldiers to attack somewhere. In this hypothetical game, combat is run like the “Massive” engine that was used to run the battles in the Lord of the Rings movies where terrain, speed, weapons, etc. were factors in the outcome of each combatant.

      With less computing resources, your battle is like a RISK boardgame. The dice and plastic bits represent thousands of soldiers and every action they take.

      It’s the difference between a two-sided army where you can get an outcome based on over a million different actions happening simultaneously vs. a coin flip.

  11. Taellosse says:

    Could they even print it out at good enough fidelity? I know it was possible to make pretty good printed art, of course, but was there any sort of printer that could be hooked up to any sort of computer that could do this? The good quality printing tech of the early 70s was all analog, wasn’t it? No computers involved, and thus, no way to export the image from memory onto any sort of physical medium.

  12. Nick Pitino says:

In terms of computing capabilities, these days you also have ASICs.

    So for someone who’s mining Bitcoins or folding proteins or what-have-you the difference between now and then is only that much more comically ridiculous.

    Moore’s Law is part of why I can’t really get myself too worked up over the graphics treadmill, with a rough doubling every two years or so then sooner or later it WILL be possible to make photo realistic games with accurate physics and the like.

    A few years after that even lower-end poorly optimized tools and systems will be able to do it.

    Then the graphics treadmill will be over, developers can make games with whatever level of graphical fidelity or style they want, and we can all move on.

    Roundabout the same time it’ll probably also be possible to start simulating the brain in hardware and the raw computing power available will make sorting out things like the epigenome possible, thus unlocking the wackier possibilities of bioengineering.

    At which point the entire planet will go totally bugnuts insane.

    I personally can’t wait.

    1. EmmEnnEff says:

      Alas, I’m afraid that the gravy train is over. [1]

For decades, we’ve been eking out performance improvements by making transistors smaller and smaller. And then we stopped. Because, as it turns out, when you make them as small as we do today, quantum mechanics ruins your party in a very nasty way.

      So we’ve figured – instead of making our circuitry smaller, why not add more of it?

And so, for the last several years, we’ve been adding cores. So, we’re up to 64-core GPUs, which suck 300 watts of power, and good golly, if we cram any more cores onto that thing, we’d have to keep a tub of liquid hydrogen on hand to keep it cool.

“But what about ARM chips? You know, the ones in your phone, that suck less than a watt of power?”

There’s a reason those are energy efficient – they are not very performant. On the spec sheet, my phone’s more powerful than my desktop of 10 years ago. In practice, it would keel over and catch fire if it tried to run Half-Life 2.

      But surely, engineering grit can get us performance gains!

There’s a problem with that too. R&D costs have almost been keeping up with Moore’s Law – as have the costs of building new chip fabs. And the economics of dumping 10 billion dollars into a chip fab (and another three in R&D) when PC sales have stagnated just don’t make any sense. Intel recently abandoned a half-built chip fab – the economics of finishing it just weren’t there. [2]

      If I were to bet on cheap photo-realism, I’d bet against it.

      [1] http://mprc.pku.edu.cn/courses/organization/autumn2013/paper/Moore's%20Law/The%20End%20of%20Moore's%20Law.pdf

      [2] http://articles.timesofindia.indiatimes.com/2014-01-15/hardware/46223354_1_intel-corp-chuck-mulloy-chip-plant

      1. Steve C says:

Are you saying Moore’s Law will fail due to the limits of transistors? If so, I strongly disagree. Moore’s law hasn’t been true for mere years. It hasn’t been true for mere decades. Moore’s law has been true for more than a century. It has been true for not only transistors but for every form of information technology since the 19th century. It will continue to be true when transistors go the way of the vacuum tube and when that technology is replaced by yet another new tech.

Moore’s law will technically run out in the 2020s just because it specifically mentions transistors. But the concept of increasing returns over time will continue. The expanded Moore’s law (Kurzweil’s Law) does not pertain to feature size or clock speed but to calculations per second per constant dollar.

        1. EmmEnnEff says:

Here’s the problem – ‘some new technology to replace transistors’ (supposing it is physically possible) isn’t around the corner – yet the cost of building newer, faster transistors is prohibitive – today. It’s a low-margin business, where the cost of R&D has gone up as quickly as chip speeds have. So far, growth in volume has kept the bubble afloat – and with slumping PC sales, and just about every other bloke on the planet already owning a computing device, nobody is putting down capital to build more high-end fabs.

          I’m quite familiar with Kurzweil – and to speak of the man charitably, for the past 2 decades, his sole claim to fame is that he is a pop writer. And he certainly didn’t foresee a ‘Moore’s law of rising chip fab costs’ or a ‘Moore’s law of rising clean room costs’, or a ‘Moore’s law of a dropping number of competing chip suppliers.’ – all of which are what’s killing the long-term outlook of the industry.

      2. guy says:

        The gravy train still has several years left in it as far as transistor size goes. Our current problem is power use and heat dissipation, which is why we’re going multicore. The electronics are still getting smaller, so we can fit four cores into the same space we used to fit one into.

        Also, comparing ARM chips to other chips is much more complicated. ARM chips have greatly simplified internal architecture, which makes certain types of tasks much more time-consuming and their assembly code annoyingly hard to write. So for certain tasks they’re slower than an analogous standard chip. But in practice the reason you couldn’t run Half-Life 2 on your phone is that it doesn’t have a graphics card.

        Graphics cards use numerous extremely specialized processors and large caches to do simple calculations in parallel very very fast. This is important because graphics involve several calculations for every one of thousands of polygons even before you apply effects. If you need to do your graphics work on a CPU, you’re going to have a very bad day.

      3. Richard says:

        You’re more than an order of magnitude out for GPU core count.
        My GPU is a year old and wasn’t the fastest when I got it.

        It’s got 960 cores, clocked at 1GHz.

Current ‘top-line’ graphics cards have about 2800 cores – e.g. the nVidia GTX 780 Ti with 2880 cores @ 900MHz, AMD’s R9 at up to 2816.

        And you can pair (or more) them up…

        It still makes me shiver.

        1. Volfram says:

          I thought 64 cores sounded more like a CPU than a GPU. Thanks for confirming that.

          What’s even more interesting is that given your card wasn’t top-of-the-line when you bought it, it probably has 2-4x as many cores, they’re just disabled.

  13. Ben Deutsch says:

Incidentally, as to your last paragraph (second sentence), I recall someone* having calculated that if your calculating process will take longer than 27 months (or so?) to finish, you should wait until Moore’s Law makes it take less than 27 months, and start then. You’ll be finished faster.

    *(Sorry, can’t find the original :-( )

  14. Mephane says:

    This discussion reminds me of this article about calculating googolplex. In essence, it says that running a program capable of printing the number googolplex (10 to the power of 10 to the power of 100) on the screen would be futile, because as long as computing power keeps increasing as it has, for the next few hundred years every computer running the program would be overtaken by a faster computer that starts later.

    What’s funny is that the very same conundrum exists in (hypothetical) interstellar space flight. Any interstellar spacecraft, even if just a probe, might very well be overtaken by a faster and better craft launched at a later date. The earliest colony ships are thus likely to find their destinations already colonized by humans who started later with faster ships.

    1. Mephane says:

      *wishes for a return of the edit function to properly close the anchor tag*

      1. Shamus says:

        Fixed. Thanks for the link. That was amusing.

    2. Nathon says:

      Arthur C. Clarke predicted that Voyager will end up in a museum some day.

    3. Mmm . . . although I haven’t noticed in practice nearly as great a development in the capabilities of spacecraft as of computers. To the extent that their capabilities have improved, it’s been particularly in the same direction as computers: We’re getting better at making littler ones. Lift-and-throw is about what it ever was. Some lighter weight materials and other weight savings, but the core propulsion, not much different.

    4. Ben Deutsch says:

      Privateer 2 had this, I think, at least in the “lore” department. One of the planetary systems there was colonized by a slower-than-light fleet – after it had already been colonized by a faster-than-light colony fleet that had been sent out a lot later.

      According to the planetary description pane, the late-to-the-party group was “not amused” ;-)

    5. Dave B. says:

      “The earliest colony ships are thus likely to find their destinations already colonized by humans who started later with faster ships.”

This was the basic plot of a short story I read several years ago, and I just now managed to dig it out of my book collection. It was Far Centaurus by A. E. van Vogt, and was featured in a book titled Starships: Stories Beyond the Boundaries of the Universe.

      So I guess we should delay the colony ships until they can make the trip in less time than it takes to design and build a better ship :)

  15. Phantos says:

    Even just skimming that made me mad. Mad at how much potential the industry is wasting.

    All of this power at our fingertips, that our predecessors could never even imagine. And what do we do with it? What do we have to show for it?

    Cover-based shooters and EA’s SimCity.

  16. Infinitron says:

    It’s true. Even simple Photoshop-level image processing wasn’t really computationally feasible until the mid-1980s, let alone realtime rendering.

    1. Volfram says:

      What’s more, most modern graphics cards actually hold 3 or 4 screen-resolution 4-byte-depth images at a time.

      The framebuffer, which is what is read from to display to the screen

      The backbuffer, which is the frame currently being drawn and will be switched to the framebuffer at the next frame update

      and the Z-buffer, which stores a depth map of the entire frame in 32 or 64-bit floating point.

      Z-buffers are another one of those techniques that make the gap even larger. Back in the PS1 era, all polygons onscreen had to be depth-sorted, manually culled, and then drawn from back to front. Obviously, this is slow. By the time the PS2 era rolled around, and after the technique had been developed, most graphics cards and consoles had enough onboard memory to set aside and store a Z-buffer, which means now sorting only HAS to be done for transparent polygons, and sometimes not even then. Also, culling is done almost for free.(“Is the depth of this new pixel less than or equal to the old one? Nope? We’re done here.”)

      Likewise, that backbuffer was one of those things Sega did that “Nintendon’t” and was what allowed the Sega Genesis, with roughly half the computing power, to FEEL faster than the SNES.

      1. Mark says:

        Although, once pixels become expensive enough to draw thanks to omnidirectional subsurface bling-mapping and all the rest of our brilliant modern graphics innovations, you might want to go back to sorting them again — just front to back, instead of back to front! That way the Z-buffer prevents the maximum amount of overdraw.

        1. Volfram says:

          As a matter of fact, that is one of the techniques used to improve frame rate through “occlusion culling,” and I actually have a switch in certain constructs for my game engine that will attempt to build polygons back-to-front or front-to-back.

          You can be a little careless about it, though, which also makes the sorting cheaper than it used to be. All the advantages of both, with the disadvantages of neither!

  17. Lazlo says:

I think you’ve made a mistake about Moore’s law: my understanding is that it isn’t about the speed (or transistor count/density) of “a computer”, but about the relationship between speed/count/density and price. So a nice high-end $2K computer from today isn’t two million times faster than a million-dollar mainframe of 1970, it’s two million times faster than a $2K computer from 1970 (which would be a reasonably fancy electronic calculator. According to this, a Friden EC-130 cost $2100 in 1964, and was a pretty basic calculator). I *think* that makes it a little easier to calculate: given that $2k modern desktop, a computer with that capacity 42 years ago would have cost about $4B. So it’s more along the lines of the question: did companies and governments spend more than $4B on computer hardware back in those days?

    1. ET says:

      The thing is, Shamus was doing some quick, dirty calculations.
Even if he’s off by a factor of 100 in the price of the computers he’s comparing, he’s only off by about a decade in computing power.
Really, the point isn’t to get an exact number for how much faster computers are today than in the past; it’s to recognize that computers today are already faster than what we need, if we just stop chasing each other on the pixel/graphics/horsepower treadmill.

  18. Steve C says:

    Is that 16 years for a single computer or 16 years for all the computers that existed combined?

    1. Shamus says:

      Single computer.

      I’ve been thinking about this since I posted it, and I keep coming up with reasons it should be faster or slower than the 16 years I gave. Either way, it’s a long time.

  19. Nathon says:

    Moore’s law actually doubles transistors every 18 months. That doesn’t affect your final number very much, since you’re always waiting on I/O, but the 2 million number is bigger.

    I don’t think you can apply Moore’s law to computers from before integrated circuits. Moore was from Intel, and he definitely wasn’t talking about tubes.

    Still, you’re probably not wrong in your original claim about how absurdly clunky the computers that sent people to the moon were. Fun fact: modern semiconductors have problems in space that the original Apollo computers did not. Their transistors are so small that cosmic rays (really!) can flip bits. Space people call these events single event upsets (SEUs) and have to build special expensive computers to either make them not happen or tolerate them.

    1. Steve C says:

      You can take Moore’s law before transistors. It’s been true for each generation of information technology.

      1. Nathon says:

        I don’t particularly like that graph (exponential best-fit curve on a log chart?) but your point is taken. Way to do research, Kurzweil.

  20. sab says:

Wait, so hard disk speed between 1979 and 1991 only increased by roughly 50%? That to me is the most shocking revelation of this entire article.

    1. Volfram says:

      Computational power doubles roughly every 2.5 years.

      Disk storage space doubles at roughly half that rate. Network speeds double at roughly twice that rate.

      Or they would if Comcast didn’t practically have a government-sponsored monopoly in the US. Internet speeds in Europe and Japan are closer to the curve. I was hoping Google Fiber would disrupt the existing industry, but the closest it’s gotten to Colorado is that Century Link is finally starting to offer fiber internet.

      I/O bus speeds increase more slowly than disk sizes, apparently. Which actually matches my experience with USB flash drives.

      1. ET says:

        Bus speeds do, in fact, increase more slowly than memory size, which leads to the Von Neumann bottleneck.

    2. guy says:

      A major contributor to hard disk access times is that the data can be a whole ten centimeters from where you want it to get to, and that hasn’t changed much.

  21. Tychoxi says:

    Really liked this post!

  22. rayen says:

    16 years and then some dick taps the w button and the whole thing has to render again.

  23. Rack says:

    I remember doing some back of napkin calculations on this theme last year and worked out if I took my laptop back in time to the mid-60s it would be roughly as powerful as every computer in the world put together including itself.

    Kind of mind blowing.

  24. Steve says:

    Just for fun, look up how long it took to render the movie Toy Story, versus how long it took to render Toy Story 2. Or look at the history of movies that have been animated by computer. It’s interesting to see those statistics.
