Experienced Points: Game Responsiveness is More Than Just Good Frame Rate

By Shamus Posted Tuesday Nov 11, 2014

Filed under: Column

This week we’re talking about the growing complexity of our gaming machines and how that impacts the controls.

Sometime in 1983 or so I tore apart my first Atari joystick and saw how it worked. The joystick had broken – probably from too much frustrated twisting on the part of the user – and no longer moved left. Inside, the device was so simple that even my 12-year-old self could immediately intuit how it all worked. It was a simple square circuit board with five metallic “bubbles” on the surface. The bubbles represented the fire button and the four cardinal directions. When a bubble was depressed (by pressure from the joystick or the button directly above it) a circuit was completed. That was it. You could toss all the joystick bits away and play directly on the circuit board, if you wanted.

This also let me mess around with unintended scenarios and see how the game logic was set up. In normal usage circumstances you can’t move the joystick both left and right at the same time. But if you’re manipulating the contacts directly you can press both bubbles at once and see how the game responds. Now, in those days they coded right to the metal without using any fancy programming languages, but conceptually there are two ways to set up this sort of input logic. In C++ it might look like this:

if (joystick_left) {
  x = x - 1;  // move left
}
if (joystick_right) {
  x = x + 1;  // move right
}

In this case, the tests for the left and right joystick buttons are done independently. If you hack open the controller and push both, then the game will try to move both left and right at once, and the movement will cancel itself out. However, another coder might realize that (under normal operating conditions) it’s impossible for the user to go both left and right at once. So, they could save a couple of trivial CPU instructions by skipping the check for right movement if left movement is active:

if (joystick_left) {
  x = x - 1;
} else if (joystick_right) {  // only checked when left is NOT pressed
  x = x + 1;
}

In this case, pressing both buttons would just result in moving left. I found examples of both types of input logic in my game collection.

All of this came together in my head as I learned BASIC for the first time and came to grips with how computers worked. It was exhilarating to realize that computers were something I could comprehend. They weren’t magic. They weren’t rocket science. They were simple devices, and you could figure them out just by experimenting with them.

Sadly, this is no longer the case. I doubt 12-year-old Shamus would be able to make any such mental leaps if you gave him an Xbox with a wireless controller.

On the other hand, the games would blow his little mind.

 


51 thoughts on “Experienced Points: Game Responsiveness is More Than Just Good Frame Rate”

  1. DaMage says:

    It’s an interesting idea that all these levels of abstraction have come full circle and are now affecting the quality of the games again. You wouldn’t want the game engines of the ’90s (I’m thinking DOS era), where you just had a block of memory to work with and some basic devices.

    But now your game engine has hundreds of components, some built by different companies, all running together using libraries from Microsoft (the OS, DirectX), OpenGL, and Logitech (controllers), talking to monitors that were also built by yet another company that has adopted the standard interface on top of their own work.

    It’s a nightmare of compatibility and APIs, and I would not be surprised if it’s having a negative impact on responsiveness.

    One aspect in the article I think could have been explained better is how controls are stored. From what I’ve seen, the OS queues up what has happened, and then your game engine will, at the start of a cycle, run through all the commands since it last checked. That means a button press will be stored until the current draw cycle of the engine is complete, adding even more time.

    Unless the controls can interrupt… but considering the abstraction, I very much doubt this.

    1. ET says:

      Actually, the correct way to do the controls in a game engine would be to use an (OS-provided) interrupt signal of some kind (stuck in a separate thread specific to control-handling). This would let you:
      1. Have the controls layer of your system be as responsive as possible to the buttons being pressed.
      2. Have this layer consume no processor power when nothing’s happening.

      #2 probably isn’t much of an issue anymore, since the amount of CPU power on a modern machine is vastly more than you need to loop over even a thousand input buttons, sticks, etc. However, #1 is still important, especially since the programmer could easily get a non-interrupt version wrong. For example, looping over the input buttons at 6 Hz instead of 60 Hz (a simple typo). Or they could lock it to 60 Hz when the frame rate is set to 30 Hz or 120 Hz. This either wastes input resolution the player won’t see, or makes the display more responsive than the controls – wasting the price of the fancy monitor.
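
      A minimal sketch of that pattern in C++ (my own illustration, not from the column; `wait_for_os_event` is a made-up stand-in for whatever blocking call the OS actually provides):

      #include <chrono>
      #include <mutex>
      #include <queue>
      #include <thread>

      struct InputEvent {
          int button;                                   // which control changed
          bool pressed;                                 // new state
          std::chrono::steady_clock::time_point stamp;  // when the OS delivered it
      };

      std::mutex g_lock;
      std::queue<InputEvent> g_events;  // filled by the input thread, drained by the game loop

      // Stand-in for a blocking OS call (hypothetical). A real version would sleep
      // inside the OS until the hardware speaks -- point #2, zero CPU while idle.
      // Here it just fakes a periodic button press so the sketch compiles and runs.
      InputEvent wait_for_os_event() {
          std::this_thread::sleep_for(std::chrono::milliseconds(16));
          return {0, true, std::chrono::steady_clock::now()};
      }

      void input_thread() {
          for (;;) {
              InputEvent e = wait_for_os_event();        // blocks; no busy-wait
              std::lock_guard<std::mutex> guard(g_lock);
              g_events.push(e);                          // timestamped immediately -- point #1
          }
      }

      // Called once per frame by the game loop, at whatever rate the game runs.
      void drain_input() {
          std::lock_guard<std::mutex> guard(g_lock);
          while (!g_events.empty()) {
              // ...apply g_events.front() to the game state...
              g_events.pop();
          }
      }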

      1. DaMage says:

        Having an interrupt for the controls really wouldn’t help much, as anything on the screen would have to wait until the next pass anyway. However, as you said, separating controls into their own thread is an easy way to start using threads in a game engine, as it detaches really well from the rest of the engine.

    2. DrMcCoy says:

      Shamus also handwaved the whole non-trivial USB communication away by saying that the button press is just handed to the device driver. :P

      1. Shamus says:

        Yes. I had to hand-wave that one. I know it’s a biggie, but I didn’t feel knowledgeable enough to describe it. (And I wasn’t sure how it would apply to consoles.)

        1. mwchase says:

          I happen to know that the way Xbox 360 controllers do stuff over USB is that the OS and controller send interrupt messages back and forth at each other. One type of message the OS sends affects the LED state, and the other affects the rumble state. There’s also an initial handshake that they do, and the really juicy message is: whenever the controller’s controls are not “at rest” (if you’re pressing any button, squeezing a trigger, or moving a thumbstick), it sends a message containing a snapshot of the current controller state, packed into 20 or so bytes. All button presses are bit flags, triggers are an unsigned byte each, and thumbstick axes are sixteen-bit integers stored in two’s complement.

          So, hidden inside a stream of bytes, the d-pad is still four sensors that flip a bit somewhere when you trigger them.
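
          For illustration, unpacking such a report might look like the sketch below. The byte offsets and bit positions are my guesses for illustration, not the documented layout, and it assumes a little-endian host:

          #include <cstdint>
          #include <cstring>

          struct PadState {
              uint16_t buttons;        // one bit per button, d-pad included
              uint8_t lt, rt;          // triggers: an unsigned byte each
              int16_t lx, ly, rx, ry;  // stick axes: 16-bit two's complement
          };

          PadState unpack(const uint8_t* report) {     // the ~20-byte USB message
              PadState s;
              std::memcpy(&s.buttons, report + 2, 2);  // assumed offset of the button bits
              s.lt = report[4];
              s.rt = report[5];
              std::memcpy(&s.lx, report + 6, 2);
              std::memcpy(&s.ly, report + 8, 2);
              std::memcpy(&s.rx, report + 10, 2);
              std::memcpy(&s.ry, report + 12, 2);
              return s;
          }

          // The d-pad really is still four sensors flipping bits (positions assumed):
          bool dpad_left(const PadState& s)  { return s.buttons & 0x0004; }
          bool dpad_right(const PadState& s) { return s.buttons & 0x0008; }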

          1. Decius says:

            >So, hidden inside a stream of bytes, the d-pad is still four sensors that flip a bit somewhere when you trigger them.

            Implying that it is still possible to hack the control to press left and right at the same time.

            1. Daemian Lucifer says:

              Wouldn’t work, though. Try and press left and right on your keyboard in any game, and you’ll see that one direction will always be the primary. I mean, there could be games that don’t do this for some reason, but I have no idea which ones.

              1. guy says:

                Nothing actually stops you from doing that; people just usually write directional control stuff with if-else statements, so whichever condition they happened to put on top gets executed if multiple conditions are met. Otherwise you’d try to go left and right simultaneously. If the responses to left and right make sense when done simultaneously, you could write them as separate ifs and do both in the same pass.

                For instance, if you bound the arrow keys to different weapons in a space sim, left and right could be used together.
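
                A sketch of that last idea, with made-up names:

                enum Turret { LEFT_TURRET, RIGHT_TURRET };
                void fire_weapon(Turret t) { /* hypothetical game function */ }
                bool key_left, key_right;  // current key states

                void handle_weapons() {
                    if (key_left)  { fire_weapon(LEFT_TURRET); }   // note: no "else" --
                    if (key_right) { fire_weapon(RIGHT_TURRET); }  // both can fire in one pass
                }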

  2. AR+ says:

    Then you have people actually opposing 60 FPS on aesthetic grounds. Apparently it’s a holdover from when films continued to be shot at 24 FPS despite better tech being available, because personal camcorders hit higher frame rates first and so high FPS became associated with cheap production. It’s enough of a shame that this has kept films at their terrible 24 FPS all this time, but now it’s spreading to games.

    1. Theminimanx says:

      If by “people” you mean “corporations looking for an excuse”, then yes. But I can’t remember seeing any consumers saying 30 fps is better, only that they don’t notice the difference.

      1. NotDog says:

        Well, our very own Chris wondered if 30 FPS could be a legit aesthetic choice a couple of Diecasts ago…

        1. poiumty says:

          Forgive him, Lord, for he knows not what he says.

      2. jawlz says:

        When some of the newer HD televisions arrived with 120 and 240 Hz refresh rates and systems that would interpolate half-frames to give very high frame rates, it was pretty common to hear consumers claim that movies looked like “soap operas” (which were shot on video) due to the increased frame rate.

        1. Eruanno says:

          Look, I like 60 fps in video games as much as the next person (and think that the idea that video game frame rates and film frame rates are in any way related is silly), but the frame-rate interpolation technology that many HDTVs incorporate is fucking awful. I turned it off instantly when I bought my TV, and I refuse to turn it on, because it takes whatever is on the screen and takes a huge dump on it while adding some nice input lag just because.

          Fuck frame interpolation tech in HDTVs. It is the worst. Literally Hitler.

          1. Chris says:

            Agreed. HDTV frame interpolation is an unforgivable crime against my corneas.

      3. Cybron says:

        Oh they exist. Most of it is fanboy apologists defending their favorite developer, but they exist.

        1. Chris says:

          I’m a bit split. I remember specifically updating my gfx card halfway through DA: Origins, and my FPS jumped from the upper 20s to a solid 60+. Control felt better, but the animations took on an overly smooth cheap-CG look that wasn’t there at 28fps. For twitchy games, I get the 60fps requirement, but for everything else I’m with other Chris: it can be an aesthetic choice.

          Not having it dip down to 10fps during heavy spellcasting was obviously an unqualified improvement.

          1. Felblood says:

            I think that, in time, developers and consumers will learn to deal with frame rates the same way they are finally starting to learn to deal with resolutions.

            More resolution is only better if your entire system is engineered to support it. A higher resolution just helps you rub your face in the awfulness if the textures, models, engine or animations aren’t high enough quality to endure that level of scrutiny.

    2. jawlz says:

      Well, the differences between film and early video were more than just framerate, though. Film’s picture resolution was FAR higher than that of video, and (at least in the US with the NTSC standard), video was interlaced as well which essentially halved the resolution of the picture and only allowed half of the picture of each frame to update at each screen refresh.

      I seem to recall that PAL was progressive and not interlaced, but again the low resolution bug pops up.

      Theoretically, film (especially 70mm film, though that’s largely been abandoned) *still* has a higher resolution than digital/HD, but it’s close enough that most people don’t care.

      Technologically speaking, there’s nothing that requires film to stay at 24 FPS – if they want to shoot it faster and play it back faster, they can. But a number of people *do* like the aesthetics of 24 FPS on the big screen, for whatever reason.

      1. Eruanno says:

        If we’re referring to PAL as the “ye olde TV standard of Europe/Australia/etc” it was interlaced too.

        Also, if we were to introduce 48 (or 60, or whatever) FPS as the new standard to shoot movies, there would be a big problem. There is no way to deliver it to consumers at home (except for streaming services like Netflix, Hulu etc). There is no standard in the DVD or Blu-ray format for anything over 1920×1080@24 FPS at this time. You’d need to, somehow, patch that into all the old DVD/Blu-ray players of the world. DVD players are pretty much fucked: most of them have no internet connection, and most of them are way out of any support from their companies. Blu-ray players have, for the most part, ethernet or wifi – but you’d still need to force-upgrade everyone somehow to make your new Blu-ray tech compatible. Fragmenting your userbase is fuuuun.

        (This also means there is no way outside of cinemas to watch The Hobbit in 48 FPS right now.)

      2. Peter H. Coffin says:

        Plus there were non-technical “soap opera” visuals too: shooting was constrained to very small sets with limited shooting angles, and multiple storylines meant having to keep LOTS of different sets up and usable for weeks at a time. That meant short lenses, which foreshortened spaces even more. That meant the opportunity to play with bokeh/depth of field was very limited. It meant that all the lighting had to be overhead and bright enough that reflected light from props and panels out of shot could soften the inevitable oddball shadows.

      3. Humanoid says:

        Apparently everything down to 16mm film can take advantage of 1080p resolution.

    3. Kingmob says:

      The problem here is that video games don’t look the same at 24Hz as films do. Motion blur doesn’t exist in a game unless you write a filter for it, and that filter works equally well at 24Hz as at 60Hz.

      And that sidesteps the issue that all motion blur really does is hide a lot of detail that you’d normally see. That’s why movies look “fake” at 60Hz: everything is too detailed. The illusion is shattered if you don’t invest enough in your sets and CGI; reality is replicated too well. Another problem games don’t have.

  3. Joe Informatico says:

    Your mention of “black box” systems reminded me of an argument two friends once had over the definition of “transparency”. One is a professor of computer science, and he argued that transparency meant something that was invisible or unnoticeable. The other friend has a degree in political science, so to him transparency meant something above board, visible to all. They were both right coming from their own contexts.

    1. Stu Friedberg says:

      Heh, I had a similar footnote in my late-1980s PhD dissertation. Information hiding (for both abstraction and security) was a significant aspect of the work, not all my committee members were from the same field, and defining “transparent” and “opaque” turned out to be crucial to avoiding defense-hindering objections at the 11th hour.

      I just tracked it down: “There is a regrettable clash in the use of the terms transparent and opaque. As ordinary words they are contradictory, but as technical terms they both indicate that certain details are not visible to the user. The operating systems community uses phrases like ‘virtual memory is transparent’, while the programming languages community writes things like ‘this type is opaque.’ The recent [1980’s] interest in object-oriented systems has brought both communities, and their jargon, into intimate contact. We will consistently use ‘opaque’ to indicate domain, and thus visibility, boundaries, and ‘transparent’ to indicate that an object’s abstraction, and thus its interfaces, does not define all of its interactions with external resources. If this muddies the water further, we ask to be forgiven.”

      So, for my purposes, opaque meant black-box and transparent meant white-box, more or less.

      1. Just curious, was the impersonal “we” used here the standard for PhD theses of the day? Today I mostly try to encourage students to actually use “I” in the thesis, to avoid the impression that they did not do their own work, which a “we” might imply. That, or use the horrendous passive voice. Pardon: that, or the horrendous passive voice is used :)

        1. Daemian Lucifer says:

          Really? Because in every scientific paper I’ve read, the passive voice and “we” were used, and “I” was never mentioned, whether it had a single author or a multitude of them. Granted, I focused only on European papers, so maybe standards differ elsewhere.

          1. Wolf says:

            Always using “we” has the added advantage that you can copy-paste into collaborative work without many “I”s sneaking over.
            But I am from the natural sciences; we spend so much energy on citations that we don’t have to imply who did what using language. Which is good, since many mathematicians suck at language.

            1. Daemian Lucifer says:

              No we don’t. We just speak a more profound language that the scrubs aren’t proficient with. Let the commoners speak their common, and we will continue using our much more refined devices.

        2. Zukhramm says:

          I always understood the “we” to be the team or person who wrote it and the reader.

          1. Tizzy says:

            Many style gurus in mathematics will encourage the use of “we” in papers even by single authors for precisely that reason. I don’t know how it works in other fields, but it seems to me critical in math and theoretical CS, as your paper is basically walking the reader through your chain of reasoning.

        3. Tizzy says:

          In my field, students have advisors and it would be really rude to discount their input. So we it is in dissertations.

        4. Stu Friedberg says:

          Michael, the editorial “we” was standard practice for the time and field. My impression is that this is still standard practice in many places. I wrote my acknowledgments in the 1st person singular. The body of my dissertation did not use the 1st person in either number very heavily, as I was not describing the conduct of experiments. “We” was used primarily in transitional paragraphs, guiding the reader through the presentation. This is consistent with Zukhramm’s and Tizzy’s first comments, although “we” was definitely not intended to mean “My advisor and I”.

  4. Here’s something fun to fiddle with, Shamus: Gamepad HTML code http://forums.shamusyoung.com/viewtopic.php?f=14&t=789

    Well, HTML + JavaScript, but I assume that is obvious.
    Combine this with WebGL and you could do web games with a gamepad as input.

    1. Dave B. says:

      Oh yeah, I tested that when I saw it on the forum. It works great with my 3rd-party xbox360 controller. Now I just need to think of a use for it.

  5. Patrick the Angsty Tearful says:

    “probably from too much frustrated twisting on the part of the user”

    OR… too much rage pounding the hapless joystick against whatever object was nearest during a profanity-filled, asthma-medication-induced ’roid rage because the F!@*^%G BLACK DRAGON CHEATS!! IT CHEATS!!

  6. rofltehcat says:

    Note: This is a copy of the post I left in the Escapist comments. The question was sort of answered, although the person who answered it focused on V-Sync. I don’t really know how this translates to technology like G-Sync, but I guess it might be comparable to the triple-buffered V-Sync described in the answer comment. (The answer can be found at http://www.escapistmagazine.com/forums/read/6.864944-Game-Responsiveness-is-More-Than-Just-Good-Frame-Rate#21603882)

    Original Post:
    Shamus mentions that a huge part of the delay comes from the frame buffer/queue (there may be 3 frames in front of the current frame waiting to be sent), and that the monitor refresh rate plays a huge role.

    Does technology like G-Sync reduce that delay by a noticeable amount? Or are there better ways to reduce the delay?
    (It sadly isn’t likely to spread, because Nvidia and AMD are not cooperating on sync standards, but I’ve heard Linus talk about it on the WAN Show and he seemed pretty convinced.)

    1. Robyrt says:

      G-Sync can reduce your lag by cutting down on the time before the monitor draws the next frame, depending on how fast your computer is generating new frames. If you’re already running at exactly 60 FPS, G-Sync won’t have any effect, but it makes a big difference if you’re running at 40 FPS and the monitor has to wait for the computer to catch up to its internal 60Hz clock.
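
      To put rough numbers on that (a back-of-the-envelope sketch I’m adding, assuming a steady 40 FPS against a fixed 60Hz refresh):

      #include <cmath>
      #include <cstdio>

      int main() {
          const double refresh_ms = 1000.0 / 60.0;  // the monitor ticks every ~16.7 ms
          const double frame_ms   = 1000.0 / 40.0;  // the GPU finishes a frame every 25 ms
          // A frame completed at t = 25 ms must wait for the next refresh tick:
          double shown_at = std::ceil(frame_ms / refresh_ms) * refresh_ms;  // 33.3 ms
          std::printf("ready %.1f ms, shown %.1f ms, added lag %.1f ms\n",
                      frame_ms, shown_at, shown_at - frame_ms);
          // With G-Sync the monitor refreshes when the frame is ready, so that
          // ~8.3 ms (worst case a full 16.7 ms) of waiting disappears.
      }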

    2. guy says:

      The delay there actually comes from the synchronization stuff; otherwise the frames would go to the monitor immediately. However, you really don’t want to turn off synchronization. The more frames your synchronizer sends per second, the shorter the buffer delay.

    3. G-Sync is Nvidia’s solution. However, AMD and VESA worked together on an open solution called Adaptive Sync, which has been added to the DisplayPort 1.2a standard.

      Most of the newer-series cards from AMD support this (via a driver/software update), and the newest Graphics Core Next cards support it for gaming as well. And if a game does not support it natively, it can be forced in the drivers.

      The changes needed to a monitor are smaller/cheaper than with G-Sync, and it’s using an existing standard. It’s also royalty-free, whereas G-Sync probably comes with a license fee.

      Nvidia will most likely support Adaptive Sync; anything else would be odd, as it literally costs them nothing to do so.

      The way Adaptive Sync works (crudely explained) is that the graphics card asks the monitor “What are the minimum and maximum response times you can do?”
      If a monitor replies 4ms to 1000ms, that means it can handle framerates from 250 FPS down to 1 FPS.
      If a monitor replies 5ms to “none”, that means it can handle framerates from 200 FPS down to a still image.
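
      The arithmetic behind those numbers is just the reciprocal of the refresh interval:

      double max_fps = 1000.0 / 4.0;     // 4ms minimum interval    -> 250 FPS ceiling
      double min_fps = 1000.0 / 1000.0;  // 1000ms maximum interval -> 1 FPS floor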

      I wrote about this stuff on Gamasutra a long time ago, asking the game industry why the heck vsync is still the limitation when the monitors are capable of faster response times than that.

      The benefit of Adaptive Sync, for those that do not know, is that in theory a game can put out 60 FPS, then drop to 58 FPS, then go back up to 60 FPS, the same as if vsync was off but without any tearing. With vsync previously, you would go from 60 FPS to 30 FPS and then back to 60 FPS, which was jarring.

      Adaptive Sync should also in theory make it easier to avoid/reduce frame stutters.

      In time we may find games taking advantage of Adaptive Sync directly by dropping the framerate to 24 FPS with GPU-heavy frame-blending effects to simulate a film look for the cutscenes, then ramping back up to 60+ FPS and lighter GPU effects for the gameplay.

      I have no idea if the HDMI standard will get something similar; I hope it does, as this would mean even “normal” TVs will begin to support Adaptive Sync.

      Breaking free of the old V-sync limitations is long overdue.

      PS! Adaptive Sync will not cause tearing, as each frame is sent in full.
      PPS! 3D VR (like the Oculus Rift) will also benefit from Adaptive Sync.

      1. MichaelGC says:

        Fascinating, thanks!

      2. rofltehcat says:

        Thanks!
        So it is especially useful when the frame rate varies, right? So you perceive less of a difference when your frame rate fluctuates between e.g. 35 fps and 60 fps?

        This sounds incredibly useful, because most games don’t seem to run at a constant 60 fps unless the PC is incredibly overpowered.

        1. *nod*

          Basically the graphics card learns the limits of the display when it’s connected or when you start the PC.

          From then on the graphics card just goes “Here’s a frame, display it” … “here’s another frame, display it.”

          As long as the graphics card does not send frames to the monitor faster than the response time allows, you will never have tearing.

          (I’m oversimplifying things a bit; as far as I understand it, there is some extra circuitry/code in the monitor to handle cases where frames are sent too fast.)

          Monitors exist these days with response times down to 2ms or 1ms, which in theory could handle 500 FPS or 1000 FPS, but beyond 250 FPS you get diminishing returns, just like you get diminishing returns beyond 300 PPI resolution.

          If you can score an Adaptive Sync IPS-panel monitor (or S-PLV or whatever acronym hell the monitor makers are using; there is one that has better black levels than IPS, for example) or better, with a 4ms response time, then you are set. And if it’s 300 PPI as well, then you’ll never need to buy a monitor ever again.

  7. Robyrt says:

    Here’s a way more in-depth explanation of TV input lag, from the competitive fighting game scene where they still use CRTs. A similar situation exists in music games – Rock Band / Guitar Hero have input lag adjusters that will backdate your inputs to match the display, since the whole game is based on high-speed input matching. Musicians can detect small amounts (10-20ms) of lag quite accurately, because they have a lot of experience with zero-lag real instruments.
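
    The backdating trick is conceptually simple. A sketch with invented names (the real games expose the offset as a calibration setting the player tunes):

    const double calibrated_lag_ms = 45.0;  // measured in the game's lag-calibration menu

    double effective_input_time(double raw_input_ms) {
        // Judge each strum as if it happened earlier, cancelling out the display
        // lag, then compare this adjusted time against the note chart.
        return raw_input_ms - calibrated_lag_ms;
    }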

  8. Wide And Nerdy says:

    Personally, I just like how much smoother things look at high frame rates. I didn’t really appreciate it till I started playing Fallout: New Vegas, which I thought looked good enough without modding (save the faces), but it does sort of help break down the barrier between the user and the world. It led me to use much more conservative graphics mods with that game, so that I could keep the frame rate up while fixing the most obvious flaws.

  9. poiumty says:

    So the ultimate takeaway here is that playing games on a TV like a dirty console gaming peasant is inherently inferior. Got it.

    1. Daemian Lucifer says:

      Well of course, that is a given.

  10. Daemian Lucifer says:

    The whole FPS & p thing can be summed up by the Futurama “It’s bigger” clip.

  11. Since this is programming related.

    Visual Studio Community 2013, and for those too lazy to scroll down and click on the DVD5 ISO image, well… figure it out! :)

    For some reason downloading the VSC DVD requires no login to Windows Live, but trying to download the VSE (Visual Studio Express) DVD requires a login to Windows Live. Odd.

    BTW! My advice is to get the DVD image. Sure, the web installer is just a couple of megs and the DVD ISO is like 7GB, but with the DVD you get the option of burning it to a DVD, putting the content on a USB stick, or mounting the ISO as a virtual DVD (or just “unzipping” it using 7-Zip).
    I always prefer installations that are possible offline (no net access required).

  12. MrGuy says:

    This is also why motion controls are incredibly hard to use as an input source for realistic-feeling games.

    Let’s say you want to chop someone in front of you with a sword. There’s a delay between your brain thinking “chop that guy!” and any action you can take. That lag feels natural, because it’s how your body always works. If the action is simple to take and simple to recognize, like a button push, there’s little time wasted in the action – the lag between “muscles receive signal from brain” and “game controller receives and recognizes input” is very small. So almost all the extra “non-natural” delay is in all the steps Shamus mentions here.

    But with motion controls, there’s an additional and significant lag – the time between the player starting to take an action and the time the system recognizes that an action is being taken, and of what type. If I’m swinging a Wiimote for my sword swing, there’s the time for the accelerometers to notice something’s changed at all, and then some non-zero period to collect enough data to determine “Oh! It’s a sword swipe (and not a parry, or a duck, or a pistol shot…)”. That sampling period is likely a significant fraction of a second, which happens AFTER my brain has said “swipe sword now!” and sent that message to my muscles, but BEFORE that input is available to the game.

    Motion controls introduce yet another significantly laggy step between my brain wanting something to happen and the game being able to make it so. We’re already on the edge of what people will accept as “acceptably responsive” for games. And motion controls add another significant source of delay.
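
    A sketch of why that recognition step costs time (names and numbers invented): the recognizer simply cannot answer until it has buffered enough samples.

    #include <cstddef>
    #include <vector>

    struct Sample { float ax, ay, az; };     // one accelerometer reading
    enum Gesture { NONE_YET, SWIPE, PARRY };

    Gesture classify(const std::vector<Sample>& window) {
        const std::size_t kNeeded = 20;      // e.g. 20 samples at 100 Hz = 200 ms of pure lag
        if (window.size() < kNeeded)
            return NONE_YET;                 // still collecting; the game must wait
        // ...run the actual pattern-matching over the window...
        return SWIPE;
    }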
