User Reviews

By Shamus Posted Wednesday Nov 27, 2013

Filed under: Video Games 102 comments

The question was put to me on Twitter:

Interesting question. I don’t go to Metacritic very often, but assuming by “user” section we’re talking about the place where users submit reviews and scores, then I guess it depends on what you value in a review. Some people think review scores should converge on some hypothetical One True Score that accurately reflects the value and quality of a game. Some people (me kinda people) aren’t as interested in scores, but instead value the opinions of specific critics. Other people think of game critics as corporate shills and sellouts and only “true” gamers can be trusted.

On one hand, users often write little text blurbs to go with their ratings, and they often come out like this:

Best game ever
much better then GTA IV
i played it last night and loved it
looking forward to the online
The driving physics are worlds better than they were in GTA IV. I keep finding myself driving around aimlessly because it’s so fun to drive

(Actual review, as it appeared. Score given was (of course) 10/10. I’ve stripped the name to protect the incoherent.)

I’m pretty sure the formatting is a result of ineptitude and not some kind of avant-garde attempt to present a review in free verse. In any case, this is a pretty good example of what users have to say. It’s easy to look at stuff like this and conclude that this person’s post is worthless because they can’t think or communicate clearly and their criteria for appraising a game are murky and somewhat suspect.

User reviews are also prone to shenanigans. People who hate a franchise will tend to score titles at ZERO to “balance out” all those brainwashed sheeple who like the game. And people who like the game will give it a perfect 10 to balance out all those haters. And then there are review-bomb protests. And then there are occasions of ballot-stuffing by marketing. User scores are – by necessity – scores by anonymous people, which means that they’re every bit as accurate as any other online poll.

So user reviews are simplistic, barely literate, and completely unreliable. I guess we’ll have to trust in professional reviews then?

The thing is, most professional reviews are created by people who aren’t playing games the way consumers do. They play more games, they have less control over what they play, they’re obligated to finish uninteresting games, and they’re often tasked with playing stuff outside of their area of interest.

Example: I can’t stand modern military shooters. Call of Warfare et al. seem to be engineered to repel and offend me on every level. The cartoon machismo. The lack of player agency. The mixture of “realistic” geopolitical conflicts with childish black-and-white morality. The horrible dialog. The hyperactive roar of continuous gunfights with little variation in mood or pacing. I wouldn’t spend my free time playing these things, and I certainly wouldn’t pay AAA dollars for them.

But if I was a professional reviewer, I’d probably have to. And if I wanted to keep my job I couldn’t just eviscerate the genre when the latest release of Sound & Fury was dropped on my desk. I’d be expected to play the damn thing and construct a review that would be useful to the public at large. I’d be obligated to review the game for other people. And I’d have to do so in a week. And if I did like it I’d still be obliged to move on next week and play something else, so any further play would need to be on my own time, at home. (Which would probably mean starting over, depending on platform and how feasible it is to port saves.)

If you put me in that situation, I’d probably fall into the habit of reviewing games the way a lot of journalists do: I’d value spectacle and accessibility over depth. I wouldn’t want a game to take too long and I wouldn’t want it to ask too much of me skill-wise unless it was in one of my favorite genres. How would I assign scores? Pffft. I don’t know what military shooter fans like in a game, so I’ll rate on polish and novelty instead of getting tangled up in discussing tricky stuff like themes or tone.

Some time ago I saw a chart (I can’t find it now but I’ll add it if someone will link it in the comments) showing how user Metacritic scores lined up reasonably with professional reviews until a few years ago, at which point they began to diverge sharply. Big-name publications continued to give top marks, while users rated them lower and lower. You can read this either way:

  1. Reviewers continue to rate based on gloss and spectacle while existing fans insist on narrative quality. OR…

  2. Professionals continue to be even-handed while users become increasingly entitled and demanding.

Does it matter? BioWare’s recent transformation from building games focused on lore and theme to games focused on gameplay and action was not accidental and it had the intended effect of bringing in lots of new fans. Who is the professional reviewer supposed to review for? Should they guess how hard-core, old-school fans will feel about a game, or should they try to guess how the general gaming public will respond? You likely know which camp you’re in already.

Getting back to the question of user reviews: Pros reliably rate games on a very narrow set of criteria, while users tend to review from the gut. The latter channel is a noisy one, but I’m of the opinion that more data is always better.

One user review is probably useless, but given enough data points we can construct a very rough but useful picture of how well a game is connecting with the hearts of gamers.

EDIT: Originally I said the example reviewer had a “worthless” opinion, when I was trying to say their review is worthless. (Their opinion is just as valid as anyone’s.)

 


102 thoughts on “User Reviews”

  1. Paul Spooner says:

    I agree that assigning numeric values is difficult, but I’d still like to see some “units” on these numbers. Sadly “out of ten” is not a unit, it’s a normalization, which is even worse because now the values are unitless and normalized!
    If a number is going to mean something, we need to know what “one of” that thing is. Here are a few units I’d like to see show up from reviewers. Please add your own in response.

    Endurance: Hours played of my own free will when I could have been doing something else productive with my time.
    Engagement: Number of times my wife has to say my name before I’ll pause the game and talk to her.
    Net Price: $ value of the purchase price of the game, plus the value of the time that it takes to babysit it through installation, DRM activation, control configuration, etc.
    Interaction Duty Cycle: Ratio of time spent “playing” (interacting) with the game to time spent “watching” (entertainment).
    Error Duty Cycle: Ratio of time spent “in” the game to time spent restarting due to bugs, game crashes, waiting at loading screens, etc.

    1. Decius says:

      +Replayability:
      Number of times I expect to play a full game.

      1. Peter H. Coffin says:

        Even that’s gonna flex a lot. What’s a “replay” for WoW or GW2? What makes those worth “doing over” in comparison to Minesweeper?

    2. What is needed is a rubric. Without a rubric, a numeric value is meaningless. I hate being asked as a reviewer to give a star assignment or numeric value – my idea of “middling” is different from someone else’s.

      1. Paul Spooner says:

        Agreed. A rubric (I love that word!) is the bare minimum for aggregating “scores”. I’d still prefer a multi-dimensional unbounded unit-based scale though. Not everything can be aggregated to a unity of quality.

    3. ET says:

      Some of those would be affected by your machine’s specs, so it’d probably be best to list your machine, plus the min and recommended specs for a game.

      I’d add:
      Rage quotient – time until game makes you ragequit and uninstall.
      WTF count – number of WTFs exclaimed per hour, due to ridiculous story, dialogue, completely unbalanced parts of systems which should have been caught during playtesting, etc.

  2. Cybron says:

    I would love to see that chart re:Metacritic. I find the whole divergence pretty interesting.

    I know I personally tend to trust the user reviews more (though not by much). Even when they’re inflated by zero-score review-bombing campaigns, they at least tend to reflect what people are feeling.

    1. Primogenitor says:

      (This sounds strange, but I’m going to say it anyway)

      I like the ratings of user reviews – thumbs up/down. It seems to do a reasonable job of culling the unintelligible reviews and highlighting the thought-out ones.

      Of course, that creates its own problems – upvoting amusing rather than accurate reviews for example.

  3. Moriarty says:

    I don’t think there is any reasonable difference between game sales and customer review scores. Once people have already bought something they are likely to defend their predetermined opinion of the game, as we’ve seen in every console generation so far.

    Also, the whole “10 out of 10” rating system is really muddy and if we can’t even figure out what the ratings should mean, we can’t possibly interpret them in a useful way.

  4. The Schwarz says:

    I had a great demonstration of how awful Metacritic can be a while ago:

    I wrote a review of Gone Home for my blog. One person – not exactly a troll, let’s say half-troll – started arguing with me, and one of his points against the game was that it had a huge discrepancy between the user scores and reviewer scores on Metacritic, and that since the users didn’t like it, obviously anything I say in the game’s favor is stupid.

    So I looked closely at the user ratings and reviews, and found out that:
    1. About 40% of the users gave the game a good rating, generally 9 or 10.
    2. Of the 50% that gave a decisively bad rating (i.e., red), the vast majority gave a score of 0-2 and cited one of three reasons: “There’s no gameplay”, “The game is not challenging” or “Too short / too expensive”.

    So the game has a user rating of 5.2, but about half the people who rated it did so either for obviously stupid reasons, because they didn’t understand the game or because it wasn’t what they expected. But if you just look at the score you’d think it was an overall pretty shitty game and the critics were just being pretentious or something.

    Is this a review system you’d want to rely on for *anything*? Not me.

    1. Flock of Panthers says:

      You’re right, the problem basically boils down to an average system that doesn’t account for any differences between what the various users were after, or why they are voting the way they are. A criteria-based system would be a lot better for honest reviews, but still wouldn’t help with spam.

      James’ Gourmet Sandwich Cafe
      Great food, quality service, wonderful atmosphere, but they don’t tell you that service charges are on top of listed prices 1/5
      Damn good sandwiches 4/5
      The barista is a real looker 5/5
      This is outrageous, I wanted pad kee mao. Never again. 0/5

      Or, there’s always this http://xkcd.com/937/

      1. Bryan says:

        Also this:

        http://xkcd.com/325/

        But that was the one I was going to post too…

    2. ehlijen says:

      “It's easy to look at stuff like this and conclude that this person's opinion is worthless because they can't think or communicate clearly and their criteria for appraising a game are murky and somewhat suspect.”
      – Shamus

      “So the game has a user rating of 5.2, but about half the people who rated it did so either for obviously stupid reasons, because they didn't understand the game or because it wasn't what they expected.”
      – The Schwarz

      First off, I’m not saying either of you are wrong. I feel compelled to note however that you both chose phrases that sound dismissive and possibly insulting, even though that wasn’t the point of what you’re saying. (I think Shamus was actually speaking ironically, but I must admit I’m not 100% sure, sorry.)

      At the end of the day, we are talking about opinions on games. Whether or not you believe in the idea of wrong opinions being possible, using words such as ‘worthless’ or ‘stupid’ won’t do anyone any favours in a discussion about what people like.

      I understand that there is a problem with a complete lack of enforcement of any kind of standard on user reviews and that this often results in users who do not understand or agree with the purpose of the product or the review system itself giving out misleading scores.

      The problem isn’t that people have different opinions. Shamus is allowed to dislike military shooters. Military shooter fans are allowed to think Shamus is wrong. The problem is that these opinions are made to compete towards a single overall score, which leaves said score useless; but useless doesn’t mean wrong. Setting aside dishonest voting (a separate problem), the score is still a correct average of what everyone who votes thinks of the game. It’s just that people mostly want to know the opinions of people with similar interests. If there was a way to filter down to ‘everyone who likes and has experience with the genre’, that’d solve a lot of problems.

      1. Decius says:

        The problem is that user reviews are from all users who submit reviews, and that reference class is horrible for me. I want to know what people like me think, and it’s not impossible in this day and age to do that.

        Have every user rate as many games as they care to, 1-10, no comments. Then allow people to view other users’ ratings of other games, weighted by how well their ratings match on games that both users have rated.
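
        Decius’s scheme – weight each other user’s score by how closely their past ratings agree with your own – can be sketched in a few lines. To be clear, everything below (the user names, the ratings, the agreement measure) is invented for illustration; it is not how Metacritic actually works:

```python
# Sketch of a similarity-weighted user score. All data is hypothetical.

def similarity(mine, theirs):
    """Agreement on games both users rated: 1.0 = identical tastes."""
    shared = set(mine) & set(theirs)
    if not shared:
        return 0.0
    # Average absolute gap on a 1-10 scale, mapped into [0, 1].
    gap = sum(abs(mine[g] - theirs[g]) for g in shared) / len(shared)
    return 1.0 - gap / 9.0

def personalized_score(game, my_ratings, all_ratings):
    """Average of other users' scores for `game`, weighted by similarity."""
    num = den = 0.0
    for ratings in all_ratings.values():
        if game not in ratings:
            continue
        w = similarity(my_ratings, ratings)
        num += w * ratings[game]
        den += w
    return num / den if den else None

# Two hypothetical users: "a" rates like me, "b" doesn't.
others = {
    "a": {"GTA IV": 9, "Gone Home": 2, "New Game": 9},
    "b": {"GTA IV": 3, "Gone Home": 10, "New Game": 2},
}
me = {"GTA IV": 9, "Gone Home": 1}
print(personalized_score("New Game", me, others))  # lands much closer to a's 9 than b's 2
```

        The weighting pulls the aggregate toward users whose rating history matches yours, which is exactly the “people like me” filter being asked for.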

        1. Ingvar M says:

          Whenever I see a mean score, I also want to see at least the standard deviation and maybe a median score. Let us look at a few examples (all scored as x/10):

          Half the users slam the game: 0/10
          Half the users love the game: 10/10
          Average score: 5/10
          Std dev: 5

          All the users are pretty much “meh”: 4/10, 5/10, 6/10
          Average score: 5/10
          Std dev: Somewhere in the region of 0.5-1

          Just by having that extra small piece of information, we suddenly have a good idea that the first game is very polarized and if it is in a genre I like, I may either love it or hate it. In the second, if it is in a genre I like, I may pick it up, if I run across it on a day when I feel like spending money.
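
          Ingvar’s two examples can be checked directly with Python’s standard statistics module; the score lists below are just the hypothetical distributions from the comment above:

```python
# Same mean, very different story: a polarized game vs. a "meh" game.
from statistics import mean, median, pstdev

polarized = [0] * 5 + [10] * 5      # half slam it, half love it
meh = [4, 5, 6, 5, 4, 6]            # everyone is lukewarm

for scores in (polarized, meh):
    print(mean(scores), median(scores), round(pstdev(scores), 2))
```

          The polarized list comes out with a standard deviation of 5 and the lukewarm one around 0.82 – squarely in the 0.5-1 region mentioned above – so that one extra number really does separate the two cases.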

      2. Mistwraithe says:

        You can’t generally tell if you would like a game by looking at the metacritic user review score.

        But you usually can tell whether you would like a game by reading a selection of both the high and the low/medium user review comments. There is a massive amount of information available from doing this, and not just from what they say but also how they say it.

        If you like deep thinking games and most of the 9 and 10 user comments are well written and thoughtful while the 1 to 5 reviews are full of spelling and grammar errors from people complaining about cosmetic features then there is a good chance the game is for you.

        Conversely if you like the latest graphics and frantic online warfare then if all the 9s and 10s are saying how much fun they are having shooting people up in multi-player and watching the realistic explosions while the 1 to 5 scores are complaining about a poor single player campaign and unrealistic story line… again, you’re probably in!

        1. Aldowyn says:

          How many people know what a standard deviation means? Might as well give a bar graph of scores. It’d take up more space, but it’s more easily interpreted by people who don’t know the statistics terms, AND give more information.

          1. Aldowyn says:

            This was supposed to be a reply to Ingvar. I was going to say something ELSE here (this!)

            I tend to read professional reviews the same way. I’ll pick some of the top reviews, some of the low reviews, and maybe some in the middle, try and get a lot of varying information and opinions. Often something will be a constant, showing what is probably a major issue or a strong point. Also helps me figure out the critical ‘consensus’, such as it is.

      3. Shamus says:

        I actually am kind of dismissive of reviews like the one I cited. I’m not saying their opinion is worthless, I’m saying their attempt at expressing that opinion has no value to anyone trying to make decisions about the game. The review is basically:

        * Best game ever hyperbole
        * Poster rates it better than GTA IV (but of course we have no clue how they rated GTA IV so we don’t know if they’re saying it’s better than crap or better than awesome.)
        * Review based on a single play session.
        * Tell us what they’re looking FORWARD to playing, not what they have played.
        * The only mechanic mentioned is “driving physics” and the only activity mentioned is “driving aimlessly”.

        There’s nothing here to tell ME if I might like the game. They could have just stopped at the first bullet point: “Best game ever”.

        Also – and maybe I’m just a prose snob – but I internally rate comments based on how coherent and readable they are. If someone talks like a drooling moron, then I assume they are a drooling moron, and who cares what a moron thinks? An occasional typo is no big deal, but if they can’t form complete thoughts then it’s just atavistic caveman howling.

        “The problem is that these opinions are made to compete towards a single overall score, which leaves said score useless; but useless doesn't mean wrong.”

        Very much agreed.

        EDIT: And now re-reading my post I see that I DID say their opinion is worthless, which is NOT what I was trying to say. Hm. Oops. My bad. I’m going to edit the OP.

        1. Klay F. says:

          Part of the problem I think, is that writing reviews for other people is pretty much useless. Most people realize this, even if they don’t know how to say it exactly. Why the hell would I write a review for other people? I don’t know what other people like. Moreover, I don’t really care. All anyone can really do is explain why they personally like a game.

          That example review you used is actually perfect for this. We know exactly which aspect of the game is most important to this person AND what said person thought of it. Also that nothing else in the game took away from that particular aspect (in that person’s mind at least). If the driving was all you cared about, then that “review” would tell you all you needed to know. Nobody (except actual critics) is obligated to talk about parts of a game they don’t care about. If you care about some other aspect of the game, then find a user or users who care about the same things you do.

          1. Aldowyn says:

            This is why professional reviews tend towards objective descriptions of features and comparisons, to give a common ground to attempt to judge something on.

            1. Klay F. says:

              It’s also why I stay far away from just about every professional review. Such things are generally an exercise in meaningless intellectual wankery and don’t give me useful information in the slightest.

              The best way to gather information is either of two methods: One is to find a reviewer who likes generally the same things you do, which you are unlikely to find, but it’s nice if you can.

              The other method is to find a reviewer who is very consistent in their likes and dislikes so you can form a useful baseline against which to judge your own likes and dislikes. Bonus points if said reviewer sticks to a meaningful rubric for scoring (ex: Jim Sterling).

              1. Klay F. says:

                I’d like to add that the last time I actually read a review for the purposes of informing a purchase decision was something like a decade ago, so it’s likely I’m talking completely out of my ass. I guess I’ve been playing games long enough to just trust my intuition, along with good old-fashioned research.

                1. bubble181 says:

                  Sure, sure… But MetaCritic (and similar sites) are a great way to ease the research. Not by looking at the scores, per se, but at why and for what the scores are given.
                  As has been said elsewhere: read the bulletpoints of the 2-3 “best” reviews, some middling ones, and some low ones, both pro and amateur, and you can usually form a fairly coherent picture of the game. For one player, multiplayer is all-important and they give a 3/10 because it’s single-player-only. Another player just wants to have a good story and doesn’t care about MP – they give a 10/10. A professional reviewer may note that “it’s a shame there’s no MP, as it has promise” and give a 7/10.
                  Which points matter to you, of course, you have to determine yourself from experience.

                  It’s rare that you’ll see people break down and adore the same thing in different reviews. The 10/10 might just not mention the inventory interface, because to that person, it’s an unimportant minor annoyance. A 1/10 might say the inventory wa

                  1. bubble181 says:

                    …was a maddening exercise in frustration, which yanked him out of the game every time and caused him to ragequit. Both reviewers may be RPG-lovers who think the story is great and it all looks good – but one wasn’t happy due to a minor issue, and the other didn’t care. Are you the kind of person who might take issue with crappy ported inventory interfaces? Okay, go look it up on YouTube or something – what sort of interface is it?

                    MetaCritic doesn’t tell you “buy this game” – it tells you “if you’re interested in this sort of game, beware that it’s really buggy out of the box (does that bother you?) and you need fan patches; it has crappy inventory and the music is repetitive.”
                    Next, you go look at the mentioned points and decide for yourself who or what you agree with.

        2. Steve C says:

          > it's just atavistic caveman howling.

          I read that as altavista caveman howling. It works both ways.

        3. Duoae says:

          Actually I disagree with your assessment of the review and your breakdown of it:

          * Best game ever hyperbole (Agreed – we can ignore this)
          * Poster rates it better than GTA IV (This has some value in that they obviously like the series and think this game has improved since the last iteration)
          * Review based on a single play session (I think this can be valuable as well – going into the game the player wasn’t hit on the head with a bad experience [unlike in some games that “get better towards the end”])
          * Tell us what they're looking FORWARD to playing, not what they have played. (Not useful for the game but useful for framing the mindset of the reviewer)
          * The only mechanic mentioned is “driving physics” and the only activity mentioned is “driving aimlessly”. (Useful to me because I /have/ driven around aimlessly in games like Saints Row and GTA… or fly around aimlessly in City of Heroes. Often in a game – especially open world games – the mode of transport and whether it’s enjoyable is very important.)

          There are several things to tell *me* if I like the game:

          1) If I have played a previous game in the series – most importantly GTA IV – I might like this one.
          2) If I enjoy (or get annoyed by) modes of travel in open-world games, that tells me whether I might like this one.
          3) The beginning of the game leaves a good impression/experience… which is a good sign!

          Yes, I’d never make a purchase from this one review but I can then look at the next 20-30 reviews and see what the tone is from them as well.

          One of the things I do on sites which track positive, negative and/or neutral feedback (e.g. ebay) is I always look at the negative feedback first and see what people are complaining about. It doesn’t matter whether the seller has a -1000 score if all of those negative ratings are because the packaging was scruffy but the items were fine…

          “Also – and maybe I'm just a prose snob – but I internally rate comments based on how coherent and readable they are. If someone talks like a drooling moron, then I assume they are a drooling moron, and who cares what a moron thinks? An occasional typo is no big deal, but if they can't form complete thoughts then it's just atavistic caveman howling.” – Shamus

          Strange, I can read the person’s review quite easily. Okay, it’s not following standard rules of the English language but I can see the coherent thoughts and sentences in there. There’s no stopping halfway through a sentence. It’s just text-type. Now, I personally don’t write stuff like that but you don’t need to be particularly verbose to get across a few random thoughts – which is exactly what that “review” is. It’s a twitter-like flow of consciousness (and exactly why I don’t have an account on Twitter). There’s no reasonable indication that this person is a moron, let alone a drooling moron and I would suggest that you are probably correct and a terrible snob.

          1. Syal says:

            It's a twitter-like flow of consciousness

            That’s the point; it reads like flow-of-consciousness, which means half-formed thoughts that may or may not come together by the end.

            Alternately it reads like they were too caught up in expressing themselves to bother with punctuation and sentence structure, which implies they’re emotional and probably young. They may very well like the game just because it’s a new world to play in.

            (I don’t dismiss people based entirely on sentence structure, but I usually end up wishing that I had.)

        4. ehlijen says:

          Hm, I replied to this but the message was either eaten by the filter or by my connection…

          Just wanted to say thanks for clarifying. Now knowing what you meant to say, I agree fully.

      4. The Schwarz says:

        Personally, if someone gives a game a score of 0/10 because it was “too expensive” (regardless of the price, but especially if it’s a $20 game), I think I’m well within my rights to call that “obviously stupid”. I don’t think there are “wrong” opinions, but there sure as hell are stupid ones.

        But more importantly, I do think that talking about “worthless” and “stupid” is in fact helpful in this context. Usually it’s just inflammatory and detrimental to the whole debate, you’re right about that. But here the discussion isn’t about these specific opinions, but rather about the *system* and the aggregated effect of such opinions.

        To put it more simply: yes, when you argue with someone, calling them stupid isn’t gonna make anything better. But when you’re discussing the behaviors of a large community, you should definitely be aware of the sad-but-true fact that most likely, there are stupid people in it.

    3. ET says:

      Couldn’t we analyze the reviews, to see if there are two or more sets of scores, which could then be split into that same number of review groups?
      Wouldn’t solve the problem completely, but at least it would help.
      …I think. :|
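
      That analysis could be done crudely by trying every split point on the 0-10 scale and asking how much of the score spread the split explains. This is just a sketch of the idea (the function and threshold loop are mine; a real job would want a proper mixture model):

```python
# Crude check for "two camps" in a list of 0-10 scores: find the split
# threshold that explains the largest fraction of the total variance.
from statistics import pvariance

def best_split(scores):
    """Return (threshold, fraction of variance explained by splitting there)."""
    total = pvariance(scores)
    if total == 0:
        return None, 0.0  # everyone agrees; nothing to split
    best_t, best_frac = None, 0.0
    for t in range(1, 11):
        lo = [s for s in scores if s < t]
        hi = [s for s in scores if s >= t]
        if not lo or not hi:
            continue
        # Variance remaining inside the two groups after the split.
        within = (len(lo) * pvariance(lo) + len(hi) * pvariance(hi)) / len(scores)
        frac = 1 - within / total
        if frac > best_frac:
            best_t, best_frac = t, frac
    return best_t, best_frac
```

      A review-bombed game (half zeros, half tens) splits perfectly – the best split explains all of the variance – while a genuinely middling score list leaves spread inside each group no matter where you cut.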

      1. bubble181 says:

        Already partially happens – MC separates the reviews into three categories (good/bad/average) and lists how many of each there are.
        A game with 1,000 positive reviews, 50 average and 950 bad ones is a different beast from one that has 50 positive reviews, 50 bad ones, and 1,900 average ones.

        It could easily be expanded upon, though.

  5. Abnaxis says:

    While metacritic might have put some thought into aggregating all the scores into a weighted average, I never ever go by the total review score when I want to buy something, video game or otherwise.

    Not that the scores are useless. Rather, what I do is divide users up by rough scores, into “good,” “bad,” and “average.” I then read through a small sampling from each group to see what they actually say.

    That is by far the most useful way to get consumer information I have found. Aggregated user scores and critic reviews are just too skewed (for different reasons) to be of use.

    Of course, this is completely separate from critic reviews that have an interesting discussion regarding games. I’m only talking about the “assign a score to it” kind of reviews (which unfortunately encompasses most of them).

    1. Paul Spooner says:

      This is the way I glean information as well, not just for games but for all kinds of products. Occasionally, there will be a number of “bad” reviews which then go on to describe features that I admire, or at least can endure. Shills are getting clever too, though; it’s a never-ending battle for your mind.

    2. Thomas says:

      This is roughly what I do too, but I did want to put in that I don’t find the metacritic aggregate totally useless. If the game I’m looking up is below 60 then unless I’ve got really strong prior information, I’m probably going to dismiss it as a bad game. Likewise, if I see a game with a 90 or 100 then it’s at least worth reading about even if I decide tastes ultimately differ.

      It’s a good guide for a snap judgement based on its rough number value. Trying to treat those numbers like actual numbers (so saying a 76 is worse than an 81), or using them without bringing in other evidence, is where the system breaks down.

    3. Sleeping Dragon says:

      Yes, very much this, I think there are two essential ways of looking at/using metacritic.

      The first one, that a lot of people, and I think metacritic itself, try to push for, is what Shamus describes: treating the metacritic score as some kind of mystical sacred golden average. It follows from the misguided belief that the rampant haters and the rampant fanboys/girls will cancel each other out, and that with the scales shifting a bit back and forth the balance will settle at some kind of “objective”, “unbiased” true rating. That may work for, I dunno, measuring average rainfall, but it overlooks that entertainment is not, at least not entirely, an objectively measurable thing.

      The second one is to treat it as a convenient place to read through some opinions. Especially if you’re leaning a bit towards some extremes in your gaming preferences or are into niche gaming. Say you’re a fan of those difficult to the point of unfairness games, are the people who praise the game praising it for challenge, responsive controls and fluid gameplay? Are the people disliking it complaining mostly about difficulty, having to restart levels due to not having pixel-perfect reflexes?

      The problem is that the second way is part of a larger process, it’s for customers who put work into making informed choices, to whatever extent they consider it necessary.

  6. Spammy V says:

    The parts of the problem with Metacritic that you labelled under “Shenanigans” are something that’s troubled me for a while. A while back I heard from my MWO news sources that some upset fans had taken to score-bombing MWO on Metacritic, and the call was out to counter their score-bombing. I was left troubled because I realized that there was going to be no balance after this: every score would have to be on one extreme or the other to try to make a difference, and no “real” score was ever going to be found.

    I too have become rather disillusioned of score numbers and the professional reviewing circuit, not because I don’t respect the people involved but because what they’re producing isn’t really what I want. What I really want is to read people actually talking about the game and their experiences, so I get more of what I want through blog posts, forum threads, and Youtube/Twitch playthrough videos.

  7. Dev Null says:

    I find user _reviews_ on Metacritic to be quite helpful. As a general category, I mean; the individual reviews are, on average, useless, but if you skim through them you’ll usually find a few folks with something interesting to say.

    User _scores_, on the other hand, are almost universally useless. There’s always that guy who says: “0/10 – almost as bad as System Shock 2” about whom you can instantly tell that, far from ignoring him, you should in fact be running as hard as you can _towards_ anything he says to avoid.

    I find this combination amusing on a site built on the principle of taking everyone’s scores and mooshing them together into a bland, palatable average, so you don’t have to take anything but the random numbers into account. But there you go; I still find those reviews useful.

  8. Slothdon says:

    Unrelated, but I’m surprised you haven’t commented on John Carmack’s departure from id Software. Do you not have any thoughts on it? Have you heard?

    1. WillRiker says:

      Personally, I’m actually excited. I think Carmack’s talent has been wasted at id; id hasn’t made a game I gave a shit about since the late 90’s. It’s clear that they’ve been without strong creative direction. I also think that we’re long past the point that making pixels shinier is the best way to enable interesting gameplay.
      At Oculus, though, Carmack has the opportunity to put his incredible talents to use in a largely untapped area of gaming. I think it’s the best chance for him to have a real impact on gaming again.

      1. Paul Spooner says:

        I’m excited too! Carmack is on my A-list for the Fledgeling dev-team. I know he’s always wanted to do Metaverse development, and this move gives good precedent for switching focus once his current project is stable. A few years down the road when Oculus is standing on both feet… who knows!

        In the meantime, doing high-speed spatial transforms and perceptive bench-marking seems like a great use of his talents. I hope he enjoys it.

        1. nerdpride says:

          Huh. I had thought I heard something like this a few months ago. Deja vu?

          1. MichaelGC says:

            He joined up with the Oculus Rift guys back in August but was planning to retain his position with id too. They’ve obviously concluded the multitasking wasn’t working, and he’s now gone ahead and nixed the id part.

            (Or maybe this was all planned out in advance to get a second publicity bump? Sorry – I am always extra-cynical before I’ve had me coffee…)

  9. WWWebb says:

    I’m more interested in Steam’s user reviews. Text mining is good enough to do some basic sentiment analysis on reviews. Combine that with Steam’s knowledge of how much you’re paying for and playing each game, and they could put together a heck of a marketing engine.

    users who:
    played >10 hours of and mentioned in review
    +
    played <1 hour of and mention in review
    = people who will likely buy at

    1. WWWebb says:

      That’ll teach me to use un-escaped angle brackets…trying again since I can’t edit:

      users who:
      played >10 hours of Game A and mentioned Feature in positive review.
      +
      played <1 hour of Game B and mention Other Feature in negative review.
      = people who will likely buy Game C at X price
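WWWebb’s segmentation can be sketched in a few lines. Everything below is invented for illustration – the records, the hour thresholds, and the substring check standing in for real sentiment analysis are not anything Steam actually exposes:

```python
# Hypothetical play records: (user, game, hours_played, review_text).
records = [
    ("alice", "Game A", 42.0, "positive: the crafting system is fantastic"),
    ("alice", "Game B", 0.5, "negative: hated the quick-time events"),
    ("bob", "Game A", 3.0, "positive: the crafting system is fantastic"),
]

def likely_buyers(records, liked_game, liked_feature, disliked_game, disliked_feature):
    """Users who sank >10 hours into liked_game while mentioning liked_feature,
    and bounced off disliked_game (<1 hour) while mentioning disliked_feature."""
    liked = {user for user, game, hours, text in records
             if game == liked_game and hours > 10 and liked_feature in text}
    disliked = {user for user, game, hours, text in records
                if game == disliked_game and hours < 1 and disliked_feature in text}
    return liked & disliked
```

With the toy data above, `likely_buyers(records, "Game A", "crafting", "Game B", "quick-time")` returns `{"alice"}` – the one user matching both conditions. A real marketing engine would swap the substring check for a trained sentiment model and pull playtime from actual telemetry.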

      1. Dev Null says:

        Heh. I run into that angle-bracket problem all the time…

    2. Alan says:

      A good point. Simply being certain that the reviewer owns the product is a big deal. (Amazon has something similar.) Knowing that they played it (or at least loaded it) is even better.

      1. rofltehcat says:

        I agree. Making sure people have the product at hand makes the reviews much more reliable.

        The user reviews on Metacritic are really useless because there are too many 10/10 and 0/10 reviews, mainly from people who don’t even have the game.
        The worst are the reviews from people who feel obligated to “balance” the whole thing: they don’t like a game that has an average of 7 or whatever, and to “balance” this “too high” score they give it a 0, although going by their actual experience a 4 or so would probably be more fitting.

        Another one is simple hate voting:
        For example, with Company of Heroes 2, some Russian blogger (or video maker?) decided that the Red Army wasn’t being portrayed heroically enough and the German army not villainously enough. While some of his points are valid, if extremely selective (http://rbth.co.uk/opinion/2013/08/13/company_of_heroes_2_war_game_offends_russians_28887.html), his call to action, the result, and the way it was reached are simply unacceptable.
        This resulted in a crowd, comprised mainly of Russian nationalists, spamming Metacritic with horrible reviews (http://www.metacritic.com/game/pc/company-of-heroes-2).
        I’m sure 99% of the people leaving those reviews don’t even have the game, nor have they tried a pirated copy to check the claims the blogger brought up. They simply took the snips of information he presented, accepted them as the whole truth, and went to work.

        Stuff like that won’t work on Steam and I hope it will leave Metacritic’s user reviews dead in the water. The reviewer average is useful but the user reviews are utterly useless.

  10. Alan says:

    I like ratings. They don’t replace good reviews, but they make it easier to filter through the many, many games available. Was I considering the game? If the scores are pretty high, I’ll probably just buy it. If the scores are pretty low, I’ll dig deeper into reviews and seek out a demo. Was I dubious about the game? Low scores mean I’ll stop looking, while high scores can convince me to take another look.

    The system can be improved. BoardGameGeek has user ratings on a 1-10 scale. You can get the average score, and it’s mildly useful. But you can also get a bar graph showing the distribution, and that I find incredibly useful. A strong cluster suggests that most people agree. A flatter distribution suggests less consensus. The occasional graph with multiple humps suggests a very polarizing game. It’s a lot of information about how the crowd feels, in a quick, easy-to-read form.

    (For anyone new to BoardGameGeek, some calibration: a cluster near 7 is a solid rating. A cluster near 8 is among the highest rated games.)
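The BoardGameGeek-style distribution described above is simple to compute. A minimal Python sketch; the two-hump test is a deliberately crude stand-in for eyeballing “multiple humps”, not anything BGG actually implements:

```python
from collections import Counter

def score_histogram(scores):
    """Bucket 1-10 ratings into a bar-graph-ready distribution."""
    counts = Counter(scores)
    return {s: counts.get(s, 0) for s in range(1, 11)}

def looks_polarized(scores, gap=3):
    """Crude bimodality check: the two most common scores sit far apart."""
    top = [s for s, _ in Counter(scores).most_common(2)]
    return len(top) == 2 and abs(top[0] - top[1]) >= gap
```

A tight cluster (say, all 7s and 8s) fails the check; a 10s-versus-1s split passes it, flagging the “very polarizing game” case.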

  11. I don’t mind numerical ratings so long as the data contributing to them comes from a large group. Even games that are divisive (say, a “Gone Home”) still manage to get a fairly high Metacritic rating overall, and having editorial vs. user reviews also broadens the scope of the opinions collected. Sure, there are probably shills in both sets, but for recent releases the numbers seem to average out the 10/10 and 0/10 crowd, and games in the 65+ range are generally at least worth looking into (again, if the data set is big enough and you tend to agree with the results most of the time).

    Sadly, in this age where privacy is a joke, the only metrics that could really give one a good picture of relevant reviews would probably involve an algorithm that looks at what you play, how long you play it, and how you rate it (among other factors), then makes suggestions based on other users’ data.

    Assuming this could be done without yet another avenue of spam heading my way and so on, it might be a useful tool. However, in spite of what Google seems to think, I’d also like access to things like “what got the highest overall scores, my preferences be darned to heck” and so on, rather than the setup creating a kind of review-based echo-chamber.

  12. Andy Panthro says:

    “I'd be obligated to review the game for other people.”

    I’d argue that you should always be giving your opinion, and relating it entirely to your personal experience. After all, you cannot review things from the perspective of the majority of your readers. Sometimes attempting to find an objective viewpoint can dull the critical edge and leave a review feeling bland and lifeless.

    Giving an honest opinion is much more useful to me. Describe as well as you are able how much you enjoy or dislike it, and provide details and examples! A proper communication of your thoughts lets me consider if it’s worth my time even if your opinion differs from mine. (and with a relevant body of work, I would understand which genres or styles you favour and which you abhor, and take that into account)

    1. Thomas says:

      In isolated incidents that’s okay, but in bulk it churns out stuff that’s useful to exactly no one. So let’s say we’ve got a reviewer’s history and we’re taking his genre expectations into account.

      He hates fighting games, he doesn’t understand how the controls work, he can’t memorise a combo, he can’t handle 2D games and he’s never pulled off a counter in his life.

      You read his first fighting game review and you learn these things about him and that if these things are true of you you shouldn’t buy fighting games either (except how would you know that without having played a fighting game?).

      Now, 5 years later, he’s reviewed 10 fighting games and every single review basically boils down to ‘doesn’t understand how combos work’. What could be more useless? He’s wasting his time playing games he doesn’t like to give us information that is an exact duplicate of what we received years ago; the people actually interested in these games aren’t hearing what they need to understand the game, and the people not interested aren’t hearing anything new.

      It’s an opinion which is useful exactly twice, the first time (And even then it’s making the review pointless for everyone who was actually interested in the game) and the one time a fighting game comes along that doesn’t involve combos.

      There’s no point reviewing a driving game and telling people that you hate cars. The people who want to know about the game don’t share that opinion and the people who agree aren’t interested in driving games. So you’ve got to try and figure out what differentiates driving games, what people enjoy and what makes this one tick. Ignoring your hatred of cars for the good of the review.

      Even if Campster were to review every CoD game and say ‘the morals are atrocious’, what purpose would that serve? A stuttering parrot could deliver that information after the 6th game.

  13. Zukhramm says:

    I’ve been banned from Metacritic. I don’t even know why; they refuse to tell me. I guess they’re trying to get rid of troll reviews and I got caught in that? I don’t know if they just look for reviews with scores too different from the average, or what.

  14. Supahewok says:

    I suspect that the chart you’re thinking of, Shamus, was the one shown during the Bioware-focused TUN, which I believe was linked in your previous post of top comments. However, that chart wasn’t for all games, only for Bioware’s. As I am commenting via my phone, it is difficult for me to find and give the correct link, but the TUN guy is very thorough about linking the sources of everything used in his videos in the author’s comment box.

    Sorry if I’ve been ninja’d, I received a distraction while writing.

  15. Tychoxi says:

    “So user reviews are simplistic, barely literate, and completely unreliable.”

    Which sadly describes most videogame “journalists” too.

    The image you are looking for is probably this one? I have no idea where it comes from, though.

    1. Tychoxi says:

      EDIT: the guy before me pointed the way! It’s the “Tasteful, Understated Nerdrage: A Tale of Two Companies” video. Since I’m not sure what happened with the link, this is what I was linking to: http://oi39.tinypic.com/pm16c.jpg

  16. Ilseroth says:

    I long ago stopped going to review sites for information, not due to corporate shilling but simply because I felt they were often too gentle on games. Beyond their focus on polish and accessibility, finding reviews of games (other than indie or small-budget ones) below a 7 is sad. If all of your grades run that high, the metric needs to change so that an “average”, middle-of-the-road game gets a middle score (5-6); that way, when a 9 or a 10 pops out, it actually means “Oh snap, this is something you should be keeping an eye on.”

    That being said, I will agree that aggregate scores and skimming the more intelligently worded mini-reviews can be helpful in determining the quality of a game. But like all things, everyone has an opinion, and learning to determine who has an informed opinion on a game can be challenging, especially considering the brevity of a lot of the articles.

    Personally, I know it is sad, but I have gone back to old-school means of deciding whether I will get a game: I go to a friend’s house and watch some gameplay (or Twitch). If a game is not out yet, I don’t bother getting excited. I have been burned too many times by getting excited for games that turned out to be disappointing, so I have just stopped.

    1. Kathryn says:

      >>the metric needs to change so that an “average” middle of the road game is given a middle score (5-6) so that way when a 9 or a 10 pops out it actually means “Oh snap, this is something you should be keeping an eye on.”

      There’s a reason I don’t judge when I volunteer at Odyssey of the Mind (I generally work scoreroom). (Actually, there are several reasons, but only one is germane.) Official direction from OotM World is (or was a couple of years ago; I doubt they’ve changed it) that subjective scoring should average 70%. So if the element is scored from 1-10, then the average score across all teams should be 7. This makes NO SENSE to me for exactly the reason you elucidate – with only a quarter of the scale to use, how well can we differentiate between the best-performing teams?

      I think this is about “feelings”, because apparently it hurts children’s feelings to get a “bad” score, but it boils down to using nearly three-quarters of the scale to identify with great precision the degree of their suckitude. What kind of message is that for the kids?

  17. Disc says:

    “The thing is, most professional reviews are created by people who aren't playing games the way consumers do. They play more games, they have less control over what they play, they're obligated to finish uninteresting games, and they're often tasked with playing stuff outside of their area of interest.”

    That’s pretty odd to hear. While I can’t say I’ve followed most other magazines (or internet sites) in-depth enough to know any better, the Finnish Pelit magazine, which I’ve been reading for almost two decades now, has said in the past (paraphrasing) that it aims to give reviewable games to people who actually like playing games of that specific genre and/or style, since that’s more likely to result in accurate and informative reviews for the customer. And as far as I can tell, they still do that. By contrast, the “professionals”, a.k.a. the staff, focus more on the journalism side of things and write only a relatively small part of the reviews. Covering the PC and several console markets, they currently have over 20 volunteer freelance writers, all selected through a screening process, writing the bulk of the reviews. It’s never really been advertised outright, but the picture I’ve always had of most any of them, staff included, is that they’re enthusiastic gamers in their own right who also like to write.

    Personally, their way of handling the review business seems to make a lot more sense if you’re looking for the best customer advice out there. While it’s likely as fallible as any other way of doing these things, it’s a magazine I’ve learned to trust over the years, since they’ve always been very transparent and honest about how they run things.

    1. Aldowyn says:

      Depending on the outlet, this is actually how it works, minus the ‘volunteer’ part. Different authors specialize in different genres, and I’m sure they wouldn’t ever tell someone who hadn’t ever played a fighting game to review Street Fighter whatever. Obviously they don’t have full choice.

      A lot of freelance journalists actually make their living pitching game reviews to somewhat smaller sites, and obviously they wouldn’t be pitching reviews of a game they didn’t have the requisite knowledge to review.

      1. Syal says:

        I assume this is how it works everywhere (anyone who doesn’t do that fails to grasp the nature of entertainment media), but the problem arises when you have five Princess Cooking games coming out and only one person who enjoys playing them. Do you give them to someone who doesn’t enjoy them, or do you backlog them and have reviews that are behind everyone else’s?

        (And of course the answer is “You hire more people who like to cook princesses”.)

  18. Nano Proksee says:

    I could write pages on this subject, and it would all be very bad, in part because of my lack of skill in English and in part because of my lack of real academic knowledge on the matter. But in my humble opinion it boils down to this:
    All the problems you mention here are inherited from the old media (old meaning ten years ago), the most troublesome of them all being the idea of objectivity.
    Twitter gives me a feel for the community’s reaction to a piece of media (or a product); blogs like this one give me more in-depth analysis that gets me thinking. All filtered by me, written by people I know, and without pretense of objectivity.

  19. TMTVL says:

    If I wanna know whether I’ll like a game, I do research on it. If it’s been Let’s Played, that’s perfect; then I’m guarded against stuff like an ending controversy.

    I don’t care about spoilers, if I know a twist is coming I look for, and appreciate, any early warnings.

    As an example, a game was recently released for the PS Vita. It had no screenshots, and the marketing blurb was useless (setting backstory), but after checking Wikipedia I learned it’s a Wizardry-type game, so it’s probably something I’ll like.

    The only time it backfired was for Valhalla Knights: European users CAN’T select the gender of their PC. When I found that out I was furious.

  20. Mintskittle says:

    This post is not a Spoiler Warning Episode, or a new episode of Diecast, or even a videogame review.

    1/10 Would not read again.

    (/sarcasm)

    1. nerdpride says:

      Let’s just be thankful we’re not talking about DRM every day anymore.

      1. Steve C says:

        I like those. They are my favorites.

        1. Syal says:

          Me too, especially that DRM of the Rings comic!

  21. Humanoid says:

    Amazon troll reviews are funnier and therefore better. I score π/π for Amazon, 0/π for Metacritic.

    Usefulness? That’s an overrated metric. Badger out of Llamas.

  22. Retsam says:

    My favorite user review I remember seeing on metacritic was something like this: “It had some good parts, but [list of things they didn’t like]. Overall it just wasn’t as good as the first game.” (on, if I remember correctly, Portal 2) The punchline? 0/10 – for a review that included the phrase “had some good parts”. It’s the opposite of the traditional journalism problem of: “I’m going to give the lowest score I know: 3 stars out of 5”. 0/10 shouldn’t be “I didn’t like some parts of this” it should be “this game stands against everything I believe in and is an abomination against mankind”.

    I think the (assumed) thought-process behind these reviews is interesting; it’s not that “I should give a review that matches my opinion”, it’s “I should do everything I can to make it so that the game’s review score matches my opinion”. It seems to me that it’s an incredibly selfish way to do reviews, because it seems to be more about being “right” than actually providing useful feedback about a product…

  23. Brumblebumpkin says:

    I think I’ll stick with the corporate shill option.

  24. Mephane says:

    I’ve long since stopped looking at review scores; instead I look for actual information about how the game plays, not how much the reviewer liked it. Sometimes that means interpreting subtle hints or reading between the lines, but this method has helped me a lot in estimating whether I will like a game, even when my opinion contradicts those of the reviews I read.

    Here is a constructed example of what I mean:

    Review says: “In the end, too little direction, overly large areas, a far too forgiving death penalty, and regenerating health between encounters make this game too easy while distracting too much from the actual fighting.”

    I read: “This game offers a lot of freedom and places to explore, and little railroading. It encourages experimentation by not overly punishing death. Thankfully, health regenerates, so this game is not about chugging healing potions by the dozen or hunting for the next medpack.”

    In this example, the very reasons the reviewer scorned the game are things I typically like. The review was helpful even though I disagree with the author.

  25. Galenloke says:

    I remember some reviews in Nintendo Power (at least, ages ago when I read it) had the same game reviewed by a handful of staff, and they also listed each reviewer’s genre preferences in some form, from highest to lowest. That’s something I’d like to see more of, so it’s obvious when someone just doesn’t like genre X but reviewed it anyway.

  26. Tuck says:

    Apropos of nothing, “et al.” doesn’t need a period after the “et” because that is the full Latin word (meaning “and”).

    This is a random fact brought to you by a stranger on the internet.

    1. bucaneer says:

      Is it a bird? Is it a plane? No, it’s Properuseoflatinabbreviations-Man!

      1. Tuck says:

        Stuff that for a joke, Tuck is much easier to type!

  27. The Rocketeer says:

    I wish Metacritic had a button to hack off the lowest quartile of user reviews, or the highest, or both, and show you the aggregate score of the revised data set.

    Or, failing that, just show a graph of how the user scores are laid out by volume of each particular score.

    Either of these would help someone looking for useful information to identify spurious scores, or scores with some sort of external motivation, like if a game is split between people who reviewed it very well and people who review-bombed it, or if a game has mainly average scores overall but a large number of people plugging in both 0/10 and 10/10 reviews on top of that.

    I’m not sure what it actually takes to post a review on Metacritic, having never done so myself. If you have to make an account to post, then it wouldn’t be too hard to rate the reviews themselves with a simple thumbs up or down, with each reviewer receiving a weighted aggregate score of the thumbs across all their reviews. Then, filter the results of games by reviewer score; if reviewers with greatly negative scores are the only people giving a game 0/10 or 10/10, or if only highly-rated reviewers are doing either, that can say a lot about a particular title.

    It’s true that that system would itself be vulnerable to abuse, but I think that could be largely prevented by making it impossible to just go to a user’s page (is that even a thing?) and see all of their reviews, thereby preventing someone from thumbing all of a particular user’s posts up or down to bias their aggregate reviewer score. That leaves potential trolls or circlejerkers free only to try to color the reviews of the particular title they’re looking at, which, for individual reviewers, should even out over multiple titles’ worth of reviews, or even within a single review, with both sets of fanatics cancelling each other out and theoretically-reasonable people controlling the actual swing.

    This would STILL leave sabotage possible, because it always will be. But the system doesn’t need to eliminate it; it just needs to make the time and care required greater than the motivation of most would-be saboteurs. And regardless of what those vehement 0/10 reviews might suggest, most people just don’t care enough to do very much.
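The “hack off the top and bottom quartile” button described above amounts to a trimmed mean. A minimal sketch in plain Python (no Metacritic internals assumed):

```python
def trimmed_average(scores, trim=0.25):
    """Drop the lowest and highest `trim` fraction of scores and
    average what's left, blunting 0/10 and 10/10 ballot-stuffing."""
    ordered = sorted(scores)
    k = int(len(ordered) * trim)
    middle = ordered[k:len(ordered) - k] if k else ordered
    return sum(middle) / len(middle)
```

For `[0, 0, 7, 7, 8, 8, 10, 10]` the plain mean is 6.25, but the quartile-trimmed average is 7.5 – the two zeros and two tens drop out of the calculation entirely, which is exactly the review-bomb-resistant view the button would give.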

  28. kdansky says:

    > The latter channel is a noisy one, but I'm of the opinion that more data is always better.

    Somewhat meta, but there is a downside to too much noisy data: false confidence in one’s predictions. It’s basically the problem that plagues most of the financial industry: people claim they know what the market is going to do, and when we do actual research, we find that their chances of predicting anything are pretty much equal to throwing dice.

  29. daveNYC says:

    Netflix manages to come up with not horrible suggestions for movies and shows based on your preferences and viewing history. There’s no reason why Metacritic couldn’t attempt the same sort of matching to pick out reviewers who have similar tastes to you or even have Steam do something similar for suggesting purchases (if they don’t already).
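Taste-matching like this usually starts from a similarity measure over shared ratings. A hedged sketch – cosine similarity over hypothetical per-user score dictionaries, not anything Netflix or Metacritic actually publishes:

```python
def taste_similarity(mine, theirs):
    """Cosine similarity between two users' scores over the games
    both have rated; 1.0 means identical tastes on shared games."""
    shared = sorted(set(mine) & set(theirs))
    if not shared:
        return 0.0  # no overlap, no evidence either way
    a = [mine[g] for g in shared]
    b = [theirs[g] for g in shared]
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm
```

A recommender would then weight each reviewer’s verdicts by this similarity, so the critics who historically agree with you count for more.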

  30. Daemian Lucifer says:

    “But if I was a professional reviewer, I'd probably have to. And if I wanted to keep my job I couldn't just eviscerate the genre when the latest release of Sound & Fury was dropped on my desk.”

    Yes,you could.You most definitely could.

    Also,damn the lack of edit button for making me recheck every link a brazillian times.

    1. Shamus says:

      Yahtzee is an exception. Do you think the guys at IGN could get away with it? Sure there are a couple of edge cases and the rare iconoclast site (RPS) but the vast majority of game journos don’t have that kind of freedom.

    2. paul says:

      On the up side, it seems that your use of whitespace and punctuation is improving. Benefits all around!

    3. daveNYC says:

      Flip side is that I’ve also seen reviews by him that are “I don’t like X. This game has X. Let me tell you all the ways the game sucks because of X.”

      It’d be nice to have a video game equivalent of Roger Ebert.

      1. Daemian Lucifer says:

        Ebert also had reviews boiling down to “I dont like X,therefore this movie with X is crap”.And Yahtzee often does go on long rants on why certain elements (dont) work,and how to implement them properly.So he is like video game Ebert,only with less experience.

        1. Aldowyn says:

          I’m pretty sure a video game critic with as much experience as Ebert had is literally impossible as of yet.

  31. Al says:

    I’m going to go against the current and say I like user reviews. I especially like the review that Shamus used as an example. It’s exactly what I expect from a user review.

    I expect a professional review to be a complete, broad and well rounded vision on merits (and demerits) of a game. I expect a user review to tell me, in a very straightforward and quick way, what that particular user found captivating (or detestable) about a game.

    The example review tells me that this user really likes driving aimlessly. And that he really likes “driving physics”. And the review tells me that this game accomplished these two things to his satisfaction.

    If I like driving aimlessly and “driving physics”, I’ll probably enjoy this game. If I hate these things, I’ll probably hate this game. If I don’t care about these things, I’ll have to read other reviews before deciding (which I should do anyway, because the point of metacritic is to get several opinions from several sources).

    This is what I look for in a user review: a very short description of what features a fellow gamer found satisfying. If these are the kind of features I enjoy, there is a good chance this game will be fun for me.

    Professional reviews are so broad-scoped that they cannot capture these little nuggets. Because they have to attend to a wide audience, they cannot know what features I value. By glancing through user reviews I can find if a game has the features I look for, and if they are well executed.

    ———————————————
    I do agree that numeric scores tend to be worthless, but I really like the idea of having something objective and comparable in a review.

    However, what I would really like to see is a kind of “revolving-door” scoring system. In such a system, each score would have a quota (for example, only 100 games could have the score 10, 200 could have the score 9, etc.). Every time a new game was added to a higher score bracket, another game would have to drop to the lower bracket.

    In such a system, the score wouldn’t reflect the absolute value of a game, but rather its relative value in comparison with other games.

    Obviously this system is very hard to implement: how do you fairly compare a Zelda with a Skyrim?
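Setting aside that cross-genre comparison problem, the quota mechanics themselves are easy to sketch. The bracket sizes and game names below are made up, and the input is assumed to be already ranked best-first, which is of course the hard part:

```python
def quota_scores(games_best_first, quotas):
    """Assign scores from fixed-size brackets: the first quotas[0] games
    get a 10, the next quotas[1] get a 9, and so on; everything past the
    last bracket lands in the bottom score. Adding a new game near the
    top pushes the last game of each bracket down one - relative scores,
    not absolute ones."""
    scores = {}
    score, used, remaining = 10, 0, list(quotas)
    for game in games_best_first:
        # Advance to the next bracket once the current quota is filled.
        while remaining and used >= remaining[0]:
            remaining.pop(0)
            score -= 1
            used = 0
        scores[game] = max(score, 1)
        used += 1
    return scores
```

In a live “revolving-door” system you would re-run this whenever a new game enters the ranking; the game sitting at each bracket boundary drops a score, just as the quota idea requires.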

    1. Syal says:

      “how do you compare fairly a Zelda with a Skyrim?”

      You compare Zelda with games similar to it (action adventure, I think?), compare Skyrim with games similar to it (fantasy sandbox), and then weight them based on which genre is more popular.

    2. Aldowyn says:

      A good professional review should have those little nuggets too.

  32. Galad says:

    I don’t look at reviews, except the occasional Escapist one, but I thought I’d drop in to say I love how you’ve inverted the ‘stars’ marking in the image at the top of the article, to underline the stupidity of it all :V

    One place I definitely look at the score is IMDb – generally, if it’s below 7, I need not bother. If it’s between 7 and 8, it’ll likely be good for an hour or two’s entertainment, but I won’t remember it by next week. And if it’s 8 or above, we have a potential hit, ladies and gentlemen. That, and the top user review – those are generally VERY helpful.

  33. Avatar says:

    For me, they serve the same purpose as the canary in the coal mine. Low user scores versus high reviewer scores? There may be serious stability problems with the game, or it may be a Stanley Parable situation where there’s a massive disparity between expectations and what was delivered (maybe better to say a Brutal Legend situation?) Or it could just be noise, yeah. But I’mma read some of those reviews to see if I can find out why.

    Conversely, sometimes you have low professional reviews and high user reviews. This might indicate a niche game that people who like the genre can get a lot out of, but which is more or less impenetrable to a filthy casual (like, say, your pro reviewer who has to play for a week and move on). Or it might just be a shill crying out into the emptiness. Again, reading some of the reviews can help a lot.

    If they’re universally low, well then.

    The fact is, though, this information just ain’t available through most channels. I’ve read plenty of professional reviews where the dog just didn’t bark – where significant flaws in the game were completely overlooked because the reviewer didn’t get into it enough to care one way or the other, gave it a safe 8.x and moved on with his life. Conversely, I’ve gotten hundreds of hours of entertainment out of games that were lucky to get a six of ten from the pro reviewers… So, it’s like, do I trust it as a score? Not really, but you don’t trust professional reviews as a score either, so if I have to make decisions based on noisy channels anyway, I might as well have two of them, with more information available if I want to drill down.

  34. Jeff says:

    This has probably already been said, but it’s worth repeating: Scores are worthless.

    They’re only useful if you know the reviewer shares similar tastes with you.

    I think the only useful thing in many user reviews is the comments – if someone likes an FPS because it has an ultra-realistic, super-detailed driving model, I know I’m probably going to be insanely frustrated at some point. If someone hates an RPG because they’re swimming in a ton of dialogue and the tactical combat has too much detail, I’m probably going to be interested.

    Their ratings are worthless; their descriptions are not.

  35. Daemian Lucifer says:

    I decided never to care about user ratings once Fallout 3 came out and I read a “comprehensive” review of it on GameFAQs. One of the points was graphics, and it read something along the lines of “It’s the same outdated graphics from Oblivion, 0/10”. Two-year-old graphics are outdated? Goodbye, user reviews.

    Now, I decide whether to play (or watch) something based on a few critics I’ve found who have tastes similar to mine. Granted, I still make an exception if someone I know recommends something, provided we share tastes in that area.

  36. Otters34 says:

    I think classifying the divide as either

    “Reviewers continue to rate based on gloss and spectacle while existing fans insist on narrative quality.”

    or

    “Professionals continue to be even-handed while users become increasingly entitled and demanding.”

    is a little over-reductionist and a false dichotomy.

    A lot of people who play those games don’t WANT better narratives, or more compelling, complex characters, or almost any of the things you see as important to improving those titles (I’m not near smart enough to say if they are what’s needed). They are just bored by being served the same mechanics and kind of story they have been fed time and again. They want new things, not necessarily better ones. Hence, to my mind, a large amount of the ire and distrust.

    1. Shamus says:

      Those aren’t the ONLY two positions, those are the extremes. I was trusting the reader to extrapolate from there, rather than laboriously enumerating every band on the spectrum.

      1. Otters34 says:

        …Of course. Damn. Sorry.

  37. Phantos says:

    I used to believe in games criticism. I used to put value in what someone would write for a game in a professional context, and sometimes even look to user reviews for a game’s worth. I even wanted to be a game reviewer for the longest time.

    But I think the paid scores, “edgy” internet tough guys, pseudo-intellectual slobs with low standards, and rabid, barely-literate brand prostitutes have poisoned that well. The only thing professional and user reviews have in common is that they’re both a council of toddlers, crying and pooping over what hieroglyph they should assign a pile of infested, rotting potential.

    I give it 9/10.

  38. Neko says:

    I think my ideal reviews website would outright refuse to give you any aggregate score until you had logged in, read a few individual user reviews, and clicked a button indicating whether you agree with this reviewer or not.

    We could do some clustering on reviewers, and assign readers of the site to whichever clique seems appropriate for them given their own personal biases. The overall score you see would reflect reviewers in your clique much more strongly than any outside of it, and perhaps you could see a little graph at the bottom showing you which review scores are from reviewers you’ve liked, and which aren’t.

    If the publishers ask for some scores to put on their boxes… well, tough. The best we could do is a bunch of complicated statistics.
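    Neko’s clique idea above can be sketched as a simple weighted average. To be clear, this is a toy illustration of the concept only – the function name, the reviewer names, and the 3:1 weighting are all invented here, and a real site would presumably use proper collaborative filtering over the “agree” clicks rather than a flat bonus weight:

    ```python
    # Toy sketch of a clique-weighted aggregate score: reviewers the
    # reader has clicked "agree" on count more heavily than outsiders.
    # Names and weights are hypothetical.

    def personalized_score(scores, agreements, in_weight=3.0, out_weight=1.0):
        """Weighted average of review scores (out of 10), favoring
        reviewers the reader has previously agreed with.

        scores:     {reviewer: score}
        agreements: set of reviewers the reader clicked "agree" on
        """
        total = weight_sum = 0.0
        for reviewer, score in scores.items():
            w = in_weight if reviewer in agreements else out_weight
            total += w * score
            weight_sum += w
        return total / weight_sum

    scores = {"alice": 9.0, "bob": 4.0, "carol": 8.0}
    # A reader who agrees with alice and carol sees a higher aggregate
    # than a reader who agrees with bob, from the same three reviews.
    print(round(personalized_score(scores, {"alice", "carol"}), 2))  # 7.86
    print(round(personalized_score(scores, {"bob"}), 2))             # 5.8
    ```

    The same three reviews produce different “overall scores” for different readers, which is exactly why a single number for the box art becomes impossible.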

  39. bloodsquirrel says:

    I once said somewhere – might have been here – that user reviews often provide useful information, but only if you’re willing to reject the impulse to flatten all of your data down into one number. User reviews aren’t useful if you’re just trying to average them with the professional reviews to reduce the margin of error. What they are very useful for is shedding light on issues that professional critics are too “professional” to bother with.

    A professional average of 8.2 and a user average of 8.0 don’t “average out” to a true score of 8.1. What they do is tell me that the game is largely meeting expectations. If the professional average is 8.2 and the user average is 4.2, then it tells me there’s obviously a serious dissonance between what the critics are rating the game on and what the fans expected out of it, and I should know which side I’m on before buying the game.

    If a game is getting review bombed, then there’s probably a reason for it. If it’s a bad reason, I can choose to ignore it. But I’d like to at least know about it.

  40. Andrew_C says:

    Apologies if someone has already mentioned this, but on Metacritic and Amazon I usually look at the neutral user reviews first. IMO they are often more informative than the positive and negative ones.

  41. As far as I’m concerned, reviews are meaningless unless I can access more reviews by the same person. Until I can gauge the *reviewer*, I can’t gauge the review. Ergo anonymous reviews are completely, utterly worthless.

    The very best reviews are ones where someone I know tells me about the *specific* things they like/dislike. Good or bad doesn’t much register with me, but “huh, that feature sounds interesting” really, really does. Even if I wind up not liking something very much, if it has provided an INTERESTING experience I consider it money well spent.

  42. General Karthos says:

    Shamus is one of the few people I trust to review video games who is harsher on them than I am. He loves Master of Orion II. I love Master of Orion II. There are games he didn’t particularly like that I quite enjoyed, but that’s because I don’t tend to nitpick until I play through a game a second time. (If I find myself nitpicking a game on my first time through [Fallout 3, for example], it’s a clue that it’s not a very good game.)

    I like the Steam review method of “recommended”/“not recommended”, since I can view my Steam friends’ reviews and, knowing whether they like similar things or not, determine whether this is the sort of game I want to invest my time in.

    I also like reading gameplay stories like “Josh Plays” or PC Gamer’s “Strategy Chronicles”, or watching “Let’s Play” videos on YouTube, because that gives me an idea of whether the play is the kind of thing I’d like. I bought Crusader Kings II based on T.J. Hafer’s “Crusader Kings Chronicle” on the PC Gamer website, and I bought “The Walking Dead” because of its Spoiler Warning season.
