
Sunday, July 31, 2011

What Really Determines a Game's Metascore?

Have you ever noticed that, with occasional exceptions, the "triple A" games released by the big publishers always end up with generally favorable review scores? And that the more unconventional, quirky games and those released by smaller publishers are far more likely to wind up with "mixed" review scores? Ever notice flawed-but-deeply-brilliant games scoring lower than big-budget, generic clones? Ever notice user scores significantly lower or higher than a metascore?

Why do certain games tend to get higher or lower scores? What thoughts tend to run through a reviewer's mind while they're playing a game and writing a review, and how does this influence the overall trend of the metascores? More after the jump.

Game reviewers are not objective quantifying machines. A perfect quantification of a game's quality is impossible, because a game always leaves subjective impressions on the reviewer, and every review reflects that subjectivity. No matter how objective a reviewer tries to remain, certain games are always going to score in a certain range: not necessarily because of the quality of their content, but because of how easy they are to review.

Those AAA games, for instance, go through so much play-testing that they're able to weed out most of the superficial flaws. They concentrate on streamlining the experience by making it fluid, easy to understand, and easy to get into. As long as a game doesn't present any noticeable or major flaws, and as long as it's well-streamlined, then it's pretty much going to score at least a 75/100. This is because professional reviewers appreciate streamlining; it makes their jobs easier.

Professional reviewers play a lot of games, probably going through them quicker than the average gamer. They don't always have the time to explore every nook and cranny of a game, and they're often working under deadlines. When your boss wants a completed review by the end of the week, and you're struggling to understand the game mechanics because they're unconventional and not very well explained within the game itself, that's going to leave a bad impression, because it's standing in the way of completing the game and finishing your review.

Meanwhile, the typical gamer has a little more time and freedom. They're probably more patient with their games, not feeling compelled by a deadline, and might take the time to get the hang of weird controls instead of getting hung up on them as a criticism. A typical gamer is probably also trying harder to have fun, wanting to justify their purchase, so they may concentrate more on the positive aspects and forgive or overlook the negatives. That's why less-polished games can sometimes end up with a higher user score than a critic score.

But, by the same token, there's also a certain number of users who just don't know what they're talking about and act out irrationally. They get mad at the game and bomb the score with a 1/10, as if a 1/10 on Metacritic were payback for getting stuck on a level and being totally pissed off with the game. Users don't feel as compelled to remain neutral or objective and sometimes let emotions guide their scores. Professional reviewers try to give "fair" scores, so sometimes the user score ends up being much lower than the critic score.

It's also much easier to tally and quantify flaws than it is to quantify fun. As an exercise, try to describe how much fun the gameplay of Shadow of the Colossus is to someone who's never played it. You can describe the nature of what the game is like and what you do in it, but odds are you'll never adequately express what it feels like to actually play the game. Everyone can relate, however, to bad cameras or weird interfaces, which sometimes results in flaws getting heavier attention in a review. Sometimes a reviewer ends up listing every possible flaw and then scaling the score down with each one, resulting in those less-polished games receiving lower scores (even if the actual game content is stellar).

Metacritic assigns greater weight to bigger reviewing sites like GameSpot and IGN, meaning that their scores affect the total metascore more than smaller sites' scores do. Since those big sites are generally the more "professional" review sites, they tend to be influenced more by the appeal of streamlining and polishing. That's why the overall metascore tends to be higher for well-polished, streamlined games, even when a game doesn't do anything unique and just feels like a rehash of a more interesting game you've already played.
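To make the weighting concrete, here's a minimal sketch of how a weighted metascore works. Metacritic does not publish its actual weights or formula, so the weights and scores below are invented purely for illustration:

```python
def weighted_metascore(reviews):
    """reviews: list of (score, weight) pairs; returns the weighted average."""
    total_weight = sum(w for _, w in reviews)
    return sum(s * w for s, w in reviews) / total_weight

# Hypothetical outlet scores and weights (not Metacritic's real numbers).
reviews = [
    (90, 1.5),  # big outlet, assumed higher weight
    (85, 1.5),  # big outlet
    (60, 1.0),  # smaller site
    (55, 1.0),  # smaller site
]

# The plain average of these four scores is 72.5, but the weighting
# pulls the result up toward the big sites' scores, giving 75.5.
print(weighted_metascore(reviews))
```

The point is just that two high scores from heavily weighted outlets can outvote two low scores from smaller sites, even with the same number of reviews on each side.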

These thoughts may not be true in all situations, but they're something to think about whenever you look at a collection of reviews. Reviews don't always quantify the true quality of a gaming experience; sometimes they're a reflection of the reviewing experience. 
