Correlation vs Causation, and the MetaCritic MetaQuestion
This piece on Gamasutra offers an interesting take on the Metacritic issue, concluding that review scores are among the least important of the factors affecting game sales.
The same subject came up at a round-table discussion at MIGS that was led by EEDAR's Jesse Divnich.
An interesting snippet from the Gamasutra piece that is worth chewing on a little:
Analyst Doug Creutz says, "We believe that while Metacritic scores may be correlated to game quality and word of mouth, and thus somewhat predictive of title performance, they are unlikely in and of themselves to drive or undermine the success of a game."
This highlights a point I brought up at the MIGS round table: there's a difference between correlation and causation. Scores can be correlated with sales without necessarily affecting them.
The correlation is fairly straightforward. Most game reviews are written by reviewers who fit the mold of the "typical gamer", if there is such a thing. A high Metacritic score is therefore the verdict of a small sample of people who fit the customer demographic: "9 out of 10 people gave this game a thumbs up." These reviews serve as that indicator *even if not a single consumer ever reads them*.
Now, whether the average consumer actually consults these reviews and uses them to decide on one game over another, and how that factors in against everything else vying for their attention, is another matter. I have no idea whether there is any causation here, but it is certainly a more tenuous assertion than the correlation above.
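To make the correlation claim a little more concrete, here's a minimal sketch of how you might measure it. The scores and sales figures below are entirely made up for illustration, and the choice of Pearson correlation is just one reasonable option:

```python
# Minimal sketch: measuring the correlation between Metacritic scores and
# unit sales for a handful of titles. All numbers are hypothetical.
from math import sqrt

# (Metacritic score, units sold in millions) -- invented titles
titles = [(92, 4.1), (85, 2.3), (78, 1.9), (70, 0.8), (65, 1.2), (55, 0.4)]

scores = [s for s, _ in titles]
sales = [u for _, u in titles]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

r = pearson(scores, sales)
print(f"Correlation between score and sales: r = {r:.2f}")
# A high r says the two move together; it says nothing about whether the
# score *caused* the sales -- which is exactly the distinction above.
```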
Does it matter though? Of course, and here's why.
If you believe in the correlation, then you can use Metacritic as an indicator of sorts. The publishers seem to be doing this, and there has been plenty of talk about developers having bonuses tied to MC scores and the like.
Now, using carrot-and-stick motivators for developers and tying them to MC scores is, I assume, being done to drive behavior. It is essentially the publisher telling the team, "Please go do what it takes to achieve 90% or better."
If you believe only in the correlation, that reviewers are essentially a sample group of gamers, then you focus on building a great game that they and everyone else will find enjoyable. [A cynic like me would say you also focus on building a marketing frenzy that has everyone salivating for the title, so that reviewers are ready to write their 98% review before they've laid hands on the game; but again, you are doing nothing for the reviewers that you wouldn't do for the consumer as well.]
But if you believe in causation, then you focus part of that effort on gaming Metacritic itself. You go out and try to influence reviewers, believing that extracting a high score from the system will result in high sales.
So the meta-level question about Metacritic is whether you believe it serves as a focus group or as a marketing tool. I believe it's the former, but choose your own opinion and proceed accordingly.
Addendum: As I was writing this, I had an interesting epiphany: if we view MC as a 'focus group' of sorts, then it would be interesting to treat it as such. Do some games score extremely high with one subset of the focus group and low with another? And if so, how do those fare versus those with a more homogeneous set of scores? Does an 80 MC title with scores ranging from 60-100 fare better or worse than one with scores ranging from 75-85? In short, does MC standard deviation indicate something? Hmmm... time to curl up with Excel and a glass of wine...
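If you'd rather skip Excel, here's a rough sketch of the same idea in Python. The two titles and their review scores are invented purely to show the shape of the question:

```python
# Rough sketch of the addendum's idea: two hypothetical titles with roughly
# the same average score but very different spreads.
from statistics import mean, stdev

titles = {
    "Polarizing Title": [60, 68, 75, 82, 90, 95, 100],  # wide spread
    "Consistent Title": [75, 77, 79, 80, 81, 83, 85],   # narrow spread
}

for name, scores in titles.items():
    print(f"{name}: mean={mean(scores):.1f}, stdev={stdev(scores):.1f}")

# Both hover around an "80" metascore, but the standard deviation tells a
# very different story about how divided reviewers were. The open question
# is whether that spread says anything about sales.
```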
1 comment:
As someone who has been critically reviewing indie games, which don't have such a manic drive for retail revenues, I think it's positive that people are waking up from the dream of quantifiable quality, if there ever was a more profound oxymoron. See The Trap for why this disabusal of the numbers myth is a good thing.