It's worth reading both posts, but here's the synopsis: Analysis of the reviews of all three Kindle products (breaking them down into a pie chart of 1-5 star ratings) shows an increase in the percentage of one-star ratings over time. Conclusion: Kindle customers are growing more dissatisfied over time.
There are a number of reasons this is erroneous: many 1-star reviews are by non-owners ("I'll never buy a Kindle because..."), the early adopters were more passionate than others, they are different products, etc.
However, more interesting to me were these comments by Seth:
Amazon reviews never reflect the product, they reflect the passion people have for the product. As Jeff Bezos has pointed out again and again, most great products get 5 star and 1 star reviews. That makes sense... why would you be passionate enough about something that's sort of 'meh' to bother writing a three star review?
...
The Kindle has managed to offend exactly the right people in exactly the right ways. It's not as boring as it could be, it excites passions and it has created a cadre of insanely loyal evangelists who are buying them by the handful to give as gifts.
I think the lessons here are to ignore graphs intended to deceive, and to understand the value of the negative review.
Point being that the negative reviews have value as well. For one thing, "there's no such thing as bad publicity" (not true, of course, but there IS a downside to NO publicity at all; the sound of crickets chirping is not accompanied by the sound of cash registers ringing). Another thing is that the negative reviews let you know who *are not* your customers.
So, what's this got to do with games?
Well, the industry puts some stock in review aggregators like Metacritic, while others claim this may not be indicative of a game's potential sales.
However, Seth's post made me wonder whether we're looking at the right thing. Take the following fictitious graph:
The vertical axis represents the number of reviews, and the horizontal axis represents 1 through 5 star ratings. Series A represents what I call the "passion trough" - reviews polarized toward the 1- and 5-star ends of the spectrum (Seth's point about passionate reviewers). Series B represents the opposite, what I call the "Ho Hum Hump" - reviews clustered in the 'meh' range. Each of my fictitious products gets 150 reviews.
So, which is preferable?
Well, for one thing, it depends what you consider a "3" to mean. If that's a passing grade, then Series B is preferable - two thirds of people gave you a passing grade, while Series A gets only just over half.
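To make that concrete, here's a minimal sketch with made-up star counts for the two series - my own hypothetical numbers, chosen only to fit the shapes described above (150 reviews each, A polarized, B clustered around 3):

```python
# Hypothetical star-count breakdowns, 150 reviews each.
# These numbers are invented for illustration, not real data.
trough = {1: 55, 2: 12, 3: 13, 4: 20, 5: 50}   # Series A: the "passion trough"
hump   = {1: 15, 2: 35, 3: 60, 4: 30, 5: 10}   # Series B: the "Ho Hum Hump"

for name, counts in [("A", trough), ("B", hump)]:
    total = sum(counts.values())
    # Count reviews at 3 stars or better, i.e. a "passing grade".
    passing = sum(n for star, n in counts.items() if star >= 3)
    print(f"Series {name}: {passing}/{total} passing ({passing / total:.0%})")

# Series A: 83/150 passing (55%)
# Series B: 100/150 passing (67%)
```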
Traditional thinking would aim to satisfy that middle: do the best you can, for everyone, even if it costs you some of the more passionate customers. Better a 3-star rating from everyone than a 5-star rating from only a few people. (Some of the tradeoffs we've seen in 'mainstream' titles might lead you to call this the 'compromise chasm'. :-)
I think this would be the wrong conclusion, though.
For one thing, per Seth's point, I'm guessing the reality would be that Series B would get far fewer reviews, all other things being equal; it inspires little passion in people, whereas A is more likely to inspire reviews, both good and bad.
Secondly, for Series A, one third of the reviewers are VERY passionate about the product, and therefore perhaps likely to buy it. For Series B, all those people giving it a middle-of-the-road review are also people with a lot of alternative products to choose from.
Someone will need to crunch the numbers to determine whether the above is indeed the case. If I'm right, though, then we're looking at the wrong thing by looking at average score. We should be looking at standard deviation, total number of reviews, etc. - if we're looking at Metacritic at all. Not to mention looking at user reviews vs. press reviews, but that's a whole other topic.
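As a rough sketch of what that number-crunching might look like - again using my hypothetical counts from above, not real data - mean versus standard deviation separates the two shapes even though the averages come out nearly the same:

```python
# Same hypothetical counts as before; illustration only, not real data.
import statistics

trough = {1: 55, 2: 12, 3: 13, 4: 20, 5: 50}   # Series A: "passion trough"
hump   = {1: 15, 2: 35, 3: 60, 4: 30, 5: 10}   # Series B: "Ho Hum Hump"

def expand(counts):
    # Turn {star: count} into a flat list of individual ratings.
    return [star for star, n in counts.items() for _ in range(n)]

for name, counts in [("A (trough)", trough), ("B (hump)", hump)]:
    ratings = expand(counts)
    print(f"{name}: n={len(ratings)}, "
          f"mean={statistics.mean(ratings):.2f}, "
          f"std dev={statistics.pstdev(ratings):.2f}")

# Roughly: A has mean ~3.0 with std dev ~1.7; B has mean ~2.9 with std dev ~1.0.
# Nearly identical averages, very different passion profiles.
```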
My gut tells me you are way better off with the trough than the hump.