

April 02, 2009



IGN has a better summation of the rant article than the Joystiq grab bag:

Have you emailed anyone at IGN to correct the article? If not, you should...

Was nodding my head throughout this talk. The thing I notice about the 10- or 100-point system is that scores from 0-50 are typically underused. This probably has as much to do with rating inflation as with human psychology. My brain is a lot more open to a 3/5-star restaurant than to a 60%-rated restaurant. When I look through my Zagat restaurant guide, if the food is rated under 20, my brain automatically disqualifies it, even if the food is actually 'fine' but not 'great.' I would still try a 2-star place, but 40% is psychologically inhibitive.

Even in the 5-star system, a quick sampling shows a higher skew of 5- and 4-star ratings than of 1- and 2-star ratings (although I hear Yelp extorts restaurants, suppressing bad reviews for a fee).

And this doesn't even address the issue that review scores often fail to reflect different target audiences. E.g., GameSpot reviews games from the hardcore point of view, and to hardcore gamers maybe random title X deserves a 4/10, but it really is an 8/10 for its intended audience.

I was surprised that you made no note of the "target audience" of the review.

Board game reviews on BGG are a perfect example:

Basically, how much I will like a game has a lot to do with what kinds of games I like to play, and with my own ratings of games I already know.

The BGG example uses generic stereotypes of games (which may not be accessible to a non-gaming outsider), but at least gets the point across really well.

The next step I'm always looking for is the Netflix system: I should get review scores based on my own recorded preferences. There are some FPSes that are super-accessible to non-shooter fans, who should really explore them, and some that may be perfect but only for die-hard shooter fanatics. I think this dimension is sorely missing from the current review system.

Nat Loh: "40% is psychologically inhibitive."

This brings up an interesting problem with 100-point rating systems: schools generally also use 100-point scales, and grades below 70 are considered failures. Thus there's a built-in aversion to anything below that, because we (or at least I) pre-judge a sub-70 rating as a failed game. 70 reads to me as "mediocre" thanks to school, but logically I would think mediocre would be more like a 50 (i.e., right in the middle of the scale). So maybe to some extent the scale is already positively shifted, because we only have 30 points of "usable" scale out of the 100 to judge something positively.

That would require the games press to agree on universal practices. A welcome idea, but one that is unlikely to transpire. Too much enthusiast-site identity is wrapped up in review scores.

Nonetheless, the argument for a simplified review scale has been bandied about for years within the game critics community. As publicized during Shawn Elliott's epic review symposium, most dislike assigning review scores. Many critics would actually prefer that review scores were banished altogether in favor of more thoughtful assessments of games. Of course, such experiments (Computer Gaming World) have ended in disaster. The problem is that the most vocal game players--core consumers--demand the granularity. A site's scoring system is so embedded in its readership that it would be nigh impossible to wean audiences off a 100- or 10-point scale without a severe backlash. Few are bold enough to risk it.

@ MJ & Clint: All scores are abstractions. There can't be anything wrong with anyone's choice for how to abstract value. The larger issue is that people are still afraid to make honest statements with review scores, regardless of the scale, and that reviewers are afraid to write from an explicitly subjective point of view. I could make a pretty hard argument that Far Cry 2 is a 10 and a 1/10; I could write for both sides if I could hide behind the veil of objectivity. Objectivity is fundamentally dishonest, and that's the larger issue with reviews, as far as I see it. The abstractions are only vague and useless because the thinking put into their application is dishonest. If reviewers could speak only for themselves and score according to their own tastes, there'd be nothing wrong with a 100-point scale. If I could have reviewed FF12 and given it a 4.7 instead of having to offer due diligence about production value, graphics, "fans of the genre," and consumer value, the conversation might actually move forward. That would require a much deeper editorial overhaul than simply rearranging an arbitrary valuation abstraction.

Clint, a couple points:

1) I thought the talk was great. While I agree the 5 minute format is hard, it also forces one to boil an argument down to its essence, and I think you hit it on the head.

2) The only negative feedback I have is that I'd like to have heard a call to action: some ideas on how to get the industry to go 'five star'. A revolt? Torch-and-pitchfork mobs descending on whatever sites still cling to 100% reviews?

3) It's false logic, but it could be argued that 85% scores in the last third of the period just indicate that we are getting better. I don't believe this at all, and think your conclusion is right, but if we're playing follow-the-logic... (e.g., if we saw data that 18 of the 20 safest years of airline travel were in the last 20 years - I'm making that up - we'd probably think we were getting better, not that standards were being lowered).

- Doesn't '5 star' suffer from the same issue, just coarsened a bit? E.g., I just read a car-magazine roundup that uses a 5-star system, but there's still a perception that 5 stars is a hell of a lot better than 4 stars...

I agree with Kim, I would like a proposed "plan of action" here. I know you don't have all the answers or anything, but I think discussing it might be a good idea.

We have the ESRB to make sure we have a united ratings system (for their territory, anyway), but who would be in charge of making these rules and semi-enforcing them? As your quotes from the IGN comments point out, the consumer base doesn't really care, so I don't think the pressure will come from there.

Actually, I can't think of a single unified review scale in existence. Maybe it's all just a pipe dream. The way journalists review games is kind of like a personal "stamp": Ebert and Roeper have thumbs up/down, Famitsu has the dreaded Quadra-10-point review.

It would be nice if we could invent an organization (perhaps supported by publishers? Only "good standing" members gain access to early game info?) that could instill some review rules... like the 5-star system, but with the potential for other rules as well.

@Mike All scores are abstractions, fair enough, but they also look different and mean different things to our stupid human brains, due to context and training and history.

"60%" and "3 out of 5 stars" read so amazingly differently to me at least, due to the aforementioned seeing 60% or 6.0 as a D on a test or a report card, versus seeing "***" meaning "a decent enough thing -- nothing special but not bad!" in restaurant/hotel/film reviews.

That "***" and "60%" are aggregated together into the same pot by aggregate sites doesn't help anything either. Metacritic argues that "3 out of 5" equates directly to "60%" in game reviews, but I'd wager that sites which use star ratings dip into the 2-3 star range more commonly than 10 or 100 point sites dip into 40-60% or 4.0-6.0 out of 10.0.

I don't know how the numbers work out on any of this because I am lamely writing off the top of my head instead of actually investigating anything, but it's something that's often bothered me based on my own known internal bias when looking at "***" and saying to myself "hm not bad," and then looking at "60%" and saying "eugh, yikes."
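That skepticism about linear equivalence is easy to make concrete. As a quick sketch (purely hypothetical code for illustration, not anything Metacritic actually publishes), the naive normalization an aggregator applies looks like this:

```python
def stars_to_percent(stars, scale_max=5):
    """Naive linear normalization an aggregator might apply:
    map a star rating onto a 100-point scale."""
    return stars / scale_max * 100

# A "decent enough" three stars lands in report-card D territory:
print(stars_to_percent(3))    # 60.0
print(stars_to_percent(3.5))  # 70.0
```

The arithmetic is lossless, but as the comment argues, it ignores that reviewers on a 5-star scale actually hand out 2s and 3s, while 100-point reviewers rarely dip below 70.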

I've been thinking this over, and another idea I'd like to propose is to actually use a 6-star system. Why? Because I can EASILY do the math to convert a 5-star rating into a 100-point rating. Using 6 stars... what is 1/6 of 100? I don't know. I could figure it out, but generally it's not something I keep stored in my fractions memory bank. I know the square root of 2; I know pi to several decimal places; but 1/6 is just not a fraction I have cached by default. This mental barrier is enough to stop me from even thinking about "what would this score be on a 100-point scale?", whereas in a 5-point system the math is so simple I can calculate it subconsciously. I also found through using Yelp that one more star is useful: I frequently find myself wanting to give 3.5 stars because some restaurants just fall into that "good, I like this, but it's not awesome" zone.

my star interpretation goes something like:

one star - bad, doubt anyone will like it.
two stars - fair, maybe someone else might like it.
three stars - good! a majority of people will like it.
four stars - great! a lot of people will like it.
five stars - awesome! you'll be hard pressed to find someone who doesn't like this.
six stars - mind blowing. generational greatness personified.

and there is also an implied zero stars for "WTF, absolutely no one, not even the developers themselves, could ever like this game"

One other thing: going through the reviewing process and using the 5-star system, I constantly think "I can't use the 5-star rating" because that is the top rating and I can't go any higher. Having the 6th star, I feel like I have more flexibility in my praise. It's sort of like how another poster mentioned we have 3 levels of "good" in the current ratings setup. We effectively maintain the 7,8,9,10 paradigm with 3, 4, 5, and 6 stars. 2 stars covers not-very-good games. 1 star covers bad games. 0 stars (which should be used as sparingly as the 6th star) covers epic mistakes and blights upon the gaming world.
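The "1/6 of 100" mental barrier described above is just this bit of arithmetic (a throwaway illustration, nothing more):

```python
def points_per_star(scale_max):
    """Value of one star on a 100-point scale."""
    return 100 / scale_max

print(points_per_star(5))            # 20.0 -- trivial to convert in your head
print(round(points_per_star(6), 2))  # 16.67 -- awkward enough to block the conversion
```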

I agree with Kim. Games are getting better. If you look at the first 30 years of movies, how many of those are even watchable today? Critics like to say nice things about NOSFERATU and THE CABINET OF DR. CALIGARI, but they're really more interesting than good.

There's a world of difference between the original PRINCE OF PERSIA and HEAVY RAIN, say. They're not even vaguely comparable. The game engine technology has advanced unbelievably, and so have the AIs, but storytelling and our understanding of what a game is have also advanced, and continue to advance. We're still under Moore's Law, if you will.

So while, yeah, there is always some bias in favor of recent games -- just as the Oscars tend to feature movies that came out in December, not March -- a lot of it is valid.
