Why the Industry Shouldn’t Rely on Metacritic

An interesting op-ed piece on Gamasutra discusses how the industry tries to use Metacritic as a benchmark of quality, and argues that it shouldn't: partly because a Metascore doesn't predict sales, despite claims to the contrary, and partly because it doesn't even accurately represent critical reception.

If Metacritic aggregated its data as objectively as Sir Francis Galton's famous fairground analysis, in which the averaged guesses of a crowd estimated the weight of an ox almost exactly, we would quite possibly have an indicator of quality, and therefore an accurate measure of our gaming cow. Sadly, it does not. Metacritic does not include the entire data set, only the publications selected by Metacritic co-founder Marc Doyle:

“This overall score, or METASCORE, is a weighted average of the individual critic scores. Why a weighted average? When selecting our source publications, we noticed that some critics consistently write better (more detailed, more insightful, more articulate) reviews than others. In addition, some critics and/or publications typically have more prestige and weight in the industry than others. To reflect these factors, we have assigned weights to each publication (and, in the case of film, to individual critics as well), thus making some publications count more in the METASCORE calculations than others.”
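
To make the mechanism concrete, here is a minimal sketch of a weighted average of critic scores. The publications, scores, and weights below are all hypothetical; Metacritic does not publish its actual weights.

```python
# Hypothetical critic reviews: (source, score, assigned weight).
# Metacritic does not disclose its real weights; these values are invented.
reviews = [
    ("Publication A", 90, 1.5),
    ("Publication B", 70, 1.0),
    ("Publication C", 80, 0.5),
]

weighted_sum = sum(score * weight for _, score, weight in reviews)
total_weight = sum(weight for _, _, weight in reviews)
metascore = weighted_sum / total_weight

print(round(metascore))  # 82 -- an unweighted average of the same scores is 80
```

Note that the outlet assigned weight 1.5 pulls the result two points above the plain average: the weights, not just the reviews, decide the score.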

I get why he does it, and he likely had the best intentions, but it doesn't work. The critical view is subjective, and Doyle's determination of each critic's value is subjective as well, so we are really getting a third-generation facsimile of a subjective view of a title's quality. Factor in the uncertainty of the gallant but flawed effort to convert A-to-F letter scales into numerical equivalents, and Sir Francis Galton would certainly cry foul.
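
To see how much slack the grade conversion alone introduces, here is a small sketch comparing two equally defensible mappings. Both tables are hypothetical; Metacritic's actual conversion is not published.

```python
# Two plausible (hypothetical) letter-grade-to-number tables.
# Neither is more "correct" than the other -- which is the point.
mapping_a = {"A": 100, "B": 85, "C": 70, "D": 55, "F": 40}
mapping_b = {"A": 95,  "B": 80, "C": 65, "D": 50, "F": 20}

grades = ["A", "B", "B", "C"]  # the same four letter-graded reviews

score_a = sum(mapping_a[g] for g in grades) / len(grades)
score_b = sum(mapping_b[g] for g in grades) / len(grades)

print(score_a, score_b)  # 85.0 vs. 80.0 -- same reviews, a five-point swing
```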
