Marthélize Tredoux: Yet more on wine scoring

By Marthélize Tredoux, 27 August 2015

Bottles bearing #WinemagRating stickers.

Forgive me for raising the old chestnut of wine scores, but there’s been a lot of chatter around the subject just recently. Some in anticipation of Tim Atkin’s SA Report, released yesterday; some around Platter’s (the tasting panel have been at it for a while now); and some just general opinion. As expected with this discussion, there was the usual amount of agreement, disagreement and some scorn for those seen to be questioning the status quo.

A quick Wine Scoring 101 for those who haven’t been paying attention: wine scores are assigned by a critic based on that individual’s evaluation of a wine. Scores are ultimately subjective and qualitative in nature. Rating systems include: the 100-point scale, used internationally and popularised by US critic Robert Parker and Wine Spectator; the 20-point scale, taught locally in some wine courses and with a more technical feel; and the 5-point scale, usually denoted as stars, as used by Platter’s.

Different rating systems have varying ranges with varying descriptors. For example, Winemag’s own ratings describe 96-100 as “Extraordinary, profound” and 93-95 as “Outstanding”, while Wine Spectator’s breakdown is 95-100 “Classic: a great wine” and 90-94 “Outstanding: a wine of superior character and style”. Clearly, subjective interpretation of these terms plays a big part. Most can appreciate, though, that wine critics and judges score as objectively as possible, taking into account technical aspects and stylistic issues that even the geekiest of wine nerds will struggle to appreciate without a decade or three of experience.

There was a particularly interesting thread on Twitter following one of Tim Atkin’s tweets (you can follow the original thread here). It attracted comments from a number of wine writers and judges, including Jamie Goode of Wineanorak.com and Wall Street Journal wine columnist Will Lyons, to name but two.

It all became a bit chaotic to follow, with more people chipping in and the original thread splitting into different discussions, but what struck me overall was the sense that even among bona fide wine critics (meaning people who do this often and, at least partially, for a living), nobody fully agrees on how to score wine.

Opinions varied: some were exasperated that this supposedly old discussion was being had again, while others said it was new for South Africa. There were a number of suggestions to implement rating systems other than the 100-pointer, or even to split the two: alternative ratings for consumers, with the 100-point system kept solely for judging.

I find the aversion of some more established critics to this debate both worrying and unconstructive. In questioning the usefulness of the 100-point system for the consumer versus the industry, nobody is questioning the role or relevance of the wine critic. Discussing the relevance of a yardstick set in the 1970s, though, seems quite necessary: the landscape of wine, both locally and globally, has radically changed and the consumer is continually evolving, so why is the thought of taking a fresh look at scoring so unwelcome?

Do I have a perfect solution? No. I don’t expect anyone has. I just want the discussion around it to be open and constructive and not simply quashed by those who disagree that there should be a discussion in the first place.

The most useful comment in the aforementioned thread was that if you follow wine critics’ scores, it is important to find one with a palate that aligns with your own. So far I’ve worked out that a Platter rating of 5 stars virtually guarantees I won’t like a wine. Similarly, my sweet spot for Christian’s Winemag scores is around the 89-92 point mark, rather than 93-95. It’s a process, but with little to no experience in the matter, it is one that consumers often skip entirely, opting instead for the highest scorer (within their budget). Forget blind tasting. That’s blind buying. And if that’s the status quo, then why bother with the oft-laborious process of scoring at all?

  • Marthélize Tredoux is the co-owner and editor at Incogvino. By day, she helps SA wineries sell their wine in the USA. She won a wine writing award once.

Comments

2 comment(s)

    Marthelize Tredoux | 1 September 2015

    Hi Angela

    Well. Yes. Since I am (as I mentioned) exclusively referring to my personal experience here, I’m quite happy with my statement. Also, I am obviously not excluding entire producers based on the fact that they’ve received a 5* rating. Also, I did say “virtually”, which in the realm of opinion should give me more than sufficient leeway to get away with it. 🙂

    I’m now curious to test myself with a blind tasting of a few wines of each star rating, to rank them purely on my preference. If the 5* wines come up tops, I’ll happily retract my statement here. Until then, when faced with two similar wines (by the same producer or not), my (admittedly humble and non-professional) experience has so far shown me that the 5* wines are just not to my taste, most notably the reds. This is probably because what the judges (especially a panel of them) look for in a 5* wine is not what I look for in my glass, right now. Which probably rings true for many consumers who buy wines for right now, rather than 5, 10 or 15 years from now. But they often STILL buy 5* (or 4.5* or 4*) wines simply because they “must be better”. Blind buying. That’s my problem right there.

    The dilemma nobody seems to care about comes when scores are misrepresented in the public’s understanding of them. E.g. Platter describes 2* as “Pleasant Drinking”, 2.5* as “Good everyday drinking” and 3* as “Characterful, appealing”. Which is absolutely fine: a mass of fantastic wines is found in those ranges. But who actually knows that? The public often treats star ratings as universal: 2.5 is half of 5, so it must be average. Who’d stay in a 2* hotel when they could have a 4* or 5*? So the idea of drinking a 2* wine feels like a put-down (even though it shouldn’t). I don’t believe I’ve ever seen a 2* Platter rating sticker stuck on a bottle. No surprise as to why.

    Angela Lloyd | 1 September 2015

    That’s a sweeping statement about Platter 5* wines. Would you include Alheit, Sadie (both producers), Mullineux, De Morgenzon, Jordan, Sijnn (oh, and so on across a wide variety of wines and styles) among those that ‘virtually guarantee’ you won’t like them?
    To my more relevant point: I don’t score wines when I write about them, preferring a description. However, with Platter, scoring is necessary and, in the 5* tasting, especially useful. This year, with all wines in the line-up rated 4.5*, we started knowing 90 points was the minimum (tasters who believed a wine warranted 5* would have rated it a minimum of 95 points on the database, but this info wasn’t revealed at the 5* tasting).
    We knew that if we as a panel wanted to give a wine 5*, it had to score a minimum of 95; if we felt it deserved to be included in the line-up for wine of the year, a score of 97 or 98 was required. At the other end of the scale, if we felt it shouldn’t even have rated 4.5*, our objection was limited to a score of 90. Scores between 91 and 94 depended on how strongly we felt about the 4.5* rating. All scores were reached by consensus.
    In this instance, because we had set parameters, scoring really meant something.
    When it’s applied by individuals with their own ideas of what a particular score means, it means very little, especially as to whether the individual reading it will like the wine!
