Tim James: On wine scoring systems

By Tim James, 26 September 2016

I’ve been doing some very basic arithmetic (it has to be basic – it’s so long since I was at school without a calculator that I can’t even remember how to do long division). My foray into figures came when I noticed Christian’s recent response to a comment about the relationship between scoring systems. One thing he said was that “The supposed merit of the 100-point system is that it gives the critic extra gradations which aren’t available to him when using the 20-point system”.

Is that the case? No system actually seems to work as expected. Everyone knows that the 100-point system is really a 50-point system – with scores from 50-100. In common practice, however, it seems to me, it’s usually a 20-point system: from 80-100. But Michael Fridjhon and his Trophy Wine Show use a wider range: for him a gold medal rating starts at 90 (presumably the same as Christian’s 93), and a bronze medal (“good to very good” wine) is from 70–80. This system would support Christian’s argument about “giving the critic extra gradations” much better than his own scoring does… I don’t think Christian ever descends into the 70s, let alone the 60s. If I saw a wine scoring 81 from him I’d assume that he thought it extremely dire stuff, not a silver medallist as for Fridjhon.

Christian validly argues that “if 100/100 represents ‘perfection’, then it necessarily should become more and more difficult to attain, the closer you get to it”. That is, presumably, one should only very rarely stray into the lofty altitude of 95 points and above – in which case it would seem that for most users the 100-point system could be seen as basically a 10-point system (85-95), apart from rare ventures above or below. This range seems to me the almost inevitable one for wines scored on this website.

Tim Atkin is another adopter of the American 100-point system. His 2016 Special Report on South Africa rates over 1400 wines: only a few get 83 or 84, with 85 being his “bronze medal lower limit” (admittedly he doesn’t taste deep into the lower reaches), and not a great many get over 95. So, not exactly a whole lot of “extra gradations”, is it? I wonder if those who use the 100-point system as a 20-point system aren’t likely to start wanting to use half points, thereby turning it into a 40-point system…

Meanwhile, in its commonest incarnations the 20-point system appears to be a 10-point system, given that virtually no one ever goes below 10. The World of Fine Wine magazine does, however, according 7.5-10 points to a “sound but dull or boring wine of no character or appeal”; for most of us wines like that would score 11-12 points, I’d guess. For that magazine the 20-point system is really a 30-point system, perhaps, while for most others (given half points) it avoids being a 10-point system and reverts to being a 20-point one!

If all Tim Atkin’s wines scored between 12 and 19 on the 20-point system, including half-points, instead of between 83 and 98, there would be virtually the same number of gradations (15 half-point scores against 16 whole-point ones, if my arithmetic is correct). So why the urge to abandon the 20-point template if only a few, like Michael Fridjhon, are going to avail themselves of the 100-point system’s potential?
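For anyone who would rather check the counting than trust my long division, here is a minimal sketch in Python. The little function and the ranges plugged into it are mine, purely for illustration – nothing prescribed by the scoring systems themselves.

```python
def gradations(low, high, step=1.0):
    """Count the distinct scores from low to high inclusive, at a given step size."""
    return int(round((high - low) / step)) + 1

# Tim Atkin's effective range on the 100-point scale, whole points only
print(gradations(83, 98))         # 16 possible scores
# The equivalent stretch of the 20-point scale, allowing half points
print(gradations(12, 19, 0.5))    # 15 possible scores
# A 20-point scale used only from 10 upwards, with half points
print(gradations(10, 20, 0.5))    # 21 possible scores
```

Much the same resolution, in other words, whichever name the scale goes by.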

As to Platter, which Christian says “awards 5 Stars to wines rating 95 points” – well, yes, but also no. Basically, Platter uses the 5-star system (actually a 10-point system, given the use of half-stars), and the 100-point system is not really for rating: it’s there in the background serving a largely administrative function.

The mind reels at the inevitable confusion and communication failure in all this.

Perhaps we should just abandon the whole scoring nonsense. What a good idea!

Tim James is founder of Grape.co.za and contributes to various local and international wine publications. He is a taster (and associate editor) for Platter’s. His book Wines of South Africa – Tradition and Revolution appeared in 2013.

Comments

    Chris | 21 April 2017

    For me the problem with scoring wines is that it is not done blind. As soon as tasters see a great Bordeaux house, only the very brave would give it a score in the 80s. What is the point of a 100-point scale if you only score between 85 and 100? And price is also factored in. So a 95-point Lafite from the 2000 vintage would not be of the same quality as a 95-point Faugeres from the 2011 vintage, yet they are scored the same. Why?

    Also, since Parker has left the scene, we have to put up with just about every wine being over-sensationalised by James Suckling. The only parties he benefits are the wine merchants and vineyards. I get daily emails from various merchants quoting his score and phrases like “best ever” and “perfect wine”. He scores nothing sub-90.

    Tim James | 27 September 2016

    Having mentioned the World of Fine Wine in the article, I should point out (as I’ve just realised) that they recently changed to a 100-point system. As with their 20-point system, they use it much more broadly than, say, Christian seems to – more like Fridjhon, but even meaner, so that the 70s equate to 7-12 in the 20-point system, rather than counting as bronze medallists. But they do what Christian says needs to be done at the top end (again perhaps more radically than he does), giving much more room for nuance with high scores: 90 = 17/20, so that the top 10 points (out of 100) cover the top 3 points (out of 20). It makes sense (if you accept that scoring itself makes much sense). So the real problem with the 100-point system remains that it is not one system but a whole lot of systems, and you have to know the taster’s calibration before a score makes sense.

    Kwispedoor | 26 September 2016

    If you have to score, the only reason to use the 100-point system – that I can see – is to conform to the way that scoring has been Parkerised globally.

    Ideally, it would make more sense to use all 20 or 100 points, i.e. both 10/20 and 50/100 would indicate an average wine.

    Then, I would also think it would be better to score wines within a range, simply because of bottle variations, serving temperatures, the time of day, the mood of the taster, blind versus sighted tastings, relation to other wines tasted, and a host of other variable tasting (and taster) conditions.

    For instance, one might then rate a wine between 15 and 16 (or between 75 and 80, where the 100-point system would indeed offer more gradation), using the full scale of available points. That might arguably be more fair to the wines, carry a bit more credibility and perhaps even curb the obsession with 100-point wines.

    We all know that this will never happen, though.

      Christian | 26 September 2016

      Hi Kwispedoor, I think the whole SA wine industry needs to start using the 100-point system because it has become international standard practice – if we want to be relevant in a global sense, then we need to adopt the conventions that apply. As for your suggestion of scoring wines within a range, I couldn’t agree more that wine rating is not a perfect science but I do like the idea of a “snapshot in time” – Atkin gave Mother Rock Syrah 2015 88/100, for instance, whereas I gave it 95/100 and now it’s up to you and the rest of the punters to decide who got it more right than wrong. What I appreciate about the 100-point system is that it gives a taster some wriggle room – for me an 86- and an 89-point wine are both roughly equivalent to 4 Stars but the one is a “weak” 4 Stars and the other a “strong” 4 Stars – gradations aren’t purely mathematical.

        Bachus | 27 September 2016

        “I couldn’t agree more that wine rating is not a perfect science”
        It’s actually not a science at all. Science relies on hypotheses which can be tested and have repeatable results; we all know that is definitely not the case with wine tasting and ranking.

          Christian | 27 September 2016

          Hi Bachus, Fair comment. I would argue, however, that expecting wine competitions to have repeatable results everywhere and always is wrongheaded. Tasters are required to make aesthetic judgements, which past a certain point are philosophical rather than scientific, and therefore need to be valid rather than repeatable.

        Bachus | 27 September 2016

        “Tasters are required to make aesthetic judgements, which past a certain point are philosophical rather than scientific, and therefore need to be valid rather than repeatable”.

        I was hoping you were going to say that. What, then, is the point of a single individual assigning a numerical value to a subjective experience at a specific point in time? It implies certainty where there can be none.

    Smirrie | 26 September 2016

    Great article Tim

    No, don’t abolish scoring systems; it would take the fun out of blind tastings.

    We mere mortals like to test our palates against the so-called experts.
