Michael Fridjhon: Do consumers need wine experts?

By Michael Fridjhon, 27 June 2018



It would take someone as ballsy or as deranged as the WineMag editor to pose the question that lurks everywhere that wine quality is rated and discussed: how important is wine expertise? Vivino aggregates the scores of thousands of consumers, and is regarded by many in much the same light as tablets of stone carried down from a mountain in the Sinai desert. There’s no “expertise” there, just near-universal agreement.

Among the many surprisingly elitist teachers at the government school I attended (school fees of less than R40 per year) there were several who argued that an opinion was not the same thing as a judgement. Everyone was entitled to his/her opinion, whereas only the well-informed and judicious were entitled to expect that someone would take their judgement seriously. That said, experts aren’t infallible: Hugh Trevor-Roper, Regius Professor of Modern History at Oxford University, famously fell for the fake Hitler diaries.

If experts are sometimes unreliable and you are not in a position to distinguish their sound judgements from their dubious ones, aren’t you better off with an aggregated average, the sum of all the right answers, and all the wrong ones? This fits perfectly with the efficient market hypothesis which argues that you will never, in the long term, beat the market index. For all the effort that goes into persuading investors that the expertise which goes into stock-picking is their best bet, it’s now widely accepted that simply following the market average will ultimately yield a better result. There is of course some truth to this position – there’s safety in numbers. However, it must be said that when you get it wrong, you get it very wrong. Just look to the Germany of Mr Hitler, where clearly the majority of Germans in the mid-1930s backed the wrong horse, morally and pragmatically.

So is there a way of making practical use of expertise? Some believe you should identify the critic whose views most closely measure up to your own, and go with him/her. There will be differences, but at least the arrangement works like a happy marriage: the big wins make up for the small disappointments. If you aim to do this properly however, you need to have a more objective sense of what it is you seek from a wine, and then to test this against your critic of choice. Do you seek elegance and finesse over power and palate weight? What is your tolerance of new oak in young wine? Is opulence more important than nuance, primary fruit more attractive than intricacy and complexity?

When it comes to wine, my own aesthetic universe is a matter of public record. At the annual Wine Judging Academy I teach the mnemonic PAPER CLIPS – where each of the letters stands for an important and positive attribute in my idea of fine wine:

P = Purity
A = Aesthetic Integrity
P = Potential
E = Equilibrium
R = Refinement

C = Complexity
L = Luminosity
I = Intricacy
P = Persistence
S = Savouriness

This is the same aesthetic that I communicate to all the judges before the first day of the Old Mutual Trophy Wine Show, and which largely guides the major decisions taken at the competition. (It should be added that exceptional wines which don’t fit into this framework are never excluded, as long as they are in balance and still have life in them: no dead fruit, no slippery tannins.) Does it work? That depends on your take on the results – but the extraordinary incidence of the same wineries on the winners’ podium in successive years shows that skilled judges are able to discern what we consider desirable qualities, and to reward them with remarkable precision.

This raises a question about score aggregation: is it better to go with a single critic, or to run with a panel? If there’s a single critic out there whom you trust (and in South Africa, tasting blind, I think I’m the last one left, with my scores all available on Wine Wizard – and there are very few everywhere else), this is surely the gold standard. Next best is a panel always composed of the same members – such as the WineMag one. Here you get something quite close to an individual viewpoint. The same value might apply in a show judging environment – if there’s a strong direction and a common gatekeeper, such as at the Old Mutual. The more that individuality is diluted, the closer you find yourself approaching the average – and once you’re down to the big numbers of Vivino it’s a much better idea to trust your own judgement, and to be ready to learn from your own mistakes. Do this for long enough and you won’t need outside expertise: you’ll be your own best touchstone of vinous reality.

  • Michael Fridjhon has over thirty-five years’ experience in the liquor industry. He is founder of Winewizard.co.za and holds various positions including: Visiting Professor of Wine Business at the University of Cape Town; founder and director of WineX, the largest consumer wine show in the Southern Hemisphere; and chairman of The Old Mutual Trophy Wine Show.


4 comment(s)


    Le Penseur | 28 June 2018

    The “expert” on experts is Philip Tetlock, currently the Annenberg University Professor at the University of Pennsylvania. His 2005 book “Expert Political Judgment: How Good Is It? How Can We Know?” (Princeton University Press) is based on two decades of tracking some 82,000 predictions by 284 experts. The experts’ forecasts were tracked both on the subjects of their specialties and on subjects that they knew little about. The result? The predictions of experts were, on average, only a tiny bit better than random guesses — the equivalent of a chimpanzee throwing darts at a board, according to Tetlock.

    “It made virtually no difference whether participants had doctorates, whether they were economists, political scientists, journalists or historians, whether they had policy experience or access to classified information, or whether they had logged many or few years of experience.

    As a society, we put an enormous amount of trust into the advice and insight of so-called experts, and yet it seems to me that the word itself has been utterly stripped of its authenticity. What does it really mean to be an expert anymore? Indeed, you can still find so-called scientific experts trying to refute climate change. Perhaps this is what led to the explosion of crowdsourcing information and ideas, for if the experts keep getting it wrong, the rest of us together can probably get it right.

    What worries me the most is that we have a natural inclination to trust those who are branded as experts, even before they open their mouths, and this is a dangerous tendency in a world where experts have the accuracy of a chimp playing darts.”

      Kwispedoor | 28 June 2018

      Hi, Le Penseur.

      “…a world where experts have the accuracy of a chimp playing darts.” Am I correct in understanding that this particular accuracy refers to experts’ political predictions? Because if it does, it simply means that we humans are generally not that great at predicting the future, whether you’re an expert or not. I take the point, but that would then only be of the tiniest significance regarding this specific subject matter (judgement, as opposed to prediction).

      Experience and abilities as a taster count for a whole lot. I do think it’s more difficult to taste wine blind than most people think, even for experts, and that it becomes exponentially more difficult once you go over a certain quantity.

        Le Penseur | 2 July 2018

        Quantifying the unquantifiable
        A facet of the judgment of wine concerns prediction, does it not?

        The point(s) Tetlock made do not only concern politics in the narrow sense either. Politics is everywhere. Among the broadest ways of defining politics is to understand it as a ‘social activity’ – an activity we engage in together with others, or one through which we engage others. Politics, in this sense, is ‘always a dialogue, and never a monologue’ (Heywood, 2013, p. 1). A similarly broad (or perhaps even broader) definition is offered by Arendt (2005), who argues that politics does not have an ‘essence’ – it does not have an intrinsic nature, or an indispensable element according to which we can definitively, and in all circumstances, identify something as political. Politics, rather, is the world that emerges between us – the world that emerges through our interactions with each other, or through the ways that our individual actions and perspectives are aggregated into collectivities.

        The political scientist Robert A. Dahl (1915–2014) defined power as influence over the actions of others, arguing that, “A has power over B to the extent that he can get B to do something that B would not otherwise do”. Which types of questions may be accorded as political: Who decides? Who or what has more influence? What is being decided? How are decisions being made? Why are such decisions being made? What are the consequences of a decision?

        It, thus, has to do with how one person’s behaviour [exercise of power] influences another’s. The influence Robert M. Parker Jnr has had on the judgment of wine, and the influence he exercised upon winemakers, comes to mind, does it not? These widespread changes in technique have been called “Parkerization”.

        Wine Politics by Tyler Colman exposes a little-known but extremely influential aspect of the wine business – the politics behind it. Colman systematically explains how politics affects what we can buy, how much it costs, how it tastes, what appears on labels, and more.

        Are wine experts reliable in their assessments? “Drinkers have long suspected it but now French researchers have finally proved it: wine “experts” know no more than the rest of us. Frederic Brochet, PhD, carried out two studies. In the first, he invited 54 of Bordeaux’s eminent wine experts to sample different bottles, including a white wine to which he had added a flavourless substance giving it a red colour. Not a single expert noticed. In the second test, 57 experts tasted the same average bottle of Bordeaux wine on two occasions. The first time it was labelled as a prestigious Grand Cru Classe, and the second time it was labelled as a cheap Vin de Table. When they thought it was a Grand Cru Classe, the experts described it as agreeable, woody, complex, balanced and rounded. When they thought it was a Vin de Table, they said it was weak, short, light, flat, faulty and with a sting.” – Adam Sage, The Times, London.

        An Analysis of the Concordance Among 13 U.S. Wine Competitions – Robert T. Hodgson* – “An analysis of over 4000 wines entered in 13 U.S. wine competitions shows little concordance among the venues in awarding Gold medals. Of the 2,440 wines entered in more than three competitions, 47 percent received Gold medals, but 84 percent of these same wines also received no award in another competition. Thus, many wines that are viewed as extraordinarily good at some competitions are viewed as below average at others. An analysis of the number of Gold medals received in multiple competitions indicates that the probability of winning a Gold medal at one competition is stochastically independent of the probability of receiving a Gold at another competition, indicating that winning a Gold medal is greatly influenced by chance alone.”

        How Expert are “Expert” Wine Judges? – Robert T. Hodgson – “Recent papers by Hodgson (2008) and Gawel and Godden (2008) have questioned the consistency of expert wine judges in a wine competition setting. In the latter paper, a methodology introduced in psychometric research to measure judge reliability corrected for chance was used to quantify judge consistency (Cohen, 1968). This paper extends that notion, suggesting a value of 0.7 for Cohen’s weighted kappa might be used to define an expert wine judge. With that criterion, less than 30% of judges who participated in either of the two studies would be considered “expert.” (JEL Classification: C1, L15)

        “The truth is that you cannot define taste objectively,” [objective being the emphasis] said Frederic Brochet, whose study won an award from the Amorim wine academy in France.

        “An accumulating body of research on judgment, decision making, and probability estimation has documented a substantial lack of ability of both experts and nonexperts. However, evidence shows that people have great confidence in their fallible judgment.” – Einhorn, H. J., & Hogarth, R. M.

        ”Expert judgments have been worse than those of the simplest statistical models in virtually all domains that have been studied” – Camerer and Johnson, 1991

        “In nearly every study of experts carried out within the judgment and decision-making approach, experience has been shown to be unrelated to the empirical accuracy of expert judgments” – Hammond, 1996

        Competition judges are often selected for their expertise, under the belief that a high level of performance expertise should enable accurate judgments of the competitors. Contrary to this assumption, we find evidence that expertise can reduce judgment accuracy. – Jeffrey S. Larson and Darron M. Billeter

        “Professionals are not stupid, but they are human” – Bazerman, 2006

        The issue then turns out to be the divergent, if not unreliable, ratings to be found between various competitions and wine judges, which in turn leads to confusion for the consumer and arguably to a credibility question. Validity, and thus credibility, depends on the repeatability of the rating(s) across various platforms and expert assessments; otherwise such an assessment simply becomes insignificant.

        I, personally, would like to believe that assessments and ratings of wine by “experts” are as objective as can be, but the rating variations I have seen recently do not instil any confidence in that notion.

        The issue at hand may just be that we, the consumer and wine expert(s), are trying to quantify the unquantifiable.

        Kwispedoor | 2 July 2018

        “The issue at hand may just be that we, the consumer and wine expert(s), are trying to quantify the unquantifiable.” – of course that’s absolutely true, Le Penseur, but nobody would seriously dispute that. It’s all in vain, but to make a living out of wine writing nowadays, it’s difficult to avoid scoring. And let’s face it, it creates much more opportunity for debate, so that part about it is cool. I’ve been saying for a long time that scoring within a five-point bracket is a more realistic and sensible compromise.

        You quote many studies (or rather, mostly how the media has reported on them), but not the counter-arguments to them. Of course it’s easy to trick people by manipulating colour – wine tasters use a wine’s colour to gain many clues and prompts when tasting. Colour is an intrinsic part of the wine. And pouring a wine into a different bottle? Come on, that’s a student’s trick.

        Do you think that, for instance, the Old Mutual Trophy Show would have had the measure of consistency that it has had over several years, using different experts, if they had just picked pundits from Makro’s liquor store aisle? Expertise matters in wine; it’s just a very complex and grey area, and one should take most opinions with a healthy pinch of salt.

        Also think twice about the way that you selectively quote from articles that report on Frederic Brochet’s (or any other researcher’s) work. This is just one riposte (not my own, just a nameless internet dweller’s) to how this particular paper has been twisted into snide populism:

        The experiment was designed to attempt to fool the subjects into misinterpreting the wine they were tasting. The entire purpose of the experiment was to demonstrate how a tasting could be manipulated to give misleading results.

        The “trick” wine was white wine colored as red, then served in a red wine glass and served at red wine temperatures. White wines are typically served at around 45F and reds at more like 65F, and yes, they taste completely different at these different temperatures.

        The conclusion wasn’t even that they couldn’t tell red from white – the conclusion was that when evaluating what they thought was red wine, they used lexicon associated with red wine to describe it. They evaluated it the way they thought red wine was supposed to be evaluated. The study is always cited as saying that “experts could not tell red wine from white wine” when that was not even a part of the study. The fella that ran this study has been very outspoken about this gross misinterpretation of his study (his name is Frederic Brochet, google it up).

        The subjects were not experts, they were undergraduate students in a wine program who were specifically selected because of their inexperience. Part of the purpose of the study was to evaluate whether their methods for evaluating wine were impacted by the vernacular of well-known tasters (and it was).

        Some further bullshit about the article is that it wasn’t just an experiment run by “a scientist”: it was run by a wine expert who also had a PhD in psychology, and who wanted to make a point about how testing procedures were flawed.
