It’s always useful to rehash old debates, if only because the audience is always widening, so I couldn’t resist last week responding to Michael Fridjhon’s claims about blind and sighted tastings (see here). Of course, when that particular debate happens, the almost inevitable conclusion from honest participants is that both methods have their advantages and disadvantages and, depending on circumstances, both can be useful.
Pondering the questions, though, I recalled a report on a tasting whose implications, if they were taken seriously, should be in one way or another devastating for the perceived usefulness of just about all tastings – perhaps even of all recommendations!
The tasting in question was a small one, organised in 2010 by Francois Mauss, president of a frequently-convened panel of tasters rather grandiloquently called the Grand Jury Européen (I have some doubts as to whether it still exists). His panel comprised 14 highly experienced tasters, and he gave them six red wines to rate. It wasn’t reported whether there was unanimity in the rankings, but clearly there was enough agreement to allow for a clear ranking: aggregated scores ranged from 91 down to 87.
Disconcertingly, the wines were all Léoville Poyferré 2001. Not a single taster had suggested that any two of them, let alone all of them, were identical. (Let’s hope they all identified the wines as Bordeaux, at least.)
Were they all the same wine, though? Mauss had organised the tasting precisely because four of the bottles had been exported to different places around the world (Hong Kong, Switzerland, Germany, USA), where they’d been stored, and then been taken to Paris for the tasting. He wanted to suggest the importance of individual bottles evolving differently. The article reporting on the tasting (in World of Fine Wine No. 31) didn’t quote Mauss, but presumably he felt vindicated. Whether he was tempted to feel disappointed by his tasters is also unknown; certainly the author of the article didn’t seem to be.
Perhaps we might just accept that the four widely-travelled bottles had turned out so different (with none identified as faulty). But what of the other two, which ranked second and fourth in the list, one point apart? In fact, they both came directly from the château. The potential dismay of this circumstance was evaded by some fancy-footwork talk about the bottling process and, of course, the use of corks. Unfortunately, I think, the tasters were not given two samples from the same bottle – I rather wonder if they would have identified those as the same.
The article didn’t consider whether the problem lay somewhat less in bottle variation than in the tasting process itself, and in the capacity of even highly experienced tasters to do this sort of thing adequately, even with such a small number of wines. It did point out, as a relevant factor, that “the probable expectation of the tasters that the wines would be different may have led them to ‘find’ differences that, objectively, were minimal”.
Let’s leave aside my own jaundiced conclusion about this experience as a neat illustration of the problem with quick tastings. We should, though, take a lesson from the psychological “expectation” factor: that is, that it’s not only sighted tastings that can problematically involve taster prejudices. So do blind tastings – whether it’s a panel in London knowing that they’re tasting South African reds and are therefore going to find burnt rubber characters, or a panel in Cape Town tasting Cape chenins and predisposed to find rampant excellence.
The devastating question that must logically arise from M. Mauss’s tasting, though, if we accept its validity, is surely this: What is the point of any of these competitive (or other) tastings, if the results they achieve hold only for the actual bottles that were tasted? If a wine that scores 91 in the competition might score only 87 when tasted somewhere else in the world (or simply because of bottle variation), or vice versa, what is the point of the tasting? OK, it’s especially relevant to older wines, but still.
Fortunately for those who profit from competitions, few people draw logical conclusions about this or about any of the other evidence that time and again shows the unreliability and inconsistency of big-tasting results.