Professor Charles Spence says blind tests make for the purest judgements. So why not in cocktails?
How should a cocktail maker’s skills be judged? This is a crucial question given the growing importance that awards and competitions have taken on in recent years in the world of bartending.
If judging is done sighted, as is so often the case in cocktail competitions, there is a very real danger that the reputation of the individual, or of the bar from which they come, may unfairly sway the jury’s verdict. Of course, we all think that we can discount such extrinsic information and focus solely on the intrinsic quality of the thing being judged – the cocktail, in this case. However, the evidence from a wide range of domains, from fine wine to the judging of musicians and the quality of musical instruments, reveals that we simply cannot.
In fact, so many of the distinctions that appear obvious when we know who, or what, we are rating seem to evaporate under conditions of blind evaluation. Isn’t this why, after all, spirits in competitions such as the International Spirits Challenge and the Bartenders’ Brand Awards are principally rated blind?
Take, for example, evidence from the world of champagne. A few years ago here in Oxford we tested a number of experts on eight sparkling wines varying in their percentage of white grapes (from 100% Chardonnay, a white grape, in a so-called blanc de blancs, through to 100% Pinot Noir, a red grape, in a so-called blanc de noirs).
Despite stressing the importance of the proportion of white grapes to the art of champagne-making, our experts (several of whom had written popular books on the topic) were unable to determine the percentages when tasting blind. It is not that the different wines tasted the same to our panellists, experts or otherwise (ie, social drinkers) – they did not. It is just that our preconceptions bias our perception and judgements, no matter what kind of expert we might happen to be.
Exactly the same thing has been found with the world’s finest musical instruments. There, the evidence suggests that experts are unable to pick out the sound of a Stradivarius when it is heard blind against contemporary violins, despite being convinced of the difference in the quality of the sound. Similarly, the evidence shows that many more female musicians tend to be hired by orchestras once auditions are conducted blind.
Such results might be taken by some to add weight to the suggestion that we should really be rating cocktail makers – like wines, musicians, and musical instruments – blind rather than sighted. That way, surely, the judges could really focus on what matters: the look, taste, feel, and innovativeness of the drink itself, not forgetting, of course, the washline (this being my menial task when acting as a judge at a recent international cocktail competition).
That said, insisting on blind tasting doesn’t, of course, in itself always guarantee a fair outcome, as the recent scandal surrounding the Master Sommelier exam made only too clear. Twenty-three new certificates were invalidated after it was discovered that a member of the Court of Master Sommeliers had “disclosed confidential information pertinent to the tasting portion of the 2018 Master Sommelier Diploma Examination prior to the examination”.
The full encounter
However, it can be argued that the skill of the consummate bartender cannot (and, more importantly, should not) simply be reduced to the results of a blind taste test. In the world of cocktails, after all, isn’t part of what we want to reward the interaction, the performance, and the storytelling? These are an integral part of most cocktail experiences, are they not (and part of why we don’t want robot cocktail makers)?
Ultimately, the experience is about the whole encounter, with the drink itself being just one (albeit important) part of that total experience.
So, what to recommend? How best to assess the quality of the bartender and their potentially award-winning cocktail creation? One solution, adapted from the field of sensory science, would be first to have one group of judges rate their expectations based on the performance and the making of the drink (without anyone tasting anything). Next, the drinks themselves would be rated blind (perhaps by a separate group of adjudicators, to ensure impartiality). Finally, a judgement would be made of the total product offering. The various ratings would then be combined into a single overall score, the hope being that they concur. But even if they do not, by adopting such a strategy one at least knows on what basis the cocktail award is primarily being made.
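By way of illustration, here is a minimal sketch of how the three panel ratings might be rolled into a single overall score. The 0–10 scale and the weights shown are purely illustrative assumptions for the example, not part of any competition’s actual rules.

```python
# A minimal sketch of the three-stage judging scheme described above.
# The panels, the 0-10 scale and the weights are illustrative assumptions,
# not taken from any actual competition's rules.

def overall_score(expectation: float, blind_taste: float, total_experience: float,
                  weights: tuple[float, float, float] = (0.25, 0.5, 0.25)) -> float:
    """Combine the three panel ratings into one weighted overall score."""
    scores = (expectation, blind_taste, total_experience)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Example: performance panel 8.0, blind-tasting panel 7.0, full-experience panel 9.0
print(round(overall_score(8.0, 7.0, 9.0), 2))  # 7.75
```

The weighting here is the point at which the organisers declare what the award is really for: shift the weights towards the blind score and the prize rewards the liquid; shift them towards the other two and it rewards the encounter.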
Ultimately, the decision is an important one, given that drinks made by an award-winning bartender are likely to taste better to anyone who knows of the award. The irony here, at least in the world of fine wine, is that while people (and that includes the experts) seem unable to pick out the most expensive or prestigious wines in a blind taste test, as soon as they know what wine they are drinking, it really does start to taste significantly better to them.
And given that, out there in the real world, we never really drink our cocktails blind, surely that is the kind of (sighted) cocktail experience we should really be trying to assess, biased though the process may inevitably be.