[UPDATE: Click here to read an additional analysis on the accuracy of our predictive model for the 2018 and 2019 vintages].
Some of the top Bordeaux wines are usually released and sold during the En Primeur (from now on referred to simply as “EP”), which occurs between April and June following the last harvest.
Wine critics travel to Bordeaux and taste the young wines from the barrel; once they have given their scores and the Châteaux have set their release prices, the négociants of La Place de Bordeaux and wine merchants all over the world weigh the purchasing opportunities offered by the vintage.
This year, of course, that could not happen. The EP was postponed until two weeks ago, when the first Châteaux started to release their wines on the market. A lack of scores would have been a problem for buyers, who would have been unable to assess the quality of the vintage.
Many Châteaux resorted to sending their wines directly to the main wine critics’ homes, while a few critics were able to taste the wines on the spot.
Saturnalia released a detailed Bordeaux harvest report months ago, declaring 2019 an excellent vintage, positioned between 2018 and 2016. Furthermore, we were able to predict scores for a panel of about 50 Châteaux, based on satellite observations, weather data, and other types of data used by our proprietary AI modelling (fill in the form at the end of the page to get the 2019 Saturnalia scores for free).
Now that many of the most relevant wine critics have released their scores, it may be interesting to analyse how things went and how our scores compare with theirs.
We analysed a panel of 104 Bordeaux 2019 wines which were recently rated by at least one of the following critics:
AG = Antonio Galloni (Vinous)
JA = Jane Anson (Decanter)
JD = Jeb Dunnuck (JebDunnuck.com)
JL = Jeff Leve (The Wine Cellar Insider)
JMQ = Jean-Marc Quarin (Quarin)
JS = James Suckling (James Suckling)
LPB = Lisa Perotti-Brown (Wine Advocate)
NM = Neal Martin (Vinous)
PM = Peter Moser (Falstaff)
Most critics rate EP wines in a +/- 1 bracket (for example 96-98). This, together with differing views and styles, plus probably some bottle variation due to the long travels of unfinished young wines, creates wide differences between scores.
Critics may score the same wine so differently that there can be a discrepancy of up to 10 points between them, as shown in the chart.
The variability distribution is shown here, with the highest frequency in the 4-to-5-point cluster. That means, for example, that the same wine is frequently scored anywhere between 91 and 96 by different critics.
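As a rough illustration of how such a spread distribution can be computed, the sketch below takes one score per critic per wine and measures the gap between the highest and lowest score, then counts how often each gap size occurs. All wine names and numbers here are invented placeholders, not real 2019 EP data.

```python
from collections import Counter

# wine -> {critic abbreviation: score}; placeholder data for illustration
scores = {
    "Wine A": {"AG": 93, "JS": 98, "LPB": 96},
    "Wine B": {"NM": 91, "JMQ": 96, "JD": 94, "JL": 95},
}

# Spread per wine: highest score minus lowest score across critics
spreads = {wine: max(s.values()) - min(s.values()) for wine, s in scores.items()}

# Frequency of each spread value across the panel
distribution = Counter(spreads.values())
print(spreads)        # per-wine spread in points
print(distribution)   # how many wines show each spread
```

Binning the spreads this way yields the kind of frequency distribution described above, with each cluster telling you how often critics disagree by a given number of points.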
As mentioned earlier, this is a known fact, and it is why some wine critics emerge as more relevant than others (more consistent, more accurate, more reliable, etc.), while buyers learn whom to follow based on preferences that often vary from market to market.
It is nevertheless quite extraordinary that the wine industry seems to be accepting such huge variations without considering them a problem for buyers, and above all, consumers.
Saturnalia takes a completely different, data-driven approach, one that discards all human influence, tasting ability, bottle variability, etc. It must be stressed that our aim is not to imply that wine tasting is unimportant, or that we want to replace human judgment with Artificial Intelligence (AI) models. We simply propose this approach as a complementary one that can help navigate the uncertainty and make more informed decisions.
Of the 104 wines analysed, we took the subset for which we were able to calculate scores based on our predictions: 34 wines that were tasted by at least 3 of the main wine critics mentioned above.
All critics’ scores are displayed in the chart, along with their average score and Saturnalia score.
As shown in the following chart displaying our scores and the averages of the critics, in many cases Saturnalia predictions were accurate, and in the majority of the cases they were well within the variation of the tasting scores (error bars show standard deviation).
The trend of variation is followed, and the difference between our scores and the critics’ average is between 0 and 4 points. In most instances Saturnalia took a more conservative view of the vintage, with our scores 2 points lower than the average (which, as mentioned earlier, shows greater variability, with a median difference of 4-5 points among the tasting scores).
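The comparison in the chart can be sketched in a few lines: for each wine, compute the critics’ mean and standard deviation, then check whether the Saturnalia score falls inside that one-standard-deviation band (the error bars). The data below is made up purely for demonstration.

```python
from statistics import mean, stdev

# Placeholder critic scores per wine (invented, not real 2019 data)
critic_scores = {
    "Wine A": [95, 97, 98, 96],
    "Wine B": [92, 96, 94],
}
# Placeholder Saturnalia predictions for the same wines
saturnalia = {"Wine A": 96, "Wine B": 93}

results = {}
for wine, vals in critic_scores.items():
    mu, sigma = mean(vals), stdev(vals)   # critics' mean and std deviation
    ours = saturnalia[wine]
    within = abs(ours - mu) <= sigma      # inside the error bar?
    results[wine] = (mu, sigma, within)
    print(f"{wine}: critics {mu:.1f} +/- {sigma:.1f}, Saturnalia {ours}, within band: {within}")
```

In this toy example both predictions land inside the critics’ error bars, which is the property the chart above illustrates for the real panel.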
It may also be interesting to observe which wine critics’ scores are closest to Saturnalia’s, and which diverge the most.
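One possible way to rank critics by closeness to Saturnalia is the mean absolute difference over the wines both have scored, as sketched below. All numbers are invented placeholders, not the actual comparison behind the charts that follow.

```python
# critic abbreviation -> {wine: score}; placeholder data
critic_scores = {
    "AG": {"Wine A": 95, "Wine B": 93},
    "JS": {"Wine A": 99, "Wine B": 97},
}
saturnalia = {"Wine A": 94, "Wine B": 94}

def mean_abs_diff(critic: str) -> float:
    """Average absolute gap between a critic's scores and Saturnalia's,
    over the wines they have in common."""
    common = [w for w in critic_scores[critic] if w in saturnalia]
    return sum(abs(critic_scores[critic][w] - saturnalia[w]) for w in common) / len(common)

# Critics sorted from closest to most divergent
ranking = sorted(critic_scores, key=mean_abs_diff)
for c in ranking:
    print(c, round(mean_abs_diff(c), 2))
```

With a metric like this, the critic-by-critic discussion below amounts to walking down the ranking from best agreement to widest gap.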
Saturnalia scores for the same wines tasted by Antonio Galloni of Vinous show excellent agreement. Antonio Galloni has taken one of the most conservative approaches to the vintage, with an average score of 95 (for the wines shown in the chart), close to Saturnalia’s 94.1.
Another interesting comparison can be drawn from the scores of the Bordeaux-based French critic Jean-Marc Quarin, whose average of 95 is close to Saturnalia’s 94.2 (for the wines shown in the chart). Most of his scores compare quite well, with the exception of a few wines such as Haut-Bailly and Duhart-Milon, which were particularly penalised compared to the rest of the critics and to ourselves (perhaps a case of bottle variation?).
Neal Martin of Vinous is one of the most influential and respected wine critics, especially when it comes to Bordeaux wines, and we are quite happy with this score comparison. Again, we took a more conservative approach, but the trends are largely the same for most wines. Particularly interesting is the case of Petit Village, to which only Saturnalia and Neal Martin have assigned a score below 90. It will be interesting to see who is right later on when the wines are bottled.
Lisa Perotti-Brown (Wine Advocate) is one of the most convinced champions of this vintage, with a 97 average score (for the wines shown here), well above our 94.7. Again, there is good agreement with the general trend, but we were certainly more cautious in our scores.
James Suckling has also taken a decisive approach to 2019, assigning a lot of high scores (including some potential 100-pointers). In this case his 97-point average (for the wines shown) is well above our 93.8, although the general trend is once again in good agreement with ours.
Jane Anson of Decanter, also Bordeaux-based, has given high scores in general, with a 96-point average (for the wines shown), not too far from our 94.2, and with many wines on which our scores overlap perfectly.
Fewer scores were found for Jeb Dunnuck, but for these wines we can observe a very good comparison with Saturnalia scores.
Jeff Leve’s scores are decidedly higher than Saturnalia’s, with an average of 97 versus our 94.2 and the whole panel’s 96.
Last but not least, Peter Moser’s (Falstaff) scores, with an average of 96 versus Saturnalia’s 94.2, show good correspondence with the overall average and a good overlapping trend.
Fill this form to get all Saturnalia scores for 2019 vintage FOR FREE!