I was struck recently that over my twelve-plus years of writing about wine, I've given out only four perfect scores, and, perhaps more strikingly, that two of those came in the past two months. This got me wondering: What is it that gives me the inclination -- or even the right -- to drop such an “honor” on a bottle, and why does it happen so infrequently? And beyond that, why should anyone care?
Let’s start with the inclination. If you’re a wine lover like I am, you likely give every wine that you come across a rating of some kind, whether it’s a quick and casual thumbs up or thumbs down, an expressed preference for one bottle over another at a dinner party, a thoughtfully jotted note for future reference that might say something like “buy” or “buy and hold” or the acronym DNPIM (do not put in mouth), or a simple label snapshot on your phone to remind you to get a few bottles. It’s just something that we do, as we find that it both enhances the experience in the moment and expands our capacities for perception and preference.
Of course, dropping a number on a wine goes a step further than just an indication of preference. There are myriad scoring systems out there, some purporting to be scientific, some whimsical, and everything in between…and you’ve likely seen several. Whether it’s a point scale, a “puff” scale, a smile scale or something else, the hope of the assigner is to communicate something about the quality of the wine in addition to a personal preference. If there’s a common standard in the industry (a debate for another day), it’s likely what’s known as the UC Davis 20-point scale, which is broken down in such a way as to encourage analysis of gradable facets of the wine in question: its color, aroma, acidity, etc.
Whereas the Davis 20 is very useful for winemakers when assessing wines mid-process, it seems to leave the public wanting. Additionally, it doesn’t transfer well mathematically to the 100-point scale. A perfect score on the Davis 20 speaks to a solid, technically perfect wine, and while it might assure four puffs, or the maximum number of smiles, it isn’t necessarily likely to get the three glasses from Gambero Rosso or a perfect 100-point score from a critic.
Confused yet? Well, sorry…it gets worse.
I’ve heard the 100-point system described in many different ways -- as a grade system that we’re all familiar with from our school days, as an applause meter, etc. -- and it seems to me that it’s mostly market pressure that demands the scale. It has become entrenched in the public mind to the degree that more attention is paid to the two or three digits than to thoughtful words about the wine, leading sellers to demand it as much as consumers do. So, as a critic, it’s incumbent on me to figure out a way to use this non-scientific scale to quickly communicate the “drink-a-licious-ness” of a wine to consumers at large while maintaining some sort of standard that shows consistency across the gamut of wines that I taste.
Does personal preference play a role here? Of course it does, and if you’re a consumer, I caution you to look askance at anyone who tells you otherwise. While there are some wines that have multiple perfect scores from different sources, you’ll often find that the same wine will garner a range of scores across a selection of critics. Because of the subjective nature of the scoring scale, personal preference is inherent in it, and it can’t be entirely eliminated -- nor should it be. Consumers who look to particular critics develop relationships with them without ever meeting in person, and may give more weight to one critic’s score over another based on that relationship. For example, while I might award a wine a 97-point score, my reasons for doing so may differ from another critic’s reasons for their 97-point score for the same wine.
Savvy consumers know this and may lean toward one review over the other depending on their perception of one critic’s take on the wine. That said, of course the best critics aim for consistency of scoring within their own list of wines reviewed over time, becoming more consistent with time and experience.
But to circle back to basics, what gives me a right to “pass judgement” on wines at all, much less to assess a wine as perfect? The consumer is certainly perfectly able to taste through a set of wines and pass his or her own judgement upon the set, ranking them in order of preference, etc. and to decide that, for them, a certain wine may be perfect in every way at that moment. So, how is my assessment different, or more valuable, than theirs?
I’d suggest that the major difference is that most consumers don’t get the amount of focused practice that critics get tasting wines day in and day out. It’s not unlike the practice of law or medicine in that practitioners certainly don’t know all that there is to know about their area of expertise, but they are carefully working to expand their knowledge base through repetition, experience and other relevant factors to gain a more complete understanding that becomes useful to others. When you have a medical or legal problem, you’re probably not looking to your mechanic for advice. In the same way, looking to a critic for wine advice rightly acknowledges, and expects, a convergence of experience, practice and communication skill that sets that advice at a higher value level than a crowd-sourced score brings to bear. I tend to think of myself as an enthusiastic consumer with good communication skills who systematically practices as often as possible—with a desire to share what I’ve discovered along the way.
Finally, the question of frequency -- why so few perfect scores over the course of my wine journey? When I started tasting wine seriously in the mid-to-late 1980s, it wasn’t uncommon that I’d taste a flight of wines in which nearly half would have some sort of technical or balance shortcoming that was easily noticed. Today, some thirty years of improvements in both vineyard management and the winemaking arts and sciences have made mitigation of those issues possible, and it’s exceedingly rare that I taste a bottle with serious technical issues, so it makes sense that more high scores would be issued in general by all critics. At the same time, our “sense of the superb” is something of a moving target, and like anything we learn about where improvement is constant, our idea of perfection continues to evolve.
Should you care about any of this? The choice to care is yours, of course, but if a critic takes the time to write about a wine, odds are that they believe the wine has merit and is worthy of your attention as well, and the idea of a perfect score attached to a review by a critic with whom you’ve developed a relationship should give you confidence that a substantial portion of your wine budget would be wisely spent on such a bottle.
In closing, I'll tack on reviews from the WRO archives of the four wines that moved me to accord them perfect scores:
Acumen PEAK, Atlas Peak, Napa Valley (California) Cabernet Sauvignon Edcora Vineyard 2015 ($150): A very impressive wine that hearkens back to the early days of the Napa Valley, when age-worthy structure and balance were king. Intense berry fruit aromas are joined by notes of fig, baker’s chocolate and savory hints of meat and dried herbs. It’s all there on the palate, with a rich earthy character in front at present that will feather itself into the mix with further aging. Perfectly balanced, nuanced and thought-provoking, and a definite cellar trophy at what’s become a bargain price for such a wine. This isn’t a score I throw out very often, but there’s so much to offer here -- I’m all in. Contains 6% Petit Verdot and 4% Cabernet Franc. 100
Nickel & Nickel, Oakville (Napa Valley, California) Cabernet Sauvignon Martin Stelling Vineyard 2014 ($165): Yep, that just happened! I've been waiting for a wine like this one to show itself out of all of the great wines that I've so far had the pleasure to taste. I've thrown some 99s out there in the past, but never a perfect score. This is the complete package when it comes to Napa Valley 100% Cabernet -- blackberry, cassis, rich oak spice that doesn't intrude on the rest of the nuance and depth contained within, like the faint dried herbs and seductive, mouth-coating structure that finishes with a slow, sustained rise in intensity. Cheers to Darice Spinelli and her team -- it doesn't get any better than this. Unless, of course, you factor in that it will improve over the long haul. 100
Ridge, Santa Cruz Mountains (California) Monte Bello Vineyard "Monte Bello" 2016 ($225): I tasted this wine twice -- once as a stand-alone, and once in a blind flight where, unbeknownst to me, it was alongside another 2016 wine I recently gave a perfect score. While I was inclined to give this the big number initially, the blind flight served to confirm it. I scored both wines equally when tasting blind, and I’d give a personal preferential nod to the Monte Bello. It’s so quintessentially California in every way, in what you might call the old school style that only a handful of producers continue to embrace. There’s plenty of fruit, but the balance of spice and herb tones that join in make for a wine of depth, energy and finesse that few can match. It’ll likely outlive me -- and I’m not THAT old. Classic in every sense! Contains 72% Cabernet Sauvignon, 12% Merlot, 10% Petit Verdot and 6% Cabernet Franc. 13.7% ABV. 100
Spottswoode, St. Helena, Napa Valley (California) Cabernet Sauvignon Family Estate Grown 2016 ($225): Pulsing with energy and tension between the myriad elements within, this is a wine for the ages -- the best I’ve tasted from Spottswoode, and that’s saying something! Winemaker Aron Weinkauf’s deep understanding of the vineyard is realized here, with black and blue fruit, cedar spice box and easy savory notes. The warm 2016 vintage shows zero adverse effect here, with the winery’s typical finessed aromatics translated beautifully and extending into the distance. This is an absolute benchmark of the appellation. Bravissimo! Contains 9% Cabernet Franc and 6% Petit Verdot. 100