The National Consensus Ranking of Every Full-Length Wes Anderson Film
This definitive response to one of the most popular open questions of the digital age uses curatorial journalism to devise what may be the most accurate assessment of a beloved American director ever.
Note: The first four sections of this long Retro report are provided for free. To access the full report—which includes not only an aggregate, forty-source ranking of all of Anderson’s full-length works but also links to, and a ranking of, almost all his short films (including exceedingly rare ones many readers won’t have seen before)—sign up for a free 7-day trial of Retro below.
Introduction
This is an essay I had to write.
It’s not just that I like to rank things and that Wes Anderson is my favorite director and that his films are some of my favorite films, though all those things are true.
It’s not just that I’m a legacy movie reviewer at Rotten Tomatoes who has reviewed movies professionally for several publications and now reviews television and film professionally here at Retro, though, as is self-evident from this report, that’s true also.
It’s not even that Anderson is a good director to write about because he has a unique poetics, his films are among the most beloved of the last thirty years, and he ranks as one of the few auteurs, across scores of artistic genres and subgenres, whom laypeople have broadly heard of because his films are fun as well as visionary—though I think that would be an eminently fair assessment.
Beyond all that, it’s the simple fact that people love to rank Wes Anderson movies.
No—it’s more than that. People are obsessed with ranking Wes Anderson movies.
Which means that if you’re going to write about television and film in this decade you are at some point going to have to wrestle with the perpetual debate over which of the director’s often strange, sometimes twee, always wildly idiosyncratic films are the best.
Ranking Wes Anderson films has somehow become a cottage industry in itself, a national preoccupation I don’t see matched in discussions of almost anything else in the recent history of television and film—though the ongoing obsession with Joss Whedon’s Buffy the Vampire Slayer (and fan fiction pertaining thereto) certainly comes close, and indeed might have surpassed the fixation with ranking Anderson’s artistic output if Whedon hadn’t gotten himself into some serious trouble in recent years.
So will readers of the comprehensive, forty-expert-source aggregate ranking below now be able to divide Anderson films into discrete tiers, using the data in this report?
Certainly, though it’s worth noting that one of the fascinating discoveries one makes in ranking Wes Anderson films via curatorial journalism is how variable individual critics’ rankings are. Sure, there are some trendlines, but fewer than you might expect.
Why Rank Anything?
Between 2008 and 2013, I was responsible for the national rankings of graduate programs in the academic discipline that was then—statistically speaking—the fastest-growing field in the world: Creative Writing.
The history of Creative Writing as an academic discipline in the West (including all the controversies over deeming it an academic discipline in the first place) was the subject not just of my doctoral dissertation at the University of Wisconsin-Madison but also of a book I authored, published by Bloomsbury in 2018. I was one of the leading experts in this area during the 2010s, per those who had previously worn that particular mantle, such as the well-respected researchers Tom Kealey (link) and D.G. Myers (link), the former of whom I ultimately co-authored another book with (link).
Ranking Master of Fine Arts programs in Creative Writing for many years as a paid researcher for Poets & Writers—the leading trade publication in the field—was a wildly controversial endeavor, largely because, instead of doing what U.S. News & World Report had done back when it still ranked such programs (sending program faculty a one-question survey about which programs they most admired), I created a twenty-category assessment methodology that for the first time ever gave a voice to aspiring writers rather than the clubby, aloof, self-interested, self-aggrandizing faculty class.
Under the Poets & Writers rankings, the concerns of those who were actually planning on going into debt to get an MFA—student loan debt, cost of living, student-faculty ratio, job placement data, and so on—were given page-space for the first time. As you might imagine, the institutions that had been getting rich off bilking young people for a non-professional graduate degree (meaning a degree that doesn’t assure you of being sufficiently qualified for any job) were extremely unhappy with me. Instead of making coherent critiques of the rubric or the values that undergirded it, however, instead of proposing alternatives, and instead of offering up internal data to make the rankings ever more accurate, most of the rankings’ biggest detractors simply lined up to attack the very notion of ranking.
Ranking anything.
This—the idea that rankings are always bad—has been a popular rallying cry in many fields, not just Creative Writing. And it would make sense, too, if rankings were ever devised to determine what is “best” in some absolute sense. But of course responsible rankings tell you what’s being assessed in explicit terms, and therefore communicate to you also what is not being ranked. By way of example, these days most rankings of the best baseball players in history will tell you they’re using the new “WAR” statistic (Wins Above Replacement) to measure player quality; if you happen to find that new statistic convincing, great, and if not, you can ignore that ranking. Just so, when we rank films or other artworks, we’re merely gauging the popular opinion of the moment—not claiming that art can be objectively assessed (a red herring often pulled out by critics of rankings). Just so, both U.S. News and Poets & Writers were perfectly clear on what they were or were not assessing; the rankings I put together came with a fifty-page methodology article, and as for U.S. News, well, it had its “questionnaire” (though I’m not sure a one-question document can be called a questionnaire), which faculty critics of the Poets & Writers rankings not surprisingly had little negative to say about because it polled them and did absolutely nothing else. This is another way of saying, I suppose, that rankings are always political in some way; that opposition to a ranking is always political in some way; and that anyone who claims to have a “pure” motive in attacking a given ranking is probably full of it in some way.
What is certain is this: for all that rankings can be done poorly—and often are—the idea itself has obvious merit. Simply put, rankings can be a force for enormous good.
The utilities of rankings are many, and are often overlooked. They’re helpful for those new to an area of knowledge who are looking for expert guidance. They launch new conversations about the subject in question that can be illuminating for all concerned. They act as a rite of celebration that honors and raises the profile of the subject or the persons or the institutions being ranked. They produce meta-commentary about how we rank not just one particular thing but anything, which is often the very best way to talk plainly about what we value and why, what we want out of life and why, who we are and why we think it’s important for us to be who we are. The existence of rankings makes us feel less in the dark and alone—more seen—because the simple fact is that rankings are a type of narrative, humans are obsessed with narrative, and when and as we create new narratives we are participating in an ancient endeavor that all humans can in some way relate to whether or not we’re much interested in the subject at hand.
Rankings can also have subtle, esoteric benefits we never speak about. Imagine that you’re not flush with cash and need to stretch your entertainment dollar as far as you possibly can; could it help you, in deciding which two or three Wes Anderson films you can afford to rent from a streaming service, to know which ones—normatively speaking, of course—you’re statistically most likely to find abiding pleasure in? What if you want to test the waters with Wes Anderson without investing too much of your valuable time? Does it help to know which films of his those who love his work deem his best, and therefore worthy exemplars for those dipping their toes in Wes’s waters? Well, of course it does. What if you’re a student and you’ve decided to make an academic study of a single director? Could you make use of how others see Anderson’s oeuvre as both an entry-point for how you see it (the same or differently) and a guide to what types of value and meaning people appear to be searching for in Wes Anderson? Of course you could.
Are there positive downstream effects that attach to the publication of any ranking? Clearly so. If a ranking is ever so slightly more likely to save you time in finding out which Anderson films you favor—or even protect you from making the mistake of watching one random Anderson film, deciding you dislike it, and never watching him again—does it not also make you slightly more likely to integrate Anderson into your life and your own creative pursuits by (once you’ve come to appreciate Anderson) searching out more artists like him, or who were in turn influenced by him, or who’ve written art criticism that expands upon the clever discussions you’re already having in your head and heart about his themes, motifs, aesthetic, poetics, and so on, and how any or all of these relate to your own experience as a human? Of course it does. How many of us have brought something meaningful into our lives—a restaurant, a tourist spot, a music album, even a country we choose to visit on vacation—because we used a website like Yelp or TripAdvisor or read a listicle in some major-media publication?
Has a romance ever been formed by two people who discovered one another online via the fact that they, uniquely among Anderson’s fan base, most love his least admired film?
I haven’t a doubt about it in the world.
Whenever we fix a fact, however imperfectly or subjectively or transiently, we are also creating a location where minds can meet and intermix in surprising ways. If we never set about fixing facts, even temporarily, in this position or that one—that is, if we never venture to map anything—we hardly ever know how to find one another in ways cheap or profound, transient or eternal, wise or puerile. And since serendipitous meetings are another thing that makes life worth living, it seems to me that anything that facilitates such discoveries has some innate value. The idea of “conversation-starters” being valuable may have long ago become cliché, but it’s cliché because there is an indisputable measure of truth to it.
There’s also special value in curatorial rankings—curations of all existing rankings.
The biggest such value is, I admit, rather cheeky, inasmuch as meta-rankings render instantly obsolete all existing individual rankings from a statistical standpoint—if not, surely, a qualitative one (after all, we all have our favorite critics and will value their insights more than we do others’). In other words, if you generally dislike rankings or even the notion of ranking things, you should love curatorial rankings, because they tend to permanently settle questions that would otherwise be subjected to ever more microanalysis in perpetuity. It’s nice when a single act of journalism can place a capstone atop a running discussion whose usefulness may have come to an end.
Curatorial rankings also—maybe paradoxically—raise the profile of their sources by letting people know they exist (and indeed the ranking below links to all its sources).
And they work to iron out the many idiosyncratic, unstated biases that any one ranker might have while also honoring such idiosyncrasies by noting that they still deserve to be part of any aggregation of data. Curatorial rankings save readers time and energy and provide a useful macroanalytical lens on subjects in which we often get lost in the weeds.
All of the above said, of course it’s the case that all rankings are flawed; all those who make rankings, being human, are flawed; all rankings create a flattening effect in one way or another; all rankings wash away a certain amount of the proper celebration of idiosyncrasy that makes living a worthy pursuit; all rankings reduce what at least in some way must be ineffable to the drudgery of numbers; all rankings can mislead or ill serve those who either (a) do not know their methodology, or (b) have special aims or requirements or needs that no ranking intended for a general audience can address.
All rankings can be bastardized, coopted, or misread, and indeed this often happens—especially if there is in some direct or indirect way money involved in how the ranking shakes out, as is often the case.
What This Ranking Is Not—and Some Minor Observations
I could offer some art criticism here focusing on Wes Anderson’s metamodernism; his interplay of the cerebral and the emotional; and his endlessly analyzed and debated cinematography—which involves symmetry, color, perspective, music, accent, tone, litotes, absurdism, and more in ways you can easily track throughout his films—but I think it better, as this is a work of curatorial journalism rather than proper art criticism, to make observations more in the vein of data analytics than critical theory.
In compiling this aggregate ranking, I discovered—among much else—the following:
A small number of critics appear to have semi-consciously rated Wes Anderson’s stop-motion animated films slightly lower than his live-action films as a matter of preference for the latter visual format. Only a fraction of these critics admit to their bias openly; another fraction blithely imply that Anderson’s two stop-motion films are intended for a special audience (children) when that is clearly not the case. All of Anderson’s works are intended for adults—it’s simply that some can also be enjoyed by children.
The most avidly disagreed-upon Wes Anderson films are quite clearly Asteroid City (which ranked as high as second on one critic’s list and as low as last on another); The Life Aquatic with Steve Zissou; Moonrise Kingdom; and—albeit to a lesser degree—the recent anthology film The French Dispatch, which seems to suffer for being a series of vignettes in the same way Fantastic Mr. Fox and Isle of Dogs “suffer” (among some critics, at least) for being stop-motion animated.
As one might expect, the reasons critics usually attach to giving a lower-than-expected rating to a Wes Anderson film are one of these three: (i) the film is too twee; (ii) the film is too obtuse; (iii) the film excessively echoes another work by Anderson.
Bottle Rocket is clearly near-universally seen as merely a “proto-Anderson” film—offering some elements that would later be critical to his visual aesthetic and tonal conceits but not really achieving these with any degree of surehandedness.
Some may wonder how the ranking below differs from simply ordering the Rotten Tomatoes or Metacritic scores of all of Anderson’s films. Those who ask this have likely never been a film critic—as no critic would want their analyses bastardized in this way (meaning that none of the critics whose reviews Rotten Tomatoes and Metacritic use to create their scores would want those scores, which the critics had nothing to do with whatsoever, to then be used for some aggregate assessment).
Both Rotten Tomatoes—a binary system in which a given review is said to either find a film “fresh” or “rotten”—and Metacritic (which estimates, from basically nowhere, the number grade it imagines a critic would give a film based on the language the critic has used to describe the film) are entirely dissimilar to a single critic having the chance to, in a single sitting, order the works of a single director by their preferences.
And it’s not just that Rotten Tomatoes and Metacritic in a sense don’t let critics speak for themselves, but rather quantify a critic’s words in a way the critic might object to.
Beyond this, a single critic generating a single ranking of Anderson films gets to do so all at once, whereas the reviews a single critic has given each Anderson film over a period of thirty years might track only uneasily with one another because the critic has matured over time. So simply using Rotten Tomatoes or Metacritic to aggregate data is guaranteed to commingle apples and oranges, along with (as noted) quite a lot of guesswork injected by those two websites. By comparison, a curatorial ranking like this one honors critics by letting them speak clearly, and in a single review, for themselves.
Methodology
Only rankings published in 2023 were considered here, as only these include all current Wes Anderson films (of which there are eleven in the “full-length” category).
Short films are considered in a separate category, though their average position in rankings that include them allows us to extrapolate how they’re generally perceived against Anderson’s full-length features (even if the number of critics placing them within the entirety of Anderson’s oeuvre in this way is necessarily far smaller than the total number of critics whose assessments are catalogued here). Simply put, these short films are seen as lesser works compared to their full-length peers, and this assessment appears to be both categorical and universal. It’s for this reason short films get their own aggregate ranking below.
The ranking system used in this curation is simple: only expert rankings with eleven films ranked are used; the number of points a given film takes from a given ranking is equal to that film’s placement in the ranking (i.e., 1, 2, 3, 4, and so on); and the lower the total number of points a film ends up with, the higher it is in the aggregate ranking below.
This system also allows us to pinpoint with precision the “average ranking” (among eleven films) for each film, as we simply take the number of points a film has and divide it by the number of rankings curated for the overall ranking. This same system, admittedly at a slightly smaller “n” scale—where “n” is the number of available data-sets to work from—allows us to determine the “average ranking” of Anderson’s short films when set within works of a similar duration.
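For readers who want to see the arithmetic concretely, the point-sum system described above can be sketched in a few lines of code. This is an illustrative sketch only—the two sample critic lists below are invented for brevity (three films each), not drawn from the forty curated sources:

```python
def aggregate(rankings):
    """Sum each film's placement across all rankings.

    A film in first place earns 1 point, second place 2 points, and so on;
    the fewer total points, the higher the film lands in the aggregate.
    Returns (film, total points, average ranking) tuples, best first.
    """
    points = {}
    for ranking in rankings:
        for place, film in enumerate(ranking, start=1):
            points[film] = points.get(film, 0) + place
    n = len(rankings)  # average ranking = total points / number of lists
    return [(film, pts, pts / n)
            for film, pts in sorted(points.items(), key=lambda kv: kv[1])]

# Two invented critic lists, for illustration only:
critic_a = ["The Grand Budapest Hotel", "Rushmore", "Bottle Rocket"]
critic_b = ["Rushmore", "The Grand Budapest Hotel", "Bottle Rocket"]

for film, pts, avg in aggregate([critic_a, critic_b]):
    print(f"{film}: {pts} points (average ranking {avg})")
```

In this toy example Bottle Rocket takes 3 + 3 = 6 points and an average ranking of 3.0, while the other two films tie at 3 points apiece—a reminder that with small “n” the aggregate can produce ties that only more source rankings can break.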
It’s worth noting that the dozens of film critics whose reviews were curated for this ranking almost universally said that they like all of Wes Anderson’s films—even those they ranked toward the bottom of their respective lists.
It’s also useful to comment briefly on something readers will immediately see when they look at the rankings below: that Wes Anderson’s oeuvre very clearly breaks down, in the view of critics, into three distinct tiers of quality with one film—perhaps not coincidentally, Anderson’s latest—dividing opinions sufficiently to force Retro to put it in a tier of its own.
Note: In the interest of completeness, Retro offers an additional benefit in this report not identified in its title. Beyond a preliminary ranking of all Wes Anderson’s short films, you can also directly click over to those films—in their entirety—in almost all cases; links are provided within the ranking itself (see the second ranking, of short films, below). I’m excited about this because I suspect that even many readers who love Wes Anderson have never seen all his films.
The National Consensus Ranking of Wes Anderson Films
{with ranking, film title, release date, points, and ranking average for each film; a link to the official trailer for the film, or the closest approximation available, can be found by clicking on the link at the release date}