AFL Finals History 2000 to 2016 : How Does the 2017 Cohort Stack Up?
Last year, the Western Bulldogs bucked recent Finals history by going on to win the Flag after finishing 7th in the regular home-and-away season.
The 2017 season has been a close one, with any team a genuine chance of dominating and maybe even toppling any other team on a given day. More than once, a team near the foot of the competition ladder has defeated a team near the top, and we sit here at the end of Round 20 with the final 8 far from decided.
In the previous blog we looked at the reduction in vigorish that arose from adding a single bookmaker to the portfolio of options when wagering on the AFL. There we found significant reductions in average vigorish from this simple inclusion.
This year, MoS Funds have for the first time been wagering with bookmakers other than the TAB. It seems obvious that adding a second bookmaker must be beneficial to a bettor unless that second bookmaker offers identical prices to the first. The question is: how beneficial?
We've looked at the topic of uncertainty of outcome and its effects on attendance at AFL games before, first in this piece from 2012 and then again in this piece from 2015.
In both of those write-ups, we used entropy, derived from the pre-game head-to-head probabilities, as our measure of the uncertainty in the outcome. In the first of them we found that fans prefer more uncertainty of outcome rather than less, and in the second that fans prefer the home team to be favourites, but not overwhelmingly so.
Today I want to revisit that topic, using home and away game attendance data from the period 2000 to the end of Round 13 of the 2017 season (sourced from the afltables site), and using as the uncertainty metric the Expected Margin - the expected Home team score less the expected Away team score - according to the MoSHBODS Team Rating System. There has also been a suggestion recently that fans prefer higher-scoring games, so I'll be including MoSHBODS' pre-game Expected Total data as well.
Let's begin by looking at the relationship between expected final margin (from the designated Home team's perspective) and attendance.
There are just over 3,000 games in this sample and the preliminary view from this analysis is that:
Those conclusions are broadly consistent with what we found in the earlier blogs (and with the more general "uncertainty of outcome" hypothesis, which is the name by which this topic goes in the academic literature).
There doesn't, however, appear to be much evidence in this chart of increased attendance at games with higher expected total scoring - an assertion that the following chart supports.
Now there's clearly a lot of variability in attendance in those charts, and whilst Expected Margin might explain some of it, by no means does it explain all of it.
One obvious variable to investigate as a source for explaining more of the variability in attendance is Home team, since some teams are likely to attract higher or lower attendances when playing at home, regardless of how competitive they are.
We see here some quite different patterns in the relationship between Expected Margin and attendance across Home teams, with a number of teams - especially the non-Victorian ones - drawing similar crowds almost regardless of how competitive they were expected to be.
Attendances, of course, are constrained by capacity at many venues, which suggests another dimension on which we might condition the analysis.
Here we consider only the 10 venues at which at least 50 home and away games have been played during the period we're analysing, and we again see a variety of relationships between attendance and expected margin, though the more frequently-used grounds - the MCG and Docklands - do show the inverted-U shape we saw in the first chart.
We could continue to do these partial analyses on single variables at a time, but if we're to come up with an estimate of the individual contribution of Expected Margin and Expected Total to attendance we'll need to build a statistical model.
For that purpose, today I'll be creating a Multivariate Adaptive Regression Spline model (using the earth package in R), which is particularly well-suited to fitting the type of non-linear relationship we're seeing between attendance and Expected Margin.
The target variable for the regression will be Attendance, and the regressors will be:
We'll allow the algorithm to explore interactions, but only between pairs of variables, and we'll stop the forward search when the R-squared increases by less than 0.001.
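To make that specification concrete, here's a minimal sketch of how such a model might be fitted with the earth package. The variable names (ExpectedMargin, ExpectedTotal, HomeTeam, Venue, Interstate) and the games data frame are placeholders for illustration only - they're not the actual regressor list used for the model discussed here.

```r
# Minimal sketch of a MARS model for attendance using the earth package.
# The regressor names and the games data frame are illustrative placeholders.
library(earth)

attendance_model <- earth(
  Attendance ~ ExpectedMargin + ExpectedTotal + HomeTeam + Venue + Interstate,
  data   = games,   # one row per home-and-away game
  degree = 2,       # allow interactions, but only between pairs of variables
  thresh = 0.001    # stop the forward pass when R-squared improves by less than 0.001
)

summary(attendance_model)   # basis functions (hinge terms) and their coefficients
```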
We obtain the model shown at right, the coefficients in which we interpret as follows:
Together, these last two terms create the relationship between attendance and Expected Margin that we saw earlier. The orange portion to the left of about a +3 Expected Margin applies to all games. For games where the Expected Margin is above about +3 points, the red portion applies if the game involves teams from different States or teams from the same State playing out of their home State (for example, in Wellington or at Marrara Oval), and the orange portion applies if the game involves teams from the same State playing in their home State (for example Sydney v GWS at the SCG).
Note that we obtain here not only the inverted-U shape, but also a relationship where attendance drops off more rapidly with negative Expected Margins than it does with positive Expected Margins.
There are a few more interaction terms in the model.
The overall fit of the model is quite good, with almost 80% of the variability in attendance figures being explained (the Generalised R-squared for the model, which provides an estimate of how well the model might be expected to fit other data drawn from a similar sample, is about 76%).
Diagnostic plots reveal that there is some heteroscedasticity, however, with larger errors for games with higher fitted attendance levels.
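For anyone wanting to reproduce that kind of check, a couple of lines of R suffice, assuming the attendance_model object from the earlier sketch:

```r
# Residuals versus fitted attendance; a fanning-out pattern to the right is
# the heteroscedasticity described above.
plot(fitted(attendance_model), residuals(attendance_model),
     xlab = "Fitted attendance", ylab = "Residual")
abline(h = 0, lty = 2)

plot(attendance_model)   # earth's built-in model-selection and residual plots
```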
It could be that some systematic sources of error remain and that the fit could be improved by, for example, considering the criticality of a particular game in the context of the season or the availability or unavailability of key players. Weather too would doubtless play a role, and maybe even the quality of the other games in the round.
Nonetheless, this model seems a reasonable one for at least first-order estimations of the magnitudes and shapes of the relationships between attendance and Expected Margin, and between attendance and Expected Total score. Both Expected Margin and Expected Total have some influence, but the rate at which attendance varies with changes in either depends on the specifics of the game being considered - in particular, who is playing whom, and where.
(This piece originally appeared in The Guardian newspaper as https://www.theguardian.com/sport/datablog/2017/jun/15/matter-of-stats-afl-datablog-offence-defence)
There are a lot of sportswriters and sports fans who are adamant that it’s impossible to compare sporting teams across seasons and eras. Taken literally, that’s a truism, because every sport evolves, and what worked for a great team in, say, the 1980s, might not work – or even be within the rules of the game – today.
Still, if asked to identify some of the best AFL teams of recent years, most would almost certainly include the 2000 Essendon, 2011 Geelong, and 2012 Hawthorn teams. We all have, if nothing else, some intuitive sense of relative team abilities across time.
As imperfect as it is, one way of quantifying a team’s relative ability is to apply mathematics to the results it achieves, adjusting for the quality of the teams it faced. Adjustment for opponent quality is important because, were we to use just raw results, a 90-point thrashing of a struggling team would be treated no differently in our assessment of a team’s ability than a similar result against a talented opponent.
This notion of continuously rating individuals or teams using results adjusted for opposition quality has a long history and one version of the method can be traced back to Arpad Elo, who came up with it as a way of rating chess players as they defeated, drew or lost to other players of sometimes widely differing abilities. It’s still used for that purpose today.
In sports like football, Elo-style rating systems can be expanded to provide not just a single rating for a team, but a separate rating for its offensive and defensive abilities, the former based on the team’s record of scoring points relative to the quality of the defences it has faced, and the latter on its record of preventing points being scored relative to the quality of the offences it has faced.
If we do this for the AFL we can quantify the offensive and defensive abilities of teams within and across seasons using a common currency: points.
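By way of illustration only - this is a toy version of the general idea, not the MoSHBODS methodology, whose details are in the post linked below - an offence/defence rating update might look something like this, with all ratings expressed in points relative to an average team:

```r
# Toy offensive/defensive rating updates (not the MoSHBODS implementation).
# Ratings are in points relative to an average team; league_avg and k are
# arbitrary illustrative values.
update_ratings <- function(ratings, home, away, home_score, away_score,
                           league_avg = 90, k = 0.1) {
  # Expected score for each side: league average, plus its own offensive
  # rating, less the opponent's defensive rating
  exp_home <- league_avg + ratings[[home]]$off - ratings[[away]]$def
  exp_away <- league_avg + ratings[[away]]$off - ratings[[home]]$def

  # Scoring more than expected lifts your offensive rating and lowers the
  # opponent's defensive rating (and vice versa)
  ratings[[home]]$off <- ratings[[home]]$off + k * (home_score - exp_home)
  ratings[[away]]$def <- ratings[[away]]$def - k * (home_score - exp_home)
  ratings[[away]]$off <- ratings[[away]]$off + k * (away_score - exp_away)
  ratings[[home]]$def <- ratings[[home]]$def - k * (away_score - exp_away)

  ratings
}

# Example: both teams start at average (0, 0) and we process one result
ratings <- list(Geelong  = list(off = 0, def = 0),
                Hawthorn = list(off = 0, def = 0))
ratings <- update_ratings(ratings, "Geelong", "Hawthorn", 110, 85)
```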
There are many ways to do this, and a number of websites offer their own versions, but the methodology we’ll use here has the following key characteristics:
(For more details see this blog post on the MoSHBODS Team Rating System)
Applying this methodology generates the data in the chart below, which records the offensive and defensive ratings of every team from seasons 2000 to 2016 as at the end of their respective home and away season. Teams that ultimately won the Flag are signified by dots coloured red, and those that finished as Runner Up as dots coloured orange. The grey dots are the other teams from each season – those that missed the Grand Final.
We see that teams lie mostly in the bottom-left and top-right quadrants, which tells us that teams from the modern era that have been above-average offensively have also tended to be above-average defensively, and conversely that below-average offensive teams have tended to be below-average defensively as well.
The level of association between teams’ offensive and defensive ratings can be measured using something called a correlation coefficient, which takes on values between -1 and +1. Negative values imply a negative association – say if strong offensive teams tended to be weak defensively and vice versa – while positive values imply a positive association, such as we see in the chart.
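As a tiny worked example using made-up ratings (not the actual data), R's cor function returns this coefficient directly:

```r
# Made-up offensive and defensive ratings for six hypothetical teams
offence <- c(12.3, -4.1, 7.8, -10.2, 3.5, -1.9)
defence <- c( 9.7, -6.3, 5.1,  -8.8, 1.2, -3.4)

cor(offence, defence)   # close to +1: good offences paired with good defences
```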
The correlation coefficients for team ratings in the current and previous eras appear in the table at right. We see that the degree of association between team offensive and defensive ratings has been at historically high levels in the modern era. In fact, it’s not been as high as this since the earliest days of the VFL.
In other words, teams’ offensive and defensive ratings have tended to be more similar than they have been different in the modern era.
By way of comparison, here’s the picture for the 1980 to 1999 era in which the weaker relationship between teams’ offensive and defensive ratings is apparent.
Note that the increase in correlation between teams’ offensive and defensive abilities in the modern era has not come with much of a reduction in the spread of team abilities. If we ignore the teams that are in the lowest and highest 5% on offensive and defensive abilities, offensive ratings in the modern era span about 31 points and defensive ratings about 34 points. For the 1980-1999 era the equivalent ranges are both about 2 points larger.
One plausible hypothesis for the cause of the closer association between the offensive and defensive abilities of modern teams would be that coaching and training methods have improved and served to reduce the level of independent variability in the two skill sets.
The charts for both eras have one thing in common, however: the congregation of Grand Finalists – the orange and red dots – in the north-eastern corner. This is as we might expect because this is the quadrant for teams that are above-average both offensively and defensively.
Only a handful of Grand Finalists in either era have finished their home and away season with below-average offensive or defensive ratings. And, in the modern era, just two teams have gone into the Finals with below-average defensive ratings - Melbourne 2000 and Port Adelaide 2007, both of which finished as runners up in their respective seasons.
Melbourne finished its home and away season conceding 100 points or more in 4 of its last 7 games, and conceding 98 and 99 points in two others. Those results took a collective toll on its defensive rating.
Port Adelaide ended their 2007 home and away season more positively but probably not as well as a team second on the ladder might have been expected to – an assessment that seems all the more reasonable given the Grand Final result just a few weeks later. In that 2007 Grand Final, Geelong defeated them by 119 points.
The chart for the modern era also highlights a few highly-rated teams that could consider themselves unlucky to have not made the Grand Final in their years – the Adelaide 2016 and St Kilda 2005 teams in particular, though that Saints’ rating was somewhat elevated by its 139-point thrashing of the Lions in the final home and away game of that season.
Based on the relatively small sample of successful teams shown in this chart, it’s difficult to come to any firm conclusions about the relative importance of offensive versus defensive ability for making Grand Finals and winning Flags, and impossible to say anything at all about their relative importance in getting a team to the finals in the first place.
To look at that issue we use the ratings in a slightly different way. Specifically, we use them to calculate the winning rates of teams classified on the basis of their offensive and defensive superiority or inferiority at the time of their clash.
Those calculations are summarised in the table below, which also groups games into eras to iron out season to season fluctuations and make underlying differences more apparent.
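As a rough sketch of the kind of calculation involved - the column names here (home_off, home_def, era, game_type and so on) are invented for illustration, not drawn from the actual data set - the win rates could be tabulated along these lines:

```r
# Sketch: among games where one side was defensively stronger but offensively
# weaker than its opponent, how often did that side win? Column names are
# hypothetical placeholders.
library(dplyr)

games %>%
  filter((home_def > away_def) != (home_off > away_off)) %>%   # split superiority only
  mutate(def_superior_is_home = home_def > away_def,
         def_superior_won     = ifelse(def_superior_is_home,
                                       home_score > away_score,
                                       away_score > home_score)) %>%
  group_by(era, game_type) %>%        # game_type: home-and-away versus finals
  summarise(win_rate = mean(def_superior_won), .groups = "drop")
```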
The percentages that are most interesting are those in the left-most column in each block.
They tell us how successful teams have been when they have found themselves stronger defensively but weaker offensively than their opponents.
What we find is that, in every era since WWII:
We should note though that none of the percentages are statistically significantly different from 50%, so we can’t definitively claim that, in any particular era, defensive superiority has been preferable to offensive superiority in the home and away season or that the opposite has been true in finals. That’s the clear tendency, but the evidence is statistically weak, so the differences we see might be no more than random noise.
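To illustrate the sort of check that sits behind that statement - the counts here are invented purely for the example - a simple binomial test against a 50% baseline might look like this:

```r
# Hypothetical: the defensively-superior side won 260 of 500 such games (52%).
# Is that distinguishable from a coin flip? The p-value here is well above 0.05.
binom.test(x = 260, n = 500, p = 0.5)
```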
In any case, the effect sizes we see are quite small – around 1 to 2 percentage points – so practically it makes more sense to conclude that offensive and defensive abilities have been historically of roughly equal importance to a team’s success in home and away games and in finals.
So, where do the current crop of teams sit?
The chart below maps each of the 18 current teams’ ratings as at the end of Round 12 and the ratings of all 34 Grand Finalists from the period 2000-2016 as at the end of Round 12 in their respective years.
Adelaide stand alone offensively, with a rating almost as good as that of the 2000 Essendon team, who were 12 and 0 after Round 12 having averaged just over 136 points per game in a year where the all-team average was 103 points per team per game across the entire home and away season. The Dons were then scoring at a rate just over 30% higher than an average team.
This year, Adelaide are averaging just under 119 points per game in a season where the all-team average is just under 91 points per game, which is also about 30% higher. They are, clearly, a formidable team offensively, though they’ve yet to impress consistently defensively.
The 2017 Port Adelaide and GWS teams come next, both located just outside the crop of highest-rated Grand Finalists, and having combined ratings a little below Adelaide’s. This week’s loss to Essendon had a (quite reasonably) significant effect on Port Adelaide’s rating, as did GWS’ loss to Carlton.
Geelong, Collingwood, Sydney, Richmond and the Western Bulldogs are a little more south-east of that prime Flag-winner territory, and would require a few above-expectation performances in upcoming weeks to enter that area. The Bulldogs in particular would need to show a little more offensive ability to push into the group, though they had a similar rating at the same point last season, so who’s to say they need to do anything much more.
Collingwood’s relatively high rating might raise a few eyebrows, but they have, it should be noted, generated more scoring shots than their opponents in their losses to the Western Bulldogs in Round 1 and Essendon in Round 5, and no more than four fewer scoring shots in their losses to Richmond in Round 2, St Kilda in Round 4, Carlton in Round 7, GWS in Round 8, and Melbourne in Round 12. They’re currently ranked 7th on combined rating.
Essendon, Melbourne and St Kilda form the next sub-group – rated slightly above average on combined rating but below almost all previous Grand Finalists at the equivalent point in the season.
No other team has a combined rating that is positive or that exceeds that of any Flag winner at this point in the season since 2000. As such, the remaining seven teams would make history were they to win the Flag.
Still, there’s a lot that can happen between now and the end of the season, as we can see in this final chart, which shows 2017 team ratings and the ratings of all non-Grand Finalists from the seasons from 2000 to 2016.
There are plenty of sides in the chart that were rated very highly at the end of Round 12 that never got as far as Grand Final day.
For example, the Geelong 2010 team was 10 and 2 after 12 rounds, one game clear at the head of the competition ladder with a 156 percentage. That team went 7 and 3 over the remainder of the home and away season to finish runners up in the minor premiership before being eliminated by the minor premiers, Collingwood, 120-79 in a Preliminary Final.
And, in any case, in a year where results have constantly surprised and where two wins currently separate 5th from 17th on the ladder, no team can reasonably feel assured of progressing into September, let alone to the MCG on the 30th.
I've spoken to quite a few fellow-modellers about the process of creating and optimising models for forecasting the results of AFL games. Often, the topic of what performance metric to optimise arises.
We've used the surprisal metric a number of times here on MoS as a measure of how surprised we're entitled to feel about a particular head-to-head result.
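In its standard information-theoretic form (the MoS posts may differ in details such as the treatment of draws), the surprisal of a result is just the negative base-2 logarithm of the probability that was attached to it:

```r
# Textbook surprisal: fewer bits means a less surprising result
surprisal <- function(p_actual_result) -log2(p_actual_result)

surprisal(0.80)   # a heavy favourite wins: about 0.32 bits
surprisal(0.20)   # a rank outsider wins:   about 2.32 bits
```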
There are, clearly, a lot of people who are firmly convinced that some teams win a greater or lesser share of close games than they "should".
I've been projecting final ladders during AFL seasons for at least five years now, where I take the current ladder and project the remainder of the season thousands of times to make inferences about which teams might finish where (here, for example, is a projection from last year). During that time, more than once I've wondered whether the projections have incorporated sufficient variability - whether the results have been overly optimistic for strong teams and unduly pessimistic for weak teams.
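The basic machinery is simple enough to sketch - everything here (the fixture data frame, the fixed home_win_prob column, the current_wins vector) is hypothetical scaffolding rather than the MoS implementation, and using fixed win probabilities like this is exactly the kind of choice that might understate the true variability:

```r
# Simulate the remaining fixtures many times and count top-8 finishes.
# fixture: data frame with columns home, away, home_win_prob
# current_wins: named numeric vector of wins to date, one entry per team
simulate_top8 <- function(current_wins, fixture, n_sims = 10000) {
  top8_count <- setNames(numeric(length(current_wins)), names(current_wins))
  for (i in seq_len(n_sims)) {
    wins <- current_wins
    for (j in seq_len(nrow(fixture))) {
      home_won <- runif(1) < fixture$home_win_prob[j]
      winner   <- if (home_won) fixture$home[j] else fixture$away[j]
      wins[winner] <- wins[winner] + 1
    }
    top8 <- names(sort(wins, decreasing = TRUE))[1:8]   # ignores percentage tiebreaks
    top8_count[top8] <- top8_count[top8] + 1
  }
  top8_count / n_sims   # estimated probability of finishing in the final 8
}
```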
A few weeks ago, I wrote a piece describing the construction of an in-running model for the final margin of an AFL game. Today, I'm going to use the same data set (viz, score progression data from the www.afltables.com website, covering every score in every AFL game from 2008 to 2016) to construct a different in-running model, this one to project the final total score.
Only a few times in my professional career as a data scientist have I had the opportunity to use mathematical graph theory, but the technique has long fascinated me.
Briefly, the theory involves "nodes" (also called vertices), which are entities like books, teams or streets, and "edges", which signify relationships between the nodes - such as, in the books example, having the same author. Edges can denote present/absent relationships such as friendship, or they can denote cardinality, such as the number of times a pair of teams have played. Where the relationship between two nodes is mutual rather than flowing from one to the other (eg friendship), the edges are said to be undirected; where it flows from one node to another (eg Team A defeated Team B), they're said to be directed.
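As a concrete (and entirely made-up) example of the directed case, here's how a handful of results might be represented using the igraph package in R:

```r
# Teams as nodes, with a directed edge from each winner to the team it defeated.
# The results below are invented for illustration.
library(igraph)

results <- data.frame(
  winner = c("Geelong", "Sydney",  "Geelong"),
  loser  = c("Sydney",  "Carlton", "Carlton")
)

g <- graph_from_data_frame(results, directed = TRUE)
degree(g, mode = "out")   # out-degree = number of wins recorded for each team
```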
In the previous post we looked at the calibration of two in-running probability models across the entire span of the contest. For one of those models I used a bookmaker's pre-game head-to-head prices to establish credible pre-game assessments of teams' chances.
This week, there's been a lot of Twitter-talk about the use of in-running probability models, inspired in part no doubt by the Patriots' come-from-behind victory in the Superbowl after some models had estimated their in-running probability as atom-close to zero.
The analysis used in this blog was originally created as part of a Twitter conversation about the ability of good teams to "win the close ones" (a topic we have investigated before here on MoS - for example in this post and in this one). As a first step in investigating that question, I thought it would be useful to create a cross-tab of historical V/AFL results based on the final margin in each game and the level of pre-game favouritism.
Last year, predictions based on the MoSSBODS Team Rating System proved themselves to be, in Aussie parlance, "fairly useful". MoSSBODS correctly predicted 73% of the winning teams and recorded a mean absolute error (MAE) of 30.2 points per game, its opinions guiding the Combined Portfolio to a modest profit for the year. If it had a major weakness, it was in its head-to-head probability assessments, which, whilst well-calibrated in the early part of the season, were at best unhelpful from about Round 5 onwards.
With FMI today posting its assessment of the 2017 AFL draw, we now have (at least) the following comparable analyses:
I've seen it written that the best blog posts are self-contained. But as this is the third year in a row where I've used essentially the same methodology for analysing the AFL draw for the upcoming season, I'm not going to repeat the methodological details here. Instead, I'll politely refer you to this post from last year, and, probably more relevantly, this one from the year before if you're curious about that kind of thing. Call me lazy - but at least this year you're getting the blog post in October rather than in November or December.
(This piece originally appeared in the Guardian, and revisits the topic of defining a typology for Grand Finals, which I first looked at in 2009 where I came up with a similar solution, and again in 2014 where I used a fuzzy clustering approach.)
For fans, even casual ones, AFL Grand Finals are special, and each etches its own unique, defining legacy on the collective football memory.