The 2014 Seasons We Might Have Had
Each week the TAB Bookmaker forms opinions about the likely outcome of upcoming AFL matches and conveys those opinions to bettors in the form of market prices. If we assume that these opinions are unbiased reflections of the true likely outcomes, how might the competition ladder have differed from what we have now?
THE METHODOLOGY
To answer that question I've simulated the season to the end of Round 19 by taking the following steps:
- Take the TAB Bookmaker's pre-game prices and convert them into victory probabilities using the Overround Equalising approach
- Convert these probabilities into expected victory margins by assuming that such margins come from a Normal Distribution with a standard deviation of 36 points (ie find the mean required such that a Normal Distribution with that mean and a standard deviation of 36 points would yield a value greater than 0 with the probability estimated for the game; a sketch of these first two steps appears after this list)
- Run 25,000 simulations of the entire season deriving the margin for each game using the expected margins derived in Step 2 and assuming that the actual result comes from a (discretised) Normal Distribution with a mean equal to the expected margin and a standard deviation of 36 points.
- Convert these margins into actual scores for the home and away teams by assuming that the total score for each game comes from a Normal Distribution with a mean of 183.4 points and a standard deviation of 31.7 points.
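For concreteness, here's a minimal sketch of Steps 1 and 2 in Python. The prices shown and the function names are purely illustrative, not part of any actual implementation:

```python
from scipy.stats import norm

MARGIN_SD = 36  # assumed standard deviation of game margins (points)

def probability_from_prices(home_price, away_price):
    """Overround Equalising: assume the overround is levied equally on
    both teams, so the home team's probability is its normalised
    inverse price."""
    inv_home, inv_away = 1 / home_price, 1 / away_price
    return inv_home / (inv_home + inv_away)

def expected_margin(home_probability, sd=MARGIN_SD):
    """Find the mean such that a Normal(mean, sd) distribution exceeds
    zero with probability equal to the home team's victory probability."""
    return sd * norm.ppf(home_probability)

# Hypothetical prices: home team at $1.60, away team at $2.45
p_home = probability_from_prices(1.60, 2.45)  # about 0.605
mu = expected_margin(p_home)                  # about +9.6 points
```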
In place of Steps 1 and 2 we might instead use the negative of the start being offered by the TAB Bookmaker in the Line market, but this is complicated by the fact that starts are sometimes distorted for games with near-equal favourites.
Essentially, what we're doing is converting the TAB Bookmaker's opinions into probabilistic statements about the final score of each game and then playing out the entire season so far 25,000 times based on stochastic realisations of those opinions given certain distributional assumptions about total scores and game margins.
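Steps 3 and 4 might then be sketched as follows. Again, this is only illustrative and rests on the distributional assumptions described above; the function and variable names are my own:

```python
import numpy as np

rng = np.random.default_rng(2014)  # any seed will do

MARGIN_SD = 36      # standard deviation of game margins (points)
TOTAL_MEAN = 183.4  # mean total score per game (points)
TOTAL_SD = 31.7     # standard deviation of total scores (points)

def simulate_game(expected_margin):
    """Draw one realisation of a game: a (discretised) margin plus a total
    score, then split the total into home and away scores."""
    margin = round(rng.normal(expected_margin, MARGIN_SD))
    total = rng.normal(TOTAL_MEAN, TOTAL_SD)
    home_score = round((total + margin) / 2)
    away_score = home_score - margin
    return home_score, away_score

def simulate_season(games, n_sims=25_000):
    """games is a list of (home_team, away_team, expected_margin) tuples,
    one per game played so far; returns n_sims replayed seasons."""
    return [[(home, away, *simulate_game(mu)) for home, away, mu in games]
            for _ in range(n_sims)]
```

From each replayed season the ladder can then be rebuilt in the usual way from the simulated scores.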
THE RESULTS
The Manhattan-style bar charts of each team's final ranking across the 25,000 simulations have a pleasing aesthetic about them (ie I think they look good).
One surprising aspect of them is the range of ladder positions they span for most teams - yet another reminder, perhaps, of the significant role that luck plays in the outcome of any season.
As a stark example of this, consider the fact that, in about 6% of the simulations - which, remember, are based solely on the TAB Bookmaker's considered opinions about teams' genuine victory chances in every game at the time that game was about to take place - Sydney sit outside the Top 8. That was also the case for Hawthorn in about 4% of simulated seasons.
Further evidence of this spread is the fact that eight teams finished in each of the 18 possible positions in at least one simulation, and that every team finished in at least seven - and in most cases ten - different positions in 1% or more of the simulated seasons.
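Tabulating those distributions from the simulation output is straightforward. Here's an illustrative sketch, assuming - purely as a bookkeeping choice of mine - that each simulated ladder is stored as a list of team names ordered from 1st to 18th:

```python
from collections import Counter, defaultdict

def position_distribution(simulated_ladders):
    """simulated_ladders: one list per simulation, each an ordered list of
    team names from 1st to 18th. Returns, for every team, the proportion
    of simulations in which it finished in each ladder position."""
    counts = defaultdict(Counter)
    for ladder in simulated_ladders:
        for position, team in enumerate(ladder, start=1):
            counts[team][position] += 1
    n_sims = len(simulated_ladders)
    return {team: {pos: c / n_sims for pos, c in tally.items()}
            for team, tally in counts.items()}

def top_8_rate(simulated_ladders, team):
    """Proportion of simulations in which a given team makes the Top 8."""
    return sum(team in ladder[:8] for ladder in simulated_ladders) / len(simulated_ladders)
```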
One team ordering after Round 19, entirely consistent with the TAB Bookmaker's probabilistic opinions as realised in a single simulation, is this one:
- Essendon
- Richmond
- Port Adelaide
- Hawthorn
- Adelaide
- Geelong
- Fremantle
- Sydney
- West Coast
- Kangaroos
- Collingwood
- Gold Coast
- Carlton
- St Kilda
- Western Bulldogs
- Brisbane Lions
- Melbourne
- GWS
I've not heard much speculation this season about a Dons v Tigers GF.
Whilst it's fun - and illuminating - to reflect on the reality that the season could have panned out very differently without any change to our assumptions about the relative strengths of the 18 teams, it's also interesting to compare the macro characteristics of the simulations for each team with the actual competition ladder as it now stands.
Five teams stand out as having quite different actual versus average simulated ladder positions:
- Fremantle: sit 4th on the ladder but were placed 1st on the simulated ladders more often than any other team
- Essendon: sit 7th on the ladder but made the Top 8 in simulated ladders only 37% of the time, less often than Adelaide, West Coast and Richmond, who sit 10th, 11th and 12th respectively
- Collingwood: sit 8th on the ladder but were placed 1st, in the Top 4 and in the Top 8 more often than the Roos and Essendon, who both sit above them on the competition ladder
- West Coast: sit 11th on the ladder but came somewhere in the Top 8 in 44% of simulations, more often than Essendon, Gold Coast and Adelaide, who all sit higher on the competition ladder
- Richmond: sit 12th on the ladder but came somewhere in the Top 8 in 38% of simulations, more often than Essendon and the Gold Coast, who both sit higher on the competition ladder
In addition to looking at the simulations from the viewpoint of a single team, we can also choose to review the simulated Top 2s, Top 4s and Top 8s.
There is, though, it turns out, so much variability in a season when viewed 19 rounds into it that it makes little sense to talk about the most likely Top 8. No ordering of the teams for the Top 8 positions occurred in more than 4 of the 25,000 simulations.
Twelve pairings for the Top 2 positions, however, each emerged in at least about 1 simulation in 30, the most common being a Fremantle-Hawthorn ordering, which appeared about 7% of the time. Those same teams, but in the reverse order, accounted for a further 6.7% of simulations.
Top 4s were also influenced by the cumulative effects of 19 rounds of randomness, so the most common ordering, Fremantle-Hawthorn-Sydney-Geelong, emerged in only about 1 simulation in 150. Another 10 orderings appeared in at least 1 simulation in 200.
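Counting those orderings amounts to tallying the relevant slice of each simulated ladder, as in this sketch (it relies on the same bookkeeping assumption as the earlier sketch about how simulated ladders are stored):

```python
from collections import Counter

def most_common_orderings(simulated_ladders, top_n, how_many=10):
    """Tally exact orderings of the first top_n ladder positions across
    all simulations and return the most frequent ones with their counts."""
    tallies = Counter(tuple(ladder[:top_n]) for ladder in simulated_ladders)
    return tallies.most_common(how_many)

# e.g. most_common_orderings(simulated_ladders, top_n=2) for the Top 2 pairings,
#      top_n=4 for the Top 4s, and top_n=8 for the Top 8s
```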
CONCLUSION
There are, I think, at least a couple of ways to interpret these results:
- The TAB Bookmaker is a near-perfect assessor of team chances, these are accurately reflected in his head-to-head prices, and they are well simulated under the various assumptions I've made about Overround imposition and the statistical distributions of various game aspects
- There are frequent and sizeable inaccuracies in the TAB Bookmaker's assessments of team chances and/or my methodology for reflecting and simulating them is significantly flawed. Teams that win do so largely because they were the better team on the day by a degree reflected in the final scores, and regardless of any pre-game opinions that might have been held by the TAB Bookmaker. God does not play dice with the football universe, if you like.
If you favour the first interpretation then logic compels you to recognise the considerable role that random elements play in shaping the competition ladder. There are, as we've shown here, plausible but vastly different team orderings that could have emerged, all of them completely consistent with a random realisation of the season.
If, instead, you favour the second interpretation, then you can considerably downplay or even eliminate the role of chance, and believe that the current competition ladder is close to or exactly the only one that was possible.
Both of those positions are, of course, caricatures, and most people, I suspect, would fall somewhere between them. Personally, I lean much more towards the first interpretation than the second, but there's no way I can think of to categorically rule one or other interpretation out, nor to arrive at some sort of weighted average of them that reflects empirical reality.
It's fun to think about though.