VFL/AFL Home-and-Away Season Team Analysis

This year, Sydney collected its 8th Minor Premiership (including its record when playing as South Melbourne), drawing it level with Richmond in 7th place on the all-time list. That list is headed by Collingwood, whose 19 Minor Premierships have come from the 118 seasons it has contested, one season more than Sydney/South Melbourne and 11 more than Richmond.


Explaining Variability in Game Margins

Some seasons are notable for the large number of blowout victories they force us to endure - a few recent seasons come immediately to mind - while others are more memorable for their highly competitive nature. To what extent, I've often wondered, should we attribute a season full of sizable victory margins to strong teams more often facing weak ones - making the magnitude of the defeats predictable, if still lamentable - and to what extent to on-the-day or random events that were genuinely unforeseeable pre-game?


Does an Extra Day's Rest Matter in the Home and Away Season?

Whenever the draw for a new season is revealed there's much discussion about the teams that face one another only once, about which teams need to travel interstate more than others, and about which teams are asked to play successive games with fewer days' rest. Implicit in the discussion is the assumption that more days' rest is better than fewer but, to my knowledge, this is never supported by empirical analysis. It is, like much of the discussion about football, considered axiomatic. In this blog we'll assess how reasonable that assumption is.

Lead Changes as a Measure of Game Competitiveness

The final victory margin is one measure of how close a contest was, but it can sometimes mislead when the team that's in front midway through the final term piles on a slew of late goals against a progressively more demoralised opponent, improving its percentage in so doing, but also erasing any trace of the fact that the game might have been a close-run thing throughout the first three-and-a-half or more quarters.

Characterising AFL Seasons

I can think of a number of ways that an AFL season might be characterised but for today's blog I'm going to call on a modelling approach that I used back in 2010, which is based on Brownian motion and which was inspired by a JASA paper from Hal S Stern.

Applying the Win Production Functions to 2009 to 2011

In the previous blog I came up with win production functions for the AFL - ways of estimating a team's winning percentage on the basis of the difference between the scoring shots it produces and those it allows its opponents to create, and the difference between the rate at which it converts those scoring shots and the rate at which its opponents convert them.

Grand Final Margins Through History and a Last Look at the 2010 Home-and-Away Season

A couple of final charts before GF 2.0.

The first chart looks at the history of Grand Finals, again. Each point in the chart reflects four things about the Grand Final to which it pertains ...

Grand Finals: Points Scoring and Margins

How would you characterise the Grand Finals that you've witnessed? As low-scoring, closely fought games; as high-scoring games with regular blow-out finishes; or as something else?

First let's look at the total points scored in Grand Finals relative to the average points scored per game in the season that immediately preceded them.

[Figure: GF_PPG.png - Grand Final total points relative to the preceding season's average points per game]

Apart from a period spanning about the first 25 years of the competition, during which Grand Finals tended to be lower-scoring affairs than the matches that took place leading up to them, Grand Finals have been about as likely to produce more points than the season average as to produce fewer points.

One way to demonstrate this is to group and summarise the Grand Finals and non-Grand Finals by the decade in which they occurred.
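In pandas terms, that grouping is a one-liner. The sketch below is illustrative only: the data frame, its column names, and the non-Grand-Final totals are hypothetical (though the Grand Final totals for 1927, 1972 and 2005 are the real ones discussed below).

```python
import pandas as pd

# Hypothetical per-game data: season, total points scored, Grand Final flag.
# The Grand Final totals are real (1927, 1972, 2005); the others are made up.
games = pd.DataFrame({
    "season":       [1927, 1927, 1972, 1972, 2005, 2005],
    "total_points": [160,  38,   210,  327,  185,  112],
    "grand_final":  [False, True, False, True, False, True],
})

# Assign each game to its decade, then compare average points per game
# for Grand Finals and non-Grand Finals within each decade.
games["decade"] = games["season"] // 10 * 10
summary = (games.groupby(["decade", "grand_final"])["total_points"]
                .mean()
                .unstack())
print(summary)
```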

[Figure: GF_PPG_CHT.png - average points per game in Grand Finals and non-Grand Finals, by decade]

There's no real justification then, it seems, in characterising them as dour affairs.

That said, there have been a number of Grand Finals that failed to produce more than 150 points between the two sides - 49 overall, but only 3 of the last 30. The most recent of these was the 2005 Grand Final in which Sydney's 8.10 (58) was just good enough to trump the Eagles' 7.12 (54). Low-scoring, sure, but the sort of game for which the cliché "modern-day classic" was coined.

To find the lowest-scoring Grand Final of all time you'd need to wander back to 1927 when Collingwood 2.13 (25) out-yawned Richmond 1.7 (13). Collingwood, with efficiency in mind, got all of its goal-scoring out of the way by the main break, kicking 2.6 (20) in the first half. Richmond, instead, left something in the tank, going into the main break at 0.4 (4) before unleashing a devastating but ultimately unsuccessful 1.3 (9) scoring flurry in the second half.

That's 23 scoring shots combined, only 3 of them goals, comprising 12 scoring shots in the first half and 11 in the second. You could see that many in an under 10s soccer game most weekends.

Forty-five years later, in 1972, Carlton and Richmond produced the highest-scoring Grand Final so far. In that game, Carlton 28.9 (177) held off a fast-finishing Richmond 22.18 (150), with Richmond kicking 7.3 (45) to Carlton's 3.0 (18) in the final term.

Just a few weeks earlier these same teams had played out an 8.13 (61) to 8.13 (61) draw in their Semi Final. In the replay Richmond prevailed 15.20 (110) to Carlton's 9.15 (69) meaning that, combined, the two Semi Finals they played generated 26 points fewer than did the Grand Final.

From total points we turn to victory margins.

Here too, save for a period spanning about the first 35 years of the competition during which GFs tended to be more closely fought than the average game that had gone before them, Grand Finals have been about as likely to be won by a margin smaller than the season average as by a greater one.

[Figure: GF_MPG.png - Grand Final victory margins relative to the season average]

Of the 10 most recent Grand Finals, 5 have produced margins smaller than the season average and 5 have produced greater margins.

Perhaps a better view of the history of Grand Final margins is produced by looking at the actual margins rather than the margins relative to the season average. This next table looks at the actual margins of victory in Grand Finals summarised by decade.

[Figure: GF_MOV.png - actual Grand Final margins of victory, summarised by decade]

One feature of this table is the scarcity of close finishes in Grand Finals of the 1980s, 1990s and 2000s. Only 4 of these Grand Finals have produced a victory margin of less than 3 goals. In fact, 19 of the 29 Grand Finals have been won by 5 goals or more.

An interesting way to put this period of generally one-sided Grand Finals into historical perspective is provided by this, the final graphic for today.

[Figure: GF_MOV_PC.png - close Grand Finals in historical perspective]

They just don't make close Grand Finals like they used to.

And the Last Shall be First (At Least Occasionally)

So far we've learned that handicap-adjusted margins appear to be normally distributed with a mean of zero and a standard deviation of 37.7 points. That means that the unadjusted margin - from the favourite's viewpoint - will be normally distributed with a mean equal to minus the handicap and a standard deviation of 37.7 points. So, if we want to simulate the result of a single game we can generate a random Normal deviate (surely a statistical contradiction in terms) with this mean and standard deviation.
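For the curious, here's a minimal Python sketch of that single-game simulation. The 37.7-point standard deviation is the empirical figure quoted above; the handicap used in the example is made up for illustration.

```python
import numpy as np

MARGIN_SD = 37.7  # empirical SD of handicap-adjusted margins, in points

def simulate_margin(handicap, rng):
    """Simulate one game's margin from the favourite's viewpoint.

    The margin is a Normal deviate with mean equal to minus the
    handicap and a standard deviation of 37.7 points.
    """
    return rng.normal(loc=-handicap, scale=MARGIN_SD)

rng = np.random.default_rng()
# A favourite giving 18.5 points start has a handicap of -18.5, so its
# simulated margin is centred on +18.5 points.
print(simulate_margin(-18.5, rng))
```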

Alternatively, we can work from the head-to-head prices if we're willing to assume that the overround attached to each team's price is the same. Under that assumption, a team's probability of victory is its opponent's head-to-head price divided by the sum of the two teams' head-to-head prices.

So, for example, if the market was Carlton $3.00 / Geelong $1.36, then Carlton's probability of victory is 1.36 / (3.00 + 1.36) or about 31%. More generally, let's call the probability we're considering P%.

Working backwards then we can ask: what value of x for a Normal distribution with mean 0 and standard deviation 37.7 puts P% of the distribution on the left? This value will be the appropriate handicap for this game.

Again an example might help, so let's return to the Carlton v Geelong game from earlier and ask what value of x for a Normal distribution with mean 0 and standard deviation 37.7 puts 31% of the distribution on the left? The answer is -18.5. This is the negative of the handicap that Carlton should receive, so Carlton should receive 18.5 points start. Put another way, the head-to-head prices imply that Geelong is expected to win by about 18.5 points.
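In Python, using scipy's inverse Normal CDF, the whole price-to-handicap calculation takes only a few lines. This is just a sketch of the arithmetic described above, with the Carlton/Geelong prices as input.

```python
from scipy.stats import norm

MARGIN_SD = 37.7  # empirical SD of handicap-adjusted margins, in points

def implied_probability(own_price, opponent_price):
    """Win probability implied by head-to-head prices, assuming the same
    overround on both: the opponent's price over the sum of the prices."""
    return opponent_price / (own_price + opponent_price)

def implied_start(prob):
    """Points start a team should receive: minus the value x at which a
    Normal(0, 37.7) distribution has prob of its mass to the left."""
    return -norm.ppf(prob, loc=0, scale=MARGIN_SD)

p_carlton = implied_probability(3.00, 1.36)  # about 0.31
print(implied_start(p_carlton))              # about 18.5 points
```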

With this result alone we can draw some fairly startling conclusions.

In a game with prices as per the Carlton v Geelong example above, we know that 69% of the time this match should result in a Geelong victory. But, given our empirically-based assumption about the inherent variability of a football contest, we also know that Carlton, as well as winning 31% of the time, will win by 6 goals or more about 1 time in 14, and will win by 10 goals or more a little less than 1 time in 50. All of which is exactly what we should expect when the underlying stochastic framework has Geelong's victory margin following a Normal distribution with a mean of 18.5 points and a standard deviation of 37.7 points.
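Those tail probabilities drop straight out of the Normal assumption. Here's a quick check of the figures just quoted - a sketch, using the same 37.7-point standard deviation.

```python
from scipy.stats import norm

# Carlton's margin, from its own viewpoint: Normal(-18.5, 37.7).
carlton_margin = norm(loc=-18.5, scale=37.7)

print(carlton_margin.sf(0))   # P(Carlton wins): about 0.31
print(carlton_margin.sf(36))  # wins by 6+ goals: about 1 in 14
print(carlton_margin.sf(60))  # wins by 10+ goals: a little less than 1 in 50
```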

So, given only the head-to-head prices for each team, we could readily simulate the outcome of the same game as many times as we like and marvel at the frequency with which apparently extreme results occur. All this is largely because 37.7 points is a sizeable standard deviation.

Well if simulating one game is fun, imagine the joy there is to be had in simulating a whole season. And, following this logic, if simulating a season brings such bounteous enjoyment, simulating say 10,000 seasons must surely produce something close to ecstasy.

I'll let you be the judge of that.

Anyway, using the Wednesday noon (or nearest available) head-to-head TAB Sportsbet prices for each of Rounds 1 to 20, I've calculated the relevant team probabilities for each game using the method described above and then, in turn, used these probabilities to simulate the outcome of each game after first converting these probabilities into expected margins of victory.

(I could, of course, have just used the line betting handicaps but these are posted for some games on days other than Wednesday and I thought it'd be neater to use data that was all from the one day of the week. I'd also need to make an adjustment for those games where the start was 6.5 points as these are handled differently by TAB Sportsbet. In practice it probably wouldn't have made much difference.)

Next, armed with a simulation of the outcome of every game for the season, I've formed the competition ladder that these simulated results would have produced. Since my simulations are of the margins of victory and not of the actual game scores, I've needed to use points differential - that is, total points scored in all games less total points conceded - to separate teams with the same number of wins. As I've shown previously, this is almost always a distinction without a difference.

Lastly, I've repeated all this 10,000 times to generate a distribution of the ladder positions that might have eventuated for each team across an imaginary 10,000 seasons, each played under the same set of game probabilities, a summary of which I've depicted below. As you're reviewing these results keep in mind that every ladder has been produced using the same implicit probabilities derived from actual TAB Sportsbet prices for each game and so, in a sense, every ladder is completely consistent with what TAB Sportsbet 'expected'.
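For readers who'd like to replicate the approach, here's a compact sketch of the whole loop in Python. The fixture and probabilities below are placeholders, not the actual TAB Sportsbet-derived figures; the mechanics - probability to expected margin, random Normal margins, a ladder built on wins with points differential as tie-breaker - follow the description above.

```python
import numpy as np
from scipy.stats import norm

MARGIN_SD = 37.7
N_SEASONS = 10_000
rng = np.random.default_rng(0)

# Placeholder fixture: (home, away, home win probability). In the analysis
# proper there's one row per game of Rounds 1 to 20, with probabilities
# derived from TAB Sportsbet head-to-head prices.
fixture = [
    ("Geelong",   "Carlton",   0.69),
    ("St Kilda",  "Melbourne", 0.85),
    ("Carlton",   "St Kilda",  0.40),
    ("Melbourne", "Geelong",   0.10),
]
teams = sorted({t for g in fixture for t in g[:2]})

# Tally of finishing positions: one row per team, one column per position.
position_counts = np.zeros((len(teams), len(teams)), dtype=int)

for _ in range(N_SEASONS):
    wins = dict.fromkeys(teams, 0)
    points_diff = dict.fromkeys(teams, 0.0)
    for home, away, p_home in fixture:
        # Expected home margin from the win probability, plus Normal noise.
        margin = rng.normal(norm.ppf(p_home, scale=MARGIN_SD), MARGIN_SD)
        wins[home if margin > 0 else away] += 1
        points_diff[home] += margin
        points_diff[away] -= margin
    # Ladder: wins first, points differential as the tie-breaker.
    ladder = sorted(teams, key=lambda t: (wins[t], points_diff[t]), reverse=True)
    for pos, team in enumerate(ladder):
        position_counts[teams.index(team), pos] += 1

for i, team in enumerate(teams):
    avg = position_counts[i] @ np.arange(1, len(teams) + 1) / N_SEASONS
    print(f"{team:10s} average simulated ladder position: {avg:.2f}")
```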

[Figure: Simulated Seasons.png - distribution of final ladder positions by team across 10,000 simulated seasons]

The variability you're seeing in teams' final ladder positions is not due to my assuming, say, that Melbourne were a strong team in one season's simulation, an average team in another simulation, and a very weak team in another. Instead, it's because even weak teams occasionally get repeatedly lucky and finish much higher up the ladder than they might reasonably expect to. You know, the glorious uncertainty of sport and all that.

Consider the row for Geelong. It tells us that Geelong ranks 1st based on its average ladder position across the 10,000 simulations, which was 1.5. The barchart in the 3rd column shows the aggregated results for all 10,000 simulations, the leftmost bar showing how often Geelong finished 1st, the next bar how often they finished 2nd, and so on.

The column headed 1st tells us in what proportion of the simulations the relevant team finished 1st, which, for Geelong, was 68%. In the next three columns we find how often the team finished in the Top 4, the Top 8, or Last. Finally we have the team's current ladder position and then, in the column headed Diff, a comparison of each team's current ladder position with its ranking based on the average ladder position from the 10,000 simulations. This column provides a crude measure of how well or how poorly teams have fared relative to TAB Sportsbet's expectations, as reflected in their head-to-head prices.

Here are a few things that I find interesting about these results:

  • St Kilda miss the Top 4 about 1 season in 7.
  • Nine teams - Collingwood, the Dogs, Carlton, Adelaide, Brisbane, Essendon, Port Adelaide, Sydney and Hawthorn - all finish at least once in every position on the ladder. The Bulldogs, for example, top the ladder about 1 season in 25, miss the Top 8 about 1 season in 11, and finish 16th a little less often than 1 season in 1,650. Sydney, meanwhile, top the ladder about 1 season in 2,000, finish in the Top 4 about 1 season in 25, and finish last about 1 season in 46.
  • The ten most-highly ranked teams from the simulations all finished in 1st place at least once. Five of them did so about 1 season in 50 or more often than this.
  • Every team from ladder position 3 to 16 could, instead, have been in the Spoon position at this point in the season. Six of those teams had better than about a 1 in 20 chance of being there.
  • Every team - even Melbourne - made the Top 8 in at least 1 simulated season in 200. Indeed, every team except Melbourne made it into the Top 8 about 1 season in 12 or more often.
  • Hawthorn have either been significantly overestimated by the TAB Sportsbet bookie or been deucedly unlucky, depending on your viewpoint. They are 5 spots lower on the ladder than the simulations suggest they should expect to be.
  • In contrast, Adelaide, Essendon and West Coast are each 3 spots higher on the ladder than the simulations suggest they should be.

(In another blog I've used the same simulation methodology to simulate the last two rounds of the season and project where each team is likely to finish.)

From One Year To The Next: Part 2

Last blog I promised that I'd take another look at teams' year-to-year changes in ladder position, this time taking a longer historical perspective.

For this purpose I've elected to use the period 1925 to 2008 as there have always been at least 10 teams in the competition from that point onwards. Once again in this analysis I've used each team's final ladder position, not their ladder position as at the end of the home and away season. Where a team has left or joined the competition in a particular season, I've omitted its result for the season in which it came (since there's no previous season) or went (since there's no next season).

As the number of teams making the finals has varied across the period we're considering, I'll not be drawing any conclusions about the rates of teams making or missing the finals. I will, however, be commenting on Grand Final participation as each season since 1925 has culminated in such an event.

Here's the raw data:

[Figure: Ladder_Change_Val_25_08.png - counts of final ladder position in one season versus the next, 1925 to 2008]

(Note that I've grouped all ladder positions of 9th or lower in the "9+" category. In some years this incorporates just two ladder positions, in others as many as eight.)
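As an aside, a table like this is straightforward to build from a list of final ladders. Here's a hedged pandas sketch using hypothetical data, and assuming consecutive seasons for each team (the handful of teams that came or went would be dropped, as described above).

```python
import pandas as pd

# Hypothetical final-ladder data: one row per team per season.
ladders = pd.DataFrame({
    "team":     ["Carlton", "Carlton", "Richmond", "Richmond"],
    "season":   [1925, 1926, 1925, 1926],
    "position": [3, 1, 9, 12],
})

# Pair each team's position with its position in the following season.
ladders = ladders.sort_values(["team", "season"])
ladders["next_position"] = ladders.groupby("team")["position"].shift(-1)
paired = ladders.dropna(subset=["next_position"])

# Group ladder positions of 9th or lower into a single "9+" category.
cap = lambda p: "9+" if p >= 9 else str(int(p))
counts = pd.crosstab(paired["position"].map(cap),
                     paired["next_position"].map(cap))
print(counts)

# Row percentages, as in the second table below:
print(counts.div(counts.sum(axis=1), axis=0).round(2))
```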

A few things are of note in this table:

  • Losing Grand Finalists are more likely than winning Grand Finalists to win the flag in the next season.
  • Only 10 of 83 winning Grand Finalists finished 6th or lower in the previous season.
  • Only 9 of 83 winning Grand Finalists have finished 7th or lower in the subsequent season.
  • The average ladder position of a team next season is highly correlated with its position in the previous season. One notable exception to this tendency is for teams finishing 4th. Over one quarter of such teams have finished 9th or worse in the subsequent season, which drags their average ladder position in the subsequent year to 5.8, below that of teams finishing 5th.
  • Only 2 teams have come from 9th or worse to win the subsequent flag - Adelaide, who won in 1997 after finishing 12th in 1996; and Geelong, who won in 2007 after finishing 10th in 2006.
  • Teams that finish 5th have a 14-3 record in Grand Finals that they've made in the following season. In percentage terms this is the best record for any ladder position.

Here's the same data converted into row percentages.

[Figure: Ladder_Change_PC_25_08.png - row percentages of final ladder position in one season versus the next, 1925 to 2008]

Looking at the data in this way makes a few other features a little more prominent:

  • Winning Grand Finalists have about a 45% probability of making the Grand Final in the subsequent season and a little under a 50% chance of winning it if they do.
  • Losing Grand Finalists also have about a 45% probability of making the Grand Final in the subsequent season, but they have a better than 60% record of winning when they do.
  • Teams that finish 3rd have about a 30% chance of making the Grand Final in the subsequent year. They're most likely to be losing Grand Finalists in the next season.
  • Teams that finish 4th have about a 16% chance of making the Grand Final in the subsequent year. They're most likely to finish either 5th or below 8th. Only about 1 in 4 improve their ladder position in the ensuing season.
  • Teams that finish 5th have about a 20% chance of making the Grand Final in the subsequent year. These teams tend to the extremes: about 1 in 6 win the flag and 1 in 5 drop to 9th or worse. Overall, there's a slight tendency for these teams to drop down the ladder.
  • Teams that finish 6th or 7th have about a 20% chance of making the Grand Final in the subsequent year. Teams finishing 6th tend to drop down the ladder in the next season; teams finishing 7th tend to climb.
  • Teams that finish 8th have about an 8.5% chance of making the Grand Final in the subsequent year. These teams tend to climb in the ensuing season.
  • Teams that finish 9th or worse have about a 3.5% chance of making the Grand Final in the subsequent year. They also have a roughly 2 in 3 chance of finishing 9th or worse again.

So, I suppose, relatively good news for Cats fans and perhaps surprisingly bad news for St Kilda fans. Still, they're only statistics.

From One Year To The Next: Part 1

With Carlton and Essendon currently sitting in the top 8, I got to wondering about the history of teams missing the finals in one year and then making it the next. For this first analysis it made sense to choose the period 1997 to 2008 as this is the time during which we've had the same 16 teams as we do now.

For that period, as it turns out, the chances are about 1 in 3 that a team finishing 9th or worse in one year will make the finals in the subsequent year. Generally, as you'd expect, the chances improve the higher up the ladder that the team finished in the preceding season, with teams finishing 11th or higher having about a 50% chance of making the finals in the subsequent year.

Here's the data I've been using for the analysis so far:

[Figure: Ladder_Change_Val_97_08.png - counts of final ladder position in one season versus the next, 1997 to 2008]

And here's that same data converted into row percentages, with the Following Year ladder positions grouped.

[Figure: Ladder_Change_PC_97_08.png - row percentages of final ladder position in one season versus the next, with Following Year positions grouped, 1997 to 2008]

Note that in these tables I've used each team's final ladder position, not their ladder position as at the end of the home and away season. So, for example, Geelong's 2008 ladder position would be 2nd, not 1st.

Teams that make the finals in a given year have about a 2 in 3 chance of making the finals in the following year. Again, this probability tends to increase with higher ladder position: teams finishing in the top 4 places have a better than 3 in 4 record for making the subsequent year's finals.

One of the startling features of these tables is just how much better flag winners perform in subsequent years than do teams from any other position. In the first table, under the column headed "Ave" I've shown the average next-season finishing position of teams finishing in any given position. So, for example, teams that win the flag, on average, finish in position 3.5 on the subsequent year's ladder. This average is bolstered by the fact that 3 of the 11 (or 27%) premiers have gone back-to-back and 4 more (another 36%) have been losing Grand Finalists. Almost 75% have finished in the top 4 in the subsequent season.

Dropping down one row we find that the losing Grand Finalist from one season fares much worse in the next season. Their average ladder position is 6.6, which is over 3 ladder spots lower than the average for the winning Grand Finalist. Indeed, 4 of the teams that finished 2nd in one season missed the finals in the subsequent year. This is true of only 1 winning Grand Finalist.

In fact, the losing Grand Finalists don't tend to fare any better than the losing Preliminary Finalists, who average positions 6.0 (3rd) and 6.8 (4th).

The next natural grouping of teams based on average ladder position in the subsequent year seems to be those finishing 5th through 11th. Within this group the outliers are teams finishing 6th (who've tended to drop 3.5 places in the next season) and teams finishing 9th (who've tended to climb 1.5 places).

The final natural grouping includes the remaining positions 12th through 16th. Note that, despite the lowly average next-year ladder positions for these teams, almost 15% have made the top 4 in the subsequent year.

A few points of interest on the first table before I finish:

  • Only one team that's finished below 6th in one year has won the flag in the next season: Geelong, who finished 10th in 2006 and then won the flag in 2007.
  • The largest season-to-season decline for a premier is Adelaide's fall from the 1998 flag to 13th spot in 1999.
  • The largest ladder climb to make a Grand Final is Melbourne's rise from 14th in 1999 to become losing Grand Finalists to Essendon in 2000.

Next time we'll look at a longer period of history.