A Look At Grand Final History Through MoSHBODS' Eyes

Since the first VFL Grand Final in 1898 (there was none in the competition's first year, 1897), there have been 120 more, 117 of them providing an outcome and three finishing in draws and necessitating replays.

In today's blog we'll look at each of the Grand Finalists' pre-game offensive and defensive ratings according to the MoSHBODS Team Rating System, and the forecasts of team scores and game margins made using that System. You can read about the details of this System in this blog, but the key things to know are that:

  • A team with a positive offensive rating is expected to score more points than an average team from its era when playing an average team at a neutral venue
  • A team with a positive defensive rating is expected to concede fewer points than an average team from its era when playing an average team at a neutral venue
  • Ratings are measured in points, so, for example, a team with a +5 offensive rating is expected to score three points more than a team with a +2 offensive rating
  • Each team has, for every venue, a "Venue Performance Value" or VPV, also calibrated in points, which measures how much better (or worse) it performs, on average, at that Venue after accounting for the relative quality of the teams it has played there
  • The expected margin of victory for a team is equal to the sum of its own rating on offence and defence, less the sum of its opponent's rating on offence and defence, plus the difference between its own and its opponent's VPV
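Putting those bullet points together, the expected margin calculation can be sketched as follows (the ratings here are invented for illustration, not actual MoSHBODS values):

```python
def expected_margin(off_a, def_a, vpv_a, off_b, def_b, vpv_b):
    """Expected margin for team A, per the bullet points above: A's combined
    rating, less B's combined rating, plus the difference in VPVs."""
    return (off_a + def_a) - (off_b + def_b) + (vpv_a - vpv_b)

# Hypothetical teams: A is +5 off, +3 def, +2 VPV; B is +2 off, +4 def, -1 VPV
print(expected_margin(5, 3, 2, 2, 4, -1))  # 5, i.e. a 5-point expected win for A
```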

Since the average number of points scored by teams across history has varied (see, for example, this blog) being X points better than an average opponent has implied different chances of victory in different eras. For example, a 20-point better team would be more likely to win in an era where average total scores were around 120 points per game than in an era where average totals were nearer 180 points per game.
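To make that concrete, here's a rough sketch (not anything from MoSHBODS itself) of how the same 20-point edge translates to different win probabilities if margins are assumed to be roughly normally distributed around the expected margin, with a standard deviation that scales with typical scoring levels:

```python
from math import erf, sqrt

def win_probability(expected_margin, margin_sd):
    # P(actual margin > 0) if margins ~ Normal(expected_margin, margin_sd)
    return 0.5 * (1 + erf(expected_margin / (margin_sd * sqrt(2))))

# Hypothetical standard deviations for a low- and a high-scoring era
print(round(win_probability(20, 30), 2))  # lower-scoring era: about 0.75
print(round(win_probability(20, 45), 2))  # higher-scoring era: about 0.67
```

The exact figures depend entirely on the assumed standard deviations; the point is only that the same rating edge buys less certainty when scores, and hence margin variability, are higher.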

Given that, it makes sense to perform the analyses on an era-by-era basis, rather than on an all-time basis. For today's blog we'll define four eras of V/AFL history:

  • Era 1: 1898 to 1929
  • Era 2: 1930 to 1959
  • Era 3: 1960 to 1989
  • Era 4: 1990 to 2017

We begin by looking at the first of those eras.

Era 1: 1898 to 1929

In the 31 Grand Finals from this era, the average victory margin was just 17.9 points and the average team score just 49 points. The largest victory was Melbourne's 119-62 win over Collingwood in 1926. That was also the only Grand Final during the era in which a team scored more than 100 points.

During this period there were only four teams who went into their Grand Final with a negative offensive rating. All four lost. Two winners, but no losers, went in with a negative defensive rating.

More generally, though, superior defensive ability was rewarded slightly more often than superior offensive ability, in that:

  • 58% of winners had a superior offensive rating going into the Grand Final
  • 61% of winners had a superior defensive rating going into the Grand Final
  • 58% of winners had a superior combined (offensive plus defensive) rating going into the Grand Final

Net VPVs - the difference between the teams' individual VPVs - in this era were generally small, with the average absolute value just 3.1 points per game. Only for the Fitzroy v Carlton 1904 Grand Final did the net VPV rise into double digits, and in 15 of the 31 contests the absolute size of the net VPV was under 2 points.

In only three games was the Net VPV large enough to switch the forecast to a victory by the lower-rated team:

  • 1904 Fitzroy v Carlton (Fitzroy: Net Rating Advantage -2.6, Net VPV: +11.5)
  • 1906 Carlton v Fitzroy (Carlton: Net Rating Advantage +2.7, Net VPV: -8.7)
  • 1922 Fitzroy v Collingwood (Fitzroy: Net Rating Advantage -0.2, Net VPV: +0.3)

For two of those three games the switch led to a correct prediction.
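As a sketch, the "switch" in cases like these is simply the Net VPV being large enough, and of the opposite sign, to overturn the Net Rating Advantage (a hypothetical helper, not part of MoSHBODS):

```python
def forecast_switched(net_rating_advantage, net_vpv):
    """True when adding the Net VPV changes the sign of the forecast margin,
    i.e. the venue effect flips the predicted winner."""
    margin = net_rating_advantage + net_vpv
    return (net_rating_advantage > 0) != (margin > 0)

# The 1904 Fitzroy v Carlton case from the list above
print(forecast_switched(-2.6, +11.5))  # True: -2.6 + 11.5 = +8.9, so Fitzroy now favoured
```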

MoSHBODS' average forecast margin for this period was 6.6 points and its average forecast team score 57.6 points per team. It selected the winner in 61% of Grand Finals with a mean absolute error (MAE) of 17.2 points in the forecasted margins.
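For reference, the mean absolute error reported here is simply the average absolute gap between the forecast and actual margins; a minimal sketch with invented margins:

```python
def mae(forecast_margins, actual_margins):
    """Mean absolute error: average absolute difference between forecasts and results."""
    return sum(abs(f - a) for f, a in zip(forecast_margins, actual_margins)) / len(forecast_margins)

# Invented margins, for illustration only
print(mae([6, -10, 20], [14, -2, 23]))  # (8 + 8 + 3) / 3, about 6.33
```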

Details of MoSHBODS' forecasts for the era appear below and include:

  • The 1912 Essendon team, which had the lowest defensive rating of any winning team in a Grand Final (-2.4)
  • The 1917 Fitzroy team, the only team to enter a Grand Final with a negative combined rating (Off: -3.4, Def: +2.5, Comb: -0.9)
  • The 1904 Fitzroy team, which had the lowest combined rating of any winning team (Off: +2.3, Def: -0.4, Comb: +1.9)

In that 1904 Grand Final, Fitzroy defeated Carlton, whose combined rating was only +4.6, which meant that this Grand Final had the lowest aggregate combined rating of any Grand Final (+6.5).

According to MoSHBODS, the biggest upsets in this period (defined as losses by teams with the largest expected victory margins) were:

  • 1903: Collingwood 31 d Fitzroy 29 (Expected Score 40-53)
  • 1916: Fitzroy 85 d Carlton 56 (Expected Score 57-70)
  • 1928: Collingwood 96 d Richmond 63 (Expected Score 72-84)

Era 2: 1930 to 1959

In the 31 Grand Finals from this era, which include a drawn result in 1948, the average victory margin rose sharply to 33.0 points and the average team score rose to 83.6 points per team. The largest victory was Melbourne's 73-point win over Collingwood, 121 to 48, in 1956; Essendon's 150 to 87 win over Melbourne in 1946, a 63-point margin, was also among the era's largest.

Only two teams went into their Grand Final with a negative offensive rating: one lost and one won. Three went in with negative defensive ratings, two of which lost.

More generally, though, in this era as well, defensive superiority was more often rewarded than offensive superiority, since:

  • 53% of winners had a superior offensive rating going into the Grand Final
  • 63% of winners had a superior defensive rating going into the Grand Final
  • 63% of winners had a superior combined (offensive plus defensive) rating going into the Grand Final

Net VPVs were, on average, slightly smaller than in the preceding era and never more than 9 points. The average absolute net VPV for the era was 2.4 points per game and, in 17 of the 31 games, it was less than 2 points.

In four games, the Net VPV was enough to switch the forecast to a victory by the lower-rated team:

  • 1933 South Melbourne v Richmond (Sth Melb: Net Rating Advantage -0.23, Net VPV: +0.24)
  • 1938 Carlton v Collingwood (Carlton: Net Rating Advantage +0.9, Net VPV: -3.0)
  • 1945 Carlton v South Melbourne (Carlton: Net Rating Advantage -7.8, Net VPV: +8.6)
    Note that this game was played at Princes Park.
  • 1954 Footscray v Melbourne (Footscray: Net Rating Advantage +0.7, Net VPV: -5.1)

For two of the four games, the switch led to a correct prediction.

(Note that we exclude the drawn Grand Final in the chart above.)

MoSHBODS' average forecast margin for this period was 8.2 points and its average forecast team score 83.8 points per team. It selected the winner in 63% of Grand Finals with an MAE of 29.9 points in the forecasted margins.

The details of MoSHBODS' forecasts for this era appear below.

The biggest upsets in this period were:

  • 1958: Collingwood 82 d Melbourne 64 (Expected Score 71-89)
  • 1934: Richmond 128 d South Melbourne 89 (Expected Score 82-94)

Era 3: 1960 to 1989

In the 31 Grand Finals from this era, which include a drawn result in 1977, the average victory margin remained steady at 32.7 points per game and the average team score rose to 94.4 points. The largest victory came in 1988 when Hawthorn defeated Melbourne 152 to 56.

Negative offensive ratings were slightly more prevalent in this era, with three winners and five losers going into their Grand Finals with such ratings. These included the 1981 Carlton and Collingwood teams, both of which had negative ratings, and North Melbourne, which carried a negative rating into the drawn 1977 Grand Final and into the subsequent replay.

Only three teams had negative defensive ratings and they were all losers.

Defensive superiority was rewarded even more often than offensive superiority in this era, since:

  • 57% of winners had a superior offensive rating going into the Grand Final
  • 67% of winners had a superior defensive rating going into the Grand Final
  • 67% of winners had a superior combined (offensive plus defensive) rating going into the Grand Final

Net VPVs, on average, rose fractionally in this period, but the average absolute value was still only 2.5 points per game. In 16 of the 31 contests, the absolute net VPV was less than 2 points.

In three games, the Net VPV was enough to switch the forecast to a victory by the lower-rated team:

  • 1965 Essendon v St Kilda (Essendon: Net Rating Advantage -1.2, Net VPV: +7.8)
  • 1968 Carlton v Essendon (Carlton: Net Rating Advantage +2.5, Net VPV: -3.2)
  • 1973 Richmond v Carlton (Richmond: Net Rating Advantage -2.2, Net VPV: +3.6)

For two of those three games, the switch resulted in a correct prediction.

(Note that we exclude the drawn Grand Final in the chart above.)

MoSHBODS' average forecast margin for this period was 8.8 points and its average forecast team score 91.9 points per team. It selected the winner in 70% of Grand Finals with an MAE of 29.7 points in the forecasted margins.

Details of MoSHBODS' forecasts appear below, and include:

  • the 1967 Richmond team, which had the highest offensive rating of any team entering a Grand Final (+28.8)
  • the 1989 Geelong team, which had the highest offensive rating of any losing team in a Grand Final (+28.6)
  • the 1972 Richmond team, which had the lowest defensive rating of any team in a Grand Final (-5.1)
  • the 1988 Melbourne team, which had the lowest offensive rating of any team in a Grand Final (-7.0)

The biggest upsets in this period were:

  • 1970: Carlton 111 d Collingwood 101 (Expected Score 91-106)
  • 1984: Essendon 105 d Hawthorn 81 (Expected Score 105-117)

Era 4: 1990 to 2017

In the 29 Grand Finals from this era, which include a drawn Grand Final in 2010 and a yet-to-be-played 2017 Grand Final, the average victory margin has risen to 39.1 points and the average team score has dropped marginally to 92.9 points. The largest victory in this and in any era came in 2007 when Geelong defeated Port Adelaide 163 to 44.

Negative offensive ratings have been quite prevalent in this era, too, with four winners and five losers going into their Grand Finals in this situation. But, for the first time in any era, no team has gone into a Grand Final with a negative defensive rating.

Unlike previous eras, offensive superiority has been more often rewarded than defensive superiority in this latest era, since:

  • 59% of winners had a superior offensive rating going into the Grand Final
  • 48% of winners had a superior defensive rating going into the Grand Final
  • 56% of winners had a superior combined (offensive plus defensive) rating going into the Grand Final

Net VPVs, on average, rose during this period, with the average absolute value reaching 3.9 points per game. In only 10 of the Grand Finals has the absolute net VPV been below 2 points, and in six it has exceeded a goal.

In three games, the Net VPV was enough to switch the forecast to a victory by the lower-rated team:

  • 2001 Brisbane Lions v Essendon (Brisbane Lions: Net Rating Advantage +0.5, Net VPV: -5.3)
  • 2014 Hawthorn v Sydney (Hawthorn: Net Rating Advantage -2.3, Net VPV: +8.7)
  • 2015 Hawthorn v West Coast (Hawthorn: Net Rating Advantage -3.8, Net VPV: +7.5)

For two of those three games, the switch produced a correct prediction.

(Note that we exclude the drawn Grand Final in the chart above.)

MoSHBODS' average forecast margin for this period was 12.0 points and its average forecast team score 93.1 points per team. It selected the winner in 59% of Grand Finals with an MAE of 36.6 points in the forecasted margins.

Details of MoSHBODS' forecasts appear below, and include:

  • the 2009 St Kilda team, which had the highest defensive rating of any team entering a Grand Final (+27.7). Since they lost, they also have the highest defensive rating for a losing Grand Finalist.
  • the 1994 West Coast team, which had the highest defensive rating of any winning team in a Grand Final (+25.9)
  • the 1992 West Coast team, which had the lowest offensive rating of any winning team in a Grand Final (-3.7)
  • the 2000 Essendon team, which had the highest combined rating of any team in a Grand Final (Off: +26.1, Def: +18.4, Comb: +44.5)
  • the 2008 Geelong team, which had the highest combined rating of any losing team in a Grand Final (Off: +21.0, Def: +20.0, Comb: +41.0)
  • the 2011 Grand Final, which had the highest aggregate combined ratings of any Grand Final (Geelong +42.4, Collingwood +36.1, Agg: +78.6)

The biggest upsets in this period were:

  • 2012: Sydney 91 d Hawthorn 81 (Expected Score 85-106)
  • 1997: Adelaide 125 d St Kilda 94 (Expected Score 86-101)
  • 1992: West Coast 113 d Geelong 85 (Expected Score 92-106)
  • 2004: Port Adelaide 113 d Brisbane Lions 73 (Expected Score 87-102)
  • 2008: Hawthorn 115 d Geelong 89 (Expected Score 89-102)

ALL ERAS

Lastly, let's bring all of the Grand Finalists onto some single charts, faceted by era.

In this first one we'll plot the winners in green and the losers in red (and teams that drew in blue).

(This chart, by the way, like all of them in this blog post, can be clicked on to access a larger version.)

Note that each facet has its own set of x and y axes reflecting the spread of ratings across the era.

One aspect that this chart makes especially clear is the increased preponderance of Grand Finalists with negative offensive ratings in the most recent two eras, and their relative success in the most recent era.

For the final chart we'll plot each Grand Final (excluding the drawn ones) based on the rating differences for the winning versus the losing team, colour-coding the labels to reflect the size of the final margin of victory.

Perhaps the most notable feature of this chart is the greater tendency in the most-recent era for relatively large victories to be recorded by teams that were superior offensively and inferior or only very slightly superior defensively.