Updating the MoS Twins for the 2021 Season

Perhaps naively in retrospect, I had hoped that the changes I made to the MoS twin algorithms in preparation for the 2020 season would be the last I’d need to make for a while. But, the unusual nature of 2020 fixturing highlighted some characteristics of both Systems that I thought needed redressing, so 2021 will see new versions of MoSSBODS and MoSHBODS providing key forecasts.

In this blog I’ll take you through those changes and the rationale behind them, and provide a broad outline about how both Systems work.

(Note that all MoSSBODS and MoSHBODS calculations that require the use of 2020 data multiply team scores and scoring shot counts by 1.25 to adjust for the shorter quarters.)

THE ISSUES

MoSSBODS and MoSHBODS, you might recall, both use an extended version of Home Ground Advantage in that they calculate a Venue Performance Value (VPV) for every team at every ground, whether that venue serves as a home ground or not. These VPVs for a team are calculated based on that team’s historical performance at a venue within some window of time, and take as input the team’s actual performance relative to their expected performance adjusting solely for their own and their opponents’ underlying ability. So, if a team has consistently had larger winning margins at some venue than we would have expected given their own and their opponents’ estimated ability (as reflected in their underlying Ratings), we’d attribute that overachievement to “venue effects”.

As a general principle, this seems reasonable, but the challenge comes in estimating VPVs for venues at which a team rarely plays.

The MoS twins’ solution to that problem in 2020 was to assume some fixed VPV for a venue until such time as the team had played at least some threshold number of games there in the time window, at which point we’d switch to an estimated VPV calculated using only the relevant recent results at the venue.

Being more specific:

  • MoSSBODS would assume a VPV of zero for any venue in the team’s home State and a VPV of -3.5 Scoring Shots for any venue outside the team’s home State until a team had played 11 games in the past 6 years at the venue in question

  • MoSHBODS would assume a VPV of zero for any venue in the team’s home State and a VPV of -9 Points for any venue outside the team’s home State until a team had played 8 games in the past 6 years at the venue in question

These rules meant that a team’s VPV would often jump considerably once the team met the threshold of games needed to move from using the default to using the calculated value. There could also still be quite large movements in VPVs even after a team met the threshold, because the number of games being averaged was still quite small, though some dampening was achieved by taking only a fraction of the team’s average over- or under-performance at the venue.

Such variability was exacerbated in Finals because both Systems applied a 1.5 multiplier to underlying VPVs for all such games. This meant, for example, that Geelong’s MoSHBODS VPV at the Gabba went from -9 in the four home-and-away games to -13.5 in the Semi Final, and then to +13.2 in the Preliminary Final and +14.7 in the Grand Final.
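To make the source of those jumps concrete, here is a stylised Python sketch of the 2020-style rule. The function name and the 0.5 damping fraction are illustrative assumptions only; the -9 default and 8-game threshold are MoSHBODS 2020’s.

```python
# A stylised sketch (not the actual MoS code) of the 2020 threshold rule and
# the jump it can produce. The 0.5 damping fraction is illustrative; the -9
# default and 8-game threshold are MoSHBODS 2020's.

def vpv_2020(games_at_venue, avg_overperformance, in_home_state,
             default_away=-9.0, threshold=8, damping=0.5):
    """Use a fixed default VPV until the team has played `threshold` games
    at the venue in the window, then switch to a dampened average of its
    over- or under-performance there."""
    if games_at_venue < threshold:
        return 0.0 if in_home_state else default_away
    return damping * avg_overperformance

# A team averaging +12 points of over-performance at an interstate venue
# jumps from the -9 default to +6 the moment it plays its 8th game there.
print(vpv_2020(7, 12.0, in_home_state=False))   # -9.0
print(vpv_2020(8, 12.0, in_home_state=False))   #  6.0
```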

THE MOSSBODS ALGORITHM

Before getting into the changes, a quick refresher on the basics of MoSSBODS.

MoSSBODS is an Elo-style team rating system based on teams’ Scoring Shots. It produces an offensive and a defensive rating for each team, where a rating of zero means that a team is expected to generate (if we’re considering the offensive rating) or concede (if we’re considering the defensive rating) an “average” number of Scoring Shots when facing a team rated 0 on both offence and defence at a neutral venue.

In other words, a team rated 0 on offence and on defence is an average team when playing on a neutral venue in the context of the current season and the mix of abilities of the teams.

Underpinning MoSSBODS is a set of equations that are used to update teams’ offensive and defensive ratings based on the most recent results:

  1. New Defensive Rating = Old Defensive Rating + k x (Actual Defensive Performance – Expected Defensive Performance)

  2. Actual Defensive Performance = Expected Scoring Shots for Average Team – Adjusted Opponent’s Scoring Shots

  3. Expected Defensive Performance = Own Defensive Rating – Opponent’s Offensive Rating + Venue Adjustment / 2

  4. Adjusted Opponent’s Scoring Shots = min(Scoring Shot Cap, Actual Opponent’s Scoring Shots)

  5. New Offensive Rating = Old Offensive Rating + k x (Actual Offensive Performance – Expected Offensive Performance)

  6. Actual Offensive Performance = Adjusted Own Scoring Shots - Expected Scoring Shots for Average Team

  7. Expected Offensive Performance = Own Offensive Rating – Opponent’s Defensive Rating + Venue Adjustment / 2

  8. Adjusted Own Scoring Shots = min(Scoring Shot Cap, Actual Own Scoring Shots)

  9. Venue Adjustment = Nett VPV Difference after adjustments
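As a minimal sketch of how equations (1) to (9) fit together, the update for a single team’s ratings might look like the following. The variable names and function signature are mine rather than the actual MoS implementation, and the venue adjustment is taken here as own VPV minus opponent VPV.

```python
# A minimal sketch of the MoSSBODS-style rating update described in (1)-(9).
# All quantities are in Scoring Shots.

def update_ratings(off_rating, def_rating, opp_off, opp_def,
                   own_shots, opp_shots, avg_shots, venue_adj, k,
                   shot_cap=float("inf")):
    """Return updated (offensive, defensive) ratings.

    venue_adj is the nett VPV difference (own VPV minus opponent VPV);
    avg_shots is the Expected Scoring Shots for an Average Team."""
    # (4) and (8): cap actual Scoring Shots (no cap is used in practice)
    adj_opp_shots = min(shot_cap, opp_shots)
    adj_own_shots = min(shot_cap, own_shots)

    # (2)-(3): defensive performance, actual vs expected
    actual_def = avg_shots - adj_opp_shots
    expected_def = def_rating - opp_off + venue_adj / 2

    # (6)-(7): offensive performance, actual vs expected
    actual_off = adj_own_shots - avg_shots
    expected_off = off_rating - opp_def + venue_adj / 2

    # (1) and (5): Elo-style updates
    new_def = def_rating + k * (actual_def - expected_def)
    new_off = off_rating + k * (actual_off - expected_off)
    return new_off, new_def
```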

OPTIMISED MoSSBODS PARAMETERS

  • All teams start in Round 1 of season 1897 with Offensive and Defensive Ratings of 0. Teams that started in later seasons start with offensive and defensive ratings of 0 for their first game.

  • The value of k in (1) and (5) above varies according to the round in which the game is played, as follows:

    • k = 0.12 for Rounds 1 to 6 (these are the 2021 splits; in general, the first split is meant to take in about 30% of the home-and-away rounds, the second and third splits another 30% each, and the fourth split the remaining 10%)

    • k = 0.09 for Rounds 7 to 13

    • k = 0.09 for Rounds 14 to 20

    • k = 0.085 for Rounds 21 to the end of the home and away season

    • k = 0.085 for Finals

  • Having relatively larger k values in early rounds allows the ratings to adjust more rapidly to the true abilities of teams in the new season.

  • The Days to Average parameter determines the window across which a number of key values are calculated. One of those is the value for Expected Scoring Shots, which is used in (2) and (6) above and is calculated using the actual average Scoring Shots per team per game across the previous 6 years. This is the same “window” as was used in the previous version of MoSSBODS.

  • Venue Performance Values (VPVs) are, as noted earlier, calculated for each team pre-game based on some default value and their performance relative to expectations (ignoring venue effects) at that same venue across the previous 6 years. The calculation will now include up to the most recent 30 games. Where a team has played fewer than 30 games at a venue in the window, the VPV will be calculated as a weighted average of the default value (0 if the venue is in the team’s home State, -3 Scoring Shots otherwise) and the regularised average under- or over-performance at the venue (see the sketch after this list). The Mean Regularisation fraction to be used in 2021 is 0.33.

  • These VPVs are subject to a couple of adjustments in Finals.

    • In Finals other than Grand Finals, teams playing in their home State will use the standard calculated VPV for that venue, but teams playing out of State will use a VPV twice the calculated VPV. Since most teams will have a negative VPV for venues outside their home state, this will tend to increase the estimated handicap they face when playing out of State.

    • In Grand Finals, teams playing in their home State will use 0.95 times the standard calculated VPV for that venue, but teams playing out of State will use a VPV half the calculated VPV. The overall effect of this adjustment will usually be to reduce the absolute size of venue effects in Grand Finals.

  • Optimisation suggests that no Cap is required on teams’ actual Scoring Shot data. Consequently, in (4) and (8) above, no Cap is imposed.

  • Teams carry 70% of their Rating from the end of one season to the start of their next season. So, for example, a team finishing a season with a +3 Defensive Rating will start their next season with a +2.1 Defensive Rating. 

  • Sydney, in its first year, is considered to be a continuation of South Melbourne, the Western Bulldogs a continuation of Footscray, and the Brisbane Lions, Fitzroy and the Brisbane Bears are treated as three separate teams. The Kangaroos and North Melbourne are also treated as the same team regardless of the name used in any particular season.

  • The sum of all active teams’ offensive and defensive ratings during the course of a season will be zero. Where a team drops out of the competition, temporarily or permanently, a fixed adjustment is made to all of the offensive and defensive ratings of the remaining teams at the start of the season to ensure that the sum again becomes zero.

  • For those teams that missed entire seasons - for example, Geelong in 1916, 1942 and 1943 - they re-enter the competition with the same ratings as they had when they exited (adjusted for the season-to-season carryover).
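Here is a sketch of how I read the new VPV calculation. The defaults (0 in the home State, -3 Scoring Shots otherwise), the 30-game limit and the 0.33 Mean Regularisation fraction are as described above; the n/30 blending weight is my assumption about how the weighted average is formed, and the function name is mine.

```python
# A sketch of the 2021 MoSSBODS VPV calculation as I read it: a weighted
# blend of the default value and the regularised average over-performance
# at the venue. The n/30 weighting is an assumption.

def mossbods_vpv(performances, in_home_state,
                 max_games=30, default_away=-3.0, regularisation=0.33):
    """performances: actual minus expected Scoring Shots (ignoring venue
    effects) for the team's games at this venue in the 6-year window,
    most recent first."""
    recent = performances[:max_games]          # up to the 30 most recent games
    default = 0.0 if in_home_state else default_away
    if not recent:
        return default
    regularised_avg = regularisation * sum(recent) / len(recent)
    weight = len(recent) / max_games           # assumed blending weight
    return weight * regularised_avg + (1 - weight) * default
```

Under this reading, a team with no recent games at a venue sits exactly at the default, and the calculated component only carries full weight once 30 games are in the window, which is what keeps VPVs from jumping the way they could under the old threshold rule.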

EXPECTED TEAM SCORES AND BIAS CORRECTION

Once we have the pre-game offensive and defensive ratings for both teams, and their VPVs for the venue, we can calculate the expected number of Scoring Shots generated and conceded by a team as follows:

Expected Scoring Shots Generated = Expected Scoring Shots for Average Team + Own Offensive Rating - Opponent Defensive Rating + (Own VPV - Opponent VPV)/2

Expected Scoring Shots Conceded = Expected Scoring Shots for Average Team + Opponent Offensive Rating - Own Defensive Rating + (Opponent VPV - Own VPV)/2

(note that we split the VPV difference 50:50 across offence and defence)

These Scoring Shot calculations are converted to Scores by multiplying them by the average Score per Scoring Shot across the past 6 years.

The new MoSSBODS makes one final adjustment in coming up with estimated team scores: a bias adjustment. For this purpose, an average bias over the past six years is calculated (i.e. an average of actual less expected scores), separately for all designated home teams and for all designated away teams, and including only home and away games.

That average all-team bias is then added to the expected scores generated earlier if the game is a home and away contest. No adjustment is made if it is a Final.

So, in summary,

Adjusted Expected Score = (Expected Scoring Shots x 6-year Average All-Team Score per Scoring Shot) + 6-year Average All-Team Bias

The expected total score for a game is now just the sum of the Adjusted Expected Scores for the two teams.
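Putting those pieces together, a minimal sketch of the scoring calculation might look like this. The names are mine; avg_shots, points_per_shot and bias stand for the 6-year all-team averages, with bias being the home or away figure depending on the team’s designation.

```python
# A minimal sketch of turning pre-game ratings and VPVs into an adjusted
# expected score, following the formulas above.

def adjusted_expected_score(own_off, opp_def, own_vpv, opp_vpv,
                            avg_shots, points_per_shot, bias,
                            is_final=False):
    expected_shots = avg_shots + own_off - opp_def + (own_vpv - opp_vpv) / 2
    expected_score = expected_shots * points_per_shot
    # The bias correction applies only to home-and-away games
    return expected_score if is_final else expected_score + bias

# The expected game total is then just the sum of the two teams' adjusted
# expected scores.
```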

MAJOR MoSSBODS DIFFERENCES

The major differences in philosophy between MoSSBODS 2021 and MoSSBODS 2020 are:

  • The inclusion of more games in the VPV calculation, and the additional regularisation in cases where fewer than the threshold number of games have been played. The overall effect of these changes is to keep VPVs for venues in a team’s home State closer to 0, and to keep VPVs for venues outside a team’s home State closer to -3 Scoring Shots

  • Including new VPV adjustments for Finals

  • (We’ve also removed the arbitrary 0.75 Scoring Shot “bonus” that MoSSBODS 2020 applied to all designated Home teams in the Home-and-Away rounds)

These changes produce the following changes in the mean absolute error performance of MoSSBODS 2021 compared to MoSSBODS 2020 (negative values mean a lower MAE for the 2021 version):

  • All-time, all-games margin forecasts: -0.070 points per game

  • All-time, Finals margin forecasts: -0.389 points per game

  • 2000-2020, all-games margin forecasts: -0.125 points per game

  • 2000-2020, Finals margin forecasts: -0.666 points per game

THE MOSHBODS ALGORITHM

Let’s start here too with a quick refresher on the basics of MoSHBODS.

MoSHBODS is also an Elo-style team rating system that produces an offensive and a defensive rating for each team, but it has been designed so that these ratings are measured in Points rather than in Scoring Shots. So, a rating of zero means that a team is expected to generate (if we’re considering offensive rating) or concede (if we’re considering defensive rating) an “average” number of points when facing a team rated 0 on both offence and defence at a neutral venue.

In other words, as with MoSSBODS, a team rated 0 on offence and on defence is an average team when playing on a neutral venue in the context of the current season and the mix of abilities of the teams.

Now the rationale for MoSSBODS’ using a team's scoring shots rather than its score in determining ratings is the fact that a team's accuracy or conversion rate - the proportion of its scoring shots that it converts into goals - appears to be largely random, in which case rewarding above-average conversion or punishing below-average conversion would be problematic. Conversion is not, however, completely random, since, as the blog post just linked reveals, teams with higher offensive ratings, and teams facing opponents with lower defensive ratings, tend to be marginally more accurate than the average team. 

So, if better teams tend to be even slightly more accurate, maybe higher accuracy should be given some weighting in the estimation of team ratings. That was the original motivation for MoSHBODS, which uses a combination of Scoring Shots and Points in its underlying equations.

  1. New Defensive Rating = Old Defensive Rating + k x (Actual Defensive Performance – Expected Defensive Performance)

  2. Actual Defensive Performance = Expected Score for Average Team – Adjusted Opponent’s Score

  3. Expected Defensive Performance = Own Defensive Rating – Opponent’s Offensive Rating + Venue Adjustment / 2

  4. Adjusted Opponent’s Score = f x Opponent's Score if Converted at All-Team Average + (1-f) x Actual Opponent's Score 

  5. New Offensive Rating = Old Offensive Rating + k x (Actual Offensive Performance – Expected Offensive Performance)

  6. Actual Offensive Performance = Adjusted Own Score - Expected Score for Average Team

  7. Expected Offensive Performance = Own Offensive Rating – Opponent’s Defensive Rating + Venue Adjustment / 2

  8. Adjusted Own Score = f x Own Score if Converted at All-Team Average + (1-f) x Actual Own Score 

  9. Venue Adjustment = Nett VPV Difference after adjustments
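The adjusted scores in (4) and (8) are the main point of difference from MoSSBODS. Here is one plausible reading of that adjustment in Python, using the 75/25 split and the 6-year average conversion rate described in the parameters below; the function name and the exact form of the converted-at-average score are my interpretation.

```python
# A sketch of the adjusted score in (4) and (8): a blend of the team's
# actual score and what it would have scored had every Scoring Shot been
# converted at the all-team average rate (goals are worth 6 points,
# behinds 1).

def moshbods_adjusted_score(actual_score, scoring_shots, avg_conversion_rate,
                            f=0.75):
    score_at_avg_conversion = scoring_shots * (6 * avg_conversion_rate
                                               + 1 * (1 - avg_conversion_rate))
    return f * score_at_avg_conversion + (1 - f) * actual_score
```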

The parameters for MoSHBODS have been optimised across the same subset of games from the 1990 to 2019 period as was used for MoSSBODS.

OPTIMISED MoSHBODS PARAMETERS

  • All teams started in Round 1 of season 1897 with Offensive and Defensive Ratings of 0. Teams that started later in the period start with a Rating of 0 for their first game.

  • The value of k in (1) and (5) above varies according to the round in which the game is played, as follows:

    • k = 0.12 for Rounds 1 to 6

    • k = 0.08 for Rounds 7 to 13

    • k = 0.09 for Rounds 14 to 20

    • k = 0.08 for Rounds 21 to the end of the home and away season

    • k = 0.06 for Finals

  • MoSHBODS and MoSSBODS use identical splits for all seasons, and the values for k are broadly similar.

  • MoSHBODS also uses a Days to Average parameter to determine the window across which a number of key values are calculated, and, as with MoSSBODS, the optimal value turns out to be 6 years. One of those key values is the Expected Score, which is used in (2) and (6) above and is calculated using the actual average Score per team per game across the previous 6 years.

  • To convert actual to adjusted scores, MoSHBODS uses a mixture of a team’s actual score, and what it would have scored had it converted at the same average rate as teams from the past 6 years. It takes 25% of the actual score and 75% of the score that would have been achieved if all Scoring Shots were converted at that long-term average.

  • Venue Performance Values (VPVs) are calculated analogously to those for MoSSBODS, and also consider up to the most recent 30 games. The defaults are 0 for a team playing in its home State and -9 Points for a team playing outside its home State, and the Mean Regularisation fraction is 0.32.

  • The VPV adjustments for Finals are as follows (a code sketch of these adjustments appears at the end of this list):

    • In Finals other than Grand Finals, teams playing in their home State will use the standard calculated VPV for that venue, but teams playing out of State will use a VPV twice the calculated VPV. Since most teams will have a negative VPV for venues outside their home state, this will tend to increase the estimated handicap they face when playing out of State.

    • In Grand Finals, teams playing in their home State will use 0.25 times the standard calculated VPV for that venue, and teams playing out of State will also use a VPV 0.25 times the calculated VPV. The overall effect of this adjustment will usually be to reduce the absolute size of venue effects in Grand Finals.

  • Optimisation suggests that no Cap is required on teams’ actual Scoring Shot data, so none is applied before calculating the adjusted scores in (4) and (8) above.

  • Teams carry 65% of their Rating from the end of one season to the start of their next season. So, for example, a team finishing a season with a +10 Defensive Rating will start their next season with a +6.5 Defensive Rating.

  • MoSHBODS, like MoSSBODS, treats Sydney, in its first year, as a continuation of South Melbourne, the Western Bulldogs as a continuation of Footscray, and treats the Brisbane Lions, Fitzroy and the Brisbane Bears as three separate teams. The Kangaroos and North Melbourne are also treated as the same team regardless of the name used in any particular season.

  • The sum of all active teams’ offensive and defensive MoSHBODS ratings during the course of a season is, as for MoSSBODS, also zero. Similar adjustments are made at the start of each season to ensure this is the case for all active teams.

  • MoSHBODS also treats the entrance and exit of teams in the same way as MoSSBODS.
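As a small sketch of the Finals adjustments just described, the MoSHBODS version might be coded as follows. The function is mine; MoSSBODS differs only in the Grand Final factors, where it uses 0.95 for the home-State team and 0.5 for the out-of-State team.

```python
# A sketch of the MoSHBODS Finals VPV adjustments described above.

def moshbods_finals_vpv(calculated_vpv, in_home_state, is_grand_final):
    if is_grand_final:
        return 0.25 * calculated_vpv     # both home- and out-of-State teams
    # Non-Grand-Final Finals: out-of-State teams have their VPV doubled
    return calculated_vpv if in_home_state else 2.0 * calculated_vpv
```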

EXPECTED TEAM SCORES AND BIAS CORRECTION

Once we have the pre-game offensive and defensive ratings for both teams, and their VPVs for the venue, we can calculate a team’s expected score for and against as follows:

Expected Score For = Expected Score for Average Team + Own Offensive Rating - Opponent Defensive Rating + (Own VPV - Opponent VPV)/2

Expected Score Against = Expected Score for Average Team + Opponent Offensive Rating - Own Defensive Rating + (Opponent VPV - Own VPV)/2

(note that we here too split the VPV difference 50:50 across offence and defence)

MoSHBODS, like MoSSBODS, makes one final adjustment in coming up with estimated team scores: a bias adjustment. For this purpose, an average bias over the past six years is calculated (an average of actual less expected scores), separately for all designated home teams and for all designated away teams, and including only home and away games.

That average all-team bias is then added to the expected scores generated earlier if the game is a home and away contest. No adjustment is made if it is a Final.

So, in summary,

Adjusted Expected Score = Expected Score + 6-year Average All-Team Bias

The expected total score for a game is, as with MoSSBODS, just the sum of the Adjusted Expected Scores for the two teams.

MAJOR MoSHBODS DIFFERENCES

The major differences in philosophy between MoSHBODS 2021 and MoSHBODS 2020 are:

  • The inclusion of more games in the VPV calculation, and the additional regularisation in cases where fewer than the threshold number of games have been played. The overall effect of these changes is to keep VPVs for venues in a team’s home State closer to 0, and to keep VPVs for venues outside a team’s home State closer to -9 Points

  • Including new VPV adjustments for Finals

These changes produce the following changes in the mean absolute error performance of MoSHBODS 2021 compared to MoSHBODS 2020:

  • All-time, all-games margin forecasts: -0.059 points per game

  • All-time, Finals margin forecasts: -0.360 points per game

  • 2000-2020, all-games margin forecasts: -0.121 points per game

  • 2000-2020, Finals margin forecasts: -0.487 points per game