Modelling the Total Score of an AFL Game

Over the eight seasons from 2006 to 2013 an average AFL game produced about 185 points with a standard deviation of around 33 points. In about one quarter of games the two teams between them mustered 165 points or fewer, while in another quarter they racked up 207 points or more.

The distribution of scores has changed a little from season to season, with 2007 and 2008 being particularly high-scoring years as the empirical cumulative distribution functions (CDFs) below depict.

In today's blog I'll be attempting, for the first time here on MatterOfStats, to create a simple model to explain the total points scored in an AFL game during that period.

To build that model we first need to ask ourselves what characteristics of an AFL contest are likely to have a bearing on the number of points scored in it. I came up with the following list:

  • The quality of the teams - their offensive, defensive and overall capabilities, taken individually or assessed relative to one another
  • The game venue
  • The portion of the season in which the game took place
  • The weather
  • The players on each team

It's clear from the chart above that total scores have also varied from season to season. I'm going to assume that this variability was a function of differences in the characteristics just listed rather than the result of any other, season-specific factors (for example relative lenience in rule interpretation or prevailing trends in team strategies). To the extent that there are such season-specific factors, the models that I build ignoring them will be diminished in their predictive ability.

I have data for all but the last two of the characteristics on my list. Speaking of which ...

THE DATA

For this analysis I've used the data for the entirety of seasons 2006 to 2013, with the target variable the total final scores in each game, and with the following (potential) regressors:

  • Own and Opponent MARS Ratings (the MARS Rating of the teams where "Own" is the home team and "Opponent" is the away team).
  • Own and Opponent Venue Experience (the number of games played during the past 12 months at the same venue as the current game involving the team in question)
  • Own and Opponent Price (the TAB Bookmaker pre-game prices of the teams)
  • Own and Opponent Points Scored Last X (the average score across a team's last X games; note that we allow this averaging to span seasons and to include Finals). I included values of X from 2 to 8, both here and for the variables below (a sketch of the averaging appears after this list)
  • Own and Opponent Points Conceded Last X (as for Points Scored but based on the average points conceded in those games)
  • Own and Opponent Form Last X (the team's average change per game in MARS Rating across the last X games)
  • Round Number (the number of the round, within the current season, in which the game was played)
  • Venue (a categorical variable)
  • Interstate status (a +1/0/-1 variable reflecting the interstate nature of the clash from the point of view of the home team)
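
By way of illustration, here's a minimal pandas sketch of how those last-X averages might be computed. The frame layout and column names (team, points_for, points_against) are my own assumptions, not the actual MatterOfStats data pipeline.

```python
import pandas as pd

# Hypothetical game-level data: one row per team per game, in chronological
# order, spanning seasons and including Finals.
games = pd.DataFrame({
    "team":           ["Geelong"] * 10,
    "points_for":     [112, 95, 130, 88, 101, 122, 76, 140, 99, 115],
    "points_against": [80, 102, 77, 95, 110, 68, 124, 82, 97, 90],
})

# For each X from 2 to 8, average the team's scores over its previous X
# games. shift(1) ensures only games played *before* the current one count.
for x in range(2, 9):
    grouped = games.groupby("team")
    games[f"scored_last_{x}"] = grouped["points_for"].transform(
        lambda s: s.shift(1).rolling(x).mean())
    games[f"conceded_last_{x}"] = grouped["points_against"].transform(
        lambda s: s.shift(1).rolling(x).mean())

print(games[["points_for", "scored_last_2", "scored_last_8"]])
```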

A number of these regressors were transformed in the final model:

  • Own and Opponent MARS Ratings were transformed into a single variable, Absolute MARS Difference, defined as | Own MARS - Opponent MARS |. It's a measure of the absolute difference in the quality of the teams as assessed by MARS.
  • Own and Opponent Prices were transformed into a single variable Favourite:Underdog Odds Ratio, which is defined as (Favourite Price - 1)/(Underdog Price - 1). It takes on values from near 0, when there is a very strong favourite, to exactly 1 when there are equal favourites. (These transformations are sketched in code after this list.)
  • Round Number was converted to a categorical variable roughly splitting the home-and-away season into thirds and treating the Finals as a distinct subset of games. Specifically, the categorical variable takes on values of:
    • "1st" if the game took place in Rounds 1 to 8
    • "2nd" if the game took place in Rounds 9 to 15
    • "3rd" if the game took place in Rounds 16 to the end of the Home and Away season
    • "Final" if the game was a Final

THE RESULTS

After a number of somewhat arbitrary iterations and refinements, I settled on a subset of regressors that produced the model summarised below.

For the moment, ignore the coefficients on the right and focus solely on those immediately next to the variable names.

The first thing to notice is that the model explains about 14% of the variability in Total Scores across the eight seasons. To obtain an estimate of the generalised variability explained by the model (ie what R-squared we might reasonably expect post-sample) I used 1,000 replicates of 10-fold cross-validation, which gave me a figure of 11.2% with a 4.4% standard error. Assuming Normality we might then expect to explain between about 7% and 15% of the variability in total scores in two-thirds of future seasons using this model, and between about 2.5% and 20% in 95% of future seasons. (I also calculated a generalised RMSE for the model, which came in at 31.3 points per game with a 1.5 points standard error.)
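
If you wanted to reproduce that procedure, the mechanics might look something like the scikit-learn sketch below. The regressors and Total Scores are random placeholders (so the printed figures won't match those above), and the use of plain linear regression is an assumption on my part.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1472, 10))        # placeholder regressors
y = 185 + 33 * rng.normal(size=1472)   # placeholder Total Scores

# 1,000 replicates of 10-fold cross-validation.
cv = RepeatedKFold(n_splits=10, n_repeats=1000, random_state=1)
r2 = cross_val_score(LinearRegression(), X, y, scoring="r2", cv=cv)
rmse = -cross_val_score(LinearRegression(), X, y,
                        scoring="neg_root_mean_squared_error", cv=cv)

# Average within each replicate, then summarise across the 1,000 replicates.
r2_rep = r2.reshape(1000, 10).mean(axis=1)
rmse_rep = rmse.reshape(1000, 10).mean(axis=1)
print(f"generalised R-squared: {r2_rep.mean():.3f} (sd {r2_rep.std():.3f})")
print(f"generalised RMSE: {rmse_rep.mean():.1f} (sd {rmse_rep.std():.1f})")
```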

Interpreting some of the model coefficients is revealing:

  • All other things being equal, the larger the difference in the Rated ability of the two teams, the higher the expected Total Score. This difference contributes to the expected Total Score in two ways:
    • via the 0.09 coefficient towards the top of the table, which tells us that every 11 points of Rating Difference is worth about a point of total scoring.
    • via the 0.42 coefficient towards the bottom of the table, which is an interaction variable where we multiply the absolute Ratings difference by the Favourite:Underdog Odds Ratio. Because large Ratings differences will tend to be associated with strong favourites and hence small Favourite:Underdog Odds Ratios, and small Ratings differences will tend to be associated with the opposite in terms of the Odds Ratio, this variable's range tends to be constrained. In fact, its median value is only 2.43, which means that the contribution to the Total Score from this variable is less than 0.42 x 2.43 or about 1 point half the time. (A worked example follows this list.)
  • Historical evidence that the participating teams score and concede points at high levels is evidence for a higher expected Total Score in the current game. The largest effect comes from the average rate at which the Own (ie home) team has conceded points in its last 8 games: for every 10 points added to that average, the expected Total Score rises by about 4 points.
  • Relative to the expected Total Scores for games played in Rounds 1 to 8 of a season:
    • games played in Rounds 9 to 15 tend to have slightly lower expected Total Scores - 2.9 points per game less, though the difference is not statistically significant
    • games played as part of the regular Home and Away season numbered 16 or higher tend to have higher expected Total Scores - almost 4 points higher and a statistically significant difference
    • games played as part of the Finals series tend to have lower expected Total Scores - about 9 points lower and also statistically significant
  • Relative to games played at the MCG, the expected Total Scores for games played at the following grounds were different to a statistically significant extent:
    • Aurora Stadium (15 points lower)
    • Cazalys Stadium (almost 50 points lower)
    • Docklands Stadium (8 points higher)
    • Football Park (11 points lower)
    • Gold Coast Stadium (9 points lower)
    • Manuka Oval (15 points lower)
  • Relative to games played at the MCG, the expected Total Scores for games played at the following popular grounds were different but not to a statistically significant extent:
    • Kardinia Park (4 points lower)
    • SCG (6 points lower)
    • Stadium Australia (7 points lower)
    • Subiaco (almost 5 points lower)
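
To make the two Ratings-difference pathways concrete, here's the arithmetic for a hypothetical game, using the 0.09 and 0.42 coefficients discussed above (the prices and Ratings are invented):

```python
# Hypothetical game: home team Rated 10 points above the away team,
# with a $1.50 favourite facing a $2.60 underdog.
abs_mars_diff = 10.0
odds_ratio = (1.50 - 1) / (2.60 - 1)             # about 0.31

direct = 0.09 * abs_mars_diff                    # about 0.9 points
interaction = 0.42 * abs_mars_diff * odds_ratio  # about 1.3 points

print(f"Contribution to expected Total Score: {direct + interaction:.1f} points")
```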

Interpreting the practical relevance of the model coefficients is aided by an understanding of the range and distributions of each regressor, for which purpose I've prepared the table that appears at right. It provides the minimum, lower quartile, median, mean, upper quartile and maximum values for all regressors across the entire 2006 to 2013 period.

I've also created empirical CDFs for the section of the season in which the game took place and for the venues at which at least 40 games were played during the period.

On the left we can see the clearly different profiles of Total Scores for games played at different points in the season. The median Total Scores, for example, are lowest during the Finals - about 10-15 points per game lower than games played in the latter portions of the home-and-away season.

On the right we can see the distinctive profiles of the CDFs for each major venue. Focussing again on the medians we see that Docklands is associated with generally higher Total Scores and that the MCG, Gabba and Kardinia Park form the next logical tier. Behind these venues are, in order, Gold Coast Stadium, Subiaco, SCG and Football Park.

QUANTILE REGRESSION

I've employed the quantile regression technique here on MatterOfStats on a few occasions, originally in this post from March of this year where I used it to create a CDF for a game's expected final margin. You can find more details about the quantile regression technique in that blog, but for now all you need to understand is that the method allows us to fit a regression surface for some quantile q in the 0 to 1 range such that we can expect the target variable - here, the Total Score - to fall below that surface about 100q% of the time.

So, for example, if we fit a quantile regression for the 30th percentile, we're finding the coefficients for a regression surface that, when used with the regressor values for each fitted game, provide estimates of the Total Score below which the actual score should fall only 30% of the time. By fitting a number of such regressions for a range of different percentiles we can define a CDF that we can quantify for every game.
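
For those curious about the mechanics, here's a minimal statsmodels sketch of fitting such regressions; the data are random placeholders and the two-regressor formula is illustrative, not the actual model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "total_score":   185 + 33 * rng.normal(size=500),
    "abs_mars_diff": rng.uniform(0, 40, size=500),
    "odds_ratio":    rng.uniform(0.05, 1.0, size=500),
})

# A regression surface below which the Total Score should fall about
# 30% of the time.
fit30 = smf.quantreg("total_score ~ abs_mars_diff + odds_ratio", df).fit(q=0.3)
print(fit30.params)

# Repeating for the 10th through 90th percentiles gives, for any game,
# nine fitted values - a discretised CDF of its Total Score.
fits = {q: smf.quantreg("total_score ~ abs_mars_diff + odds_ratio", df).fit(q=q)
        for q in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9)}
```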

The set of nine columns of numbers in the table above provides the fitted coefficients for such a Total Score CDF for percentiles from the 10th to the 90th in increments of 10.

In and of themselves these coefficients don't mean a lot. The interest comes when you apply them to actual games. If we do that for the 1,472 games to which the quantile regression was fitted we find that the actual Total Score came in under the fitted value for the 10th percentile 9.7% of the time, came in under the fitted value for the 20th percentile 20.0% of the time, and so on as per the row of results provided at the foot of the table. The closer a percentage is to the relevant percentile, the better its fit - or "calibration" if you like. All nine quantile regressions shown here fit well - although this is as it should be, since the fitting process attempts to achieve exactly this result.
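
That calibration check amounts to the following few lines, continuing with the hypothetical fits dictionary and data frame from the sketch above:

```python
# For each nominal percentile, count how often the actual Total Score
# fell below the fitted value; good calibration means the observed
# fraction sits close to the nominal figure.
for q, fit in sorted(fits.items()):
    fitted = fit.predict(df)
    observed = (df["total_score"] < fitted).mean()
    print(f"nominal {q:.0%}: observed {observed:.1%}")
```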

To gain an understanding of how these quantile regression results work in practice I used them to create CDFs for every game played in the first round of the seasons 2006 to 2013. 

The colours in this chart are used purely to allow you to follow a single CDF and have no secondary meaning. What's interesting to me about these fitted CDFs is how individually straight they are, suggesting a near linear relationship between incremental Total Score and incremental probability, and how spread out they are when viewed as a whole. The median Total Scores - which is where the CDFs cross the 50% cumulative probability line - range from a bit over 160 to about 215 points.

If we create similar CDFs but this time for games played in Round 6 (left) or Round 22 (right) of the eight seasons, we obtain the following charts.

The overall character of these charts is similar to what we saw above when we plotted the fitted results for Round 1 games. Note that the CDFs for a handful of games have gone rogue and breached the requirement that they be monotonically increasing. This is generally caused by the relatively extreme nature of the regressors for the offending game - for example, a very large absolute MARS difference. If you were using these models in practice you'd need to make some adjustments to the curves for such games, as sketched below.
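
One simple adjustment for such rogue games is rearrangement: sort a game's fitted percentile values so the implied CDF is forced to be non-decreasing, which leaves well-behaved games untouched. A sketch, assuming the nine fitted values sit in an array:

```python
import numpy as np

# Fitted Total Scores for the 10th through 90th percentiles of a
# hypothetical game whose 60th and 70th percentiles have crossed.
fitted_quantiles = np.array([152, 164, 173, 181, 189, 199, 196, 208, 221])

# Sorting restores monotonicity - the simplest form of quantile
# rearrangement.
repaired = np.sort(fitted_quantiles)
print(repaired)
```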

SUMMARY AND CONCLUSION

What we've found then is that we can explain about one-eighth of the variability in Total Scores using information only about the teams' relative MARS Ratings, their TAB prices, the venue at which the game is taking place, and the historical scoring behaviour of the competing teams.

It's also possible to use those same regressors to perform quantile regression and to use the resulting equations to build a CDF for a particular game's Total Score.

The best test of these regression models would be to use them for games in the current season, since none of these games have been used in fitting them. In a future blog I'll do exactly that.