A Review of AFL Player Rating Data

The fine folk who brought us the fitzRoy R package have been diligently collecting historical AFL Player Rating data with a view to potentially including it in an upcoming version of the package, and asked me to take a look at what they have so far, which spans the period from 2012 to the end of 2019.

AFL PLAYER RATING PROFILES BY TEAM

Across the eight seasons, the average rating has been about 9.5 points per player per game, though the distribution of scores has been highly right-skewed.

The lowest recorded score is -8.3 points for Brandon Jack’s performance for Sydney against GWS in 2013, which included a 50% disposal efficiency (from 10 disposals) and 5 clangers. The highest is 50.5 points for Lance Franklin’s performance for Hawthorn against North Melbourne in 2012 where he kicked 13.4, gained 672 metres, had 23 disposals and 70% disposal efficiency.

Overall, teams have had fairly similar rating profiles, though quite small differences in average ratings translate into quite large differences in expected margins.

On average, an on-the-day difference of 1 Rating Point per Player is roughly worth 20 points in terms of margin, as you can see from the chart at right.

So, for example, Gold Coast’s average of 8.82 Rating Points per player per game translates into about 4-and-a-half goals fewer per game than Hawthorn’s 10.2 Rating Points per player per game.
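To make that conversion concrete, here is a minimal sketch in R of the arithmetic, using the approximate 20-points-per-Rating-Point figure from the chart and the two team averages quoted above.

# Rough conversion of a Rating Point differential into an expected margin.
# The 20-points-per-Rating-Point figure is the approximate slope from the chart.
points_per_rating_point <- 20

gold_coast_avg <- 8.82   # Rating Points per player per game
hawthorn_avg   <- 10.2

expected_margin_gap <- (hawthorn_avg - gold_coast_avg) * points_per_rating_point
expected_margin_gap / 6  # convert points to goals: roughly 4.5 goals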

(I note that data for 22 team line-ups, or 484 player performances, is currently missing from the available fitzRoy data, though this should have only a tiny effect on any of the analyses here.)

COMPARISON WITH SUPERCOACH SCORES

We’ve looked at player data here on MoS before using SuperCoach scores, which are also available via the fitzRoy package. Firstly, there was this fairly simplistic post in October 2018, which was followed up by this post in early 2019 where the SuperCoach scores were included in a predictive model with MoSHBODS team rating data.

Let’s start then by comparing SuperCoach and AFL Player Rating data directly.

We find a quite strong linear relationship, as we might expect, but we also find that the shared variability is only 60% of the total. In other words, 40% of the variability in AFL Player Ratings can’t be explained by variability in SuperCoach scores. Roughly speaking, 1 Rating Point is equal to about 7 SuperCoach points.
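If you wanted to reproduce that comparison yourself, a sketch of the regression in R might look like the following, assuming you’ve assembled a data frame - here called player_games, with hypothetical column names rating_points and supercoach_score - containing one row per player per game.

# Sketch of the player-level comparison (column and object names are illustrative).
fit <- lm(rating_points ~ supercoach_score, data = player_games)

summary(fit)$r.squared   # shared variability: about 0.60 in the data described above
coef(fit)                # slope of roughly 1/7, i.e. about 7 SuperCoach points per Rating Point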

SuperCoach scores, we know, are based on Champion Data “rankings” points, but include a few secret sauce items as alluded to in the paragraph at right, which appeared in the Herald Sun.

Essentially, it seems that SuperCoach scores are mostly basic player statistic counts weighted for the context in which each of them was recorded.

I went looking for a similar explanation of the AFL Player Rating system on the AFL website, but any such information seems to have been removed in the latest upgrade of the site.

During the search I found the very helpful extract shown at right from a paper entitled Validation of the Australian Football League Player Ratings, which I’d highly recommend you read as a more erudite adjunct to this piece.

As you can see, the Player Ratings system takes a highly contextual approach to the acts a player performs and the contribution they make to generating points.

That said, it would be interesting to know just how much of the variability in Player Ratings can be explained by simple player-level counts of key metrics, with only minimal information about the context in which any particular player’s statistic was accumulated (eg we know if a mark or possession was contested, but that is all we know about it).

The fitzRoy data includes 48 such metrics, which in aggregate explain about 83% of the variability in Player Ratings, but analysis suggests we can explain over 79% with just 22 of those metrics.
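As a rough illustration of that exercise, the model is just an ordinary linear regression of per-game Rating Points on the count metrics. The column names below are purely illustrative stand-ins for the fitzRoy fields, and only some of the 22 metrics are listed.

# Illustrative regression of per-game Rating Points on simple count metrics.
# Column names are hypothetical; only a subset of the 22 metrics is shown.
metrics_22 <- c("metres_gained", "effective_disposals", "ground_ball_gets",
                "goals", "one_percenters", "contested_possessions",
                "clangers", "turnovers", "rebound_50s", "spoils", "behinds")
                # ... plus the remaining metrics from the table at right

fit_22 <- lm(reformulate(metrics_22, response = "rating_points"),
             data = player_games)

summary(fit_22)$r.squared   # about 0.79 for the 22-metric model described above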

The table at right records what those 22 metrics are, and their average impact per instance and per game on an average player’s final rating points.

We see, for example, that every 100 metres gained adds about 1 point to a player’s rating, which, given that an average player gains just over 250 metres in the course of a game, amounts to about 2.5 points per player per game.

Other metrics that have high average contributions are effective disposals, ground ball gets, goals, one percenters, and contested possessions.

At the very bottom of the table we have the acts that are plainly detrimental to a team - clangers and turnovers - and which, appropriately, serve to reduce a player’s rating in a game.

Just above them are two acts associated with defensive plays, rebounds and spoils, which also have a net negative effect on a player’s rating, presumably because, on average, they reduce a team’s field equity and often lead to scores by the opposition.

There at the bottom, too, are behinds, which also, on average, reduce a player’s rating for a game. This is because, on average, they reduce a team’s equity relative to what it was before the kick was taken (when a goal was, perhaps, the more likely outcome, depending on where the kick was taken from and under what circumstances).

Overall, as you can see, the fitted model accounts for 79.4% of the total variability in AFL Player Ratings across the eight seasons.

Those same 22 metrics explain over 90% of the variability in SuperCoach scores across the same games, as we can see in the table at left.

This table also makes clear how much the SuperCoach methodology emphasises effective disposals, contested possessions, tackles, metres gained, goals and one percenters, which collectively contribute about 70 of the roughly 75 SuperCoach points an average player scores per game.

A number of acts have signs opposite to what you might logically expect - for example, turnovers are positively correlated with a player’s SuperCoach score once all the other metrics in the model are accounted for. This presumably arises because these acts are not explicitly included in the SuperCoach calculation, and their natural pattern of correlation with the included metrics leaves them carrying the coefficients that they do when they are added to the regression.
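For completeness, the analogous SuperCoach regression is the same sketch with a different response, reusing the (hypothetical) player_games data frame and metrics_22 vector from the earlier snippet.

# Same 22 (hypothetical) metric columns, different response.
fit_sc <- lm(reformulate(metrics_22, response = "supercoach_score"),
             data = player_games)

summary(fit_sc)$r.squared   # a little over 0.90 in the data described above
coef(fit_sc)                # note the counter-intuitive signs on some of the acts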

INCORPORATING PLAYER RATINGS IN A PREDICTIVE MODEL

In the second of the blog posts linked earlier, we evaluated the contribution that forecasted SuperCoach scores made to forecasts of game margins when used in combination with the pre-game team rating estimates of MoSHBODS.

I’ve repeated that analysis here but this time using forecast Player Ratings. Forecasts of these Ratings were created in the same way as they were for SuperCoach scores, and involved exponential smoothing of each player’s historical scores.

(For the technically minded amongst you, for Player Ratings data I found that an alpha of 0.04 was optimal for forming forecasts. I include, at most, a player’s last 100 games, and regularise his forecast rating towards 0 until such time as he has played a minimum of 15 games.)
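In code, that forecasting rule might look something like the sketch below, where the initialisation of the smoothed value and the exact form of the shrinkage towards 0 are illustrative choices rather than a precise specification.

# A minimal sketch of the forecasting rule described above: simple exponential
# smoothing of a player's past game ratings with alpha = 0.04, using at most
# his last 100 games, and shrinking the forecast towards 0 until he has
# played at least 15 games.
forecast_player_rating <- function(past_ratings, alpha = 0.04,
                                   max_games = 100, min_games = 15) {
  n <- length(past_ratings)
  if (n == 0) return(0)

  # keep only the most recent max_games scores (oldest first)
  ratings <- tail(past_ratings, max_games)

  # simple exponential smoothing, initialised at the first retained score
  smoothed <- ratings[1]
  for (r in ratings[-1]) {
    smoothed <- alpha * r + (1 - alpha) * smoothed
  }

  # one plausible way to regularise short histories towards 0
  if (n < min_games) {
    smoothed <- smoothed * n / min_games
  }
  smoothed
}

# Example: a made-up 10-game history for a young player
forecast_player_rating(c(4, 7, 6, 9, 5, 8, 10, 6, 7, 9))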

The performance of four different models on an 827-game training set is shown at right; the four specifications, sketched in code after the list below, are:

  1. MoSH Alone Model, which includes only the difference in the team’s pre-game MoSHBODS Combined Ratings

  2. MoSH Plus SuperCoach Model, which includes the difference in the team’s pre-game MoSHBODS Combined Ratings, and the difference in the team’s pre-game forecast SuperCoach scores for the named teams

  3. MoSH Plus Player Ratings Model, which includes the difference in the team’s pre-game MoSHBODS Combined Ratings, and the difference in the team’s pre-game forecast Player Ratings for the named teams

  4. MoSH Plus SuperCoach Plus Player Ratings Model, which includes the difference in the team’s pre-game MoSHBODS Combined Ratings, the difference in the team’s pre-game forecast SuperCoach scores, and the difference in the team’s pre-game forecast Player Ratings for the named teams
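Here is a rough sketch of those four specifications, with simple linear regressions standing in for the actual model form used, and assuming a game-level data frame - games - with hypothetical columns for the game margin and each pre-game differential.

# Sketch of the four model specifications (object and column names are illustrative):
#   moshbods_diff, supercoach_fc_diff, player_rating_fc_diff, margin
m1 <- lm(margin ~ moshbods_diff, data = games)
m2 <- lm(margin ~ moshbods_diff + supercoach_fc_diff, data = games)
m3 <- lm(margin ~ moshbods_diff + player_rating_fc_diff, data = games)
m4 <- lm(margin ~ moshbods_diff + supercoach_fc_diff + player_rating_fc_diff,
         data = games)

# mean absolute error on the training set for each model
sapply(list(m1, m2, m3, m4), function(m) mean(abs(residuals(m))))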

For the purposes of evaluating the usefulness of Player Ratings, the most relevant comparison is between models 2 and 3, which sees model 3 superior in terms of mean absolute error (MAE) in 5 of the 8 seasons and, overall, by about 0.2 points per game.

Still, as regular readers here will know, performance on a training set is of, at best, passing interest, and can sometimes be seriously misleading about a model’s future performance.

What matters far more is performance on a test set, and the results for those same models on such a test set are shown at left.

Again the model including Player Rating differentials outperforms that including SuperCoach score differentials in 5 of the 8 seasons, and the overall superiority comes in at 0.15 points per game.

Interestingly, overall, a model with only MoSHBODS and Player Ratings differentials does as well as a model with MoSHBODS, Player Ratings and SuperCoach score differentials. In that sense, the SuperCoach score differentials add nothing to the model.

SUMMARY

AFL Player Ratings data would clearly be a positive inclusion in any future version of the fitzRoy package, and regular updates of the latest Ratings would be especially valuable.

These Ratings clearly provide predictive value over and above that which can be extracted from raw player statistics or from SuperCoach scores.

NEXT STEPS

In a future blog I’m hoping to investigate the relationships between AFL Player Ratings, player statistics and player position.