Points Scoring Superiority and Victory Chances in VFL/AFL Football

In a previous post I defined eras (ie non-overlapping sets of consecutive seasons) in VFL/AFL history based on particular aspects of the scoring in each season: the average points per game, the average winning score, the average losing score, the average victory margin, and the overall scoring shot conversion rate.

I identified those eras using the R changepoint package, which provides a statistical technique for finding them based on the penalised maximisation of a log likelihood function (see pages 2 and 3 of this PDF).

For today's post I'm also going to find eras by employing that same package, this time using as the metric for each season the fitted optimal Pythagorean Expectation Exponent. I wrote about the notion of Pythagorean Expectation and its application to VFL/AFL (and NRL) in a post earlier this year. Briefly, the Pythagorean approach fits a model to the set of all teams' winning percentages across an entire home-and-away season as a function of their scoring performances.
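Specifically, writing PF for a team's points scored and PA for its points conceded across the season, the model takes the standard Pythagorean form:

\[
\text{Expected winning percentage} = \frac{PF^{k}}{PF^{k} + PA^{k}} = \frac{1}{1 + \left(PA/PF\right)^{k}}
\]

A team's expected winning percentage therefore depends only on the exponent k and on its ratio of points conceded to points scored.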

The model, despite including only a single parameter, k, is remarkably flexible and has, post hoc, explained 70% or more - sometimes much more - of the variability in team winning percentages year after year, solely as a function of the ratio of their points conceded to points scored across the entire season.
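For concreteness, here's a minimal sketch of how an optimal k might be fitted for a single season by least squares. The data frame season_df and its columns points_for, points_against and win_pct are hypothetical names chosen for illustration, not taken from my actual code:

# sum of squared errors between actual winning percentages and those
# predicted by the Pythagorean model for a candidate exponent k
sse <- function(k, season_df) {
  predicted <- 1 / (1 + (season_df$points_against / season_df$points_for)^k)
  sum((season_df$win_pct - predicted)^2)
}

# search for the exponent that minimises the squared error
optimal_k <- optimize(sse, interval = c(1, 20), season_df = season_df)$minimum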

For one interpretation of the Pythagorean model, the equation for which is shown above, we conceptualise it as making a probabilistic statement about the likelihood of a team winning given its expected scoring superiority in a game, with the value of k quantifying the precise relationship between those two entities. Under this interpretation we can talk of higher values of the exponent k decreasing the victory probability of a team for any given expected ratio of points conceded to points scored greater than 1, and increasing the victory probability of a team for any expected ratio of points conceded to points scored less than 1. Lower values of the exponent have the opposite effect. (A team with an expected points conceded to points scored ratio of 1 always has a 50% victory probability, regardless of the value of k.)

To see this relationship between k and victory probability, consider first a team expected to concede only 80% of the points that it scores (ie a superior points-scoring team) playing a team expected (by definition) to concede 125%, or 1/0.8, of the points that it scores. In a season with an Exponent of 3 the superior team would be expected to win about 66% of the time, while in a season with an Exponent of 4 that team would instead be expected to win about 71% of the time. Were the Exponent instead 2, the superior team would be expected to win only about 61% of the time. We'll come to a discussion of why Exponents might vary from season to season and era to era at the end of this blog.
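These figures are straightforward to reproduce. Here's a tiny R helper - pyth_prob is my own label, not a function from any package - that implements the equation above:

# victory probability under the Pythagorean model for a team with a given
# expected ratio of points conceded to points scored, for exponent(s) k
pyth_prob <- function(ratio, k) 1 / (1 + ratio^k)

round(pyth_prob(0.8, c(2, 3, 4)), 2)
# 0.61 0.66 0.71 - the 61%, 66% and 71% quoted above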

For now though, in summary we can say that higher values of k are better for superior teams in the sense that they enjoy a higher victory probability for a given level of points-scoring superiority and that lower values of k are, conversely, worse for superior teams. Given that, it seems reasonable to consider the fitted value of k for a season as describing something useful about that season and as therefore being a legitimate basis on which to define eras.

As I discussed in the previous post, the process of defining eras is part science, part art, and here it turned out that assuming the k's came from an Exponential rather than a Normal distribution produced what were, to me, more acceptable results.

Making this assumption and adopting a penalty value of 0.05, the call was

cpt.meanvar(ts(results_AFL$Exponent, start = 1897), penalty = "Manual", pen.value = 0.05, method = "PELT", test.stat = "Exponential", class = TRUE, param.estimates = TRUE)

which yielded the solution summarised below.
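To recover the era boundaries and the per-segment parameter estimates from that call, we can store the fitted object and use the package's accessor functions - a sketch, assuming the result is saved as fit:

library(changepoint)

fit <- cpt.meanvar(ts(results_AFL$Exponent, start = 1897),
                   penalty = "Manual", pen.value = 0.05,
                   method = "PELT", test.stat = "Exponential",
                   class = TRUE, param.estimates = TRUE)

# cpts() returns the changepoint locations as indices into the series; since
# the series starts in 1897, adding 1896 gives the final season of each era
1896 + cpts(fit)

# the parameter estimates for each segment under the Exponential model
param.est(fit)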

Using that configuration we end up with 12 eras, half of which were completed prior to the end of WWII:

  1. 1897-1903
  2. 1904-1909
  3. 1910-1916
  4. 1917-1919
  5. 1920-1932
  6. 1933-1941
  7. 1942-1971
  8. 1972-1982
  9. 1983-1993
  10. 1994-1997
  11. 1998-2009
  12. 2010-2013

(note that we use only the data for full seasons and so don't include the 2014 season in this analysis).

Whilst this solution does posit a number of quite short eras - seven of the eras are shorter than 10 years - the data appear to support that reading: in many cases, the difference in the average Exponent between adjacent eras is quite substantial.

It's interesting to note that two of the three most-recent eras, all of which have come after the arrival of the modern draft, have been associated with values of k lower than in many of the eras that went before. This implies that a given level of points-scoring superiority has translated into lower victory probabilities than has been the case historically. Put another way, a team expected to score, say, 10% more points than its opponent is less likely to win in the current era than it would have been in the previous era.

To be clear though, we're only talking about quite small differences in probability, because the change in the average Exponent for the successive eras is only from about 4.2 to 3.6. For those values, the difference in the victory probability of a team expected to score 10% more points than its opponent is only about 1.4% points - from 59.9% to 58.5%.

Somewhat more significant are the differences between, say, the current era with its average Exponent of 3.6, and the era from 1972 to 1982 with its average Exponent of 5.0. In that earlier era, a team expected to score 10% more points than its opponent would enjoy, according to the Pythagorean model, a victory probability of 61.7%, which is over 3% points higher than it would enjoy in the current era.
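Using the hypothetical pyth_prob() helper from earlier, the probabilities quoted in the previous two paragraphs for a team expected to score 10% more points than its opponent (an expected points conceded to points scored ratio of 1/1.1) come out directly:

round(100 * pyth_prob(1 / 1.1, c(3.6, 4.2, 5.0)), 1)
# 58.5 59.9 61.7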

WHY MIGHT PYTHAGOREAN EXPONENTS VARY FROM ONE SEASON OR ERA TO THE NEXT?

As noted earlier, the Pythagorean Exponent, k, for a given season quantifies the trade-off between a team's points-scoring superiority, as measured by its expected ratio of points conceded to points scored, and its victory probability.

I can think of two reasons why a team with a given level of expected points-scoring superiority might fare differently in different seasons.

  1. Because the average level of scoring in the season was different. This would mean that a given expected ratio of points conceded to points scored would translate into a different expected margin of victory. For example, a team with an expected ratio of 0.8 playing in a season where the average number of points scored in a game was 200 would be expected to win by about 111 to 89 (ie 22 points). In a season where the average total score was 175 points, a team with that same expected points-scoring ratio would be expected to win by about 97 to 78 (ie 19 points). We know - only too well - that there's a random element to every game of football, so a team expected to win by 19 points will logically carry a lower victory probability than a team expected to win by 22 points, all other things being equal.
  2. Because the variability of scores around their expected values was different. Higher levels of random variability in scoring make any expected victory margin less "safe" because there's a greater likelihood that the random variability will swamp the expected points-scoring superiority. Consider again the team in point 1 above which is expected to win by 22 points. In a season where actual victory margins behaved like Normally distributed random variables with a mean equal to their expected value and a standard deviation of 36 points, that team's victory probability would be about 73%. In a season where the standard deviation was, instead, 30 points, the victory probability would be about 77%. (The short sketch after this list checks the arithmetic in both points.)
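Here is that sketch - plain base R, no extra data required:

ratio <- 0.8   # expected points conceded to points scored

# point 1: the same ratio implies different expected victory margins at
# different scoring levels; expected margin = total * (1 - ratio) / (1 + ratio)
200 * (1 - ratio) / (1 + ratio)   # about 22 points when games average 200 points in total
175 * (1 - ratio) / (1 + ratio)   # about 19 points when games average 175 points in total

# point 2: with actual margins Normally distributed around a 22-point
# expectation, victory probability is P(margin > 0) = pnorm(expected margin / sd)
pnorm(22 / 36)   # about 0.73 with a 36-point standard deviation
pnorm(22 / 30)   # about 0.77 with a 30-point standard deviation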

If we think about the Pythagorean Exponent in the sense of that second reason, we might consider it an index of the potential influence of luck in a given season, with lower Exponents (ie greater random variability around expectations) implying a greater potential influence of luck, and higher Exponents implying the opposite.

Finally, I'll note that if we view the Pythagorean Exponent as an indicator of the level of variability of scoring about expectations, then finding different Exponents for different seasons suggests we should also find heteroskedasticity in the predictive errors of models fitted across those seasons - a topic I've been mulling over for some time now. I'll leave further discussion of it, though, to a future blog.