Does an Extra Day's Rest Matter in the Home and Away Season?

Whenever the draw for a new season is revealed there's much discussion about the teams that face one another only once, about which teams need to travel interstate more than others, and about which teams are asked to play successive games with fewer days rest. There is in the discussion an implicit assumption that more days rest is better than fewer days rest but, to my knowledge, this is never supported by empirical analysis. It is, like much of the discussion about football, considered axiomatic. In this blog we'll assess how reasonable that assumption is.
Read More

Defensive and Offensive Abilities : Do They Persist Across Seasons?

In the previous blog we reviewed the relationship between teams' winning percentages in one season and their winning percentages in subsequent seasons. We found that the relationship was moderate to strong from one season to the next and then tapered off fairly quickly over the course of the next couple of seasons so that, by the time a season was three years distant, it told us relatively little about a team's likely winning percentage. There is, of course, an inextricable link between winning and scoring, and in this blog we'll investigate the temporal relationships in teams' scoring in much the same way as we investigated the temporal relationships in teams' winning in that previous blog.
Read More

What Do Seasons Past Tell Us About Seasons Present?

I've looked before at the consistency in the winning records of teams across seasons but I've not previously reported the results in any great detail. For today's blog I've stitched together the end of season home-and-away ladders for every year from 1897 to 2012, which has allowed me to create a complete time series of the performances for every team that's ever played.
Read More

How Many Quarters Will the Home Team Win?

In this last of a series of posts on creating estimates for teams' chances of winning portions of an AFL game I'll be comparing a statistical model of the Home Team's probability of winning 0, 1, 2, 3 or all 4 quarters with the heuristically-derived model used in the most-recent post.
Read More

How Many Quarters Will the Favourite Win?

Over the past few blogs I've been investigating the relationship between the result of each quarter of an AFL game and the pre-game head-to-head prices set for that same game. In the most recent blog I came up with an equation that allows us to estimate the probability that a team will win a quarter (p) using as input only that team's pre-game Implicit Victory Probability (V), which we can derive from the pre-game head-to-head prices as the ratio of the team's opponent's price to the sum of the two teams' prices.
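As a concrete sketch of that calculation in R, using made-up prices purely for illustration (the function name is mine, not anything from the models referred to above):

```r
# Implicit Victory Probability: a team's opponent's price divided by the sum
# of the two teams' prices (this also strips out the bookmaker's overround).
implicit_victory_prob <- function(team_price, opponent_price) {
  opponent_price / (team_price + opponent_price)
}

# Hypothetical head-to-head prices: Home $1.60, Away $2.35
implicit_victory_prob(1.60, 2.35)  # about 0.59 for the Home team
implicit_victory_prob(2.35, 1.60)  # about 0.41 for the Away team
```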
Read More

Deriving the Relationship Between Quarter-by-Quarter and Game Victory Probabilities

In an earlier blog we estimated empirical relationships between Home Teams' success rate in each Quarter of the game and their Implicit Probability of Victory, as reflected in the TAB Bookmaker's pre-game prices. It turned out that this relationship appeared to be quite similar for all four Quarters, with the possible exception of the 3rd. We also showed that there was a near one-to-one relationship between the Home Team's Implicit Probability and its actual Victory Probability - in other words, that the TAB Bookmaker's forecasts were well-calibrated. Together, these results imply an empirical relationship between the Home Team's likelihood of winning a Quarter and its likelihood of winning an entire Game. In this blog I'm going to draw on a little probability theory to see if I can derive that relationship theoretically, largely from first principles.
Read More

The Changing Nature of Home Team Probability

The original motivation for this blog was to provide additional context for the previous blog on victory probabilities for portions of games. That blog looked at the relationship between the TAB Bookmaker's pre-game assessment of the Home team's chances and the subsequent success or otherwise of the Home team in portions - Quarters, Halves and so on - of the game under review.
Read More

In-Running Models: Confidence Intervals for Probability Estimates

In a previous blog on the in-running models I generated point estimates for the Home team's victory probability at different stages in the game under a variety of different lead scenarios. In this blog I'll review the level of confidence we should have in some of those forecasts. More formally, I'll generate 95% confidence intervals for some of those point forecasts.
Read More

In-Game Momentum : Score-by-Score Analysis

So far, in the quest to find evidence for momentum in various guises, I've looked at: 

  • Something that I called "game cadence" in a post back in 2009 in which I found evidence that the team that won one quarter was less likely to win the next quarter if we considered the entire history of VFL/AFL but more likely to win the next quarter if we narrowed our focus to the period from 1980 onwards. Note that this analysis does not attempt to account for differences in team strength.
  • The win-loss progression for each team in another post, this one from 2010, in which I found that many teams were more likely to win a game having won their previous game than their long-term winning rate would suggest and that, similarly, many teams were more likely to lose a game having lost their previous outing than their long-term losing rate would suggest. This analysis spanned 10 seasons, so it's conceivable that teams' base winning rates might have changed during that period. As such, some of the apparent momentum in successive team results might be attributed to such changes in underlying ability rather than to the short-term effects of the previous week's result. (I updated and expanded on this analysis a little in a subsequent post.)
  • The extent to which the final margin of victory for the Home team can be predicted using, along with its leads at the end of each quarter, the change in these leads across quarters. In this formulation, momentum could be said to exist if the Home team's victory margin depended on the rate of change of its lead, not just its actual lead. I investigated this approach in this post from early 2012, finding that the size of any such momentum effect was small.
  • Whether the pattern of team scoring in successive quarters suggested that momentum existed in the sense that a team outscoring its opponent in one quarter was more likely to outscore them again in the next. This angle was explored in a post from late 2012. In an attempt to control for the fact that successive quarters of outscoring might be due to underlying team superiority rather than to short-term momentum effects, I looked only at games where each side had outscored its opponents in at least one quarter of the game. I found some evidence for momentum, especially in the 4th quarter for teams that had been outscored in the 1st quarter but that had then gone on to outscore their opponents in the 2nd and 3rd quarters. But, as I noted there, this might instead be evidence only for the existence of games where the stronger team started slowly and then found its rhythm, rather than for the existence of momentum.
  • The surprising lumpiness of randomness, explored in this post, also from late 2012, and how it could easily lead a spectator to conclude that scoring ran in streaks when, in fact, the observed scoring was completely consistent with teams scoring at random based on an underlying, constant probability of being the next scorer. 

In-Game Momentum - Who Scores What, Next?

What's been missing so far is an empirical search for momentum at the level of the next team to score in a game. Such an analysis requires access to game scoring sequences - which team scored next, when, and whether it was a goal or a behind - and I'd found no readily accessible source of these until recently, when I came across the "Scoring Progression" section on the scorecards for each of the games at the afltables site. Here, for example, is the information for the first game of season 2012.

The Data

For this current analysis I manually cut-and-pasted scoring progression data from the site for 100 randomly-selected games from the home-and-away season of 2012.

I used Excel's RAND() function to choose the games to include and, as if to gently or mockingly remind me of the lumpiness of random selections, Excel offered up a sample that included only 1 of Hawthorn's home games, but 8 of Port's home fixtures and 8 more of the Dogs' road trips. Unless you think that games involving particular teams are more or less likely to exhibit momentum, though, the team composition of the random sample is no more than an ironic curiosity.

Excel treated the 23 rounds of the home-and-away season in a slightly more egalitarian manner, selecting a minimum of 2 and a maximum of 7 games from any single round.

Profiling the sample by day of the week we find 54 Saturday, 31 Sunday, 10 Friday, 2 Thursday, 2 Monday and 1 Wednesday game, which seems about right.

I will at some point revisit the ground I cover in this blog if I find a way to access a larger sample of games more efficiently, but for now the 100 chosen games will suffice.

The statistical metric I'll be employing in this blog in the hunt for signs of momentum is "runs" or sequences. If the sequence of scoring in a game was Sydney - Hawthorn - Hawthorn - Sydney - Sydney, that sequence would be said to contain 3 scoring runs: a run of length 1 for Sydney, followed by a run of length 2 for Hawthorn, and then a run of length 2 for Sydney.
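If you want to count runs for yourself, R's rle() function does the job directly; here's a minimal sketch using the hypothetical Sydney-Hawthorn sequence just described:

```r
# Count scoring runs in a sequence of scores using run-length encoding.
scoring_sequence <- c("Sydney", "Hawthorn", "Hawthorn", "Sydney", "Sydney")

runs <- rle(scoring_sequence)
length(runs$lengths)  # 3 scoring runs
runs$lengths          # 1 2 2 - the length of each run
runs$values           # "Sydney" "Hawthorn" "Sydney" - the team for each run
```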

Here's the runs data for an actual game, which might give you a feel for the range of numbers that we're likely to encounter. (Please click on the image to access a larger, readable version of it.) Note that I allow runs to span quarters, so a team that scores last in one quarter and first in the next is assessed as having preserved the streak. 

In this game there were 17 scoring runs spanning the game's 46 scoring shots, 8 for Fremantle and 9 for Richmond. This, it turns out, is about 5.4 fewer runs than we'd expect, making this a game providing strong evidence for team momentum. (The number of runs has been shown to be asymptotically Normally distributed with a mean of 2 x (Scoring Shots by Team A) x (Scoring Shots by Team B) / (Total Number of Scoring Shots) + 1 and a variance that you can find on the Wikipedia page just linked. Monte Carlo simulation I've performed for realistic scoring shot data shows that this Normal approximation is very good for the range of values we're likely to encounter.)
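Here's a sketch of that Normal approximation in R. The mean and variance are the standard runs-test formulae; the 29/17 split of scoring shots is simply an illustrative guess at a plausible game, not the actual Fremantle v Richmond figures:

```r
# Asymptotic mean and variance of the number of runs under the null
# hypothesis that the order of scoring shots is random.
runs_moments <- function(n1, n2) {
  n <- n1 + n2
  mean_runs <- 2 * n1 * n2 / n + 1
  var_runs  <- 2 * n1 * n2 * (2 * n1 * n2 - n) / (n^2 * (n - 1))
  c(mean = mean_runs, sd = sqrt(var_runs))
}

# Illustrative game: 46 scoring shots split 29 to one team and 17 to the other
runs_moments(29, 17)
# An observed count of 17 runs would sit several runs below the expected
# number - that is, in the left-hand tail of the distribution.
```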

Knowing the statistical distribution of the runs statistic allows us to perform standard hypothesis testing of the number of runs observed for each game in the sample, which I'll come to in a moment. 

If momentum effects are evident in the scoring sequence of games, such that the team that scored last is more likely to score next, then we'd expect to find fewer, longer runs of scoring than would be the case if no such momentum existed. That means we want to test whether the observed number of runs is in the left-hand tail of the distribution. Alternatively, we might postulate that teams tend to respond to being scored against by lifting their effort and, in so doing, become more likely to score next. This would lead to more, shorter scoring runs than a random sequence would produce. To test this hypothesis we need to determine if the runs statistic is too far into the right-hand tail of the distribution.

Statistically Testing Whether There's Momentum in the Scoring Progression

Formally, the statistical test I'm using is the exact runs test as implemented in the pruns.exact function in the randomizeBE package of R. It calculates the actual distribution of the runs statistic under the null hypothesis of random scoring rather than relying on the Normal approximation discussed above, but the principle is the same. The test requires three inputs: the number of runs observed and the number of scoring shots registered by each team. In essence what we're asking is the following:

Given that Team A registered X scoring shots during the game and Team B registered Y scoring shots, if those scoring shots were organised at random, how likely is it that we would have observed as many as or more than (or as few as or fewer than) the R runs of scoring that we actually observed?

Each of the 100 chosen games has its own values of X, Y and R which can be input into the runs test to calculate the probability that we would have observed a number of runs at least as extreme as we did under the "null hypothesis" that the scoring took place at random (subject to the fixed number of scoring shots for each team). The following table records the p-values so obtained for each of the 100 games.
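For anyone wanting to reproduce the game-by-game calculations, here's a sketch of how a single game's p-values might be obtained with pruns.exact from the randomizeBE package. I'm assuming the argument order and the tail options shown below (check ?pruns.exact), and the input numbers are illustrative only:

```r
# Exact runs test for a single game using randomizeBE's pruns.exact()
# (argument names and tail options assumed; see the package documentation).
library(randomizeBE)

r  <- 17   # observed number of scoring runs (illustrative)
n1 <- 29   # scoring shots by Team A (illustrative)
n2 <- 17   # scoring shots by Team B (illustrative)

# Left tail: probability of as few or fewer runs than observed, under randomness
p_too_few  <- pruns.exact(r, n1, n2, tail = "lower")

# Right tail: probability of as many or more runs than observed, under randomness
p_too_many <- pruns.exact(r, n1, n2, tail = "upper")

c(too_few = p_too_few, too_many = p_too_many)
```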

The numbers on the left are the p-values for observing, under the null hypothesis of random scoring, a number of runs equal to or less than the number we actually observed; the numbers on the right are the corresponding p-values for observing a number of runs equal to or greater than the number we actually observed.

What this table suggests is that, if there is momentum in AFL scoring patterns, it has only a very subtle influence. For starters, we have only 12 games that provide evidence against the null hypothesis at the 10% level, which is only 2 more games than we'd expect to find with p-values in this range due to chance. Even if we look at the number of games delivering a p-value under 50% we've only an excess of 8 games relative to chance.

In one, quite technical way, the runs test makes it hard to detect momentum because the observed number of runs is a discrete rather than a continuous statistic, so every achievable value carries non-zero probability. (I expect that this would be less of an issue if we had a larger sample, but that's to be determined on another day.) One practical consequence of this is a complication in determining statistical significance. If, for example, under the null hypothesis, only 3% of runs values are less than the value we observed, but 88% are greater - because the exact number of runs we observed has a 9% probability under the null hypothesis - is this result statistically significant at the 10% level or not? The p-value for such a game is 12% and so would be recorded in the table above in the 10-20% bucket. Generally, the discrete character of the runs statistic will tend to push the p-values into higher buckets.

Putting that to one side for a moment, there is a formal test that we can use on the set of p-values that we've observed to ask whether they, as a group, support or impugn the null hypothesis. It's the Fisher Test, which is described here, and which uses the statistic -2 times the sum of the natural logs of the p-values; under the null hypothesis this statistic is distributed as a chi-squared variable with 2k degrees of freedom, where k is the number of independent p-values. In our case, for the p-values on the left-hand side of the table, the statistic is 211.9, which itself has a p-value of 27%. Not even the most null-hypothesis-loathing researcher uses an alpha of 30% for his or her hypothesis testing.
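In R, with the 100 left-tail p-values collected in a vector (called p_left below, a placeholder name of mine), Fisher's method amounts to only a couple of lines - a minimal sketch:

```r
# Fisher's method: -2 * sum(log(p)) is chi-squared with 2k degrees of freedom
# under the null hypothesis, where k is the number of independent p-values.
fisher_method <- function(p) {
  stat <- -2 * sum(log(p))
  df   <- 2 * length(p)
  c(statistic = stat, p.value = pchisq(stat, df = df, lower.tail = FALSE))
}

# p_left would hold the 100 left-tail p-values from the runs tests above.
# fisher_method(p_left)
```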

We can rescue the possibility of scoring momentum somewhat by looking instead at the proportion of p-values that are less than 50%, treating this statistic as the outcome of a binomial process with constant probability 0.5, and determining whether such p-values are statistically significantly under- or over-represented, noting that, under the null hypothesis, we'd expect half of the p-values to be under 50% and half over 50%. With 58 of the 100 observed p-values coming in under 50%, we get a p-value for this binomial test of 7%.
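That binomial calculation is a one-liner in R - a sketch using the 58-out-of-100 figure quoted above:

```r
# How surprising is it to see 58 or more of 100 p-values below 50% if, under
# the null hypothesis, each p-value is equally likely to fall either side?
binom.test(x = 58, n = 100, p = 0.5, alternative = "greater")
# The one-sided p-value comes out at roughly 7%, as quoted above.
```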

Finally, we can lend another sliver of support to the idea of momentum - in a slightly roundabout manner - by performing similar calculations with the p-values from the right-hand side of the table above, which are p-values for the alternative hypothesis that we've witnessed too many runs. The Fisher statistic for these data yields a p-value of 100% and the binomial test on the number of p-values less than 50% yields 99.9%, both of which are so supportive of the null hypothesis as to imply that we've maybe "chosen the wrong tail" to look at. We should note, however, that the same effect which tends to push the p-values higher for the left-tail test also pushes the p-values higher for the right-tail test, because in both cases we're including the probability associated with the actual observed number of runs in the p-value.

The Verdict on Scoring Momentum for Teams

In short, the evidence is that team scoring streaks are about what we'd expect them to be if momentum did not exist, though there might be some traces of momentum in a handful of games. 

Perhaps the best way to put all of this complex statistical analysis in perspective is to look at the effect size of the phenomenon we're dealing with here: the average difference across the 100 games in the sample between the observed and the expected number of runs under the null hypothesis is just 0.7 runs per game. When you consider that the average game has just over 24 scoring runs, that's a tiny difference, if it exists at all.

What About Momentum in Scoring Type?

We can also ask of the scoring progression data whether or not there's evidence that goals tend to be followed by goals and behinds by behinds, regardless of which team scores them, or whether, instead, there's evidence that goals beget behinds and behinds beget goals - or whether there's no pattern at all to the sequence of scoring. 

The following table was created in the same way as the previous table except this time, rather than looking at whether the Home or the Away team scored, we look at whether the score was a goal or a behind, regardless of which team scored it.

Adopting the same approach as we did with the earlier analysis we find that:

  • for the distribution of p-values on the left, which has as the alternative hypothesis that we've seen too few scoring streaks to be consistent with the null hypothesis of random scoring, the Fisher statistic has a p-value of 99% and the binomial a p-value of 97%. 
  • for the distribution of p-values on the right, which has as its alternative hypothesis that we've seen too many scoring streaks, the Fisher statistic has a p-value of 37% and the binomial a p-value of 31%.

The Verdict on Scoring Momentum by Score Type

Once again the results are inconclusive and lend only very weak, if any, support to the hypothesis that scoring contains a fraction too many streaks - that is, that goals tend to be followed by behinds, and behinds by goals, rather than goals begetting goals and behinds begetting more behinds.

But here too the effect size is telling. On average across the sample, the observed number of scoring streaks differs from the number expected under the null hypothesis by just 0.5 streaks per game, set against an average of 26.1 scoring streaks in a game. If there is an effect, it's far too small to notice and far too small to matter.

Fooled By Lumpiness

In a typical AFL game in 2012 the winning team registered about 30 scoring shots and the losing team about 20. On the assumption that the sequence of team scoring shots is random - so that, for example, the winning team's probability of registering the next scoring shot is always 60%, regardless of whether or not it was the team to score last - how likely is it, do you think, that we'd witness a run of 5 or more consecutive scoring shots by the winning team in such a game?
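If you'd like to estimate an answer for yourself before reading on, here's a quick Monte Carlo sketch of the set-up just described - 50 scoring shots, with the winning team always having a 60% chance of registering the next one (the function name and the 10,000-replication count are illustrative choices of mine):

```r
# Simulate games of 50 scoring shots in which one team has a constant 60%
# chance of registering each next shot, and estimate how often that team
# produces a run of 5 or more consecutive scoring shots.
set.seed(1)

has_long_run <- function(n_shots = 50, p_next = 0.6, run_length = 5) {
  shots <- rbinom(n_shots, size = 1, prob = p_next)  # 1 = favoured team scores
  runs  <- rle(shots)
  any(runs$values == 1 & runs$lengths >= run_length)
}

mean(replicate(10000, has_long_run()))  # estimated probability of such a run
```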
Read More

Evidence for Intra-Game Momentum in AFL Games

So often in the commentary for an AFL game we hear it said that one team or the other "has the momentum going into the break". This blog sets out to examine this claim - how we might interpret it quantitatively and, given that interpretation, whether or not it's true.
Read More

Lead Changes as a Measure of Game Competitiveness

The final victory margin is one measure of how close a contest was, but it can sometimes mislead when the team that's in front midway through the final term piles on a slew of late goals against a progressively more demoralised opponent, improving its percentage in so doing, but also erasing any trace of the fact that the game might have been a close-run thing throughout the first three-and-a-half or more quarters.
Read More

Characterising AFL Seasons

I can think of a number of ways that an AFL season might be characterised but for today's blog I'm going to call on a modelling approach that I used back in 2010, which is based on Brownian motion and which was inspired by a JASA paper from Hal S Stern.
Read More

Does An Extra Day's Rest Matter in the Finals?

This week Collingwood faces Sydney having played its Semi-Final only 6 days previously, while Adelaide take on Hawthorn a more luxurious 8 days after their Semi-Final encounter. The gap for Sydney has been 13 days while that for the Hawks has been 15 days. In this blog we'll assess what, if any, effect these differential gaps between games for competing finalists might have on game outcome.
Read More