Which Chaser Would You Rather Face?

Many of us have had a lot more time on our hands lately, and I’ve been spending some of it watching old episodes of the Australian version of The Chase.

Being of a quantitative mindset, I got to wondering how a team’s chances might vary with the Chaser they faced, the target they set for him or her, and the size of the prize fund they’d amassed. So, I Googled around for a dataset and found this amazing Google document from James Spencer via this article of his.

It includes all of the data that I needed (and a heckuva lot more) for over 750 episodes of the show from 2015 right up to this year.

For this blog I’m going to take that data and use it to create a predictive model.

SOME DATA EXPLORATION

But, before we get into the analysis and modelling, note that two of the Chasers - Cheryl Toh and Shaun Wallace - have had relatively little experience on the show, so I’ll be leaving their episodes out of the modelling. Also, Melbourne Cup and Oaks Day episodes include only two contestants rather than the usual four, so I’ll be excluding those episodes too. That leaves me with a sample size of 754 episodes, from which we can calculate the following summary statistics.

On raw win percentage, Matt Parkinson comes out with the best record, narrowly ahead of Anne Hegerty and Brydon Coverdale. Mark Labbett has the lowest win rate.

It might be, however, that the Chasers have faced contestants of different average ability, and it’s this that we’ll attempt to at least partially control for by creating a predictive model.

As we’d expect, we find that larger targets are harder to defend than smaller ones. Note that the “Actual Win %” column shown here refers to the rate at which the Chasers win.

We find that, across the entire history, no target below 12 has ever been enough to secure the prize fund, and no target above 24 has ever been successfully run down by a Chaser.

Interestingly, in between those two extremes, there seems to be an inflexion point at around an initial target of 20, below which a team’s chances are only about even-money or worse, but above which their chances rise to about 70% or better.

Put another way, for an average prize fund of about $31,000, lifting the initial target from 20 to 21 has an expected value of (51% - 29%) x $31,000 or almost $7,000. That’s about the same increase as contestants get, on average, from lifting the initial target from 16 to 18 or 19.
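For the record, that back-of-envelope calculation looks like this as a quick Python snippet (the 29% and 51% figures are the team win rates at targets of 20 and 21 quoted above):

```python
# Expected value of lifting the initial target from 20 to 21, using the
# team win rates quoted above and an average prize fund of $31,000.
team_win_at_20 = 0.29
team_win_at_21 = 0.51
average_fund = 31_000

print(round((team_win_at_21 - team_win_at_20) * average_fund))  # 6820, i.e. almost $7,000
```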

And, speaking of prize money, it turns out that larger prize funds tend to be won by contestants (i.e. not run down by Chasers) more often than smaller ones.

Chasers typically run down targets associated with prize pools under $10,000 about 94% of the time, but do the same to prize pools of $40,000 or more only about 56% of the time.

More on why this is likely to be the case later.

THE MODEL

For today’s blog I’m going to build a binary logit model that will estimate the probability that a Chaser will run down a target of a given initial size when he or she is protecting a prize fund of some specified amount.

Now we know from the earlier analyses that, overall, Chasers’ chances shrink with the size of the initial target and also with the size of the prize fund, but I’m going to further hypothesise that individual Chasers respond differently to those two dimensions, so my model specification will be:

ln(P(Chaser wins) / (1 - P(Chaser wins))) = Constant + Chaser + Initial Target + Prize Fund + Chaser x Initial Target + Chaser x Prize Fund

where the two “x” terms are interactions, giving each Chaser his or her own adjustment to the baseline target and prize fund effects.

The fitted version of this model appears below.
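The coefficient naming in that table (Chaser<Name>:InitialTarget and the like) suggests R-style formula notation; as a minimal sketch, an equivalent model could be fitted with Python’s statsmodels, whose formula interface mirrors that notation. The data frame and column names here (chaser, initial_target, prize_fund_thousands, chaser_won) are my own stand-ins, not the names in James Spencer’s document:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of the episode data; file and column names are assumptions.
episodes = pd.read_csv("chase_episodes.csv")

# chaser * initial_target expands to the main effects plus the interaction,
# matching the specification above. Under treatment coding the first Chaser
# level alphabetically (Anne Hegerty) becomes the reference category.
model = smf.logit(
    "chaser_won ~ chaser * initial_target + chaser * prize_fund_thousands",
    data=episodes,
).fit()

print(model.summary())
```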

A worked example might help explain how this model is used and how the coefficients should be interpreted. Consider the scenario where we’re facing Brydon having set a target of 19 and amassed a prize fund of $35,000. The input we need for the logistic is then:

9.854 (the intercept)
- 0.035 (because it’s Brydon as Chaser)
- 0.475 x 19 (the target)
- 0.011 x 35 (the prize fund, in thousands)
+ 0.059 x 19 (the adjustment to the target effect, given that it’s Brydon)
- 0.026 x 35 (the adjustment to the prize fund effect, given that it’s Brydon)
= 0.62

So, our estimate of Brydon’s chances of running us down in this case is 1/(1+exp(-0.62)), which is about 65%. It’s a shame we didn’t set a target of 20 instead: then the input would change by -0.475 + 0.059, or -0.416, to 0.204, and the estimated probability for Brydon would drop to 55%.
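Here’s that worked example as a few lines of Python, using only the Brydon-related coefficients quoted above (a full implementation would carry all five Chasers’ coefficients):

```python
import math

# Coefficients quoted from the fitted model table above (Brydon only).
INTERCEPT = 9.854
BRYDON = -0.035            # Chaser main effect for Brydon
TARGET = -0.475            # baseline (Anne) effect per unit of target
FUND = -0.011              # baseline effect per $1,000 of prize fund
BRYDON_TARGET = 0.059      # Brydon's adjustment to the target effect
BRYDON_FUND = -0.026       # Brydon's adjustment to the prize fund effect

def p_brydon_wins(target, fund_thousands):
    """Probability that Brydon runs down `target` defending the given fund."""
    x = (INTERCEPT + BRYDON
         + (TARGET + BRYDON_TARGET) * target
         + (FUND + BRYDON_FUND) * fund_thousands)
    return 1 / (1 + math.exp(-x))

print(round(p_brydon_wins(19, 35), 2))  # 0.65
print(round(p_brydon_wins(20, 35), 2))  # 0.55
```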

Note that Anne Hegerty is the reference Chaser in this model, which means that she implicitly has zero coefficients for all the terms involving individual Chasers.

Given that, we see that Anne is the most likely to run down an average initial target when defending an average prize fund. That’s because her coefficient in the first block under the intercept is zero, whilst everyone else’s is negative. Next most likely is Brydon, followed by Issa, Matt and then Mark.

The block of coefficients of the form Chaser<Name>:InitialTarget reveals that Mark’s chances of running down an initial target respond least to increases in that target. For him, each unit increase in the target changes the input to the logistic by -0.475 + 0.083, or -0.392. By comparison, for Matt the change is -0.475 + 0.075, or -0.400, and for Issa it is -0.475 - 0.013, or -0.488.

Finally, we see from the block of coefficients of the form Chaser<Name>:PrizeFundInThousands that Mark’s chances of running down an initial target respond least to increases in prize money. For him, each $1,000 increase in prize money changes the input to the logistic by -0.011 + 0.006, or -0.005. Similar logic reveals that Brydon’s chances respond most to changes in prize money.
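To make those comparisons concrete, here are the effective slopes computed directly from the coefficients quoted in this post (Anne’s adjustments are zero because she’s the reference; only the quoted prize fund adjustments are included):

```python
# Effective slopes: the reference (Anne) coefficient plus each Chaser's
# adjustment, using the values quoted above.
base_target_slope = -0.475
target_adjustments = {"Anne": 0.0, "Brydon": 0.059, "Issa": -0.013,
                      "Matt": 0.075, "Mark": 0.083}

for chaser, adj in target_adjustments.items():
    print(f"{chaser}: {base_target_slope + adj:+.3f} per unit of initial target")

# Prize fund slopes for the two Chasers quoted in the text, per $1,000 defended:
base_fund_slope = -0.011
print(f"Mark:   {base_fund_slope + 0.006:+.3f} per $1,000 of prize fund")
print(f"Brydon: {base_fund_slope - 0.026:+.3f} per $1,000 of prize fund")
```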

One key implication of these differential responses to the initial target size and to the size of the prize fund is that no single Chaser is likely to be preferred in every scenario.

QUALITY OF THE FIT

Before we get on to looking at that though, we should first, somewhat informally, assess the quality of the model fit. One way of doing this is to compare the model’s fitted probability estimates with actual outcomes.
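As a sketch of how such a comparison might be produced, continuing with the hypothetical model and column names from the fitting snippet above (the bin edges here are illustrative, not necessarily the ones used for the tables in this post):

```python
# Bin episodes by initial target and compare the model's average fitted
# probability with the Chasers' actual win rate in each bin.
episodes["fitted_p"] = model.predict(episodes)
episodes["target_bin"] = pd.cut(episodes["initial_target"],
                                bins=[0, 15, 17, 19, 21, 30])

calibration = (
    episodes.groupby(["chaser", "target_bin"], observed=True)
    .agg(actual_win_rate=("chaser_won", "mean"),
         mean_fitted_p=("fitted_p", "mean"),
         n_episodes=("chaser_won", "size"))
)
print(calibration)
```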

In this first table we look at the relationship between the initial target and a Chaser’s success rate in running down that target.

We see that all Chasers are, in reality, less likely to chase down larger targets than small ones, and that the fitted model reflects this.

We also see that Matt Parkinson is best at chasing down larger targets, both in actuality and when we use the model.

The model generally slightly understates the Chasers’ abilities to run down larger targets, most notably for Issa Schultz, where it underestimates his actual record by 8 percentage points.

Overall, though, the model seems to provide an acceptable fit when measured on this dimension.

In this second and last table we look at the relationship between the final prize fund and a Chaser’s success rate in running down the target required to defend it.

Here we see that, as we saw in the earlier analysis, larger prize funds prove to be harder to protect. This is at least partly because larger funds tend to be built by better players and are therefore associated with higher targets (see chart below).

Anne Hegerty, in reality, is best at defending larger prize pools, though the model slightly underestimates her abilities and places her on roughly a par with Issa and Mark, and behind Matt. With only 30 to 50 episodes included in those estimates, they do, of course, suffer from relatively large sample variation.

Overall, again though, the model seems to provide an acceptable fit when measured on this dimension.

FITTED ESTIMATES FOR DIFFERENT SCENARIOS

Lastly, let’s use the model to answer the following question: given that you’ve set some specific target and amassed some specific prize fund, which Chaser, according to the model, would give you the greatest chance of taking home the prize, and which the least?

The top left block provides the answers for the situation where your prize fund is $10,000. In this case you would prefer Mark Labbett if the initial target is below 20 and Issa Schultz otherwise. You would least prefer Brydon Coverdale, whatever the initial target.

Just below that we have the case where your prize fund is $20,000. There too you would prefer Mark Labbett if the initial target is below 20 and Issa Schultz otherwise, and you would again least prefer Brydon Coverdale regardless of the target size.

With a prize fund of $35,000 you’d again prefer Mark Labbett if the initial target is below 20 and Issa Schultz otherwise, but now you’d least prefer Anne Hegerty for initial targets of 16 or less and Brydon Coverdale for higher targets.

Finally, with a prize fund of $50,000 (an amount amassed in only about 13% of episodes) you’d once more prefer Mark Labbett if the initial target is below 20 and Issa Schultz otherwise, but now you’d least prefer Anne Hegerty for initial targets of 20 or less and Matt Parkinson for higher targets.

It’s interesting to note how much more, according to the model applied at realistic values, Brydon Coverdale’s win rate responds to the size of the prize fund than the other Chasers’ do. When the prize fund is $10,000 he is rated a 57% chance to run down an initial target of 22, but when it’s $50,000 he’s rated only a 23% chance. By way of contrast, Matt Parkinson moves from 43% to 27% across those same two scenarios, and Anne Hegerty from 33% to 24%.
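The p_brydon_wins helper from the worked example earlier reproduces those two Brydon numbers:

```python
# Brydon's chances against a target of 22 at the two prize fund levels.
print(round(p_brydon_wins(22, 10), 2))  # 0.57
print(round(p_brydon_wins(22, 50), 2))  # 0.23
```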

More broadly this does suggest that each Chaser’s performance is affected in some way by the size of the prize fund on offer, once we control for the size of the initial target. I think it’s reasonable to extrapolate from that conclusion and infer that the disappointment many of them show on failing to run down a target is a genuine reflection of their motivation levels.

If we ignore the probabilities and just focus on identifying the most and least preferred Chasers for a given target and prize fund scenario, we can create the following maps to answer the question for all feasible scenarios.
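A sketch of how maps like these could be generated from the fitted model, again using the hypothetical names from earlier: score every (target, prize fund) cell for every Chaser, then take the best and worst per cell.

```python
from itertools import product

# Score every feasible scenario for every Chaser.
chasers = ["Anne Hegerty", "Brydon Coverdale", "Issa Schultz",
           "Mark Labbett", "Matt Parkinson"]
grid = pd.DataFrame(
    list(product(chasers, range(12, 25), range(5, 55, 5))),
    columns=["chaser", "initial_target", "prize_fund_thousands"],
)
grid["p_chaser_wins"] = model.predict(grid)

# For each (target, fund) cell: the Chaser a team would most want to face
# (lowest win probability) and the one it would least want to face (highest).
by_cell = grid.set_index("chaser").groupby(
    ["initial_target", "prize_fund_thousands"])["p_chaser_wins"]
maps = pd.DataFrame({"most_preferred": by_cell.idxmin(),
                     "least_preferred": by_cell.idxmax()})
print(maps.head())
```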

SUMMARY AND CONCLUSION

As the well-worn aphorism, usually attributed to George Box, has it: “all models are wrong”. Sometimes appended to that phrase is “… but some are useful”.

The model we’ve built here won’t entirely have controlled for the different contestant abilities that each Chaser has faced, so it won’t provide us with perfect estimates of their underlying relative abilities. It does, though, provide some interesting insights into how their win rates have, on average and with some noise, responded to the size of the targets they’ve faced and the prize pools they’ve defended.

And, at the end of the day, all of them have overall win rates in the 72% to 79% range, so there’s not really much between them however you slice it.