The Chase UK: Progressively Estimating a Team's Chances of Winning (I'm Afraid You're in Seat 3)

We can construct fairly simple models to estimate the dynamic probability of a given team winning an episode. These models progressively re-estimate the quality and number of contestants who have returned home at any point during the contest, and appear to be better at identifying teams likely to lose rather than teams likely to win.

The fate of the contestant in Seat 1 seems to be especially important ....
— Nov 2021

For today’s blog I’ll be building similar models to estimate the in-running chances of UK teams, using player-by-player data kindly provided to me by 1QS, who runs a website dedicated to The Chase UK filled with a variety of interesting statistics related to the show, including comprehensive episode-by-episode and summary information.

THE MODELS

As we did for the Australian data, we’ll build models to estimate a team’s chances at five specific points in the contest:

  • After the contestant in Seat 1 has finished his or her Cash Builder and multiple-choice attempt

  • After the contestant in Seat 2 has finished his or her Cash Builder and multiple-choice attempt

  • After the contestant in Seat 3 has finished his or her Cash Builder and multiple-choice attempt

  • After the contestant in Seat 4 has finished his or her Cash Builder and multiple-choice attempt

  • After the Final Target has been set

THE DATA

We initially have data for 2,037 UK episodes, spanning the period from 29 June 2009 to 10 June 2022, from which we’ll exclude any episode not categorised as a “Regular” episode. This excludes all celebrity shows and games categorised as “Family” episodes. We’ll also exclude one episode tagged as being a Regular episode but in which it appears that all five Chasers were used.

That leaves us with data for 1,882 episodes.

CHOOSING THE VARIABLES

We will, again as in the Australian case, build binary logit models where the target variable is 0 or 1 depending on whether or not the team eventually won their bank, but we will use a slightly simpler set of explanatory variables, namely:

  1. The Chaser name

  2. The Cash Builder amount for each of the contestants, but set to 0 if he or she fails to make it back to the table

  3. The Offer Taken by each of the players (Low, Middle, or High), but set to “None” if he or she fails to make it back to the table

  4. The cumulative number of players who’ve taken the Low Offer and got back to the table at each point

  5. The cumulative number of players who’ve taken the High Offer and got back to the table at each point

  6. The total amount of money in the prize pool after each contestant

  7. The target set

(We do actually have a little more information about each contestant in the UK data than we do in the Australian data, namely the gap between him or her and the Chaser at the end of the multiple-choice stage. In this blog we’ll ignore that information, but we might come back and investigate its usefulness as an additional proxy for contestant quality at some later stage.)

For the After Seat 1 model we consider

  • Chaser name

  • Cash Builder for Seat 1 (set to zero if failed to make it back)

  • Offer taken by Seat 1 (set to “none” if failed to make it back)

  • Amount in the prize pool after Seat 1 has completed the multiple-choice portion

For the After Seat 2 model we consider

  • Chaser name

  • Cash Builders for Seats 1 and 2 (set to zero if failed to make it back)

  • Total number of Low offers taken by those through to the Final Chase (ie 0, 1, or 2)

  • Total number of High offers taken by those through to the Final Chase (ie 0, 1, or 2)

  • Amount in prize pool after Seat 2 has completed the multiple choice portion

  • Number of people through to the final (ie 0, 1, or 2)

For the After Seat 3 model we consider

  • The same variables, plus those relevant to Seat 3 (his or her Cash Builder, Offer taken, and so on)

For the After Seat 4 model we consider

  • The same variables, plus those relevant to Seat 4 (his or her Cash Builder, Offer taken, and so on)

For the Target Set model we consider

  • The same variables, adding the value of the Target set

The rationale for conditioning the values of some variables on whether or not the contestant got back to the table is that his or her ability to influence the ultimate outcome of the contest depends on being involved in the Final Chase. A £16,000 Cash Builder is of no value in assessing a team’s chances if it relates to a contestant who is on the bus going home mid-episode.
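To make that conditioning concrete, here’s a minimal sketch in R of the data preparation step, assuming a per-episode data frame with hypothetical columns seat1_cash_builder, seat1_offer (stored as character) and a logical seat1_through flag recording whether Seat 1 made it back; the real data set and column names will differ:

    library(dplyr)

    episodes <- episodes %>%
      mutate(
        # a Cash Builder only counts if the contestant made it back to the table
        seat1_cash_builder = if_else(seat1_through, seat1_cash_builder, 0),
        # and the Offer Taken is recorded as "None" for eliminated contestants
        seat1_offer        = if_else(seat1_through, seat1_offer, "None")
      )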

FITTING THE MODELS

In building each of the five models we select the best model by conducting an extensive search of all possible models using the glmulti function from the glmulti package (which uses a genetic algorithm to intelligently traverse and evaluate the possible models) with the AIC metric as a means of choosing between models and lessening the chances of overfitting. We exclude interaction terms in every model.
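By way of illustration, a call along the following lines performs that search for the After Seat 1 model (the column names are hypothetical and the settings shown are not necessarily the exact ones used):

    library(glmulti)

    best_after_seat1 <- glmulti(
      team_won ~ chaser + seat1_cash_builder + seat1_offer + prize_pool,
      data        = seat1_data,
      level       = 1,         # main effects only - no interaction terms
      method      = "g",       # genetic algorithm search over candidate models
      crit        = "aic",     # choose between candidate models on AIC
      fitfunction = "glm",
      family      = binomial   # binary logit
    )

    summary(best_after_seat1@objects[[1]])   # coefficients of the best model found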

We fit all models to the entire data set, and the results are as follows:

To understand how to interpret these results, let’s work through the first model.

AFTER SEAT 1 HAS FINISHED
The variables selected by the algorithm tell us that:

  • Unlike in the Australian case, the identity of the Chaser is relevant to the estimate of a team’s chances. More on this in a moment

  • The Cash Builder amount, set to zero if the contestant fails to make it back to the table, gives this model all it needs to know about the ability of the Seat 1 player. Which offer he or she took to get back to the table is of no significant predictive value in terms of the team’s ultimate chances.

The signs on the coefficients tell us the direction of the relationship between values of the variable and the chances of the team. The fact that the coefficient for the Seat 1 Cash Builder is positive tells us that, the higher that Cash Builder amount, the better the team’s chances. We can think of the Seat 1 Cash Builder as a proxy for the ability of the contestant in Seat 1, and the more questions he or she got correct (assuming he or she got back to the table), the more he or she will be able to contribute to the Final Chase.

Also, the fact that the coefficients on all of the Chasers are positive tells us that teams fare better facing any Chaser when compared to our “reference Chaser”, for which purpose we’ve chosen Anne. The relatively large coefficient for Shaun tells us that teams fare, on average, quite a bit better when facing him.

We can now use the model to make some calculations, firstly assuming that the team is facing Anne as the Chaser:

  • If Seat 1 is eliminated, the team’s chances are now estimated to be 1/(1+exp(-(-1.878))), which is a little over 13%. That’s roughly half what they were before the episode commenced and before they knew which Chaser they were facing, since the average team success rate across all Chasers is about 24%.

  • If Seat 1 gets back to the table, the team’s chances are relatively enhanced. If Seat 1’s Cash Builder was £5,000, the estimated chances rise to just under 23% (1/(1+exp(-(-1.878 + 5 x 0.131)))), and if it was £7,000 they rise to just under 28%. Any Cash Builder of £6,000 or more increases the team’s chances above the pre-episode estimate of 24%, and an exceptionally good £10,000 would lift the chances to 36%. (Note that only 1% of contestants in Seat 1 register Cash Builders as high as £10,000).

If we, instead, assume that the Chaser is Shaun, the calculations become:

  • If Seat 1 is eliminated, we now have 1/(1+exp(-(-1.878+0.602))), which is a little under 22%. That’s about 9% points higher than if they were facing Anne.

  • If Seat 1 gets back to the table and his or her Cash Builder was £5,000, the estimated chances rise to about 35%, which is up about 12% points compared to the situation if Anne were the Chaser. If, instead, Seat 1 got home after a Cash Builder of £7,000, the team’s chances rise to over 41%, which is up by about 13% points compared to Anne. With Shaun as the Chaser, even a Cash Builder as low as £1,000 will see the team with estimated chances above the pre-episode estimate of 24%. An even £10,000 would lift the chances to 51%, an increase of 15% points compared to the figure for Anne.
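If you’d like to check those figures yourself, the arithmetic is just the inverse logit applied to the coefficients quoted above: -1.878 for the intercept (with Anne as the reference Chaser), +0.602 for Shaun, and +0.131 per £1,000 of Seat 1 Cash Builder (zero if Seat 1 is eliminated). In R:

    # team's estimated chances after Seat 1, using the coefficients quoted above
    p_win_after_seat1 <- function(cash_builder_thousands, shaun = FALSE) {
      x <- -1.878 + 0.602 * shaun + 0.131 * cash_builder_thousands
      plogis(x)   # 1 / (1 + exp(-x))
    }

    p_win_after_seat1(0)                  # Seat 1 eliminated, Anne: ~13%
    p_win_after_seat1(5)                  # £5,000 Cash Builder, Anne: just under 23%
    p_win_after_seat1(10)                 # £10,000 Cash Builder, Anne: ~36%
    p_win_after_seat1(0,  shaun = TRUE)   # Seat 1 eliminated, Shaun: just under 22%
    p_win_after_seat1(10, shaun = TRUE)   # £10,000 Cash Builder, Shaun: ~51%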

One other thing to note about this model is that (if we predict a team will “win” when the estimated probability is above 24%, and “lose” otherwise) 83% of the teams it suggests won’t win don’t win. This is the figure in the row labelled NPV, which stands for Negative Predictive Value. Conversely, only 31% of the teams it suggests will win do win. That’s what the PPV (Positive Predictive Value) represents.

So, as we saw in the Australian context, failure is much easier to predict than success at this stage of an episode - indeed, at every stage, as we’ll see.

Also shown are the model’s Specificity (using the 24% threshold), which is the total proportion of losing teams that are correctly classified by the model and is 55% here, and the model’s Sensitivity, which is the total proportion of winning teams that are correctly classified by the model and is 63% here. A better model, like a better COVID test, is one that produces higher Specificity and Sensitivity. The value 1 - Specificity is sometimes referred to as the False Positive Rate, and we want that to be low.

Accuracy is the proportion of teams correctly classified, which here is 57%, and % Positive Forecasts is the proportion of the time that the model predicts a team will win if we use the threshold specified, which is 49% here.
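For completeness, here’s a sketch of how all of those figures can be computed from a vector of estimated win probabilities and the actual 0/1 outcomes (the function and argument names are mine, not the code behind the tables):

    classification_metrics <- function(prob_win, won, threshold = 0.24) {
      predicted_win <- prob_win > threshold
      tp <- sum(predicted_win  & won == 1)   # predicted win, did win
      fp <- sum(predicted_win  & won == 0)   # predicted win, didn't win
      tn <- sum(!predicted_win & won == 0)   # predicted loss, didn't win
      fn <- sum(!predicted_win & won == 1)   # predicted loss, did win
      c(PPV                  = tp / (tp + fp),        # predicted winners that win
        NPV                  = tn / (tn + fn),        # predicted losers that lose
        Sensitivity          = tp / (tp + fn),        # winning teams correctly classified
        Specificity          = tn / (tn + fp),        # losing teams correctly classified
        Accuracy             = (tp + tn) / length(won),
        PctPositiveForecasts = mean(predicted_win))
    }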


AFTER SEAT 2 HAS FINISHED

This model just includes one more variable, the Seat 2 Cash Builder, which serves as a proxy for the ability of the player in Seat 2.

The signs on the non-Chaser model coefficients tell us that, as we would expect, the higher the Seat 1 and Seat 2 Cash Builders, assuming the relevant contestant gets home, the higher are a team’s estimated chances, and the Chaser coefficients tell us that, once again, teams facing anyone other than Anne can expect to fare better, and best of all if they are facing Shaun.

Here are the estimated team victory chances using this model for a few scenarios, firstly assuming Anne is the Chaser:

Both contestants eliminated: about 9%

Seat 1 progresses with a Cash Builder of £5,000. Seat 2 eliminated: about 16%

Seat 1 eliminated. Seat 2 progresses with a Cash Builder of £7,000: about 18%

Seat 1 progresses with a Cash Builder of £5,000. Seat 2 progresses with a Cash Builder of £7,000: about 30%

Roughly speaking, any total contribution from Seats 1 and 2 above £10,000 sees the team’s chances rise above 24%.

If we switch to assuming that Shaun is now the Chaser, we have:

Both contestants eliminated: about 16% (ie 7% points higher)

Seat 1 progresses with a Cash Builder of £5,000. Seat 2 eliminated: about 27% (ie 11% points higher)

Seat 1 eliminated. Seat 2 progresses with a Cash Builder of £7,000: about 30% (ie 12% points higher)

Seat 1 progresses with a Cash Builder of £5,000. Seat 2 progresses with a Cash Builder of £7,000: about 45% (ie 15% points higher)

Roughly speaking, any total contribution from Seats 1 and 2 above £5,000 with Shaun as the Chaser sees the team’s chances rise above 24%.
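For any of the scenarios above, the estimate comes from handing the fitted model a one-row data frame describing the hypothetical game state and asking for a predicted probability. A sketch, with illustrative column names and assuming after_seat2_model is the selected After Seat 2 glm, looks like this:

    # the last Anne scenario above: Seat 1 back with £5,000, Seat 2 back with £7,000
    scenario <- data.frame(
      chaser             = "Anne",
      seat1_cash_builder = 5,     # in £'000
      seat2_cash_builder = 7,
      low_offers_taken   = 0,
      high_offers_taken  = 0,
      prize_pool         = 12,    # in £'000
      finalists          = 2
    )

    predict(after_seat2_model, newdata = scenario, type = "response")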


AFTER SEAT 3 HAS FINISHED

This model includes two more variables: the Seat 3 Cash Builder, and the number of finalists who’ve taken the Low Offer. Given that Seats 1 and 2 are far less likely to take the Low Offer than is Seat 3 (more on which, later), if there is a finalist who’s taken the Low Offer, it’s most likely to be the most recent contestant.

The signs on the non-Chaser model coefficients tell us that, as we would expect, the higher the Seat 1, Seat 2, and Seat 3 Cash Builders, assuming the relevant contestant gets home, the higher are a team’s estimated chances, and the Chaser coefficients tell us that, once again, teams facing anyone other than Anne can expect to fare better, and best of all if they are facing Shaun.

The sign on the number of finalists taking the Low Offer variable tells us that, the more that have done this, the lower are the team’s chances. We can think of this as a downward adjustment to our estimate of the finalists’ collective ability relative to what we would have estimated using their Cash Builder amounts alone: a typical contestant taking the Middle Offer and getting back to the desk with, say, a £7,000 Cash Builder, is assumed to be better able to contribute in the Final Chase than a contestant taking the Low Offer and getting back to the desk after the same sized Cash Builder.

The number of possible scenarios to investigate is now vast, but here are the estimated victory probabilities for a few where we assume Anne is the Chaser:

All three contestants eliminated: about 7%

Seat 1 progresses with a Cash Builder of £5,000. Seats 2 and 3 are eliminated: about 13%

Seat 1 eliminated. Seat 2 progresses with a Cash Builder of £7,000. Seat 3 eliminated: about 16%

Seat 1 eliminated. Seat 2 progresses with a Cash Builder of £7,000, but took the Low offer. Seat 3 eliminated: about 15%

Seat 1 progresses with a Cash Builder of £5,000. Seat 2 progresses with a Cash Builder of £7,000. Seat 3 progresses with a Cash Builder of £4,000: about 34%.

Seat 1 progresses with a Cash Builder of £5,000. Seat 2 progresses with a Cash Builder of £7,000. Seat 3 progresses with a Cash Builder of £4,000, but took the low offer: about 30%.

Given the 90th percentile for Cash Builders is £8,000 for Seats 1 and 2, and £7,000 for Seats 3 and 4, and assuming everyone takes the Middle Offer, the team’s chances can only realistically be elevated above the original 24% if at least two contestants get home. Possibilities are:

  • Seats 1 and 2 get back to the desk with a combined Cash Builder of £11,000 or more and at least half of that coming from Seat 1.

  • Seats 1 and 3 get back to the desk with a combined Cash Builder of £12,000 or more and at least £7,000 of that coming from Seat 1.

  • Seats 2 and 3 get back to the desk with a combined Cash Builder of £13,000 or more and at least £5,000 of that coming from Seat 2.

  • All three seats get back to the desk with a combined Cash Builder of £12,000 or more and £8,000 of that coming from Seat 1.

Now, assuming that Shaun is the Chaser:

All three contestants eliminated: about 13% (6% higher)

Seat 1 progresses with a Cash Builder of £5,000. Seats 2 and 3 are eliminated: about 22% (9% higher)

Seat 1 eliminated. Seat 2 progresses with a Cash Builder of £7,000. Seat 3 eliminated: about 26% (10% higher)

Seat 1 eliminated. Seat 2 progresses with a Cash Builder of £7,000, but took the Low offer. Seat 3 eliminated: about 22% (7% higher)

Seat 1 progresses with a Cash Builder of £5,000. Seat 2 progresses with a Cash Builder of £7,000. Seat 3 progresses with a Cash Builder of £4,000: about 50% (16% higher).

Seat 1 progresses with a Cash Builder of £5,000. Seat 2 progresses with a Cash Builder of £7,000. Seat 3 progresses with a Cash Builder of £4,000, but took the low offer: about 45% (15% higher).


AFTER SEAT 4 HAS FINISHED

This model includes two more variables: the Seat 4 Cash Builder, and the total amount in the prize pool. It also includes the number of finalists who’ve taken the Low Offer, which now covers the result for Seat 4, too, who is even more likely than Seat 3 to take the Low Offer.

The signs on the non-Chaser model coefficients tell us that, as we would expect, the higher the Seat 1, Seat 2, Seat 3, and Seat 4 Cash Builders, assuming the relevant contestant gets home, the higher are a team’s estimated chances, and the Chaser coefficients tell us that, once again, teams facing anyone other than Anne can expect to fare better, and best of all if they are facing Shaun.

The sign on the number of finalists taking the Low Offer variable tells us that, the more that have, the lower are the team’s chances (relative to what they would have been had no-one taken the Low Offer), and the sign on the prize pool variable tells us that, the larger the pool, the better a team’s chances, which might be attributed to the motivating effect on the contestants, the demotivating or destabilising effect on the Chaser, or both.

A quick check of the data confirms the negative effects on a Final team’s chances of having a Low Offer taker in their midst relative to other Final teams of the same size.

As we can see from the table at right, the presence of at least one Low Offer taker in a Final team reduces its empirical success rate by between 2% and 8% points, and that maximum figure of 8% pertains to the most common size of team in the Final Chase, which is two.



The number of possible scenarios to investigate is the largest yet, but here are the estimated probabilities for a few where we assume Anne is the Chaser:

All four contestants eliminated: about 4%

Seat 1 progresses with a Cash Builder of £5,000 and taking the middle offer. Seats 2, 3, and 4 are eliminated: about 8%

Seat 1 eliminated. Seat 2 progresses with a Cash Builder of £6,000, but taking the low offer of £1,000. Seats 3 and 4 eliminated: about 6%

Seat 1 progresses with a Cash Builder and Contribution of £5,000. Seat 2 progresses with a Cash Builder and Contribution of £8,000. Seat 3 eliminated. Seat 4 progresses with a Cash Builder of £4,000, but takes the low offer of £1,000: about 22%

While it’s hard to generalise completely because of the number of variables in this model, one thing we can say is that it’s difficult to come up with scenarios that lift a team’s chances above 24% assuming no-one takes the Low Offer unless there is at least £16,000 in the bank. That’s been the case in about 38% of episodes.

Note that this new model is the most accurate of all so far in its predictions both of teams it expects to lose, and of teams it expects to win. That said, using the 24% threshold, it is still wrong over 60% of the time about the teams it forecasts will be successful. We could lift that PPV above 50% by setting a threshold of 0.43, but that would see the Sensitivity fall to about 20% (ie we wouldn’t ‘detect’ 80% of the actual winning teams). In other words, we’d more often be right when we predicted a team to be a winner, but we’d do it so relatively infrequently that we’d miss four in five of them.
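That trade-off is easy to inspect by recomputing the metrics at a range of thresholds, reusing the classification_metrics() helper sketched earlier (with prob_win and won standing for this model’s fitted probabilities and the actual outcomes):

    thresholds <- c(0.24, 0.30, 0.43, 0.50)
    sapply(thresholds,
           function(t) classification_metrics(prob_win, won, threshold = t))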

Now, assuming that Shaun is the Chaser:

All four contestants eliminated: about 8% (4% higher)

Seat 1 progresses with a Cash Builder of £5,000 and taking the middle offer. Seats 2, 3, and 4 are eliminated: about 16% (8% higher)

Seat 1 eliminated. Seat 2 progresses with a Cash Builder of £6,000, but taking the low offer of £1,000. Seats 3 and 4 eliminated: about 11% (5% higher)

Seat 1 progresses with a Cash Builder and Contribution of £5,000. Seat 2 progresses with a Cash Builder and Contribution of £8,000. Seat 3 eliminated. Seat 4 progresses with a Cash Builder of £4,000, but takes the low offer of £1,000: about 36% (14% higher)

We can now come up with scenarios that lift a team’s chances above 24% with no-one taking the Low Offer and at least £10,000 in the bank.


AFTER THE FINAL TARGET HAS BEEN SET

We now have as much information as we possibly can, and the algorithm decides that the most important pieces are:

  • The identity of the Chaser

  • The Target that’s been set by the team

  • The size of the prize pool (which serves as a proxy for the team’s collective ability and its likely size)

  • The number of finalists who took the Low Offer (which serves to adjust that proxy)

The signs on the coefficients tell us that a team’s chances increase with both the size of the prize pool and the magnitude of the Target they set, and decrease with the number of finalists who took the Low Offer.

The number of possible scenarios to investigate is large here, too, but here are the estimated probabilities for a few where we assume Anne is the Chaser:

The final prize pool is £12,000, the target set is 14, and none of the finalists took the Low Offer: about 5%

The final prize pool is £12,000, the target set is 17, and none of the finalists took the Low Offer: about 16%

The final prize pool is £12,000, the target set is 20, and none of the finalists took the Low Offer: about 38%

The final prize pool is £25,000, the target set is 17, and none of the finalists took the Low Offer: about 21%

The final prize pool is £25,000, the target set is 17, and one of the finalists took the Low Offer: about 10%

Switching to Shaun as the Chaser yields new estimates as follows:

The final prize pool is £12,000, the target set is 14, and none of the finalists took the Low Offer: about 14% (9% higher)

The final prize pool is £12,000, the target set is 17, and none of the finalists took the Low Offer: about 36% (20% higher)

The final prize pool is £12,000, the target set is 20, and none of the finalists took the Low Offer: about 65% (27% higher)

The final prize pool is £25,000, the target set is 17, and none of the finalists took the Low Offer: about 45% (24% higher)

The final prize pool is £25,000, the target set is 17, and one of the finalists took the Low Offer: about 25% (15% higher)

Note that this final model is by far the most accurate of all in its predictions of teams it expects to win (at the selected threshold), but is still only correct about half the time. It’s right over 90% of the time, though, about teams it expects to lose.

In terms of the mix of positive and negative forecasts, this model records the lowest proportion of positive forecasts. In other words, for the given threshold, it’s least likely to forecast that a team will be successful.

WHAT’S DIFFERENT ABOUT SHAUN WALLACE’S PERFORMANCE?

We’ve seen that, for all of the models, the identity of the Chaser is determined to be statistically significant. In particular, knowing that Shaun is the Chaser makes a considerable difference to many of our estimates of team success.

A look at the raw success rate figures by Chaser reveals that Shaun wins about 69% of his episodes, which puts him almost 7% points lower than the all-Chaser average.

What might be the cause of this?

One thought is that he might tend to face contestants of higher average ability, which would manifest in higher average targets and higher Cash Builder amounts. The information in the table above doesn’t support this hypothesis, however. He has the second-lowest average target of all the Chasers, and the second- or third-lowest average Cash Builder across all four seats.

(Interestingly, Darragh has the lowest average Cash Builder for all four seats - albeit based on a small sample - and Mark has the highest averages for Seats 1, 3 and 4, which hints that the producers might be controlling the average contestant ability that each Chaser faces, and maybe that the average quality of contestant has declined in the most recent season, which is the one in which Darragh joined.

Also, the average Cash Builder amounts for Seats 3 and 4 are substantially lower than for Seats 1 and 2, suggesting that the producers might also be making deliberate decisions about contestant order. It’s easy to imagine they might think it makes for a better spectacle to see the stronger contestants going up first, hopefully banking some money, giving the later, less strong contestants the ability to take the Low Offer, especially if their own Cash Builder was small, or to take the High Offer if the situation favours it, especially for the contestant in Seat 4)

If we look at the columns relating to the average Cash Builder amounts by Seat and the success rates for contestants from each seat who took the Middle Offer, we can gauge the average quality of the contestants that each Chaser faced, and the rate at which the Chaser sent those contestants home.

Consider, then, Seat 1 contestants in episodes where Shaun is the Chaser, and compare them with the Seat 1 contestants for each of the other Chasers. Shaun’s Seat 1 adversaries, on average, record Cash Builders of just over £5,000, which is the second-lowest average of all the Chasers. Those of them that take the Middle Offer get back to the desk 63.9% of the time, which is also second-lowest. Shaun’s Seat 2 contestants rank 4th on Cash Builder and 5th on success rate, his Seat 3 contestants rank 5th on Cash Builder and 3rd on success rate, and his Seat 4 contestants rank 5th on Cash Builder and 6th on success rate. In summary, he faces, on average, relatively weak contestants that he dispatches at relatively high rates.

So, the issue doesn’t seem to be the number of contestants that he’s letting back to the desk. Indeed, he ranks second-lowest on average finalists per episode, just a smidgeon ahead of Darragh.

What about the Final Chase? We can see that he doesn’t, on average, face larger targets, but he does allow more pushbacks - almost 0.7 more per episode, compared to the all-Chaser average, which is about a 17% increase. He does have the advantage that the teams he faces are ranked third on pushback conversion, but he still winds up answering, on average, an extra 2.35 questions over and above the original target. That means he hears - and therefore uses up the time for - about 7 questions more than the original target required: 4.6 that he gets wrong and then the 2.4 he now has to answer due to pushbacks. That’s about 1.5 questions more than Anne for whom the equivalent data is 3.6 wrong answers and 1.8 pushbacks.

If we look at Chaser success rate by Target Size, we see that this issue seems to affect Shaun’s statistics for targets of all sizes.

His success rate falls to 50% for targets as low as 17, whereas the same is true only for targets of 20 and above for the other Chasers (ignoring the figure for Darragh and targets of 18, for which the sample size is only 7).

IS THE FATE OF SEAT 1 THAT IMPORTANT IN THE UK?

One of the motivations for this blog was to see if Seat 1 is as important in the UK as it seems to be in Australia.

The first sign that it probably isn’t comes from the models we’ve just built, none of which singles out the performance of any one seat as highly predictive.

A quick look at the fate of different team compositions in terms of which Seat occupants are present or absent in the final team tends to confirm this.

Whilst it is true that, of the four possible final team compositions of size three, the variations with Seat 1 in them are the most successful, it’s also true that, of all the final teams of size two, the variations with Seat 1 in them are ranked 1st, 4th, and 5th of six. And, the lone Seat 1 configuration is less successful than the lone Seat 4 and lone Seat 3 configurations.

Seat 3, however, consistent with our earlier analysis, appears in the three lowest-winning triplets and three of the four lowest-winning pairs. As a solo finalist, Seat 3 ranks second, albeit off a fairly low base. Based on that, you might conclude that most Final teams that include Seat 3 were somewhat weaker than those of the same size that did not include Seat 3, but the effect seems to be fairly small (which, again, is why no single seat was called out in the predictive models).

While we’re looking at things from a team composition and size perspective, here’s the performance summary across teams of different sizes. Note that the “lazarus” teams, where no-one made it back to the desk so one contestant was chosen to play on behalf of the whole team, are included here in the row for one-person Final teams.

We see that the win rate increases most dramatically as we move from two to three finalists, that the Final Target increases by about two per additional Finalist (one for the extra person and one more question answered correctly), and that the Pushback Conversion rate increases with the number of Finalists, but jumps most in moving from a single Finalist to two Finalists.
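That kind of summary falls out of a simple group-by over the episode data; here’s a sketch with hypothetical column names (n_finalists, with the “lazarus” teams coded as one finalist; won; final_target; and the pushback counts):

    library(dplyr)

    episodes %>%
      group_by(n_finalists) %>%
      summarise(
        episodes            = n(),
        win_rate            = mean(won),
        avg_final_target    = mean(final_target),
        pushback_conversion = sum(pushbacks_converted) / sum(pushback_opportunities)
      )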

Another way to analyse the data is to focus solely on how the team fares when a particular Seat makes or misses the Final, and on how much he or she made in the Cash Builder if he or she did make the Final.

Here we do see that, numerically, teams with a particularly strong player in Seat 1 fare better than an average team in that, when the person in Seat 1 has a Cash Builder of £6,000 or more and makes it back to the table, the team’s chances of winning rise to 33%, which is higher than the empirical win percentage associated with Cash Builder amounts made by any other Seat, be it above or below £6,000. The higher win rate for such teams cannot be attributed to their setting higher average targets or capitalising on a greater proportion of pushbacks, however, so it’s hard to make a case for the relatively superior ability of such Seat 1 contestants in the Final Chase contributing to the higher win rate. In any case, the sample sizes are such that the differences between many of the win percentages here are not statistically significant.

There is, in summary, no convincing evidence that the presence or absence of the contestant from any one of the four seats makes much of a difference to the average target a team sets, the proportion of pushback opportunities on which it capitalises and, ultimately, the team’s chances of winning.

What is apparent here again, however, is the relatively poorer Cash Building records of those in Seats 3 and 4, and the relatively lower rates at which players from those seats tend to appear in the Final Chase. With this in mind, it’s interesting to look at the different strategies employed for taking the Low, Middle, and High Offers by seat.

What we find is that contestants in Seat 3 are much more likely to take the Low Offer, especially if their Cash Builder is under £6,000, and that contestants from Seat 4 are more likely to take the Low or the High Offer if their Cash Builder is under £6,000.

Further analysis reveals that, when Seat 4 registers a Cash Builder under £6,000, the average bank when he or she takes the:

  • Low Offer is £15,652

  • Middle Offer is £12,902

  • High Offer is £6,786

We also find that contestants from Seats 1 and 2 who take the Middle Offer fare better than contestants from Seats 3 and 4 that do the same thing, and that contestants from Seat 3 who take the Middle Offer do relatively poorly regardless of the size of their Cash Builder.

FINAL THOUGHTS

We’ve shown that:

  • it’s possible to build fairly simple models to estimate the dynamic probability of a UK team’s success with about the same level of performance as the models for Australian teams

  • until the Target is set, the most predictive information is the amount of each finalist’s Cash Builder and who the Chaser is

  • after Seat 3’s fate has been determined there is also some predictive value in the number of contestants who’ve got back to the desk after taking the Low Offer

  • after Seat 4’s fate has been determined there is still some predictive value in the number of contestants who’ve got back to the desk after taking the Low Offer, as well as some additional predictive value in knowing the size of the bank

  • once the Target is set, the most important variables are what that Target is, how large the bank is, how many finalists took the Low Offer, and who the Chaser is

  • the identity of the Chaser is more important in the UK than it is in Australia

  • the fate of no single Seat, on average, is clearly associated with a significant increase or decrease in a team’s chances

  • the contestants in Seats 3 and 4, on average, register smaller Cash Builders and also behave quite differently in terms of their Offer choice