Matter of Stats

The Responsiveness of Bookmaker Prices To Winning and Losing

In this blog I'm seeking to answer a single question: how are a team's subsequent head-to-head bookmaker prices affected by the returns they've provided to head-to-head wagering on them in recent weeks? More succinctly, how much less can you expect to make wagering on recent winners and how much more on recent losers?

BACKGROUND

What motivated this blog was an e-mail conversation I had with Friend of MatterOfStats, Michael, about the practice of wagering solely on underdogs. So far in 2014 this strategy is showing a profit (an ROI of about 19.5%), a result it hasn't produced across an entire season since at least 2006.

That conversation led me to wonder if subsequent head-to-head prices for teams that had won previously, say as heavy underdogs, were adjusted by more than they "should" be, as punters at the margin were attracted to wagering on the team after witnessing the windfall gains made by other punters who wagered on those earlier games. Call it the wisdom of crowd-followers.

To analyse this phenomenon I decided to construct a statistical model with the TAB Bookmaker's Implicit Home Team Probability as the target variable, and with two sets of regressors: one comprising venue and team rating data, to establish what the Home Team Probability would be based solely on these factors, and a second explicitly including the teams' recent returns to head-to-head wagering, to assess the extent to which these returns altered, and perhaps distorted, home team probabilities.

THE DATA

As is usual for my analyses addressing TAB Bookmaker probabilities, I've used data spanning the period from the start of season 2006. At the time of writing, the 2014 season is 8 rounds old, which means that I have 1,605 games available to me, a count that includes all home-and-away season and Finals games.

The individual data elements I've used are as follows:

  • Home Team Implicit Probability: this is inferred from the TAB Bookmaker's head-to-head prices using the Risk-Equalising methodology; a sketch of this calculation appears just after this list. (I also fitted models using the Overround-Equalising and LPSO methodologies but, for a variety of reasons, eventually favoured the Risk-Equalising models).
  • Own and Opponent MARS Ratings: these are the home team's and away team's MARS Ratings at the time of the contest.
  • Own and Opponent ChiPS Ratings: these are the home team's and away team's ChiPS Ratings at the time of the contest.
  • Interstate Status: this is the Interstate Status of the clash, as defined here.
  • Own Return Last Game: this is the return that would have been achieved by a bettor wagering 1 unit on the home team in its previous game. If the home team won, this return would be its price minus 1; if it drew, half the price minus 1; and if it lost, minus 1. In the first game of the season this variable is set to zero.
  • Own Return Two Games Ago: this is defined in an equivalent manner but for the game before last. This variable is set to zero for the first two games of the season.
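
For readers unfamiliar with the Risk-Equalising methodology, here's a minimal sketch in R of the price-to-probability conversion it implies, assuming (as that methodology does) that the bookmaker levies the same additive amount of overround on each team. The function name is mine, for illustration only.

    # Infer the Risk-Equalising home team probability from head-to-head prices.
    # The total overround, 1/home_price + 1/away_price - 1, is assumed to be
    # levied as an equal additive probability, e, on each team.
    risk_equalising_prob <- function(home_price, away_price) {
      e <- (1 / home_price + 1 / away_price - 1) / 2
      1 / home_price - e
    }

    risk_equalising_prob(1.80, 2.10)  # about 0.54 for a $1.80 home favourite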

The other Own Return variables for Three, Four and Five Games Ago are all defined analogously, and the Opponent Return variables have the same definition as the Own Return variables except that they are based on the away team's returns to wagering. A sketch of the return calculation appears below.
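
As an illustration of these definitions, here's a short sketch in R of the return calculation for a single game. The function name and the "W"/"D"/"L" result coding are my own, chosen for the example.

    # Return to a 1-unit head-to-head wager: a win returns price - 1,
    # a draw returns half the price minus 1, and a loss returns -1.
    wagering_return <- function(price, result) {
      switch(result,
             W = price - 1,
             D = price / 2 - 1,
             L = -1)
    }

    wagering_return(4.50, "W")  # a winning longshot returns +3.5 units
    wagering_return(4.50, "L")  # a loss returns -1 unit, whatever the price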

I investigated additional variables of the same kind stretching back as far as 12 weeks, but these extra variables added little but clutter to the models and so were excluded.

THE MODEL

Long-time readers of MatterOfStats will by now have intuited one of my ulterior motives in maintaining this website, which is to allow me the opportunity to learn about, use and then write about statistical modelling techniques. For this blog I've been able to use Beta Regression in R for the first time. It's a technique that's ideally suited to the problem at hand where the target variable - here a probability - is a (0,1) bounded continuous variable.

Two beta regression models were fitted for this blog, a full model, which includes all the regressors listed above, and a smaller model, which excludes all of the Own and Opponent Return variables.
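
To give a flavour of the fitting process, here's a sketch of how the two models might be specified using the betareg package in R. The variable and data frame names are illustrative stand-ins for the data elements listed earlier, not my exact formulation.

    library(betareg)  # beta regression for (0,1) continuous targets

    # Full model: ratings and venue terms plus the ten Return variables
    full_model <- betareg(
      Home.Prob ~ Own.MARS + Opp.MARS + Own.ChiPS + Opp.ChiPS +
        Interstate.Status +
        Own.Ret.1 + Own.Ret.2 + Own.Ret.3 + Own.Ret.4 + Own.Ret.5 +
        Opp.Ret.1 + Opp.Ret.2 + Opp.Ret.3 + Opp.Ret.4 + Opp.Ret.5,
      data = games)

    # Smaller model: the same regressors with all Return variables excluded
    small_model <- betareg(
      Home.Prob ~ Own.MARS + Opp.MARS + Own.ChiPS + Opp.ChiPS +
        Interstate.Status,
      data = games)

    summary(full_model)  # coefficients, significance and pseudo R-squared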

Including the 10 Return variables explains about an additional three-quarters of a percent of the variability in Home Team Probability, a small but statistically significant increase.

Note that all of the Own Return variables have positive coefficients. This implies that the fitted Home Team Probability will be greater (and hence the home team price will be lower) when the home team has provided positive returns to wagering in previous games, and will be lower when wagering on the home team has provided negative returns. These coefficients are all highly statistically significant, except for that on the fourth most recent game.

Similarly, all Opponent Return variables have negative coefficients, which implies that the fitted Home Team probability will be lower (and hence the fitted Away Team probability will be higher) if wagering on the away team has provided positive returns in recent games, and will be higher if wagering has provided negative returns.

So, the answer to my original question would seem to be that teams' head-to-head prices are depressed by recent positive returns to wagering on them (and inflated by negative returns), over and above what can be accounted for by Venue and Team Ratings data alone.

THE MEANING

Two competing hypotheses could be posited to explain this result:

  1. Punters irrationally wager more on teams that have provided wagering returns in recent games, depressing their prices relative to what they should be based on the team's true abilities
  2. The MARS and ChiPS Ratings are insufficiently responsive to recent results, and the coefficients on the Return variables in the model above measure the extent to which these Rating Systems, combined, fail to correctly adjust teams' true Ratings on the basis of recent results

One way to choose between these hypotheses is to calculate probability scores for the two fitted models. If including the Returns variables tends to produce fitted probabilities with an inferior empirical probability score to that of the model where these variables are excluded, then we'd tend to favour the first hypothesis over the second.

Calculating the Log Probability and the Brier Scores for the two models yields the following: 

  • Log Probability Score
    • Including Historical Returns: 0.1753
    • Excluding Historical Returns: 0.1770
  • Brier Score
    • Including Historical Returns: 0.1948
    • Excluding Historical Returns: 0.1942

Recalling that higher Log Probability Scores and lower Brier Scores imply superior forecasting leads us to prefer the model without the historical returns data, which lends weight to hypothesis 1.
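
For concreteness, the two scores might be computed as in the sketch below. I'm assuming the Log Probability Score definition used elsewhere on MatterOfStats, 1 plus the base-2 log of the probability assigned to the actual result, and the standard Brier Score; draws are ignored here for simplicity.

    # result: 1 if the home team won, 0 if it lost
    # prob:   the model's fitted home team probability
    log_prob_score <- function(prob, result) {
      mean(1 + log2(ifelse(result == 1, prob, 1 - prob)))
    }

    brier_score <- function(prob, result) {
      mean((result - prob)^2)
    }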

I don't think this finding is unequivocal - there might, for example, be more direct ways of assessing the relative merits of the two hypotheses - but I do find it persuasive. To ensure that the Ratings I used in the modelling adequately accounted for both very recent and less-recent results, I included both MARS and ChiPS Ratings in the models. In my experience, ChiPS Ratings seem to be more responsive to single-game results, whereas MARS Ratings reflect results over a slightly longer time period. The fact that the four Ratings variable coefficients all achieved statistical significance despite the extremely high levels of correlation between the Own MARS and Own ChiPS (+0.952) and the Opponent MARS and Opponent ChiPS (+0.946) variables suggests that they each provide some independent "signal" for the regression. (Constructing a random forest using the same data and model formulation, then inspecting the variable importances, lends further weight to this conclusion.)
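
The random forest check mentioned in that parenthetical might look something like the following, using the randomForest package. Again, the variable names are illustrative, and I've abbreviated the Return terms.

    library(randomForest)

    # Fit a forest to the same target and (abbreviated) regressor set, then
    # inspect how much each variable contributes to predictive accuracy.
    rf <- randomForest(
      Home.Prob ~ Own.MARS + Opp.MARS + Own.ChiPS + Opp.ChiPS +
        Interstate.Status + Own.Ret.1 + Opp.Ret.1,
      data = games, importance = TRUE)

    importance(rf)  # per-variable importance measures
    varImpPlot(rf)  # visual ranking of the regressors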

In the end, though, separating form from class comes down to a judgement call about how quickly you believe class can change.

At this point I'm inclined to believe that bookmaker prices are subtly driven higher or lower than they would be if they reflected teams' genuine chances, on the basis of the bookmaker's own reactions, or those of his punters, to recent wagering returns for the teams involved.

So, you might wonder, how big are these effects?

Charted below is the empirical CDF for the absolute difference between the fitted probabilities from the two models.

Most of the differences are small. One-half are smaller than 1.2 percentage points, and three-quarters are smaller than 2.3 points. For only one game in 25 does the difference exceed 5 percentage points, and for only 6 of the 1,605 games does it exceed 10 points. Still, as we know, it doesn't take much of a difference in calibration to turn a loss into a profit.
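
That CDF is simple to produce in R, as the sketch below shows, assuming fitted() is used to extract each model's fitted probabilities.

    # Absolute difference in fitted probabilities between the two models
    prob_diff <- abs(fitted(full_model) - fitted(small_model))

    plot(ecdf(prob_diff),
         main = "Absolute Difference in Fitted Home Team Probabilities")
    quantile(prob_diff, c(0.50, 0.75, 0.96))  # median, upper quartile, 1-in-25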

Another way to assess the magnitude of the differences between the two models is to create an empirical CDF for the absolute difference in home team prices implied by their fitted probabilities, converted to prices by assuming a total overround of 5% and employing the risk-equalising approach.

Again we see that the differences tend to be small. For a little over one-half of the games the difference is less than 5c in the prices, and for three-quarters it's less than 11c. The difference is 25c or more for only about 11% of games, and 50c or more for just 4%.
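
The probability-to-price conversion used for this second CDF is just the reverse of the earlier calculation: with a 5% total overround split risk-equalising style, each team's fitted probability is bumped up by 2.5% points before being inverted. A sketch:

    # Convert a fitted probability to a head-to-head price, assuming a 5%
    # total overround levied as an equal additive amount on each team.
    prob_to_price <- function(prob, overround = 0.05) {
      1 / (prob + overround / 2)
    }

    price_diff <- abs(prob_to_price(fitted(full_model)) -
                      prob_to_price(fitted(small_model)))
    plot(ecdf(price_diff),
         main = "Absolute Difference in Implied Home Team Prices")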

THE CONCLUSION

If, as seems to be the case, prices are very slightly distorted by teams' recent returns to wagering, recognising and adjusting for this distortion could prove to be profitable.

Mirroring the concluding remarks of so many scientific papers - and I'm not for a moment suggesting that this blog is of a quality sufficient to be considered anything like one of those - let me end by saying "more research is needed".