2011 - Simulating the Finals - Part III

Here, based on the new MARS Ratings, are the updated team-versus-team probabilities that I'll be using in this week's simulations:

Collingwood are still predicted to defeat all comers, though the likelihood that they'll defeat the Cats has fallen somewhat now that the Ratings Points gap between the two teams has been cut to 7.

Carlton's chances of defeating every other remaining team have risen this week, since it grabbed more Ratings Points than any other team during Week 1 of the Finals. Sydney fared next best and so has seen its prospects improve against every remaining team but the Blues. Hawthorn and West Coast, both of which dropped Ratings Points this week, have seen their chances generally decline against the other remaining finalists, although the Hawks remain favourites over the Eagles, Blues and Swans.

The results of the new simulations appear below, again with the newest data on the left and the outputs from last week's simulations on the right.

Based on these simulations, the Pies' chances of winning the flag have barely moved as a result of their win this week and remain at just over 50%. Geelong, however, boosted its chances by just over 5 percentage points, the same amount by which it caused the Hawks' chances to fall.

Carlton's and Sydney's chances rose just a little, while the Eagles' chances, already slim, all but vanished on Saturday according to the simulation results.

One additional, interesting element of the simulations is that, while the Pies' Flag chances are virtually unchanged, they're now rated more likely to make the Grand Final (up from about 74% to 82%) but also more likely to lose it if they get there (up from about 30% to 36%).
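For anyone curious about the mechanics, here's a minimal sketch of the sort of Monte Carlo approach that might produce figures like these. The teams, bracket structure and pairwise probabilities in the snippet are purely hypothetical placeholders, not the MARS-based probabilities or the actual finals draw used for the simulations above.

```python
import random
from collections import Counter

# Illustrative pairwise probabilities that the first-named team beats the second.
# These are placeholders only, NOT the MARS-derived probabilities from the post.
P_WIN = {
    ("TeamA", "TeamB"): 0.60,
    ("TeamA", "TeamC"): 0.65,
    ("TeamA", "TeamD"): 0.70,
    ("TeamB", "TeamC"): 0.55,
    ("TeamB", "TeamD"): 0.60,
    ("TeamC", "TeamD"): 0.55,
}

def p_beats(team1, team2):
    """Probability that team1 beats team2, looked up in either order."""
    if (team1, team2) in P_WIN:
        return P_WIN[(team1, team2)]
    return 1.0 - P_WIN[(team2, team1)]

def play(team1, team2):
    """Simulate a single game and return the winner."""
    return team1 if random.random() < p_beats(team1, team2) else team2

def simulate_finals():
    """One replicate of a simplified, hypothetical four-team knockout:
    two preliminary finals feeding a Grand Final."""
    pf1_winner = play("TeamA", "TeamD")
    pf2_winner = play("TeamB", "TeamC")
    return play(pf1_winner, pf2_winner)

n_sims = 100_000
flag_counts = Counter(simulate_finals() for _ in range(n_sims))
for team, wins in flag_counts.most_common():
    print(f"{team}: {wins / n_sims:.1%} simulated Premiership probability")
```

Running enough replicates of the full bracket and tallying how often each team lifts the cup gives the Premiership probabilities, and tallying which pairs meet in the final game gives the Grand Final matchup probabilities reported further down.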

Based on these simulated team Premiership probabilities, only the Cats offer any value on the TAB and, at $2.75, it's very little value at that.
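As a reminder of how "value" is being judged here: a price offers value only if it exceeds the reciprocal of the simulated probability, or equivalently if the expected return per dollar staked is positive. The probability and price in this illustration are made up rather than taken from the simulation outputs or the TAB.

```python
def implied_probability(decimal_price):
    """Probability implied by a decimal price; $2.75, for example, implies about 36.4%."""
    return 1.0 / decimal_price

def offers_value(simulated_prob, decimal_price):
    """A price offers value if the expected return per dollar staked is positive."""
    expected_return = simulated_prob * decimal_price - 1.0
    return expected_return > 0

# Hypothetical example: a team rated a 40% chance by the simulations, on offer at $2.75
print(implied_probability(2.75))   # ~0.364
print(offers_value(0.40, 2.75))    # True: expected return of about +10c per dollar staked
```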

In the lower portion of the simulation results I've provided the simulation-based probabilities for each of the remaining possible GF matchups. A Collingwood v Geelong Granny has now firmed to about a 2/1-on prospect, while Geelong v Hawthorn and Collingwood v Carlton Grand Finals are both rated about 7/1 shots and sit on the second line of betting.

At present the GF quinella market is suspended on the TAB, so I can't say which, if any, of these pairings offers value.

2011 - Simulating the Finals - Part I

This week's results won't have any bearing at all on which teams play in the Finals and have only an outside chance of altering the ordering of the finalists and, in so doing, the venues at which games will take place in the first week of the finals. So, rather than simulating the results of what will be a largely inconsequential round, I've instead decided to simulate the finals series itself.
Read More

Simulating the Head-to-Head MAFL Fund Algorithm

Over the past few months in this journal we've been exploring the results of simulations produced using the five parameter model I first described in this blog post. In all of these posts the punter that we've been simulating has generated her home team probability assessments independently of the bookmaker; statistically speaking, her home team probability assessments are uncorrelated with those of the bookmaker she faces.
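To make the "uncorrelated" assumption a little more concrete, here's a small sketch of one way such a punter might be simulated: both she and the bookmaker perturb the true home team probability with their own bias and noise, but the noise terms are drawn independently. The parameter names and values below are illustrative only and aren't the exact specification of the five-parameter model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_games = 10_000

# True home team probabilities (an illustrative distribution only)
true_prob = rng.beta(5, 4, size=n_games)

def noisy_assessment(true_p, bias, sigma):
    """An assessment is the true probability plus a bias and independent noise,
    clipped so it remains a valid probability."""
    return np.clip(true_p + bias + rng.normal(0.0, sigma, size=true_p.shape), 0.01, 0.99)

# The bookmaker and the punter each perturb the truth with separately drawn noise,
# so their assessment errors are uncorrelated with one another.
bookie_prob = noisy_assessment(true_prob, bias=-0.01, sigma=0.05)
punter_prob = noisy_assessment(true_prob, bias=0.00, sigma=0.05)

# Correlation of the two sets of errors should be near zero
print(np.corrcoef(bookie_prob - true_prob, punter_prob - true_prob)[0, 1])
```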
Read More

Probability Score as a Predictor of Profitability : A More General Approach

We've spent some time now working with the five-parameter model, using it to investigate what various wagering environments mean for the relative and absolute levels of profitability of Kelly-staking and Level-staking. The course we followed in the simulations for the earliest blogs was to hold some of the five parameters constant and to vary the remainder. We then used the simulation outputs to build rules of thumb about the profitability of Kelly-staking and of Level-staking. These rules of thumb were described in terms of the values of the parameters that we varied, which made them practically useful only if we felt we could estimate quantities such as the Bookie's and the Punter's bias and variability. The exact values of these parameters cannot be inferred from an actual set of bookmaker prices, wagers and results because they depend on knowledge of the true Home team probability in every game. More recent blogs have provided rules based instead on probability scores, which are directly related to the underlying bias and variability of the bookie or punter that produced them, but which have the decided advantage of being directly measurable.
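For concreteness, here's a sketch of how a per-game probability score might be computed from a set of probability assessments and results. I'm assuming the log-based measure used elsewhere on MAFL, under which attaching probability p to the eventual winner scores 1 + log2(p), so that a 50% assessment scores zero; if your preferred definition differs, the function is easily swapped out.

```python
import math

def probability_score(prob_home, home_won):
    """Probability score for one game: 1 + log2 of the probability attached to the
    actual winner (assumed definition; draws ignored for simplicity)."""
    p_winner = prob_home if home_won else 1.0 - prob_home
    return 1.0 + math.log2(p_winner)

def average_probability_score(probs, results):
    """Average per-game probability score across a set of games."""
    return sum(probability_score(p, won) for p, won in zip(probs, results)) / len(probs)

# Hypothetical example: three games with assessed home team probabilities and outcomes
probs = [0.70, 0.55, 0.40]      # assessed home team probabilities
results = [True, False, False]  # did the home team win?
print(average_probability_score(probs, results))  # ~0.20 per game
```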
Read More

Probability Score Thresholds: Reality Intrudes

If you've been following the series of posts here on the five-parameter model, in particular the most recent one, and you've been tracking the probability scoring performance of the Head-to-Head Fund over on the Wagers & Tips blog, you'll be wondering why the Fund's not riotously profitable at the moment. After all, its probability score per game is almost 0.2, well above the 0.075 that I estimated was required for Kelly-Staking to be profitable. So, has the Fund just been unlucky, or is there another component to the analysis that explains this apparent anomaly?
Read More

Probability Score as a Predictor of Profitability: Part 2

In the previous blog I came up with some rules of thumb (rule of thumbs?) for determining what probability score was necessary to be profitable when following a Kelly-staking or a Level-staking approach, and what probability score was necessary to favour one form of staking over the other.

Briefly, we found that, when the overround is 106%, Bookie Bias is -1%, Bookie Sigma is 5%, and when the distribution of Home team probabilities broadly mirrors the historical distribution from 1999 to the present, then:

  1. If the Probability Score is less than 0.035 per game then Kelly-staking will tend to be unprofitable 
  2. If the Probability Score is less than 0.014 per game then Level-staking will be unprofitable 
  3. If the Probability Score is less than 0.072 per game then Level-staking is superior to Kelly-staking

Taken together these rules suggest that, when facing a bookie of the type described, a punter should avoid betting if her probability scoring is under 0.014 per game, Level-stake if it's between 0.014 and 0.072, and Kelly-stake otherwise.
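Those thresholds collapse into a single decision rule, sketched below with the numbers just quoted hard-coded; they apply only to this particular wagering environment and shouldn't be read as universal constants.

```python
def staking_advice(prob_score_per_game,
                   level_threshold=0.014,
                   kelly_threshold=0.072):
    """Staking advice for the 106% overround, -1% Bookie Bias, 5% Bookie Sigma
    environment: don't bet below 0.014 per game, Level-stake between 0.014 and
    0.072, and Kelly-stake above 0.072."""
    if prob_score_per_game < level_threshold:
        return "Don't bet"
    if prob_score_per_game < kelly_threshold:
        return "Level-stake"
    return "Kelly-stake"

for score in (0.010, 0.050, 0.100):
    print(score, staking_advice(score))
```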

For this blog we'll determine how these rules would change if the punter was faced with a slightly more talented and greedier bookmaker, specifically, one with an overround of 107.5%, a bias of 0% and a sigma of 5%.

In this wagering environment the rules become:

  1. If the Probability Score is less than 0.075 per game then Kelly-staking will tend to be unprofitable 
  2. If the Probability Score is less than 0.080 per game then Level-staking will be unprofitable 
  3. If the Probability Score is less than 0.074 per game then Level-staking is superior to Kelly-staking (but is generally unprofitable)

Taken together these rules suggest that, when facing a bookie of the type now described, a punter should avoid betting if her probability scoring is under 0.075 per game and Kelly-stake otherwise. Level-staking is never preferred in this wagering environment because Level-staking is more profitable than Kelly-staking only for the range of probability scores for which neither Level-staking nor Kelly-staking tends to be profitable.

Essentially, the increase in the talent and greed of the bookmaker has eliminated the range of probability scores for which Level-staking is superior and raised the probability score above which Kelly-staking becomes the recommended approach from 0.072 to 0.075 per game.
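For anyone who'd like to see the difference between the two staking approaches in code, here's a minimal sketch. The Kelly fraction formula is the standard one for a single two-outcome wager; the bankroll, probabilities and prices are hypothetical, and I've assumed the Level-staker bets a fixed amount only when her assessment implies a positive expected return.

```python
def kelly_fraction(assessed_prob, decimal_price):
    """Fraction of the bankroll to wager under (full) Kelly-staking:
    (p * price - 1) / (price - 1), floored at zero when there's no perceived edge."""
    edge = assessed_prob * decimal_price - 1.0
    return max(0.0, edge / (decimal_price - 1.0))

def kelly_stake(bankroll, assessed_prob, decimal_price):
    """Kelly-staking: wager a bankroll fraction that scales with the perceived edge."""
    return bankroll * kelly_fraction(assessed_prob, decimal_price)

def level_stake(bankroll, assessed_prob, decimal_price, fraction=0.01):
    """Level-staking: wager a fixed fraction of the bankroll whenever the
    assessed probability implies a positive expected return (assumed convention)."""
    return bankroll * fraction if assessed_prob * decimal_price > 1.0 else 0.0

# Hypothetical game: home team assessed a 55% chance, on offer at $1.95
print(kelly_stake(1000, 0.55, 1.95))   # ~76.3 units
print(level_stake(1000, 0.55, 1.95))   # 10 units
```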

Probability Score as a Predictor of Profitability

For the current blog the questions we'll be exploring are: whether a Predictor's probability score has any relevance to its ability to produce a profit; the relationship between a Predictor's probability score and the bias and variability of its probability assessments; and, for a Predictor whose probability assessments generate a given probability score, whether Kelly-staking or Level-staking is more profitable.
Read More

Estimating Bookie Bias and Variability in Home Team Probability Assessments

This blog is another in the series about simulating the contest between bookmaker and punter (for details see the 1st blog, 2nd blog, 3rd blog, and 4th blog). In these blogs we've estimated the importance of the bias and variability in a bookmaker's home team probability assessments relative to the bias and variability in the punter's assessments.
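As a concrete first step in any such estimation, the bookmaker's implied home team probability needs to be extracted from the head-to-head prices with the overround removed. Here's a minimal sketch of one common way of doing that, by normalising the reciprocal prices; other conventions exist, and the prices shown are hypothetical.

```python
def bookie_home_probability(home_price, away_price):
    """Implied home team probability after removing the overround by normalising
    the reciprocal prices (one simple convention among several)."""
    raw_home, raw_away = 1.0 / home_price, 1.0 / away_price
    overround = raw_home + raw_away  # e.g. 1.06 for a 106% market
    return raw_home / overround

# Hypothetical market: home at $1.80, away at $2.05 (overround of about 104.3%)
print(bookie_home_probability(1.80, 2.05))  # ~0.532
```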
Read More

To Bet or Not to Bet?

In an earlier blog, using the five-parameter model first discussed here, I summarised the results of simulating 100 seasons played out under each of 1,000 different parameter sets in a pair of rules that described when Kelly-staking tends to be superior to Level-staking, and vice versa. Implicitly, that blog assumed that we were going to wager, so our concern was solely with selecting the better wagering approach to adopt. But there is, of course, a third option that dare not speak its name, and that is not to bet at all. In this blog I'll extend the previous analysis and derive rules for when we should Kelly-stake, when we should Level-stake, and when we should just up stakes and leave.
Read More