2012 - Simulations After Round 21

Compared to the simulations at the end of Round 20, the latest simulations see: 

  • Adelaide's minor premiership chances plummet from 49% to under 5%. It's conceivable now, though barely, that they could finish as low as 6th.
  • Carlton's chances of making the finals rise from 13% to 35%
  • Collingwood's Top 4 chances drop from 94% to 80%, and its minor premiership chances are virtually extinguished after having been assessed at about 11% last week
  • Essendon's chances of playing finals football dive from 26% to just over 1%
  • Fremantle's chances of competing in the finals rise from 43% to 62%
  • Geelong's Top 8 chances rise from 95% to 99%
  • The Gold Coast virtually hand the Wooden Spoon to GWS, the Suns' Spoon chances now rated at only just over 1%
  • GWS prepare its Spoon Acceptance speech
  • Hawthorn lift its minor premiership chances from 16% to 49%. (Curiously, the Hawks are now more likely to finish 1st, 3rd or 4th than they are to finish 2nd.)
  • The Roos' Top 4 chances inch up from about 1% to just under 4%, and their chances of a Top 8 spot reach 100%
  • Richmond's Top 8 chances disappear (they were assessed at just over 2% last week)
  • St Kilda's Top 8 chances drop from 26% to under 3%
  • Sydney's minor premiership chances climb from 24% to 47%
  • West Coast's Top 4 chances rise from about 7% to 19% and their Top 8 chances reach 100%


To help you assess the validity of these latest simulations for yourself, here are the simulated probabilities for the results of each of the remaining 18 games in the home-and-away season, upon which the simulated ladder positions discussed above are based.
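For readers who'd like to replicate the general approach, here's a minimal Monte Carlo sketch of how game-by-game win probabilities can be turned into ladder-position probabilities. The teams, points and probabilities below are invented purely for illustration (they're not the real 2012 fixture), and the sketch ignores percentage as a tie-breaker:

```python
import random
from collections import Counter

def simulate_ladder_positions(current_points, fixtures, n_sims=10_000, seed=1):
    """Estimate ladder-position probabilities by Monte Carlo.

    current_points: dict of team -> premiership points so far
    fixtures: list of (home, away, p_home_win) for the remaining games
    Returns a dict of team -> {position: probability}.
    (Ignores percentage tie-breaks for simplicity.)
    """
    rng = random.Random(seed)
    tallies = {team: Counter() for team in current_points}
    for _ in range(n_sims):
        pts = dict(current_points)
        for home, away, p_home in fixtures:
            if rng.random() < p_home:
                pts[home] += 4   # AFL awards 4 premiership points for a win
            else:
                pts[away] += 4
        ladder = sorted(pts, key=pts.get, reverse=True)
        for pos, team in enumerate(ladder, start=1):
            tallies[team][pos] += 1
    return {team: {pos: n / n_sims for pos, n in c.items()}
            for team, c in tallies.items()}

# Hypothetical three-team illustration, not the real 2012 ladder
probs = simulate_ladder_positions(
    {"A": 56, "B": 52, "C": 40},
    [("A", "B", 0.6), ("B", "C", 0.7)],
)
```

Running many thousands of such replicates over the 18 remaining games is what produces probabilities like those discussed above.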

Turning to the TAB AFL Futures Markets and using the results of these latest simulations, only two wagers offer an edge of at least 5%: 

- Hawthorn for the minor premiership at $2.20 (estimated 7% edge)

- The Roos for a Top 4 finish at $34 (estimated 29% edge)
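The edge figures come from comparing a wager's simulated probability with its price: expected profit per unit staked is probability times price, less one. A quick sketch, using the approximate simulated probabilities quoted above (49% for Hawthorn's minor premiership, just under 4% for a Roos Top 4 finish):

```python
def edge(prob, price):
    """Expected profit per unit staked: p * price - 1."""
    return prob * price - 1

# Approximate simulated probabilities from the text above
hawks = edge(0.49, 2.20)    # minor premiership at $2.20 -> roughly 8%
roos = edge(0.038, 34.0)    # Top 4 at $34 -> roughly 29%
```

Small differences from the quoted edges simply reflect rounding in the probabilities shown here.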


2012 - Simulations After Round 20

Here are the results of the new simulations, run using the updated competition ladder and the new MARS Ratings.

(The new results are in grey on the left, while those from last week are provided for comparative purposes and appear in green on the right.)

On a team-by-team basis the major changes are: 

  • Adelaide: now the favourites for the minor premiership, finishing top in almost 50% of simulations
  • Brisbane Lions: virtual certainties to finish somewhere within ladder positions 13 to 17
  • Carlton: increased their chances of making the 8 from about 7% to almost 13%
  • Collingwood: more than doubled their chances of winning the minor premiership from about 5% to 11%, and also boosted their chances of finishing in the Top 4 from 76% to 94%
  • Essendon: almost halved their chances of making the 8 from 47% to 26%
  • Fremantle: decreased their chances of making the 8 from 59% to 43%
  • Geelong: virtually eliminated their chances of a Top 4 finish, but left their finals chances only very slightly diminished
  • Gold Coast: almost halved their Spoon chances from 26% to 15%
  • GWS: increased their Spoon chances from 74% to 85%
  • Hawthorn: saw their chances of finishing as minor premiers drop from 19% to 16%, but their chances of a Top 4 finish rise from 98% to 99%
  • Kangaroos: saw their chances of finishing in the Top 4 approximately halve from 2% to about 1%, but their chances of a Top 8 finish rise from 76% to 96%
  • Melbourne: did nothing to alter the inevitability of a finish somewhere from 13th to 17th 
  • Port Adelaide: also did nothing to alter the inevitability of a finish somewhere from 13th to 17th
  • Richmond: blew gently on their flickering chances of a Top 8 finish, nudging it from under 1% to just over 2%
  • St Kilda: lifted their finals chances from 21% to 26%
  • Sydney: more than halved their chances of taking out the minor premiership from 54% to 24%, and opened the probabilistic door, albeit only a hair's width, for a finish outside the Top 4
  • West Coast: saw their chances of a Top 4 finish slip from 10% to under 7%, but their chances of a spot in the finals rise from 93% to 99%
  • Western Bulldogs: did nothing to alter the inevitability of a finish somewhere from 13th to 17th

Marrying these new simulation results to the current TAB AFL Futures Markets we find value in: 

  • Hawthorn at $7 for the minor premiership (12% edge)
  • Carlton at $9 (13% edge) and Richmond at $51 (12% edge) for spots in the 8
  • Fremantle at $1.90 (8% edge) and St Kilda at $1.45 (8% edge) to miss the 8

The value we spotted last week in the prices for Geelong to make the 8, Collingwood and Geelong to make the Top 4, and in GWS to win the Spoon has now disappeared, leaving a wager on St Kilda to miss the 8 as the only identified "value bet" that still carries that label.
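The "miss the 8" wagers above are priced on the complement of a team's finals probability, so the edge calculation uses one minus the simulated make-the-8 probability. As a rough check against the quoted figures (a sketch only; small discrepancies are rounding):

```python
def miss_the_8_edge(p_make_finals, lay_price):
    """Edge on a 'miss the 8' wager, which wins when the team misses
    the finals: (1 - p) * price - 1."""
    return (1 - p_make_finals) * lay_price - 1

freo = miss_the_8_edge(0.43, 1.90)    # Fremantle: make-the-8 chance 43%
saints = miss_the_8_edge(0.26, 1.45)  # St Kilda: make-the-8 chance 26%
```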

 

2011 - Simulating the Finals - Part III

Here, based on the new MARS Ratings are the updated team-versus-team probabilities that I'll be using in this week's simulations:

Collingwood are still predicted to defeat all-comers, though the likelihood that they'll defeat the Cats has reduced somewhat now that the Ratings Point gap between these two teams has been cut to 7.

Carlton's chances of defeating every other remaining team have risen this week, since it grabbed more Ratings Points than any other during Week 1 of the Finals. Sydney fared next best and so has seen its prospects improve when playing any other team but the Blues. Hawthorn and West Coast, both of which dropped Ratings Points this week, have seen their chances generally decline against the other remaining finalists, although the Hawks still remain favourites over the Eagles, Blues and Swans.
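MAFL's actual mapping from a MARS Ratings gap to a head-to-head probability isn't spelled out in this post, but, purely as an illustration of the general shape such a mapping takes, here's a logistic curve. The 30-point scale parameter is my assumption for the sketch, not MAFL's:

```python
import math

def head_to_head_prob(rating_gap, scale=30.0):
    """Hypothetical logistic mapping from a Ratings Point gap to a win
    probability. The scale parameter is an illustrative assumption."""
    return 1 / (1 + math.exp(-rating_gap / scale))

# A 7-point gap (Collingwood over Geelong, per the text) under this toy
# mapping gives the higher-rated team a modest edge, a little over 55%
p = head_to_head_prob(7)
```

The point is only that a narrowing Ratings gap pulls the probability back towards 50%, which is why the Pies' edge over the Cats has reduced.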

The results of the new simulations appear below, again with the newest data on the left and the outputs from last week's simulations on the right.

Based on these simulations, the Pies' chances of winning the flag have barely moved as a result of their win this week and remain at just over 50%. Geelong, however, boosted its chances by just over 5 percentage points, the same amount by which it caused the Hawks' chances to fall.

Carlton's and Sydney's chances rose by just a little, while the Eagles' chances, already slim, all but vanished on Saturday according to the simulation results.

(One additional, interesting element of the simulations is that while the Pies' Flag chances are virtually unchanged, they are now rated more likely to make the Grand Final (up from about 74% to 82%) but more likely to lose it if they do (up from about 30% to 36%)).
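Those numbers hang together: the Flag probability is the probability of making the Grand Final multiplied by the probability of then winning it. A quick check, using the figures quoted above:

```python
def flag_prob(p_make_gf, p_lose_gf_given_make):
    """Premiership probability as P(make GF) x P(win GF | make GF)."""
    return p_make_gf * (1 - p_lose_gf_given_make)

this_week = flag_prob(0.82, 0.36)  # roughly 52%
last_week = flag_prob(0.74, 0.30)  # also roughly 52%
```

The two effects offset almost exactly, which is why the Pies' overall Flag chances are virtually unchanged.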

Based on these simulated team Premiership probabilities, only the Cats offer any value on the TAB and, at $2.75, it's very little value at that.

In the lower portion of the simulation results I've provided the simulation-based probabilities for each of the remaining possible GF matchups. A Collingwood v Geelong Granny has now firmed to be about a 2/1-on prospect, while the Geelong v Hawthorn and Collingwood v Carlton matchups are both rated about 7/1 shots and sit on the second line of betting.
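For anyone translating between the simulated probabilities and the fractional-odds shorthand, a small (purely illustrative) helper: "7/1" against corresponds to a probability of 1/8, while "2/1 on" means staking 2 to win 1, a probability of 2/3:

```python
def prob_from_fractional(numerator, denominator, odds_on=False):
    """Convert fractional odds to an implied probability.
    '7/1' (against) -> 1/8; '2/1 on' -> 2/3 (pass odds_on=True)."""
    if odds_on:
        numerator, denominator = denominator, numerator
    return denominator / (numerator + denominator)
```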

At present the GF quinella market is suspended on the TAB, so I can't say which if any of these pairings offer value. 

2011 - Simulating the Finals - Part I

This week's results won't have any bearing at all on which teams play in the Finals and have only an outside chance even of altering the ordering of the finalists and, in so doing, altering the venues at which games will take place in the first week of the finals. So, rather than simulating the results of what will largely be an inconsequential round, I've instead decided to simulate the finals series itself.

Simulating the Head-to-Head MAFL Fund Algorithm

Over the past few months in this journal we've been exploring the results of simulations produced using the five parameter model I first described in this blog post. In all of these posts the punter that we've been simulating has generated her home team probability assessments independently of the bookmaker; statistically speaking, her home team probability assessments are uncorrelated with those of the bookmaker she faces.
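As a sketch of what "uncorrelated" means here: conditional on the true home-team probability, the punter's noise term is drawn independently of the bookmaker's. Something like the following captures the idea (the parameter values and the uniform stand-in for the historical probability distribution are illustrative, not the model's canonical settings):

```python
import random

def simulate_game(rng, bookie_bias=-0.01, bookie_sigma=0.05,
                  punter_bias=0.0, punter_sigma=0.05, overround=1.06):
    """One game under a sketch of the five-parameter model: the punter's
    noise is drawn independently of the bookmaker's, so her assessments
    are uncorrelated with his, conditional on the true probability."""
    def clamp(p):
        return min(max(p, 0.01), 0.99)
    p_true = rng.uniform(0.2, 0.8)  # stand-in for the historical distribution
    p_bookie = clamp(p_true + bookie_bias + rng.gauss(0, bookie_sigma))
    p_punter = clamp(p_true + punter_bias + rng.gauss(0, punter_sigma))
    home_price = 1 / (p_bookie * overround)  # overround shortens the price
    home_wins = rng.random() < p_true
    return p_punter, home_price, home_wins
```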

Probability Score as a Predictor of Profitability : A More General Approach

We've spent some time now working with the five parameter model, using it to investigate what various wagering environments mean for the relative and absolute levels of profitability of Kelly-staking and Level-staking. The course we followed in the simulations for the earliest blogs was to hold some of the five parameters constant and to vary the remainder. We then used the simulation outputs to build rules of thumb about the profitability of Kelly-staking and of Level-staking. These rules of thumb were described in terms of the values of the parameters that we varied, which made them practically useful only if we felt we could estimate quantities such as the Bookie's and the Punter's bias and variability. The exact values of these parameters cannot be inferred from an actual set of bookmaker prices, wagers and results because they depend on knowledge of the true Home team probability in every game. More recent blogs have provided rules based on probability scores, which are directly related to the underlying values of the bias and variability of the bookie or punter that produced them, but which have the decided advantage of being directly measurable.
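For concreteness, the probability score referred to throughout is, I'll assume here, the log probability score used elsewhere on MAFL: 1 plus the base-2 log of the probability assigned to the eventual winner, so a coin-flip forecast scores exactly 0. In Python:

```python
import math

def probability_score(p_home, home_won):
    """Log probability score: 1 + log2 of the probability assigned to
    the eventual winner. A 50/50 forecast scores 0; confident, correct
    forecasts score towards 1; confident, wrong ones go sharply negative."""
    p_winner = p_home if home_won else 1 - p_home
    return 1 + math.log2(p_winner)
```

Being a function only of stated probabilities and actual results, it's directly measurable in a way that bias and variability are not.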

Probability Score Thresholds: Reality Intrudes

If you've been following the series of posts here on the five-parameter model, in particular the most recent one, and you've been tracking the probability scoring performance of the Head-to-Head Fund over on the Wagers & Tips blog, you'll be wondering why the Fund's not riotously profitable at the moment. After all, its probability score per game is almost 0.2, well above the 0.075 that I estimated was required for Kelly-Staking to be profitable. So, has the Fund just been unlucky, or is there another component to the analysis that explains this apparent anomaly?

Probability Score as a Predictor of Profitability: Part 2

In the previous blog I came up with some rules of thumb (rule of thumbs?) for determining what probability score was necessary to be profitable when following a Kelly-staking or a Level-staking approach, and what probability score was necessary to favour one form of staking over the other.

Briefly, we found that, when the overround is 106%, Bookie Bias is -1%, Bookie Sigma is 5%, and when the distribution of Home team probabilities broadly mirrors the historical distribution from 1999 to the present, then:

  1. If the Probability Score is less than 0.035 per game then Kelly-staking will tend to be unprofitable 
  2. If the Probability Score is less than 0.014 per game then Level-staking will be unprofitable 
  3. If the Probability Score is less than 0.072 per game then Level-staking is superior to Kelly-staking

Taken together these rules suggest that, when facing a bookie of the type described, a punter should avoid betting if her probability score is under 0.014 per game, Level-stake if it's between 0.014 and 0.072, and Kelly-stake otherwise.
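The three rules collapse into a simple decision rule; as a sketch (thresholds as per the 106% overround environment above):

```python
def staking_advice(prob_score_per_game,
                   level_floor=0.014, kelly_floor=0.072):
    """Rules of thumb for the 106% overround environment: don't bet
    below 0.014 per game, Level-stake up to 0.072, Kelly-stake beyond."""
    if prob_score_per_game < level_floor:
        return "no bet"
    if prob_score_per_game < kelly_floor:
        return "level-stake"
    return "kelly-stake"
```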

For this blog we'll determine how these rules would change if the punter was faced with a slightly more talented and greedier bookmaker, specifically, one with an overround of 107.5%, a bias of 0% and a sigma of 5%.

In this wagering environment the rules become:

  1. If the Probability Score is less than 0.075 per game then Kelly-staking will tend to be unprofitable 
  2. If the Probability Score is less than 0.080 per game then Level-staking will be unprofitable 
  3. If the Probability Score is less than 0.074 per game then Level-staking is superior to Kelly-staking (but is generally unprofitable)

Taken together these rules suggest that, when facing a bookie of the type now described, a punter should avoid betting if her probability score is under 0.075 per game and Kelly-stake otherwise. Level-staking is never preferred in this wagering environment because Level-staking is more profitable than Kelly-staking only for the range of probability scores for which neither Level-staking nor Kelly-staking tends to be profitable.

Essentially, the increase in the talent and greed of the bookmaker has eliminated the range of probability scores for which Level-staking is superior and increased the minimum required probability score to make Kelly-staking profitable from 0.072 to 0.075 per game.
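For reference, the Kelly stake discussed throughout sets the fraction of bankroll wagered equal to the edge divided by the net odds; a minimal sketch:

```python
def kelly_fraction(prob, price):
    """Kelly stake as a fraction of bankroll: (p * price - 1) / (price - 1),
    i.e. edge divided by net odds. Returns 0 when there's no positive edge."""
    edge = prob * price - 1
    return max(edge / (price - 1), 0.0)
```

Level-staking, by contrast, wagers the same fixed amount on every bet regardless of the size of the perceived edge.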

Probability Score as a Predictor of Profitability

For the current blog the questions we'll be exploring are: whether a Predictor's probability score has any relevance to its ability to produce a profit; the relationship between a Predictor's probability score and the bias and variability of its probability assessments; and, for a Predictor whose probability assessments generate a given probability score, whether Kelly-staking or Level-staking is more profitable.