2013 : Round 19 - Wagers & Tips

If MAFL had a bucket list, one thing still on it would be a successful wager on a drawn outcome, a feat that it has gone desperately close to achieving on a couple of occasions already this season when games on which the Margin Fund has opted for such a wager have been tied up deep into time-on in the final quarter. Not content, it seems, with merely going close, the Margin Fund has ventured not one but two such wagers for this weekend.

In both of the games where we're waving a flag for the draw - which presumably, and perhaps aptly, would be a white one - the TAB Bookmaker is also alive to the possibility, so Investors have secured only a $41 price for their endeavour. Still, that's high enough to make a significant difference to Fund profitability and unlikely enough to promise an enduring memory should either prediction prove prophetic.

These two outlandish wagers in the SuperMargin markets are accompanied by 10 other, less unlikely flutters, two of them in the games where we have bets on the draw, each foreshadowing small victory margins for the home teams, and eight more on home team victory margins ranging from 1 to 9 points up to 40 to 49 points. None of them is priced higher than $7.50.

Investors also have two bets in the head-to-head markets: a sizeable bet with significant upside on the Dogs to beat Sydney, priced at $8, and a bet that's the smallest the TAB will allow (i.e. $1), and hence of essentially zero consequence to anyone, on the Roos at $3.55.

Rounding out the weekend's action are three line bets, which this week revert in size to 2.5% of the Line Fund, reduced from 5% to reflect the typically more-challenging nature of line betting at this time of the season when some teams know with absolute certainty that their season will be finishing in August.

Upside abounds in this set of wagers, nowhere more so than in the Dogs v Swans matchup, where a Dogs win would dollop over 7c of value onto the Recommended Portfolio pie. Draws in either of Carlton's or Collingwood's games promise almost another 4c, while favourable results for any of GWS, West Coast or the Lions would each generate another 2 or 3c.

No single game has downside exceeding 2c, and only three games - those involving GWS, West Coast and the Lions - have even that much. A round of unrelenting wagering misery would still leave the Recommended Portfolio in profit for the season.

TIPSTERS AND PREDICTORS

The average level of disagreement amongst the Head-to-Head Tipsters is moderately high this week, as it was last week, with most contention surrounding the two games in which Investors have wagers on the draw. In each of these games majority support is behind the favourites, as it is in every other game this weekend except in the Giants v Dees clash where only two Tipsters - aside from BKB who defines what "favourite" means - have found themselves able to trust the team that's unbackable, literally, for this year's Spoon.  

Four games have split the Margin Predictors: the highlighted draw candidates plus the GWS v Melbourne and Adelaide v Port Adelaide clashes. The Dogs v Swans game very nearly made it a majority of games for which Margin Predictors had teams on both sides of the fence, with the ProPred and WinPred Predictors eventually opting for only a very narrow Sydney win.

WinPred has been especially contrarian this week, its 11.3% MAD being the highest recorded by any Predictor for a single round so far this season. Its probability prediction is the most extreme in seven of the weekend's nine contests.

The Head-to-Head Probability Predictors are much like the Margin Predictors, with advocates of both teams in the Giants v Dees, Blues v Dockers, Crows v Power, and Pies v Dons games, and with considerable support for the idea that the Dogs will go close to toppling the Swans.

Five teams have been rated by the Line Fund algorithm as better than 60% chances to win on line betting this week, and a sixth, the Roos, has been rated as only slightly below a 60% chance to do the same. That's probably going to make for a strongly positive or strongly negative probability score for the Line Fund algorithm this week.

CONFORMITY VS PREDICTIVE PERFORMANCE

Each week when I present the collected opinions of the various MAFL Tipsters and Predictors I also provide a measure of how different each Tipster or Predictor is from its peers.

For the Head-to-Head Tipsters I use as the deviance metric the probability that a randomly selected Tipster (excluding the Tipster in question) in a randomly selected game would have a tip different from the Tipster being assessed; for the Margin Predictors I use the mean absolute difference (MAD) from the all-Predictor average; and for the Head-to-Head Probability Predictors I use the MAD in percentage point terms from the all-Predictor average.
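
For readers who like to see such things spelled out, here's a minimal sketch in Python of how both deviance metrics might be computed. The data and function names are mine, not drawn from anything MAFL actually runs, and I've assumed the all-Predictor average includes the Predictor being assessed.

```python
import numpy as np

def tipster_deviance(tips, tipster):
    """Probability that a randomly chosen other Tipster, in a randomly
    chosen game, tips a different team from the Tipster being assessed.

    tips: dict mapping tipster name -> list of tipped teams, one per game.
    """
    others = [name for name in tips if name != tipster]
    disagreements = [
        tips[other][g] != tips[tipster][g]
        for other in others
        for g in range(len(tips[tipster]))
    ]
    return float(np.mean(disagreements))

def predictor_mad(predictions, predictor):
    """Mean absolute difference between one Predictor's predictions (margins,
    or probabilities in percentage points) and the all-Predictor average.

    Assumes the all-Predictor average includes the Predictor being assessed."""
    all_preds = np.array(list(predictions.values()))   # predictors x games
    consensus = all_preds.mean(axis=0)                 # all-Predictor average per game
    return float(np.mean(np.abs(np.array(predictions[predictor]) - consensus)))

# Hypothetical round: three tipsters, three games
tips = {"BKB": ["Swans", "Blues", "Crows"],
        "HSH": ["Dogs", "Dockers", "Crows"],
        "RYL": ["Swans", "Blues", "Crows"]}
print(tipster_deviance(tips, "BKB"))   # share of (other tipster, game) pairs that disagree
```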

Since you only see them once a week, it's hard to get a sense of what a large or small deviance is, and of whether the deviance in a particular round for a particular Tipster or Predictor is typical or atypical. So this week I thought I'd bring together these deviance metrics for all Tipsters and Predictors for the entire season.

Here, firstly, is a table for the Head-to-Head Tipsters showing the values of the relevant deviance metric for those Tipsters.

This table shows, for example, that the deviance metric for BKB this week was 25%, which is slightly above BKB's all-season average for this metric of 23%. BKB ranks 10th amongst all Tipsters on this metric.

The colour-coding reflects the level of difference displayed by a Tipster in a given week, the greener the more different, and the redder the less different. It's immediately obvious that HSH has been the Tipster Most Different in most weeks. Conversely, and I'd have to say a bit surprisingly, RYL (Ride Your Luck) has been, on average, the Tipster Least Different. Most weeks about 83% of the other Tipsters have agreed with it.

Conformity has generally been a virtue, at least in MAFL circles, in terms of head-to-head tipping accuracy so far this season, as evidenced by the correlation between Tipster mean difference and Tipster Accuracy, which is -0.71. That high level of correlation is driven to a large degree, however, by the poor tipping performance of the Tipster Most Different, HSH, and is reduced if we consider, instead, the rank correlation between the two measures. That correlation is +0.29 and is lowered, especially, by the contradictory rankings in terms of deviance and tipping accuracy for BKB, Shadow, STM II and WinPred.
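
To make concrete how a single extreme Tipster can drive the raw correlation while leaving the rank correlation much weaker, here's a small illustration using entirely hypothetical deviance and accuracy figures, not the actual Tipster data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical illustration only: one very deviant, very inaccurate tipster
# (an HSH-like outlier) can dominate the Pearson correlation, while the
# Spearman rank correlation is far less affected by that single extreme point.
mean_difference = np.array([20, 21, 22, 23, 24, 25, 26, 45])   # % of peer tips that differ
accuracy        = np.array([62, 65, 60, 66, 61, 64, 63, 45])   # % of games tipped correctly

print(pearsonr(mean_difference, accuracy)[0])    # strongly negative, pulled by the outlier
print(spearmanr(mean_difference, accuracy)[0])   # much weaker once only rankings matter
```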

Next we'll look at a similar table for the Margin Predictors where, again, the colour-coding denotes deviation from the all-Predictor average.

We see that Bookie_3 and Combo_NN2 are most often most deviant, while Combo_7 and Bookie_9 are frequently least deviant.

The relationship between deviance and predictive performance is very weak here. Looking firstly at mean absolute predictive error (MAPE), we see that the correlation between the Predictors' average deviance and their average MAPE is almost zero (-0.01), and not much different from zero if we switch from using the raw data to using ranks, where it's +0.11.

That said, amongst the five best Predictors in terms of MAPE there are four of the five least deviant Predictors. Bookie_LPSO is the exception, having made margin predictions generally quite different from the all-Predictor average but still managing to return the 3rd best MAPE. Two other Predictors are also ranked very differently in terms of mean deviance and mean MAPE: Bookie_3, which is the Predictor Most Deviant but which is 6th overall in terms of MAPE, and Combo_NN2, which is the Predictor Next Most Deviant but which sits 7th on MAPE.

A different performance metric, line betting accuracy, shows a similar lack of correlation with our measure of conformity. Bookie_LPSO and Combo_NN2 are, again, significant contributors to the divergence, though Bookie_9, H2H_Adj_7 and Combo_7 play even larger parts. 

The conclusion seems to be that predictive performance can spring from both the extremes of conformity and of non-conformity. Also, as I've noted before, we can see that performance on one metric (MAPE) can be completely unrelated to performance on another (line betting accuracy).

Lastly, here's the table for the Head-to-Head Probability Predictors.

(Since this season there's been no need to adjust H2H's probability assessments, there being no occasion on which its probability assessment of the Home team has been more than 25% above the TAB Bookmaker's, H2H_Adj and H2H_Unadj assessments have been identical all season. Accordingly, I've shown only H2H_Unadj here.)

The directly Bookmaker-derived probability assessments have generally been the least different from the all-Predictor average this season, while the assessments of WinPred have been most different.

In this prediction market, however, conformity has paid off in terms of performance, which I've measured here using the Log Probability Score (LPS).
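
For anyone wanting to replicate the scoring, here's a sketch of an LPS calculation in Python. I've assumed the base-2 formulation in which 1 + log2 of the probability attached to the winning team is scored, so that an uninformative 50% prediction earns exactly 0; whether that matches the exact formulation used for the tables is an assumption on my part, and the data and function name are illustrative only.

```python
import numpy as np

def log_probability_score(probs, home_won):
    """Average Log Probability Score across games.

    probs:    probability the Predictor attached to a home-team win, per game
    home_won: 1 if the home team won, 0 if it lost (draws ignored here
              for simplicity)

    Assumes the base-2 formulation, 1 + log2(p_winner), under which a 50%
    prediction of the eventual winner scores exactly 0.
    """
    probs = np.asarray(probs, dtype=float)
    home_won = np.asarray(home_won)
    p_winner = np.where(home_won == 1, probs, 1 - probs)
    return float(np.mean(1 + np.log2(p_winner)))

# Hypothetical example: a confident Predictor versus a timid one over three games
print(log_probability_score([0.80, 0.65, 0.30], [1, 1, 0]))  # rewarded for well-placed confidence
print(log_probability_score([0.55, 0.52, 0.48], [1, 1, 0]))  # barely above zero
```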

As well as being the Predictors Most Deviant, ProPred and WinPred have also been the least predictable in their deviance, in some weeks displaying the least deviance and then in others displaying the most, occasionally to an extreme degree such as this week where WinPred's MAD is over 11 percentage points per game.