Richard McElreath, in one of the lectures from his Statistical Rethinking course on YouTube, aptly and amusingly notes (and I'm paraphrasing) that models are prone to get excited by exposure to data, and that one of our jobs as statistical modellers is to ensure this excitability doesn't lead to problems such as overfitting.
Earlier this year on this blog, I introduced the MoSSBODS Team Rating System, an Elo-style system that provides separate estimates of each team's offensive and defensive abilities, as well as a combined estimate formed from their sum. That post describes the main motivations for a MoSSBODS-like approach, which I won't repeat here.
The 2015 AFL schedule is imbalanced, by which I mean that not every team plays every other team both at home and away during the regular season; the same has been true of every AFL schedule since 1987, when the competition expanded to 14 teams. As many have written, this is not an ideal situation, since it distorts teams' relative opportunities to play in the Finals.
As we'll see in this post, teams will have distinct preferences for how that imbalance is reflected in their draw.
But I wondered: how do the two Systems compare in terms of the team ratings they provide and the accuracy with which game outcomes can be modelled using them? And what do any differences suggest about changes in team performance within and across seasons?
In years past, the MAFL Fund, Tipping and Prediction algorithms have undergone significant revision during the off-season, partly in reaction to their poor performances, but partly also because of my fascination (some might call it obsession) with the empirical testing of new-to-me analytic and modelling techniques. Whilst that's been enjoyable for me, I imagine it's made MAFL frustrating and difficult to follow at times.