One of the challenges in running cold simulations (i.e., those where the underlying team ratings and Venue Performance Values don't change within a simulation replicate in response to simulated results) is capturing the inherently increasing uncertainty about team ratings the further into the future a game lies.
We could ignore it entirely or, instead, attempt to incorporate the time-varying nature of that uncertainty in some way. I have chosen to follow the latter course in both my home and away simulations and my finals simulations. (For details about the methodology, see this blog post.)
Specifically, I've assumed that the standard deviation of teams' offensive and defensive ratings is equal to 4.5 times the square root of the number of days between the date of their latest rating and the date of the match in question. This produces quite large standard deviations for even moderately distant games: a match 36 days away, for example, carries a standard deviation of 4.5 × √36 = 27 rating points.
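As a concrete illustration, here's a minimal Python sketch of how a future rating might be drawn under that assumption. The use of a normal distribution, the function names, and the example figures are all my own assumptions for illustration, not the code or distribution actually used for the simulations.

```python
import numpy as np

def rating_sd(days_ahead: float, scale: float = 4.5) -> float:
    """Standard deviation of a team's offensive or defensive rating for a
    match played `days_ahead` days after the team's latest rating."""
    return scale * np.sqrt(days_ahead)

def sample_future_rating(latest_rating: float, days_ahead: float,
                         rng: np.random.Generator) -> float:
    """Draw one plausible future rating, with uncertainty growing with the
    square root of elapsed time (a normal draw is assumed here)."""
    return rng.normal(loc=latest_rating, scale=rating_sd(days_ahead))

rng = np.random.default_rng(1)
# A team rated +10 on offence, for a game 36 days away (sd = 4.5 * 6 = 27)
sampled_rating = sample_future_rating(10.0, 36, rng)
```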
Applying that methodology to 10,000 of the 50,000 home and away season simulation replicates yields the following chart, which shows teams' Finals fates both overall and as a function of their ultimate ladder position at the end of the home and away season.
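For readers curious about how a chart like that might be assembled, here's a hedged sketch of the aggregation step: a row-normalised cross-tabulation of simulated Finals outcomes by ladder position. The data frame, column names, and outcome labels are hypothetical placeholders, not the simulation output itself.

```python
import pandas as pd

# Hypothetical per-replicate output: one row per finalist per replicate,
# recording its home and away ladder position and how its Finals ended.
results = pd.DataFrame({
    "replicate":       [1, 1, 2, 2],
    "team":            ["Team A", "Team B", "Team A", "Team B"],
    "ladder_position": [1, 4, 2, 4],
    "finals_outcome":  ["Premier", "Elim Week 1",
                        "Grand Finalist", "Prelim Finalist"],
})

# Finals fate as a function of ladder position: for each position, the
# proportion of replicates ending in each Finals outcome.
fate_by_position = pd.crosstab(results["ladder_position"],
                               results["finals_outcome"],
                               normalize="index")

# Overall Finals fate for each team across the sampled replicates.
fate_by_team = pd.crosstab(results["team"],
                           results["finals_outcome"],
                           normalize="index")
```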