Last week’s power rankings are 10-5, 9-6 so far this week.
The only two teams with beatpaths to every other team in their division: Denver and Arizona.
#31 beat #1, with some awesome direct snap misdirection.
And it’s time to start thinking of tiebreaker and rank strategies.
There are two stages to determining rankings. First is the beatloop resolution strategy. That is pretty stable, although doktarr and moose have written about possible ways to enhance it. The general principle of beatloops is not to imply that the teams in a beatloop are tied – it’s more just that it is the smallest set of data that can be seen as ambiguous/confusing, and thus should be removed. That way we rely on the rest of the graph to imply rankings. I think that trying to divine too much data from a beatloop just introduces too many judgment calls into a graph. We always remove smallest beatloops first, starting with splits, and then recalculate.
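The remove-smallest-loops-first principle can be sketched in code. This is my illustration, not the site’s actual implementation: games are directed edges (winner, loser), a “split” is a two-team loop, and every edge belonging to a smallest-size beatloop is removed before recalculating. Team names are made up.

```python
# Hypothetical mini-league: edges mean "beat". A->B->C->A is a
# 3-team beatloop; A->D is outside the loop and should survive.
games = {("A", "B"), ("B", "C"), ("C", "A"), ("A", "D")}

def find_cycles_of_length(edges, n):
    """Return all simple directed cycles of exactly n edges,
    each as a frozenset of edges (so rotations dedupe)."""
    cycles = set()
    def walk(path):
        last = path[-1]
        for (u, v) in edges:
            if u != last:
                continue
            if v == path[0] and len(path) == n:
                cycles.add(frozenset(zip(path, path[1:] + [path[0]])))
            elif v not in path and len(path) < n:
                walk(path + [v])
    for node in {u for u, _ in edges}:
        walk([node])
    return cycles

def remove_smallest_beatloops(edges):
    """Repeatedly delete every edge that sits in a smallest beatloop,
    starting with splits (n=2), then recalculate, per the principle
    described above."""
    edges = set(edges)
    n = 2
    while n <= len({team for e in edges for team in e}):
        cycles = find_cycles_of_length(edges, n)
        if cycles:
            for cyc in cycles:
                edges -= cyc
            n = 2  # removal changes the graph, so start over at splits
        else:
            n += 1
    return edges

print(remove_smallest_beatloops(games))  # the A-B-C loop goes; A->D stays
```

The point of restarting at n=2 after each removal is the “and then recalculate” step: deleting a loop can create or destroy other loops, so loop sizes are re-scanned from scratch.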
We’ve tried some methods to bust beatloops here in the past. One that I was fond of was called the beatfluke method, defined as: If Team A’s loss to Team B was beatlooped away, and Team A also has an entirely different remaining alternate beatpath to Team B, then it contradicts Team A’s loss, and thus the A-beats-C-beats-B part of the beatloop can be restored to the graph.
I found this made the graph more vertical, and also slightly more accurate, but I didn’t like how it would lead to more dramatic shifts in the power rankings each week. It made the graphs vary more from week to week. Perhaps if it were combined with a more stabilizing tiebreaker, it could be used again.
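A sketch of that beatfluke rule, under my reading of it (not the actual code from back then): if the loss edge (B beat A) of a removed loop is contradicted by an entirely different beatpath from A to B in the remaining graph, restore the rest of the loop and discard the loss. Teams X and C here are invented for illustration.

```python
# Suppose the loop A->C->B->A was beatlooped away, but A still
# reaches B through remaining games A->X->B.
remaining = {("A", "X"), ("X", "B")}
removed_loop = [("A", "C"), ("C", "B"), ("B", "A")]

def has_beatpath(edges, src, dst, seen=None):
    """Depth-first search: is there a directed path src -> dst?"""
    seen = set() if seen is None else seen
    if src == dst:
        return True
    seen.add(src)
    return any(v not in seen and has_beatpath(edges, v, dst, seen)
               for (u, v) in edges if u == src)

def beatfluke_restore(remaining, loop):
    """If a removed loop contains a loss edge (b, a) contradicted by an
    alternate remaining beatpath a -> b, restore the a-beats-...-beats-b
    side of the loop and leave the flukey loss out."""
    restored = set(remaining)
    for (b, a) in loop:
        if has_beatpath(remaining, a, b):  # a's loss to b looks flukey
            restored |= {e for e in loop if e != (b, a)}
            break
    return restored

print(beatfluke_restore(remaining, removed_loop))
```

In this example A’s alternate path A→X→B contradicts the B-beat-A result, so A→C and C→B come back into the graph while B→A stays out.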
The other two approaches to busting beatloops were doktarr’s “iterative” method and Moose’s score method. The “iterative” method breaks shared beatloops at their shared link, as when one game is responsible for the existence of several beatloops. It is another effort to identify one link of a beatloop (a game outcome) as flukey. I do have trouble justifying that one intuitively, though; I feel like I need another reason to believe that link actually is flukey, other than it just being part of several beatloops. The other is a weighted system based on score differentials. I believe this ended up accurate, and perhaps superior, although I’m trying to keep the main system here free of extra data like points (as opposed to just wins and losses).
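The shared-link idea reduces to counting: tally how many beatloops each game appears in, and the game sitting in the most loops at once is the candidate flukey link. This is just my reading of the method, with made-up loops:

```python
from collections import Counter

# Two hypothetical beatloops that share the A->B game.
loops = [
    [("A", "B"), ("B", "C"), ("C", "A")],
    [("A", "B"), ("B", "D"), ("D", "A")],
]

# Count how many loops each game (edge) appears in.
link_counts = Counter(edge for loop in loops for edge in loop)
shared_link, n_loops = link_counts.most_common(1)[0]
print(shared_link, n_loops)  # ('A', 'B') 2 -- the shared link
```

Breaking both loops at A→B removes one game instead of six, which is the method’s appeal; my intuitive objection above is that nothing else in the data says that particular game was the fluke.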
After that, there’s how to determine rankings from the resultant beatpath graph. So far this season, I’ve been breaking ties based on the rankings of the previous week. But the usual tiebreaker for later in the season is to compare the strength of the teams’ direct beatwins: for instance, if every team in a tied set has at least three beatwins, it averages the strength of each team’s top three beatwins and picks the top team. Finally, I think Moose came up with a tiebreaker having to do with counting all the links in a resultant beatpath graph. This is somewhat similar to what I used in the first and second year here, which counted the number of teams above and below each team, but it yields more information in that it counts every link of every possible path, thereby giving extra weight to stronger paths. I probably have this explanation wrong, but Moose will correct me in the comments. This is a good candidate to apply as a tiebreaker to the official rankings this season.
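The beatwin-strength tiebreaker can be sketched as follows. Here “strength” is just a team’s current ranking number (lower = better); the tied teams, their beatwins, and the rankings are all invented for illustration:

```python
# Hypothetical current rankings (1 = best) and direct beatwins
# for two tied teams, E and F.
rank = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6}
beatwins = {
    "E": ["A", "C", "F"],
    "F": ["B", "C", "D"],
}

def tiebreak(tied, beatwins, rank, k=3):
    """Order a tied set by the average rank of each team's k strongest
    direct beatwins (lower average = beat stronger opposition)."""
    def score(team):
        strongest = sorted(rank[w] for w in beatwins[team])[:k]
        return sum(strongest) / len(strongest)
    return sorted(tied, key=score)

print(tiebreak(["E", "F"], beatwins, rank))  # -> ['F', 'E']
```

E’s top three beatwins average (1+3+6)/3 ≈ 3.33 while F’s average (2+3+4)/3 = 3.0, so F’s beatwins are stronger on average and F gets the higher spot.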