2008 NFL Regular Season Picks

This review is a week late, but this thread is for discussion of picks records for the 2008 season.

Beatpaths was 150-105-1 for 2008. My own personal picks, informed by beatpaths but occasionally overruling them, and helped by a 3-0 final week, were three games better at 153-102-1. I believe King Kaufman got to 151-104-1 after a stellar 13-3 final week.

Anyone know what the other simple picks records were, like “Always pick the home team”?

11 Responses to 2008 NFL Regular Season Picks

  1. According to my calculations, home teams were 146-109-1 (57.2%) this year.

    The “Isaacson-Tarbell Predictor” (http://sports.espn.go.com/espn/page2/story?page=easterbrook/080212) was 158-97-1 (61.9%).

    D∈T

  2. doktarr says:

    For those who don’t want to read Easterbrook, the “Isaacson-Tarbell Predictor” is: best record wins, home team wins when teams have the same record.
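The rule is simple enough to sketch in a few lines of Python. This is a minimal illustration, not anything from the site itself; the `records` mapping of team name to (wins, losses) is an assumed structure, and NFL ties are ignored for brevity:

```python
def isaacson_tarbell_pick(home, away, records):
    """Isaacson-Tarbell rule: the team with the better record wins;
    the home team wins when the records are equal.
    `records` maps team name -> (wins, losses); ties are ignored."""
    def pct(team):
        wins, losses = records[team]
        return wins / (wins + losses)
    if pct(away) > pct(home):
        return away
    return home  # better (or equal) record: the home team gets the pick
```

For example, with 2008's final records, 11-5 New England would be picked over 7-9 Buffalo regardless of venue, while a matchup of two 11-5 teams would go to the home team.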

  3. Boga says:

Slightly better than “home team wins,” worse than “best record wins” (which requires no knowledge of the teams beyond their records). What about from, say, week 8 onwards? Hopefully, as beatpaths accumulates information, it gets better than a bland prediction.

The worst weeks for Isaacson-Tarbell were week 4 (5-8) and week 6 (6-8). Those were the only weeks this season in which it finished below .500.

    ITP was 101-52-1 (65.9%) Weeks 8-17.

    D∈T

  5. G says:

    While we’re at it, does anyone know how Accuscore or any of the other really sophisticated simulators did this year? I’d be interested to hear how beatpaths matches up, not just against humans and other simple systems, but the more complex ones as well.

Also, thinking of Accuscore, has anyone considered looking at Tom’s confidence numbers and trying to turn them into probabilities? Rather than saying, you know, there’s 60 points of beatpower difference between these teams so I’m pretty sure who’s gonna win, I’d rather see something along the lines of “teams with a beatpower 60 points greater than their opponent win 75% of the matchups” or something. Does this sound interesting to anyone else? Straight-up wins and losses hold not that much interest for me, since crazy things happen – probabilities, on the other hand, are much more useful.

  6. Mornacale says:

    G, regarding probabilities:

    I believe it is impossible to truly do such a thing. What you would need is a suitably large sample of games for each particular beatpower difference, which probably wouldn’t accumulate for many years. Then you could determine the true proportion of teams with that beatpower advantage who went on to be victorious.

    It is possible, though, that this could be achieved by the retroactive use of beatpaths–that is, creating them for past NFL seasons.

  7. Tom says:

    @G

    I’m thinking of looking back at the confidence rankings and seeing if there are any broad patterns. From what I could tell, the top half of the confidence rankings usually did about twice as well as the bottom half, but that’s just a rough impression I’ve gotten.

  8. doktarr says:

    Mornacale, it is possible, although the accuracy would be questionable. We could just plot the beatpower difference against the result for a couple of seasons’ worth of data, and then do some sort of least-squares fit (probably to a logistic curve). This would give us an easy way to turn a beatpower difference into an expected win probability.

    Personally, though, as I’ve said before, I’d like to draw a distinction between “path picks” and “ranking picks”. It would be interesting to see if picks where a beatpath already exists are more accurate than others, particularly if we correct for the relative rankings of the two teams.

  9. ThunderThumbs says:

    It might stand to reason that an Isaacson-Tarbell-Beatpaths predictor would be even more accurate: take the team with the better record, and if the records are the same, take the beatpaths pick. Interesting thought, anyway.

    Also, I have seen some improvement in beatpaths picks over the years when I always overrule a pick if the favored team is three or fewer rankings ahead of the home team.
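The hybrid rule ThunderThumbs suggests could look like this. Again a sketch under assumptions: `records` maps team name to (wins, losses), and `beatpaths_pick` stands in for any callable that returns the beatpaths prediction for a matchup.

```python
def hybrid_pick(home, away, records, beatpaths_pick):
    """Isaacson-Tarbell-Beatpaths rule: the team with the better record
    wins; on equal records, defer to the beatpaths pick instead of the
    home team. `beatpaths_pick` is any callable (home, away) -> winner."""
    def pct(team):
        wins, losses = records[team]
        return wins / (wins + losses)
    if pct(home) > pct(away):
        return home
    if pct(away) > pct(home):
        return away
    return beatpaths_pick(home, away)  # tiebreak: the beatpaths pick
```

The only change from plain Isaacson-Tarbell is the tiebreak, so the two predictors can only disagree on games between teams with identical records.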

  10. ThunderThumbs says:

    Looks like Accuscore got 171 wins, topping King’s panel o’ experts.

  11. […] tiebreaker used to calculate the ranks. The new tiebreaker seems to have done better: whereas last year’s record was 150-105-1 (58%), this year’s record is 167-89 (65%). In increasing the predictive power […]
