
Week 11 DEF Evaluation

Last week I tried 5 models: 3 reasonable ones that you’ve seen before and 2 that try to ignore history. None of the models were too crazy, but we’ll see in a second what happens when I try to take hot streaks into account. This week I’m going to compare my accuracy to Defense Wins Championships (Dylan Lerch) and the average of 6 of Yahoo’s experts. I’m using Yahoo rather than FantasyPros this week because I forgot to write down their rankings before today (Tuesday), and now I can only find Week 12 rankings. I’m not trying to hide anything here, I’m just inept. If anyone knows a way to get FantasyPros’ ECR from a previous week, please let me know; otherwise there will be a few weeks in here where I have to compare my models against a more limited set of experts, since I can only track down their current rankings.

Let’s get an overview of my predictions from week 11:

| Rank | Model.A | Model.B | Model.C | Model.D | Model.E | Yahoo | DWC | True.Order | True.Score |
|------|---------|---------|---------|---------|---------|-------|-----|------------|------------|
| 1    | MIN     | MIN     | KC      | MIA     | WAS     | SEA   | KC  | PIT        | 22         |
| 2    | KC      | KC      | MIN     | SEA     | MIN     | NE    | ARI | MIN        | 20         |
| 3    | MIA     | WAS     | IND     | KC      | NE      | MIA   | MIN | DET        | 19         |
| 4    | WAS     | MIA     | DET     | MIN     | TB      | KC    | SEA | BUF        | 9          |
| 5    | SEA     | CAR     | PIT     | WAS     | CAR     | ARI   | MIA | CAR        | 8          |
| 6    | CAR     | IND     | SEA     | PHI     | GB      | NYG   | OAK | JAC        | 7          |
| 7    | LA      | NO      | LA      | TB      | CIN     | PIT   | NE  | OAK        | 7          |
| 8    | IND     | LA      | PHI     | ARI     | KC      | LA    | PIT | NYG        | 7          |
| 9    | PHI     | BUF     | ARI     | PIT     | NYG     | MIN   | DEN | SEA        | 7          |
| 10   | ARI     | PHI     | WAS     | IND     | DET     | OAK   | DAL | LA         | 7          |
| 11   | NO      | SEA     | NYG     | GB      | DAL     | DET   | LA  | MIA        | 7          |
| 12   | DAL     | TEN     | MIA     | CAR     | JAC     | BUF   | NYG | WAS        | 6          |
| 13   | TB      | DAL     | OAK     | BAL     | IND     | DAL   | BUF | NE         | 6          |
| 14   | BUF     | NE      | NE      | DAL     | BUF     | CIN   | CIN | IND        | 6          |
| 15   | TEN     | TB      | NO      | OAK     | OAK     | CAR   | CAR | TB         | 5          |

And the accuracy scores for each model (lower is better):

| Model.A | Model.B | Model.C | Model.D | Model.E | Yahoo | DWC |
|---------|---------|---------|---------|---------|-------|-----|
| 52      | 46      | 65      | 91      | 63      | 100   | 69  |

Models A and B did well (except for totally missing PIT), but really they all did about the same this week given how heavily tiered it was. Model D and Yahoo both missed MIN in the top 3, so they were always going to return poor accuracies this week. The fact that Model C had DET 4th when they actually placed 3rd (and had PIT 5th, the highest of all the models) is encouraging for that model. That said, Model C’s picks of KC, IND, PHI, ARI, and WAS all missed the actual top 10, so that’s going to hurt its accuracy score a lot. Despite not liking it as much as A or B, and despite its higher accuracy score, I’m declaring this week a partial success for Model C. You’re growing on me, Model C. Keep it up.
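If you’re wondering what an “accuracy score” even looks like computationally, here’s a rough sketch in R. Treat it as a simplified stand-in rather than the exact formula behind the table above: it scores a predicted order against the true order as a sum of absolute rank differences, with teams that missed the actual top 15 penalized as if they finished 16th. The `score_ranking` function and that penalty rule are just illustrative choices.

```r
# Minimal sketch: score a predicted DEF order against the true finishing order.
# Lower = better. This assumes a sum-of-absolute-rank-differences style metric;
# the exact metric behind the scores in the table may differ.
score_ranking <- function(predicted, actual) {
  # Where did each predicted team actually finish?
  actual_rank <- match(predicted, actual)
  # Teams that finished outside the listed top 15 get a penalty rank of 16.
  actual_rank[is.na(actual_rank)] <- length(actual) + 1
  # Total rank error across all 15 slots.
  sum(abs(seq_along(predicted) - actual_rank))
}

true_order <- c("PIT", "MIN", "DET", "BUF", "CAR", "JAC", "OAK", "NYG",
                "SEA", "LA", "MIA", "WAS", "NE", "IND", "TB")
model_c    <- c("KC", "MIN", "IND", "DET", "PIT", "SEA", "LA", "PHI",
                "ARI", "WAS", "NYG", "MIA", "OAK", "NE", "NO")

score_ranking(model_c, true_order)
```

Either way, the direction is the same: the more a model scrambles the true order, the bigger the number it racks up, which is why Yahoo’s 100 is nothing to brag about.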

On a personal note: I need to stop treating rookie/backup QBs like they’re going to be fantasy defense gold. Just this year we’ve had Dak Prescott, Jacoby Brissett, Trevor Siemian, Carson Wentz, and, just this week (which I fell victim to), Jared Goff. Last week I had plenty of options for defenses, but I went with MIA thinking that since they were going against a new QB I’d pick up a few more interceptions than I would otherwise have. Goff didn’t exactly pull out all the stops, but then again MIA didn’t really turn it into a big fantasy day (or at least no bigger than teams typically have against LA this year). It wouldn’t have really mattered this week given how heavily tiered the scores ended up, but I expected MIA to have a much bigger day.

And while we’re at it, let’s go ahead and maybe not make a selection based on Models D and E next week. I know I just said there’s not a huge difference to be seen, but the scores aren’t great and Model D looks especially bad. I’ll probably run them again in Week 12 (because it’s more work to take them out of the .Rmd file that I write these in), but I’m not expecting much from them. Still, it could be worth it to see whether their scores are consistently worse or this was just a fluky week. I think maybe ignoring history isn’t the best idea.

Was George Santayana talking about fantasy football? I should look that up. And trust Model C a little more.
